Each record below lists the following fields in order, separated by `|`:

| Column | Type | Range / length |
|---|---|---|
| Web Development | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| Question | string | 28 to 6.1k characters |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 337 to 51.9M |
| Score | float64 | -1 to 1.2 |
| Other | int64 | 0 to 1 |
| Database and SQL | int64 | 0 to 1 |
| Users Score | int64 | -8 to 412 |
| Answer | string | 14 to 7k characters |
| Python Basics and Environment | int64 | 0 to 1 |
| ViewCount | int64 | 13 to 1.34M |
| System Administration and DevOps | int64 | 0 to 1 |
| Q_Score | int64 | 0 to 1.53k |
| CreationDate | string | 23 characters |
| Tags | string | 6 to 90 characters |
| Title | string | 15 to 149 characters |
| Networking and APIs | int64 | 1 to 1 |
| Available Count | int64 | 1 to 12 |
| AnswerCount | int64 | 1 to 28 |
| A_Id | int64 | 635 to 72.5M |
| GUI and Desktop Applications | int64 | 0 to 1 |
0 | 0 | I am writing a Selenium script in Python, but I can't find any information about:
how to get the HTTP status code from Selenium in Python code.
Or am I missing something? If anyone has found a way, please feel free to post it. | false | 5,799,228 | 0.016665 | 0 | 0 | 1 | You can also inspect the last message in the log for an error status code:
print browser.get_log('browser')[-1]['message'] | 0 | 76,570 | 0 | 43 | 2011-04-27T04:08:00.000 | python,selenium | How to get status code by using selenium.py (python code) | 1 | 1 | 12 | 53,767,968 | 0 |
0 | 0 | I'm writing a Selenium script in Python. Something I found out is that when Selenium gets a 404 status code, it crashes. What is the best way to deal with it? | false | 5,799,512 | 0.197375 | 0 | 0 | 1 | I had a similar problem. Sometimes a server we were using (i.e., not the main server we were testing, only a "sub-server") would crash during our tests. I added a minor sanity test to see whether the server was up before the main tests ran. That is, I performed a simple GET request to the server, surrounded it with try/except, and if that passed I continued with the tests. Let me stress this point: before I even started Selenium, I would perform a GET request using Python's urllib2. It's not the best of solutions, but it's fast and it was enough for me. | 0 | 618 | 0 | 2 | 2011-04-27T04:51:00.000 | python,selenium | In selenium.py, How to deal with 404 status code | 1 | 1 | 1 | 6,363,257 | 0 |
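A minimal sketch of the pre-flight check the answer describes: issue a plain GET before starting Selenium and only proceed if it succeeds. The URL and timeout are placeholders; the answer used urllib2 (Python 2), while this sketch uses the Python 3 equivalent, urllib.request.

```python
import urllib.request  # Python 2 code of the era would use urllib2
import urllib.error

def server_is_up(url, timeout=5):
    """Return True if a plain GET to `url` succeeds before we start Selenium."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Covers 404s (HTTPError is a URLError subclass) and unreachable hosts.
        return False

if server_is_up("http://example.com/"):   # placeholder URL
    pass  # start the Selenium tests here
else:
    print("Server not reachable; skipping Selenium tests")
```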
1 | 0 | I am not sure how to accurately describe my problem, and right now I have a total mess in my head, so please bear with me and correct me if I am wrong, and I surely will be.
MAIN GOAL:
To build a real-time line chart which updates itself, without reloading the web page, based on data that comes from stdout. So it basically must be a Python script that gets a value every second and, based on these values, keeps extending the line in a line chart.
1) The basic keywords in my head right now are: JavaScript/AJAX, CGI, a Python HTTP web server, and SVG (vector graphics).
The biggest thing I don't understand is how to continuously transfer stdout values to the web page. Should I write my own Python HTTP web server, somehow pass the values to it, and draw the chart with JavaScript or AJAX + SVG?
Or is writing an HTTP web server a bad idea, and can I somehow make it work without one?
Any other suggestions, or pointers to tutorials or articles, are welcome, because right now I am very confused, especially about the part of continuously passing values to the web page.
Thanks in advance. I hope you will be able to point me somewhere. | false | 5,808,819 | 0 | 0 | 0 | 0 | I would use JavaScript to create or manipulate an SVG document, with AJAX requests polling the server and getting data back. | 0 | 5,305 | 0 | 1 | 2011-04-27T18:33:00.000 | python,ajax,svg,cgi,charts | Real time chart (SVG) + AJAX/Javascript/Jquery + StdOut + Python + I dont really know myself | 1 | 1 | 4 | 5,809,518 | 0 |
1 | 0 | I have a requirement wherein I have to extract content inside <raw> tag. For example I need to extract abcd and efgh from this html snippet:
<html><body><raw somestuff>abcd</raw><raw somesuff>efgh</raw></body></html>
I used this code in my python
re.match(r'.*raw.*(.*)/raw.*', DATA)
But this is not returning any substring. I'm not good at regex. So a correction to this or a new solution would help me a great deal.
I am not supposed to use external libs (due to some restriction in my company). | false | 5,814,493 | 0 | 0 | 0 | 0 | Using non greedy matching (*?) can do this easily, at least for your example.
re.findall(r'<raw[^>]*?>(.*?)</raw>', DATA) | 0 | 2,535 | 0 | 0 | 2011-04-28T06:25:00.000 | python,html,regex,tags,substring | Python: Need to extract tag content from html page using regex, but not BeautifulSoup | 1 | 1 | 2 | 5,814,571 | 0 |
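A quick, self-contained check of the pattern from the answer against the snippet in the question; the variable name DATA follows the question's own usage.

```python
import re

DATA = '<html><body><raw somestuff>abcd</raw><raw somesuff>efgh</raw></body></html>'

# Non-greedy matching keeps each capture inside its own <raw>...</raw> pair.
matches = re.findall(r'<raw[^>]*?>(.*?)</raw>', DATA)
print(matches)  # ['abcd', 'efgh']
```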
1 | 0 | I'm trying to make a website that allows you to setup a photo gallery with a custom domain name.
Users of the site don't know how to register or configure domain names, so this has to be done for them.
User enters desired domain name in a box (we check if it's available) and click 'register' button and our website registers the domain name for them and sets up the website automatically (when user goes to that domain name, the photo gallery just works, no technical skills needed). | false | 5,814,837 | 0.132549 | 0 | 0 | 2 | There is no general API for this. You have to check back with your own domain registration organization. This is specific to the related domain provider. | 0 | 4,713 | 0 | 6 | 2011-04-28T06:59:00.000 | python,api,hosting,dns,registration | What is the best API for registering and configuring domain names? | 1 | 1 | 3 | 5,814,951 | 0 |
0 | 0 | So, I need to connect to an SSH server through a SOCKS proxy.
I read the paramiko and twisted.conch docs, but didn't find SOCKS proxy support there. | false | 5,816,070 | 0.379949 | 0 | 0 | 4 | This socket wrapper allows you to use static SSH tunnels. I found a common solution for my problem:
Use the paramiko SSHClient class
Extend SSHClient with your own class
Re-implement the connect() method:
Instead of using a standard socket object, pass it a proxied socket created with the Python package SocksiPy | 0 | 3,885 | 0 | 3 | 2011-04-28T08:53:00.000 | python,ssh,twisted,socks,paramiko | Python ssh client over socks (proxy) | 1 | 1 | 2 | 5,823,383 | 0 |
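A sketch of the same idea without subclassing: current paramiko versions let you hand a pre-connected socket to SSHClient.connect() via its sock argument, and PySocks (the maintained fork of SocksiPy) supplies the SOCKS-wrapped socket. Host names, ports, and credentials are placeholders.

```python
import socks      # PySocks, the maintained SocksiPy fork
import paramiko

# Build a socket that tunnels through the SOCKS proxy (placeholder addresses).
proxy_sock = socks.socksocket()
proxy_sock.set_proxy(socks.SOCKS5, "proxy.example.com", 1080)
proxy_sock.connect(("ssh.example.com", 22))

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Hand the already-proxied socket to paramiko instead of letting it dial directly.
client.connect("ssh.example.com", username="user", password="secret", sock=proxy_sock)

stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```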
1 | 0 | I'm trying to read in info that is constantly changing from a website.
For example, say I wanted to read in the artist name that is playing on an online radio site.
I can grab the current artist's name but when the song changes, the HTML updates itself and I've already opened the file via:
f = urllib.urlopen("SITE")
So I can't see the updated artist name for the new song.
Can I keep closing and opening the URL in a while(1) loop to get the updated HTML code or is there a better way to do this? Thanks! | false | 5,844,870 | 0.066568 | 0 | 0 | 1 | You'll have to periodically re-download the website. Don't do it constantly because that will be too hard on the server.
This is because HTTP, by nature, is not a streaming protocol. Once you connect to the server, it expects you to throw an HTTP request at it, then it will throw an HTTP response back at you containing the page. If your initial request is keep-alive (default as of HTTP/1.1,) you can throw the same request again and get the page up to date.
What I'd recommend? Depending on your needs, get the page every n seconds, get the data you need. If the site provides an API, you can possibly capitalize on that. Also, if it's your own site, you might be able to implement comet-style Ajax over HTTP and get a true stream.
Also note if it's someone else's page, it's possible the site uses Ajax via Javascript to make it up to date; this means there's other requests causing the update and you may need to dissect the website to figure out what requests you need to make to get the data. | 0 | 858 | 0 | 1 | 2011-04-30T21:56:00.000 | python,stream,live,urllib | Parsing lines from a live streaming website in Python | 1 | 3 | 3 | 5,844,901 | 0 |
1 | 0 | I'm trying to read in info that is constantly changing from a website.
For example, say I wanted to read in the artist name that is playing on an online radio site.
I can grab the current artist's name but when the song changes, the HTML updates itself and I've already opened the file via:
f = urllib.urlopen("SITE")
So I can't see the updated artist name for the new song.
Can I keep closing and opening the URL in a while(1) loop to get the updated HTML code, or is there a better way to do this? Thanks! | false | 5,844,870 | 0 | 0 | 0 | 0 | Yes, this is the correct approach. To see changes on the web, you have to send a new query each time; live AJAX sites do exactly the same thing internally.
Some sites provide an additional API, including long polling. Look for documentation on the site or ask the developers whether one exists. | 0 | 858 | 0 | 1 | 2011-04-30T21:56:00.000 | python,stream,live,urllib | Parsing lines from a live streaming website in Python | 1 | 3 | 3 | 5,844,908 | 0 |
1 | 0 | I'm trying to read in info that is constantly changing from a website.
For example, say I wanted to read in the artist name that is playing on an online radio site.
I can grab the current artist's name but when the song changes, the HTML updates itself and I've already opened the file via:
f = urllib.urlopen("SITE")
So I can't see the updated artist name for the new song.
Can I keep closing and opening the URL in a while(1) loop to get the updated HTML code or is there a better way to do this? Thanks! | false | 5,844,870 | 0.066568 | 0 | 0 | 1 | If you use urllib2 you can read the headers when you make the request. If the server sends back a "304 Not Modified" in the headers then the content hasn't changed. | 0 | 858 | 0 | 1 | 2011-04-30T21:56:00.000 | python,stream,live,urllib | Parsing lines from a live streaming website in Python | 1 | 3 | 3 | 5,844,912 | 0 |
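A sketch of the header idea from the last answer: re-fetch the page every few seconds and use a conditional GET (If-Modified-Since / Last-Modified) so unchanged pages come back as 304 instead of a full download. The answer mentions urllib2; this sketch uses Python 3's urllib.request, and the URL and interval are placeholders.

```python
import time
import urllib.request
import urllib.error

URL = "http://example.com/nowplaying"   # placeholder
last_modified = None

while True:
    req = urllib.request.Request(URL)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            last_modified = resp.headers.get("Last-Modified", last_modified)
            html = resp.read().decode("utf-8", errors="replace")
            # ... parse the artist name out of `html` here ...
    except urllib.error.HTTPError as e:
        if e.code != 304:          # 304 Not Modified: nothing new, just wait
            raise
    time.sleep(5)                  # be gentle with the server
```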
1 | 0 | Suppose there is a web app named thesite.com. I need to give every user
a URL of their own. For example, if alice signs up, she gets a space of her
own at the URL "alice.thesite.com". How do I achieve this?
Thanks
Alice | false | 5,848,496 | 0 | 0 | 0 | 0 | By pointing all the subdomains of that domain to the same website via DNS, and then inspecting the HTTP 1.1 Host header to determine which user website is being viewed. | 0 | 164 | 0 | 2 | 2011-05-01T12:59:00.000 | python,url,flask | how to map url with usernames prefixed? | 1 | 1 | 2 | 5,848,509 | 0 |
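Since the question is tagged flask: once a wildcard DNS record points *.thesite.com at the app, Flask can do the Host-header inspection for you through its subdomain routing. A minimal sketch with a placeholder SERVER_NAME; this is Flask's own mechanism, not something spelled out in the answer above.

```python
from flask import Flask

app = Flask(__name__)
# Flask needs to know the parent domain to match subdomains (placeholder value).
app.config["SERVER_NAME"] = "thesite.com"

@app.route("/")
def main_site():
    return "Welcome to thesite.com"

# Requests to alice.thesite.com land here with username == "alice".
@app.route("/", subdomain="<username>")
def user_gallery(username):
    return f"Photo gallery for {username}"

if __name__ == "__main__":
    app.run()
```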
0 | 0 | Is there a good high level library that can be used for IP address manipulation? I need to do things like:
Given a string find out if it is a valid IPv4/IPv6 address.
Have functionality like ntop and pton
etc
I can use the low-level inet_ntop() etc. But is there a better library that handles these tasks well and fast (C/C++/Python)? | false | 5,857,320 | 0 | 1 | 0 | 0 | I have the mind-boggling IPv4/IPv6 validating regexps around, which are quite long and non-trivial to produce. I can share them if you want. | 0 | 1,307 | 0 | 2 | 2011-05-02T12:50:00.000 | c++,python,c,freebsd,ipv6 | Efficient IP address c/c++ library on unix | 1 | 2 | 5 | 5,857,748 | 0 |
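On the Python side of the question, the standard library already covers both points without regexes: the ipaddress module (Python 3.3+) validates IPv4/IPv6 strings, and socket.inet_pton/inet_ntop give the pton/ntop conversions. A small sketch, not tied to the answers above.

```python
import ipaddress
import socket

def classify(addr):
    """Return 4, 6, or None depending on whether `addr` is a valid IP literal."""
    try:
        return ipaddress.ip_address(addr).version
    except ValueError:
        return None

print(classify("192.168.0.1"))   # 4
print(classify("::1"))           # 6
print(classify("not-an-ip"))     # None

# pton / ntop round trip for IPv6
packed = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
print(socket.inet_ntop(socket.AF_INET6, packed))  # 2001:db8::1
```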
0 | 0 | Is there a good high level library that can be used for IP address manipulation? I need to do things like:
Given a string find out if it is a valid IPv4/IPv6 address.
Have functionality like ntop and pton
etc
I can use the low level inet_ntop() etc. But is there a better library that handles these better and fast (c/c++/python)? | false | 5,857,320 | 0.039979 | 1 | 0 | 1 | If you are writing a sockets app it's highly unlikely that address manipulation is going to be your most important consideration. Don't waste time on this when you have network I/O to worry about. | 0 | 1,307 | 0 | 2 | 2011-05-02T12:50:00.000 | c++,python,c,freebsd,ipv6 | Efficient IP address c/c++ library on unix | 1 | 2 | 5 | 5,857,539 | 0 |
0 | 0 | Some Python methods work on various input sources. For example, the XML element tree parse method takes an object which can either be a string, (in which case the API treats it like a filename), or an object that supports the IO interface, like a file object or io.StringIO.
So, obviously the parse method is doing some kind of interface sniffing to figure out which course of action to take. I guess the simplest way to achieve this would be to check if the input parameter is a string by saying isinstance(x, str), and if so treat it as a file name, else treat it as an IO object.
But for better error-checking, I would think it would be best to check if x supports the IO interface. What is the standard, idiomatic way to check if an object supports a specified interface?
One way, I suppose, would be to just say:
if "read" in x.__class__.__dict__: # check if object has a read method
But just because x has a "read" method doesn't necessarily mean it supports the IO interface, so I assume I should also check for every method in the IO interface. Is this usually the best way to go about doing this? Or should I just forget about checking the interface, and just let a possible AttributeError get handled further up the stack? | false | 5,857,951 | 0 | 0 | 0 | 0 | Yeah, python is all about duck typing, and it's perfectly acceptable to check for a few methods to decide whether an object supports the IO interface. Sometimes it even makes sense to just try calling your methods in a try/except block and catch TypeError or ValueError so you know if it really supports the same interface (but use this sparingly). I'd say use hasattr instead of looking at __class__.__dict__, but otherwise that's the approach I would take.
(In general, I'd first check whether there is already a method somewhere in the standard library to handle stuff like this, since it can be error-prone to decide what constitutes the "IO interface" yourself. For example, there are a few handy gems in the types and inspect modules for related interface checking.) | 1 | 183 | 0 | 2 | 2011-05-02T13:49:00.000 | python,interface,python-3.x | Python 3: Determine if object supports IO | 1 | 2 | 3 | 5,858,213 | 0 |
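A sketch of the dispatch the question describes, following the answer's suggestion to use hasattr rather than __class__.__dict__: treat strings as filenames, and otherwise accept anything readable. The function name is made up for illustration; isinstance(source, io.IOBase) is a stricter alternative to the hasattr check.

```python
import io

def load(source):
    """Accept either a filename (str) or a readable file-like object."""
    if isinstance(source, str):
        with open(source, encoding="utf-8") as f:
            return f.read()
    # Duck-typing check: anything with a read() method is good enough here.
    if hasattr(source, "read"):
        return source.read()
    raise TypeError(f"expected a filename or a readable object, got {type(source)!r}")

print(load(io.StringIO("in-memory data")))   # works like a file object
# print(load("some_file.txt"))               # would read from disk (placeholder path)
```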
0 | 0 | Some Python methods work on various input sources. For example, the XML element tree parse method takes an object which can either be a string, (in which case the API treats it like a filename), or an object that supports the IO interface, like a file object or io.StringIO.
So, obviously the parse method is doing some kind of interface sniffing to figure out which course of action to take. I guess the simplest way to achieve this would be to check if the input parameter is a string by saying isinstance(x, str), and if so treat it as a file name, else treat it as an IO object.
But for better error-checking, I would think it would be best to check if x supports the IO interface. What is the standard, idiomatic way to check if an object supports a specified interface?
One way, I suppose, would be to just say:
if "read" in x.__class__.__dict__: # check if object has a read method
But just because x has a "read" method doesn't necessarily mean it supports the IO interface, so I assume I should also check for every method in the IO interface. Is this usually the best way to go about doing this? Or should I just forget about checking the interface, and just let a possible AttributeError get handled further up the stack? | false | 5,857,951 | 0.066568 | 0 | 0 | 1 | Or should I just forget about checking the interface, and just let a possible AttributeError get handled further up the stack?
The general pythonic principle seems to be doing whatever you want to do with the object you get and just capture any exception it might cause. This is the so-called duck typing. It does not necessarily mean you should let those exception slip from your function to the calling code, though. You can handle them in the function itself if it's capable of doing so in meaningful way. | 1 | 183 | 0 | 2 | 2011-05-02T13:49:00.000 | python,interface,python-3.x | Python 3: Determine if object supports IO | 1 | 2 | 3 | 5,858,102 | 0 |
0 | 0 | So far my networking code works fine, but I'm a bit worried about something I hid under the carpet:
The man pages for accept, close, connect, recv and send mention that errno.EINTR can show up when a system call was interrupted by a signal.
I am quite clueless here.
What does python do with that ? Does it automatically retry the call, does it raise a socket.error with that errno ? What is the appropriate thing I should do if that exception is raised ? Can I generate these signals myself in my unittests ? | true | 5,858,460 | 1.2 | 0 | 0 | 2 | Python simply retries the call and hides the signal from the user (helps with cross-platform consistency where -EINTR doesn't exist). You can safely ignore the EINTR issue but if you'd like to test it anyway, it's easy to do. Just set up a blocking operation that will not return (such as a socket.accept with no incoming connection) and send the process a signal. | 0 | 1,281 | 0 | 3 | 2011-05-02T14:35:00.000 | python,sockets | What do python sockets do with EINTR? | 1 | 1 | 1 | 5,858,847 | 0 |
0 | 0 | Is it possible to use the S3 multipart upload with strings instead of a file? The strings i am using are being generated during the upload process and so the size and the exact content is unknown the time the multipart upload starts. | true | 5,863,516 | 1.2 | 0 | 0 | 5 | There is no explicit "upload_part_from_string" method available. You could probably use StringIO to wrap a file-like object around the string and then pass that to "upload_part_from_file" but I haven't tested that.
It would be easy to add this method. At the time I didn't think it would be all that useful since each of the parts has to be a minimum of 5MB and that's a pretty big string. But if you have a use case for it, let me know. Or, fork boto on github, add the method and send a pull request. | 0 | 858 | 0 | 3 | 2011-05-02T23:19:00.000 | python,amazon-s3,multipart,boto | Can you use multipart upload in boto with strings instead of a file handler? | 1 | 1 | 1 | 5,909,965 | 0 |
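A sketch of the in-memory wrapping the answer suggests, using classic boto's multipart API; the answer itself notes this approach was untested, and the bucket name, key, and generate_chunks() generator here are placeholders. Every part except the last must still be at least 5 MB of bytes.

```python
from io import BytesIO   # Python 2 code of the era would use StringIO.StringIO
import boto

conn = boto.connect_s3()                      # credentials taken from the environment
bucket = conn.get_bucket("my-bucket")         # placeholder bucket name

mp = bucket.initiate_multipart_upload("generated/output.txt")  # placeholder key
part_num = 0
for chunk in generate_chunks():               # hypothetical generator of >= 5 MB byte strings
    part_num += 1
    # Wrap the in-memory data in a file-like object so boto can read it.
    mp.upload_part_from_file(BytesIO(chunk), part_num=part_num)
mp.complete_upload()
```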
0 | 0 | I am a newbie to Selenium and am implementing Selenium RC with the Python client library. I tried traversing my page's divs by XPath using the command "sel.get_xpath_count(xpath)".
It gives a count of 20, but when I iterate through every div using a for statement and the command "sel.get_text(xpath='%s[%d]'%(xpath, i))", it only finds the first element and gives an error on the remaining 19 saying the divs were not found. | true | 5,866,496 | 1.2 | 0 | 0 | 2 | Your second XPath expression is wrong. Programmers trained in C-style languages frequently make this mistake, because they see [...] and think "index into an array", but that's not what brackets do in XPath.
If you use sel.get_xpath_count(something), then you need to use sel.get_text("xpath=(something)[item_number]"). Note the use of parentheses around the original XPath expression in the second use.
The reason behind this is that something[item_count] is short-hand for something AND position() = item_count - thus you wind up adding another predicate to the "something" expression, instead of selecting one of the nodes selected by the expression. (something)[item_count] works because the value of (something) is a list of nodes, and adding a position() = item_count selects the node from the list with the specified position. That's more like a C-style array. | 0 | 396 | 0 | 1 | 2011-05-03T07:36:00.000 | python,selenium-rc | Selenium-rc Python client : Not able to iterate through the xpath? | 1 | 1 | 1 | 5,871,749 | 0 |
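A sketch of the corrected loop using the parenthesized form the answer describes, with the old Selenium RC Python client; the host, port, browser string, start URL, and XPath expression are placeholders. XPath positions are 1-based.

```python
from selenium import selenium  # the old Selenium RC client the question uses

sel = selenium("localhost", 4444, "*firefox", "http://example.com/")  # placeholder setup
sel.start()

xpath = "//div[@class='item']"            # placeholder expression
count = int(sel.get_xpath_count(xpath))

for i in range(1, count + 1):
    # Parenthesize the expression, then index into the resulting node list.
    print(sel.get_text("xpath=(%s)[%d]" % (xpath, i)))

sel.stop()
```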
0 | 0 | I would like to write a script to do an heavy network upload, in the background.
However, I would like it to pause when I am using my computer (either by detecting network activity or keyboard activity or that I am not idle).
What is the best way to detect that I am using the computer, on Python on Unix? | false | 5,914,506 | 0 | 0 | 0 | 0 | Stick a webcam on your computer that grabs an image every five seconds, then there's some python modules for image analysis that can check if you are still sitting in your seat.
Or get a microswitch wired into your chair, connect that to your PC serial port (or one of those modern USB ports) and read that from Python... | 0 | 2,328 | 1 | 8 | 2011-05-06T16:40:00.000 | python,networking,keyboard,python-idle | In Python on Unix, determine if I am using my computer? or idle? | 1 | 1 | 5 | 5,914,591 | 0 |
0 | 0 | I created a simple server and client using Python. When I run both on my computer it works fine. But if I run the server on my computer and try to open the client on another computer, it can't find my server.
Thanks for helping. | false | 5,926,773 | 0.197375 | 0 | 0 | 2 | Do you know the IP for your server computer? Just making sure you know that 127.0.0.1 (or localhost) no longer works in this setup.
Are your computers behind a NAT?
Can you ping from one computer to another?
Do you have a firewall? | 0 | 55 | 0 | 0 | 2011-05-08T10:03:00.000 | python,networking,network-programming | Python server problems | 1 | 1 | 2 | 5,926,779 | 0 |
0 | 0 | I have a question because I am fairly new to Python, socket programming, and signals. I have written a Python socket server that forks a new process to handle requests for every client that connects to a certain port. I have caught the Ctrl+C signal and close my database connections should the server get such a signal.
Since I am testing my server using the netcat command, my question is: if one uses Ctrl+C to terminate the client connection running in a bash window, can the server catch that SIGINT signal and act upon it?
Or is it that the client, being a totally different program (in my case the netcat command), is the only one that can receive the SIGINT signal?
Can the server receive a SIGINT should the client be keyboard-interrupted?
Thank you in advance for any information about this. | true | 5,926,908 | 1.2 | 0 | 0 | 2 | Signals exist in their own process: a signal raised in a client process won't be known about by the server process and vice versa. Your options are either to have one process tell the other that it is being terminated, or just wait for the other side to notice that the connection has been dropped (which, if you're using TCP/IP, will come from a failed socket operation on the given socket). | 0 | 1,001 | 0 | 1 | 2011-05-08T10:34:00.000 | python,linux,sockets,signals | python sockets, request.recv and signals | 1 | 1 | 1 | 5,926,941 | 0 |
0 | 0 | I'm working in python with os.path.splitext() and curious if it is possible to separate filenames from extensions with multiple "."? e.g. "foobar.aux.xml" using splitext. Filenames vary from [foobar, foobar.xml, foobar.aux.xml]. Is there a better way? | false | 5,930,036 | 0.066568 | 0 | 0 | 2 | From the help of the function:
Extension is everything from the last
dot to the end, ignoring leading dots.
So the answer is no, you can't do it with this function. | 1 | 18,511 | 0 | 18 | 2011-05-08T20:11:00.000 | python | Separating file extensions using python os.path module | 1 | 1 | 6 | 5,930,076 | 0 |
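For completeness, a small sketch of two stdlib ways to handle the multi-dot names from the question: splitting on the first dot yourself, or using pathlib.Path.suffixes (Python 3.4+) to see every extension. Neither comes from the answer above, which only explains splitext's behaviour.

```python
import os.path
from pathlib import Path

name = "foobar.aux.xml"

# os.path.splitext only peels off the last extension:
print(os.path.splitext(name))          # ('foobar.aux', '.xml')

# Split on the first dot to get the bare stem plus "everything else":
stem, _, exts = name.partition(".")
print(stem, exts)                      # foobar aux.xml

# pathlib exposes all suffixes at once:
print(Path(name).suffixes)             # ['.aux', '.xml']
print(Path(name).stem)                 # 'foobar.aux'
```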
0 | 0 | I am looking to create a simple graph showing two numbers over time for my personal Twitter. They are:
Number of followers per day
Number of mentions per day
From my research so far, the search API does not provide a date so I am not about to do a GROUP BY. The only way I can have access to dates is through the OAuth Api but that requires interaction from the end user which I am trying to avoid.
Can someone point me in the right direction in order to achieve this? Thanks. | false | 5,932,111 | 0 | 1 | 0 | 0 | The best way is to use a cron to record the data daily.
However, you can query the mentions using the search API with an until parameter, which should do the trick. | 0 | 438 | 0 | 1 | 2011-05-09T03:12:00.000 | python,twitter | Twitter API: Getting Data for Analytics | 1 | 2 | 2 | 7,361,262 | 0 |
0 | 0 | I am looking to create a simple graph showing 2 numbers of time for my personal twitter. They are:
Number of followers per day
Number of mentions per day
From my research so far, the search API does not provide a date so I am not about to do a GROUP BY. The only way I can have access to dates is through the OAuth Api but that requires interaction from the end user which I am trying to avoid.
Can someone point me in the right direction in order to achieve this? Thanks. | false | 5,932,111 | 0 | 1 | 0 | 0 | We can use the search API to fetch mentions, but there is a limit to it:
at a given point in time you can only fetch 200 mentions.
Does anyone know how to get the total mentions count? | 0 | 438 | 0 | 1 | 2011-05-09T03:12:00.000 | python,twitter | Twitter API: Getting Data for Analytics | 1 | 2 | 2 | 8,756,753 | 0 |
0 | 0 | I created a Python software install with setup.py. The software uses data files (XML files); when I install them via setup.py, the files are saved alongside the other files in /usr/lib/python2.7/site-packages/XYZ. But the permissions set on these XML files are rwx------, meaning only the superuser (root) can read them. I want to change the permissions of the XML files to rwxr-----, so the current user can also read them. How do I change the data files' permissions?
chmod 744 yourfilename | 0 | 5,700 | 0 | 10 | 2011-05-09T05:16:00.000 | python,setup.py | set file permissions in setup.py file | 1 | 1 | 4 | 5,932,879 | 0 |
0 | 0 | urllib.urlencode can encode a URL's parameters. There seems to be no equivalent function in mechanize.
So, do I have to use both urllib and mechanize, given that I only need urlencode?
Is there any function in mechanize that performs the same task as urllib.urlencode? | false | 5,940,520 | 0.291313 | 1 | 0 | 3 | Why would mechanize have it? It's already in urllib, which comes with Python. | 0 | 612 | 0 | 0 | 2011-05-09T17:45:00.000 | python,mechanize,urllib | which function of mechanize is equal with urllib.urlencode | 1 | 2 | 2 | 5,940,566 | 0 |
0 | 0 | urllib.urlencode can encode a URL's parameters. There seems to be no equivalent function in mechanize.
So, do I have to use both urllib and mechanize, given that I only need urlencode?
Is there any function in mechanize that performs the same task as urllib.urlencode? | false | 5,940,520 | 0 | 1 | 0 | 0 | mechanize actually uses urllib and urllib2 for most tasks that involve urls.
Since this functionality already exists in urllib/2 (as mentioned by Ignacio Vazquez-Abrams) there's no need for it to be implemented elsewhere. When coding you import all the libraries that have functionality you need to use. | 0 | 612 | 0 | 0 | 2011-05-09T17:45:00.000 | python,mechanize,urllib | which function of mechanize is equal with urllib.urlencode | 1 | 2 | 2 | 5,992,039 | 0 |
1 | 0 | I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it to work at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even View State, etc..). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: You enter your username/pass and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there's a few onClick events on said Login button (most of which are just for aesthetics), but one in question handles validation. It does some rudimentary checks before sending it off to the server-side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain as a POST with form data containing your username and password, among other things. Then, there is some sort of URL rewrite or redirect that takes you to a content page at url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out? | false | 5,973,245 | 0.099668 | 1 | 0 | 1 | When scraping a web application, I use either:
1) WireShark ... or...
2) A logging proxy server (that logs headers as well as payload)
I then compare what the real application does (in this case, how your browser interacts with the site) with the scraper's logs. Working through the differences will bring you to a working solution. | 0 | 1,706 | 0 | 2 | 2011-05-12T04:17:00.000 | asp.net,python,asp.net-ajax,screen-scraping,urllib2 | Scraping ASP.NET with Python and urllib2 | 1 | 2 | 2 | 5,974,002 | 0 |
1 | 0 | I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it to work at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even View State, etc..). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: You enter your username/pass and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there's a few onClick events on said Login button (most of which are just for aesthetics), but one in question handles validation. It does some rudimentary checks before sending it off to the server-side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain as a POST with form data containing your username and password, among other things. Then, there is some sort of URL rewrite or redirect that takes you to a content page at url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out? | true | 5,973,245 | 1.2 | 1 | 0 | 2 | For anyone else that might be in a similar predicament in the future:
I'd just like to note that I've had a lot of success with a Greasemonkey user script in Chrome to do all of my scraping and automation. I found it to be a lot easier than Python + urllib2 (at least for this particular case). The user scripts are written in 100% Javascript. | 0 | 1,706 | 0 | 2 | 2011-05-12T04:17:00.000 | asp.net,python,asp.net-ajax,screen-scraping,urllib2 | Scraping ASP.NET with Python and urllib2 | 1 | 2 | 2 | 6,035,498 | 0 |
0 | 0 | I want to write a program that sends an e-mail to one or more specified recipients when a certain event occurs. For this I need the user to write the parameters for the mail server into a config. Possible values are for example: serveradress, ports, ssl(true/false) and a list of desired recipients.
What's the most user-friendly, best-practice way to do this?
I could of course use a Python file with the correct parameters that the user has to fill out, but I wouldn't consider this user-friendly. I also read about the 'config' module in Python, but it seems to me that it's made for creating config files on its own, not for having users fill the files out themselves. | false | 5,980,101 | 0.099668 | 1 | 0 | 2 | It doesn't matter how technically proficient your users are; you can count on them to screw up editing a text file. (They'll save it in the wrong place. They'll use MS Word to edit a text file. They'll make typos.) I suggest making a GUI that validates the input and creates the configuration file in the correct format and location. A simple GUI created in Tkinter would probably fit your needs. | 0 | 1,174 | 0 | 2 | 2011-05-12T15:05:00.000 | python,configuration | Userfriendly way of handling config files in python? | 1 | 1 | 4 | 5,980,640 | 0 |
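Whatever front end collects the values (a GUI as the answer suggests, or a hand-edited file), the standard library's configparser can read an INI-style file holding the mail settings from the question. A sketch with made-up section and option names.

```python
import configparser  # Python 2: import ConfigParser

config = configparser.ConfigParser()
config.read("mailer.ini")   # placeholder filename

# Made-up section/option names; adjust to whatever the GUI or user writes out.
server     = config.get("smtp", "server")
port       = config.getint("smtp", "port")
use_ssl    = config.getboolean("smtp", "ssl")
recipients = [r.strip() for r in config.get("smtp", "recipients").split(",")]

print(server, port, use_ssl, recipients)
```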
1 | 0 | given that these all have different values:
HTTP browser Accept-Language header
HTTP GET human-language parameter, e.g. hl=en or hl=fr
Cookie value for language choice
How should we decide which language to display pages in, based on these values? It's also conceivable to save the user's preferred language to the data layer as a fourth way to let agents and users decide the language.
Thanks in advance for answers and comments | true | 6,006,839 | 1.2 | 0 | 0 | 5 | If you have a saved preference somewhere, then that would be the first choice.
The cookie value is, presumably, what they chose last time they were around so that would be the first thing to check.
The hl parameter is something that Google has figured out and they probably know what they're doing so that seems like a sensible third choice.
Then we have the HTTP headers or a final default so check the accept language header next. And finally, have a default language in place just in case all else fails.
So, in order:
Saved preference.
Cookie.
hl parameter.
HTTP accept language header.
Built in default.
Ideally you'd backtrack up the list once you get a language from somewhere so that you'd have less work to do on the next request. For example, if you ended up getting the language from the accept language header, you'd want to: set hl (possibly redirecting), store it in the cookie, and save the preference in their user settings (if you have such a permanent store and a signed in person). | 0 | 140 | 0 | 3 | 2011-05-15T06:19:00.000 | python,django,google-app-engine,internationalization,nlp | How to prioritize internationalization parameters | 1 | 1 | 1 | 6,007,173 | 0 |
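A sketch of the priority order from the answer as a plain function; the argument names are made up, and the Accept-Language parsing is reduced to taking the first language tag rather than honouring q-values.

```python
DEFAULT_LANG = "en"

def choose_language(saved_pref=None, cookie_lang=None, hl_param=None,
                    accept_language_header=None):
    """Pick a language in priority order: saved pref, cookie, hl, header, default."""
    if saved_pref:
        return saved_pref
    if cookie_lang:
        return cookie_lang
    if hl_param:
        return hl_param
    if accept_language_header:
        # Crude: take the first tag, ignoring q-values ("fr-CA,fr;q=0.9,en;q=0.8" -> "fr").
        first = accept_language_header.split(",")[0].strip()
        return first.split("-")[0].split(";")[0] or DEFAULT_LANG
    return DEFAULT_LANG

print(choose_language(hl_param="fr"))                              # fr
print(choose_language(accept_language_header="de-DE,de;q=0.9"))    # de
print(choose_language())                                           # en
```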
0 | 0 | I'm using gevent patched socket to connect to a streaming server and I'm using an adsl connection.
I don't control the server but in my tests, if I stop the server I can detect the disconnection by just checking if the result from recv is an empty string, but if I turn off my adsl modem recv never exits. If I just disconnect my computer's network cable it doesn't return an empty string either, but when I reconnect it, it returns everything the server sent in the meantime, so I'm guessing the router or modem is keeping the connection open for me and buffering the stream while my network cable is disconnected.
I tried setting socket.SO_RCVTIMEO to a few seconds but it didn't detect the disconnection; recv continues to "block" forever. This is gevent, so it only blocks the greenthread, but I need to detect this disconnection as soon as possible so I can try to reconnect. | false | 6,009,365 | 0.197375 | 0 | 0 | 2 | It's not detecting the disconnection because there wasn't any disconnection: the TCP "connection" is still alive and is supposed to be reliable. If, for example, you unplug your LAN cable and then re-plug it, the connection will still work.
If you really want to detect the disconnection ASAP, then I guess you should poll the OS network status every second (ifconfig/ipconfig), or use OS events, and do what you want when you detect a network disconnection. | 0 | 5,998 | 0 | 4 | 2011-05-15T15:35:00.000 | python,sockets,networking,gevent | How do I detect a socket disconnection? / How do I call socket.recv with a timeout? | 1 | 1 | 2 | 6,010,876 | 0 |
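For the second half of the question title, the direct way to put a timeout on recv is socket.settimeout() (which gevent's patched sockets honour) and catching socket.timeout; treating a long silence as a dead connection is an application-level decision, not something the answer above proposes. Host, port, and timeout values are placeholders.

```python
import socket

sock = socket.create_connection(("stream.example.com", 8000))  # placeholder endpoint
sock.settimeout(30)   # seconds of silence we are willing to tolerate

try:
    while True:
        data = sock.recv(4096)
        if not data:              # clean shutdown by the peer
            print("server closed the connection")
            break
        # ... handle the streamed data ...
except socket.timeout:
    print("no data for 30 s; assuming the link is dead and reconnecting")
    sock.close()
    # ... reconnect logic here ...
```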
1 | 0 | Occasionally I'm querying a server for JSON and receiving a 404 HTML page when the requested data is not available.
Thus, I must have a check to ensure that the JSON I'm expecting is actually JSON rather than HTML. I'm accomplishing this now by checking whether a string I expect to be in the HTML is contained in the response, but I think there has to be a better way to do this.
Also, check the content type header and HTTP status code. | 1 | 817 | 0 | 1 | 2011-05-15T22:08:00.000 | python,django,json,simplejson | How to validate JSON with simplejson | 1 | 1 | 2 | 6,011,614 | 0 |
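The most direct check with simplejson (or the stdlib json module) is to attempt the decode and catch the failure, alongside the status-code and Content-Type checks the answer suggests. A small sketch; the function and its arguments are made up for illustration.

```python
import simplejson as json   # the stdlib `json` module works the same way here

def parse_json_or_none(body, content_type="", status=200):
    """Return the decoded JSON, or None if the body is really an HTML error page."""
    if status != 200 or "html" in content_type.lower():
        return None
    try:
        return json.loads(body)
    except ValueError:   # JSONDecodeError is a ValueError subclass
        return None

print(parse_json_or_none('{"ok": true}', "application/json"))    # {'ok': True}
print(parse_json_or_none("<html>404</html>", "text/html", 404))  # None
```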
0 | 0 | From the python docs on urllib.urlopen(), talking about the file-like object the function returns on success:
(It is not a built-in file object, however, so it can’t be used at those few places where a true built-in file object is required.)
What are those few places where a true built-in file object is required?
NB: This is purely out of curiosity... no practical problem to be solved here. | false | 6,012,043 | 0 | 0 | 0 | 0 | For example, f.fileno() doesn't necessarily return a true OS-level file descriptor that you could use with os.read(). | 1 | 177 | 0 | 5 | 2011-05-15T23:48:00.000 | python,api,file,interface,standard-library | python: where is a true built-in file object required? | 1 | 2 | 3 | 6,012,172 | 0 |
0 | 0 | From the python docs on urllib.urlopen(), talking about the file-like object the function returns on success:
(It is not a built-in file object, however, so it can’t be used at those few places where a true built-in file object is required.)
What are those few places where a true built-in file object is required?
NB: This is purely out of curiosity... no practical problem to be solved here. | true | 6,012,043 | 1.2 | 0 | 0 | 3 | As the other answers have noted, there isn't really anywhere that specifically needs a file object, but there are interfaces that need real OS level file descriptors, which file-like objects like StringIO can't provide.
The os module has several methods that operate directly on file descriptors, as do the select and mmap modules. Some higher level modules rely on those under the hood, so may exhibit some limitations when working with file-like objects that don't support the fileno() method.
I'm not aware of any consistent documentation of these limitations, though (aside from the obvious one of APIs that accept numeric file descriptors rather than objects). It's more a matter of "try it and see if it works". If things don't work, then this is something to keep in the back of your mind to check as a possible culprit (especially if phrases like "no attribute named 'fileno'" or "invalid file descriptor" appear in any relevant error messages). | 1 | 177 | 0 | 5 | 2011-05-15T23:48:00.000 | python,api,file,interface,standard-library | python: where is a true built-in file object required? | 1 | 2 | 3 | 6,016,974 | 0 |
1 | 0 | SHORT: my Python code generates a web page with a table. I'm considering rewriting it to generate a JS file instead, one that holds the table contents in an array, and then letting the table be generated client-side. I am not sure of the pros and cons. Anyone care to offer their experience/insight? Are there other solutions?
LONG: the web page contains a single table and an embedded gmap. The table is a set of locations with several columns of location stats and also two navigation columns. One nav column consists of onclicks that recenter the embedded gmap to the lat,lon of the location. The other nav column consists of hrefs that open a new window with a gmap centered on the lat,lon.
Until recently, my Python code would do some number crunching on a list of files and then generate the HTML file. I also wrote a JS file that keeps the web page liquid upon browser window resizing.
Recently, I modified my python code so that it:
placed the lat,lon info in a custom attribute of the tr elements
no longer produced the nav column tds
and then wrote a js function that
loops through the trs onLoad
reads the lat,lon from the custom attribute
inserts the nav tds
fwiw, this reduced the size of the html file by 70% while increasing the js by 10%.
ok, so now I am debating if I should go all the way and write my python code to generate 2 files
an essentially abstract html file
a js file containing a js array of the locations and their stats | true | 6,022,119 | 1.2 | 0 | 0 | 4 | If your API can output a JSON document with your data, you gain significant flexibility and future-proofing. This might even be something your users will want to access directly for their own external consumption. And of course your JS code can easily generate a table from this data.
However nobody here can tell you whether this is worth doing or not, as that depends entirely on the scope of your project and opportunity cost of time spent re-architecting. | 0 | 77 | 0 | 0 | 2011-05-16T19:07:00.000 | javascript,python,html,dhtml | Advice: where to situate html table content: in JS or HTML | 1 | 1 | 1 | 6,022,254 | 0 |
0 | 0 | How should I call the webbrowser.get() function so that it opens the Chrome web browser? I'm running Ubuntu 11.04 and Python version 2.7.
Using webbrowser.get('chrome') yields an error. | false | 6,042,335 | 0.291313 | 0 | 0 | 3 | For Mac, do this:
webbrowser.get("open -a /Applications/Google\ Chrome.app %s").open("http://google.com") | 0 | 20,001 | 0 | 9 | 2011-05-18T09:12:00.000 | python,google-chrome,browser,ubuntu-11.04 | Calling Chrome web browser from the webbrowser.get() in Python | 1 | 1 | 2 | 22,594,464 | 0 |
1 | 0 | Is there a API or systematic way of stripping irrelevant parts of a web page while scraping it via Python? For instance, take this very page -- the only important part is the question and the answers, not the side bar column, header, etc. One can guess things like that, but is there any smart way of doing it? | false | 6,051,175 | 0.049958 | 0 | 0 | 1 | One approach is to compare the structure of multiple webpages that share the same template. In this case you would compare multiple SO questions. Then you can determine which content is static (useless) or dynamic (useful).
This field is known as wrapper induction. Unfortunately it is harder than it sounds! | 0 | 171 | 0 | 2 | 2011-05-18T21:19:00.000 | python,screen-scraping,web-scraping | Stripping irrelevant parts of a web page | 1 | 1 | 4 | 6,097,514 | 0 |
0 | 0 | Is there any method to connect to vpn through python and have that traffic of that application only route through the said VPN? | false | 6,058,818 | 0 | 0 | 0 | 0 | Python itself can't be used to route traffic; though you can use it to execute system commands to change your routing table. If you're on Linux, you need to use the ip command from the iproute2 and iptables from netfilter to change the routing behavior of specific traffic. | 0 | 2,644 | 1 | 4 | 2011-05-19T12:47:00.000 | python,vpn | routing a only specific traffic through a vpn connection via python | 1 | 2 | 2 | 6,058,965 | 0 |
0 | 0 | Is there any method to connect to vpn through python and have that traffic of that application only route through the said VPN? | false | 6,058,818 | 0 | 0 | 0 | 0 | Please, be more specific in your question. Generally, yes, it is possible.
If you use Python 2.7 or newer, you can use the source_address option for HTTP connections (see the reference for the libraries you use), passed as the tuple ('interface address', port).
If you use sockets in your app, call socket.bind(('interface address', port)) on the created socket before socket.connect(). | 0 | 2,644 | 0 | 4 | 2011-05-19T12:47:00.000 | python,vpn | routing a only specific traffic through a vpn connection via python | 1 | 2 | 2 | 41,357,041 | 0 |
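A small sketch of what that answer describes: binding the outgoing socket to the VPN interface's address so the traffic leaves over that interface. The addresses are placeholders, and whether this is sufficient still depends on the routing table, as the other answer points out.

```python
import socket

VPN_LOCAL_IP = "10.8.0.6"            # placeholder: address of the VPN tun/tap interface

# Option 1: plain socket, bind to the VPN interface before connecting.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((VPN_LOCAL_IP, 0))            # port 0 lets the OS pick an ephemeral port
s.connect(("example.com", 80))
s.close()

# Option 2: the convenience wrapper many libraries use internally (Python 2.7+).
s2 = socket.create_connection(("example.com", 80),
                              source_address=(VPN_LOCAL_IP, 0))
s2.close()
```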
0 | 0 | Is there any method in Selenium to tell whether a piece of text is a link or plain text? | false | 6,067,613 | 0 | 0 | 0 | 0 | If you are looking for something like isThisLink(String textToBeValidated), then no. Selenium doesn't have such a method. You will have to write custom code to validate that. | 0 | 872 | 0 | 1 | 2011-05-20T04:54:00.000 | python,selenium | Is there any method in selenium to get that text is a "Link" or simple text | 1 | 3 | 4 | 6,068,761 | 0 |
0 | 0 | Is there any method in Selenium to tell whether a piece of text is a link or plain text? | false | 6,067,613 | 0 | 0 | 0 | 0 | You can use is_element_present('link=' + text). This will return true if the text is a link and false otherwise. Caution: you need to escape special characters in the text. While testing this I was looking for a link that had a question mark in it and it was not found. | 0 | 872 | 0 | 1 | 2011-05-20T04:54:00.000 | python,selenium | Is there any method in selenium to get that text is a "Link" or simple text | 1 | 3 | 4 | 6,082,293 | 0 |
0 | 0 | Is there any method in Selenium to tell whether a piece of text is a link or plain text? | false | 6,067,613 | 0 | 0 | 0 | 0 | You can use the getText() method in the following way:
selenium.getText("ID or xpath");
Happy learning, happy sharing... | 0 | 872 | 0 | 1 | 2011-05-20T04:54:00.000 | python,selenium | Is there any method in selenium to get that text is a "Link" or simple text | 1 | 3 | 4 | 6,087,908 | 0 |
0 | 0 | I am looking for a VXML (VoiceXML) parser in Python. I need to use the parsed VXML tags and interact with FreeSWITCH to run an IVR. Can anyone point me to any kind of open-source VXML parser? | false | 6,071,395 | 0.197375 | 0 | 0 | 1 | Isn't any standard XML parser enough? You could just write a quick wrapper to get the elements you're interested in (although XPath will also do that for you).
To verify correctness, the VXML schema / definition should be enough (as long as your chosen XML parser supports it)
If an XML parser is too low-level, let us know what you expect from that kind of library. | 1 | 1,073 | 0 | 0 | 2011-05-20T11:41:00.000 | python,parsing,ivr,vxml,freeswitch | I am looking for a vxml (voicexml) parser in python language | 1 | 1 | 1 | 6,071,833 | 0 |
0 | 0 | It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious: at what order of magnitude of data-structure sizes, number of stored pages, indexing requirements, etc. might the CPU actually become the bottleneck?
For example, an application might want to calculate some probabilities based on the links found on a page in order to decide which page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step), where N is the number of pages I want to download in one round of crawling. I have to sort and keep track of these probabilities, and I have to keep track of a list of size O(N) that will eventually be dumped to disk and into the index of a search engine. Is it not possible (assuming one machine) that N grows large enough, and that storing the pages and manipulating the links gets expensive enough, to compete with the IO response? | false | 6,079,020 | 0 | 1 | 0 | 0 | If you're using Tomcat, search for "Crawler Session Manager Valve"
0 | 0 | It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious as to what order of magnitude estimates of sizes relevant data structures, number of stored pages, indexing requirements etc that might actually make CPU the bottleneck?
For example an application might want to calculate some probabilities based on the links found on a page in order to decide what page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step)...where N is the number of pages I want to download in one round of crawling.I have to sort and keep track of these probabilities and i have to keep track of a list of O(N) that will eventually be dumped into disk and the index of a search engine. Is it not possible (assuming one machine) that N grows large enough and that storing the pages and manipulating the links gets expensive enough to compete with the IO response? | true | 6,079,020 | 1.2 | 1 | 0 | 2 | Only when you are doing extensive processing on each page. eg if you are running some sort of AI to try to guess the semantics of the page.
Even if your crawler is running on a really fast connection, there is still overhead creating connections, and you may also be limited by the bandwidth of the target machines | 0 | 210 | 0 | 0 | 2011-05-21T01:26:00.000 | java,c++,python,performance,web-crawler | In what scenarios might a web crawler be CPU limited as opposed to IO limited? | 1 | 4 | 4 | 6,079,060 | 0 |
0 | 0 | It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious as to what order of magnitude estimates of sizes relevant data structures, number of stored pages, indexing requirements etc that might actually make CPU the bottleneck?
For example an application might want to calculate some probabilities based on the links found on a page in order to decide what page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step)...where N is the number of pages I want to download in one round of crawling.I have to sort and keep track of these probabilities and i have to keep track of a list of O(N) that will eventually be dumped into disk and the index of a search engine. Is it not possible (assuming one machine) that N grows large enough and that storing the pages and manipulating the links gets expensive enough to compete with the IO response? | false | 6,079,020 | 0.049958 | 1 | 0 | 1 | If the page contains pictures and you are trying to do face recognition on the pictures (ie to form a map of pages that have pictures of each person). That may be CPU bound because of the processing involved. | 0 | 210 | 0 | 0 | 2011-05-21T01:26:00.000 | java,c++,python,performance,web-crawler | In what scenarios might a web crawler be CPU limited as opposed to IO limited? | 1 | 4 | 4 | 6,080,282 | 0 |
0 | 0 | It seems like typical crawlers that just download a small number of pages or do very little processing to decide what pages to download are IO limited.
I am curious as to what order of magnitude estimates of sizes relevant data structures, number of stored pages, indexing requirements etc that might actually make CPU the bottleneck?
For example an application might want to calculate some probabilities based on the links found on a page in order to decide what page to crawl next. This function takes O(noOfLinks) and is evaluated N times (at each step)...where N is the number of pages I want to download in one round of crawling.I have to sort and keep track of these probabilities and i have to keep track of a list of O(N) that will eventually be dumped into disk and the index of a search engine. Is it not possible (assuming one machine) that N grows large enough and that storing the pages and manipulating the links gets expensive enough to compete with the IO response? | false | 6,079,020 | 0 | 1 | 0 | 0 | Not really. It takes I/O to download these additional links, and you're right back to I/O-limited again. | 0 | 210 | 0 | 0 | 2011-05-21T01:26:00.000 | java,c++,python,performance,web-crawler | In what scenarios might a web crawler be CPU limited as opposed to IO limited? | 1 | 4 | 4 | 6,079,035 | 0 |
0 | 0 | I was wondering: is there any tutorial out there that teaches how to push multiple files from the desktop to a PHP-based web server using a Python application?
Edited
I am going to be writing this myself, so I am wondering in general what the best method would be to push files from my desktop to the web server. Some responses mention FTP, so I will look into that (sadly there is no sFTP support, so plain old FTP only). My other option is to push the data and have PHP read what is sent to it, much like an ActionScript + Flash file uploader I made, which pushes the files to the server where they are fetched by PHP, and it goes on from that point. | false | 6,085,280 | 0.099668 | 1 | 0 | 1 | I think you're referring to an application made in PHP running on some website, in which case that's just normal HTTP stuff.
So just look at what name the file field has on the HTML form generated by that PHP script and then do a normal POST (with urllib2 or whatever you use). | 0 | 2,176 | 0 | 2 | 2011-05-22T00:27:00.000 | php,python,file-upload | how to upload files to PHP server with use of Python? | 1 | 1 | 2 | 6,085,309 | 0 |
1 | 0 | I have an application that has a "private" REST API; I use RESTful URLs when making Ajax calls from my own webpages. However, this is unsecure, and anyone could make those same calls if they knew the URL patterns.
What's the best (or standard) way to secure these calls? Is it worth looking at something like OAuth now if I intend to release an API in the future, or am I mixing two separate strategies together?
I am using Google App Engine for Python and Tipfy. | false | 6,091,784 | 0.26052 | 0 | 0 | 4 | Securing a javascript client is nearly impossible; at the server, you have no fool-proof way to differentiate between a human using a web browser and a well-crafted script.
SSL encrypts data over the wire but decrypts at the edges, so that's no help. It prevents man-in-the-middle attacks, but does nothing to verify the legitimacy of the original client.
OAuth is good for securing requests between two servers, but for a Javascript client, it doesn't really help: anyone reading your javascript code can find your consumer key/secret, and then they can forge signed requests.
Some things you can do to mitigate API scraping:
Generate short-lived session cookies when someone visits your website. Require a valid session cookie to invoke the REST API.
Generate short-lived request tokens and include them in your website HTML; require a valid request token inside each API request.
Require users of your website to log in (Google Accounts / OpenID); check auth cookie before handling API requests.
Rate-limit API requests. If you see too many requests from one client in a short time frame, block them. | 0 | 4,137 | 0 | 11 | 2011-05-23T00:36:00.000 | python,google-app-engine,rest,restful-authentication,tipfy | How do I secure REST calls I am making in-app? | 1 | 1 | 3 | 6,098,282 | 0 |
0 | 0 | As asked in the title, are open('...','w').write('...') and urllib.urlopen('..') asynchronous calls? | true | 6,100,934 | 1.2 | 0 | 0 | 7 | No. If you need them to be asynchronous then consider looking at event frameworks such as Twisted, glib, or Qt. | 1 | 428 | 0 | 6 | 2011-05-23T17:52:00.000 | python,file-io,urllib | Are python's file write() and urlopen() methods asynchronous? | 1 | 1 | 1 | 6,100,976 | 0 |
0 | 0 | We have two Python programs running on two Linux servers. Now we want to send messages between these Python programs. The best idea so far is to create a TCP/IP server and client architecture, but this seems like a very complicated way to do it. Is this really best practice for doing such a thing? | false | 6,121,180 | 0.197375 | 0 | 0 | 3 | This really depends on the kind of messaging you want and the roles of the two processes. If it's proper "client/server", I would probably create a SimpleHTTPServer and then use HTTP to communicate between the two. You can also use xmlrpclib and its client to talk between them. Manually creating a TCP server with your own custom protocol sounds like a bad idea to me. You might also consider using a message queue system to communicate between them. | 0 | 1,949 | 1 | 4 | 2011-05-25T07:49:00.000 | python,linux,python-3.x | Interprocess messaging between two Python programs | 1 | 1 | 3 | 6,121,275 | 0 |
0 | 0 | I'm using imaplib for my project because I need to access gmails accounts.
Fact: With gmail's labels each message may be on an arbitrary number of folders/boxes/labels.
The problem is that I would like to get every single label from every single message.
The first solution it cames to my mind is to use "All Mail" folder to get all messages and then, for each message, check if that message is in each one of the available folders.
However, I find this solution heavy and I was wondering if there's a better way to do this.
Thanks! | false | 6,123,164 | 0 | 1 | 0 | 0 | In IMAP you don't have labels; Gmail 'emulates' them over IMAP. You can look at the raw source of a message fetched over IMAP and check whether it has some custom header with the label. | 0 | 5,550 | 0 | 9 | 2011-05-25T10:41:00.000 | python,gmail,imaplib | Python/imaplib - How to get messages' labels? | 1 | 1 | 2 | 6,128,926 | 0
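Worth noting as an alternative to inspecting custom headers: Gmail's IMAP extensions expose labels directly through the X-GM-LABELS fetch item, so imaplib can ask for them without walking every folder. A sketch, with placeholder credentials and only a few messages fetched:
    import imaplib

    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login("user@gmail.com", "app-password")        # placeholders
    conn.select('"[Gmail]/All Mail"', readonly=True)

    typ, data = conn.search(None, "ALL")
    for num in data[0].split()[:10]:                    # just the first few messages
        typ, labels = conn.fetch(num, "(X-GM-LABELS)")
        print(num, labels[0])                           # raw response containing the label list

    conn.logout()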
0 | 0 | I've been thinking about how to implement mirror picking in Python. When I call the service API I get a response with an IP address. Now I want to take that address and check if it's close to me or not. If not, retry. I thought about pinging, as I have only ~1ms ping to the IP addresses hosted in the same data center, but much higher across the world. I looked up some examples of how to implement pinging in Python, but it seems fairly complicated and feels a bit hackish (like checking if the target IP is less than 10ms away). There may be better ways to tackle this issue that I may not be aware of.
What are your ideas? I can't download a test file each time to test speed. GeoIP or ping? Or something else? | false | 6,159,173 | 0.099668 | 1 | 0 | 1 | Call all the service API instances and use whichever responds quickest. | 0 | 881 | 0 | 2 | 2011-05-28T01:47:00.000 | python,ping,geo,geoip | How to choose closest/fastest mirror in Python? | 1 | 1 | 2 | 6,159,184 | 0
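A rough sketch of the 'use whichever responds quickest' idea, measured as TCP connect time so no test file has to be downloaded; the candidate addresses and the port are placeholders:
    import socket
    import time

    def connect_time(host, port=80, timeout=2.0):
        start = time.monotonic()
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return time.monotonic() - start
        except OSError:
            return float("inf")                         # unreachable mirrors lose automatically

    candidates = ["203.0.113.10", "198.51.100.20"]      # addresses returned by the service API
    best = min(candidates, key=connect_time)
    print("fastest mirror:", best)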
0 | 0 | I have a client and a server, both written in Python 2.7.
Let's say I wanted to make a multiplayer game server (which I don't at the moment, but I'm working towards it). I would need to keep the server (and other clients) up to date on my character's whereabouts, correct?
How would I do this with sockets? Send or request information only when it is needed (e.g. the character moves, or another player's character moves and the server sends the information to other clients), or would I keep a constant socket open to send data in real time about EVERYBODY's movement, regardless of whether they have actually done something since the last piece of data was sent or not.
I won't struggle coding it, I just need help with the concept of how I would actually do it. | false | 6,171,112 | 0.099668 | 0 | 0 | 1 | With TCP sockets it is more typical to leave the connections open, given the teardown & rebuild cost.
Eventually, when scaling, you will want to look into NewIO/RawIO.
If you do not, imagine that the game client might take a step and never get confirmation that it reached the server and the other players. | 0 | 83 | 0 | 0 | 2011-05-29T23:29:00.000 | python,sockets | How would I keep a constant piece of data updated through a socket in Python? | 1 | 1 | 2 | 6,171,227 | 0
0 | 0 | I have a troublesome problem: socket.error: [Errno 10048] Address already in use. Only one usage of each socket address (protocol/IP address/port) is normally permitted during automated tests using Selenium with Python. The interesting part is that it runs correctly on one machine (Linux) but generates this error on another machine (Windows XP).
I would add that the problem arose after reinstalling the system and setting everything up again - with the previous configuration everything worked properly.
Is there maybe something I forgot? Has anyone come up with such a problem before?
Does anyone have an idea of how to deal with this problem?
The current configuration / libraries:
python 2.7, numpy, selenium.py | false | 6,176,445 | 0 | 0 | 0 | 0 | There are several possibilities. If none of your tests can listen on some port (you don't say what port) then perhaps your Windows machine is running something on a port that you previously had open; this new service may have appeared during the reinstall. If, on the other hand, it's only a problem for some tests, or it's a little sporadic, then it may be either a programming issue (forgetting to close a socket in an early test which interferes with a later one) or a timing issue (the earlier test's socket isn't quite through closing before the new one tries to open up). Obviously there are different ways to address each of these problems, but I don't think we can help more than this without more details. | 0 | 15,849 | 0 | 2 | 2011-05-30T12:44:00.000 | python,selenium-rc,socketexception | problem: Socket error [Address already in use] in python/selenium | 1 | 2 | 4 | 6,176,514 | 0 |
0 | 0 | I have a troublesome problem: socket.error: [Errno 10048] Address already in use. Only one usage of each socket address (protocol/IP address/port) is normally permitted during automated tests using Selenium with Python. The interesting part is that it runs correctly on one machine (Linux) but generates this error on another machine (Windows XP).
I would add that the problem arose after reinstalling the system and setting everything up again - with the previous configuration everything worked properly.
Is there maybe something I forgot? Has anyone come up with such a problem before?
Does anyone have an idea of how to deal with this problem?
The current configuration / libraries:
python 2.7, numpy, selenium.py | false | 6,176,445 | 0 | 0 | 0 | 0 | Maybe there is software on your Windows machine that already uses port 4444; can you try setting Selenium to another port and trying again? | 0 | 15,849 | 0 | 2 | 2011-05-30T12:44:00.000 | python,selenium-rc,socketexception | problem: Socket error [Address already in use] in python/selenium | 1 | 2 | 4 | 6,176,956 | 0
1 | 0 | I have to retrieve some text from a website called morningstar.com. To access that data I have to log in. Once I log in and provide the URL of the web page, I get the HTML text of a normal user (not logged in). As a result I am not able to access that information. Any solutions? | true | 6,215,808 | 1.2 | 0 | 0 | 3 | BeautifulSoup is for parsing HTML once you've already fetched it. You can fetch the HTML using any standard URL-fetching library. I prefer curl, but as you tagged your post, Python's built-in urllib2 also works well.
If you're saying that after logging in the response HTML is the same as for those who are not logged in, I'm going to guess that your login is failing for some reason. If you are using urllib2, are you making sure to store the cookie properly after your first login and then passing this cookie to urllib2 when you are sending the request for the data?
It would help if you posted the code you are using to make the two requests (the initial login, and the attempt to fetch the data). | 0 | 563 | 0 | 0 | 2011-06-02T14:21:00.000 | python,urllib2,beautifulsoup | How to extract text from a web page that requires logging in using python and beautiful soup? | 1 | 1 | 1 | 6,216,054 | 0 |
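A hedged sketch of the cookie flow that answer describes, written with the Python 3 equivalents of urllib2 (urllib.request plus http.cookiejar); the login URL and form field names are assumptions that would have to be read off the site's actual login form:
    import urllib.parse
    import urllib.request
    from http.cookiejar import CookieJar

    jar = CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

    # Step 1: log in; the session cookie is stored in `jar`.
    login_data = urllib.parse.urlencode(
        {"username": "me@example.com", "password": "secret"}   # assumed field names
    ).encode("utf-8")
    opener.open("https://www.morningstar.com/login", login_data)

    # Step 2: reuse the same opener (and therefore the same cookies) for the protected page.
    html = opener.open("https://www.morningstar.com/some/protected/page").read()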
0 | 0 | I've set up an Amazon EC2 server. I have a Python script that is supposed to download large amounts of data from the web onto the server. I can run the script from the terminal through ssh; however, very often I lose the ssh connection. When I lose the connection, the script stops.
Is there a method where I tell the script to run from terminal and when I disconnect, the script is still running on the server? | true | 6,232,564 | 1.2 | 1 | 0 | 40 | You have a few options.
You can add your script to cron to be run regularly.
You can run your script manually, and detach+background it using nohup.
You can run a tool such as GNU Screen, and detach your terminal and log out, only to continue where you left off later. I use this a lot.
For example:
Log in to your machine, run: screen.
Start your script and either just close your terminal or properly detach your session with: Ctrl+A, D, D.
Disconnect from your terminal.
Reconnect at some later time, and run screen -rD. You should see your stuff just as you left it.
You can also add your script to /etc/rc.d/ to be invoked on boot and always be running. | 0 | 15,157 | 1 | 23 | 2011-06-03T20:53:00.000 | python,amazon-ec2 | How to continuously run a Python script on an EC2 server? | 1 | 1 | 3 | 6,232,612 | 0
1 | 0 | I tried using mechanize to see the URL of the image, but it's a dynamic page generating a different image each time. I was wondering if there was any way to "capture" this image for viewing/saving.
Thanks! | true | 6,232,780 | 1.2 | 1 | 0 | 4 | The only way to save the image would be to make a single call to the CAPTCHA URL programmatically, save the result, and then present that saved result to the user. The whole point of CAPTCHA is that each request generates a unique/different response/image. | 0 | 540 | 0 | 0 | 2011-06-03T21:17:00.000 | python | Capturing CAPTCHA image with Python | 1 | 1 | 1 | 6,232,816 | 0
1 | 0 | I need to grab some data from other websites from within my Django website.
Now I am confused about whether I should use Python parsing libraries or web crawling libraries. Do search engine libraries also fall in the same category?
I want to know how big the difference between the two is, and if I want to use those functions inside my website, which should I use? | false | 6,236,794 | 0.066568 | 0 | 0 | 1 | An HTML parser will parse the page so you can collect the links present in it. You can add these links to a queue and visit those pages in turn. Combine these steps in a loop and you have made a basic crawler.
Crawling libraries are ready-to-use solutions that do the crawling for you. They provide more features, like detection of recursive links, cycles, etc. A lot of the features you would otherwise have to code have already been implemented in these libraries.
However, the first option is preferred if you have special requirements that the libraries do not satisfy. | 0 | 2,480 | 0 | 4 | 2011-06-04T12:41:00.000 | python,django,web-crawler | How much is the difference between html parsing and web crawling in python | 1 | 1 | 3 | 6,236,831 | 0
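To make the parse, collect links, queue, loop description concrete, here is a minimal sketch of such a crawler using only the standard library; the start URL and page limit are arbitrary assumptions, and a real crawler should also respect robots.txt and stay on one domain:
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=20):
        queue, seen = deque([start_url]), set()
        while queue and len(seen) < max_pages:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except OSError:
                continue
            parser = LinkCollector()
            parser.feed(html)
            queue.extend(urljoin(url, link) for link in parser.links)
        return seen

    print(crawl("https://example.com/"))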
0 | 0 | I have a method in my script that pulls a Twitter RSS feed, parses it with feedparser, wraps it in TwiML (Twilio-flavored XML) using the twilio module, and returns the resulting response in a CherryPy method via str(). This works fine in my development environment (Kubuntu 10.10); I have had mixed results on my server (Ubuntu Server 10.10 on Linode).
For the first few months, all was well. Then, the method described above began to fail with something like:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 259: ordinal not in range(128)
But, when I run the exact same code on the same feed, with the same Python version, on the same OS, on my development box, the code executes fine. However, I should note that even when it does work properly, some characters aren't output right. For example:
’
rather than
'
To solve this anomaly, I simply rebuilt my VPS from scratch, which worked for a few more months, and then the error came back.
The server automatically installs updated Ubuntu packages, but so does my development box. I can't think of anything that could cause this. Any help is appreciated. | true | 6,246,850 | 1.2 | 1 | 0 | 0 | A few reboots later (for unrelated reasons) and it's working again. How odd.... | 0 | 238 | 0 | 2 | 2011-06-06T00:22:00.000 | python,cherrypy,feedparser | What could cause a UnicodeEncodeError exception to creep into a working Python environment? | 1 | 1 | 2 | 6,279,095 | 0 |
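For what it's worth, the traceback and the ’ symptom point at the same character: U+2019, a curly apostrophe coming in from the feed. In Python 2, str() on a unicode object implicitly encodes with the ASCII codec and fails on it, and the UTF-8 bytes of that character rendered as Latin-1/CP1252 are exactly ’. A small Python 2 sketch of the failure and the usual explicit-encode workaround (this is a guess at the root cause, not something confirmed by the accepted answer):
    # Python 2
    text = u"It\u2019s a quote"          # a right single quote, common in feeds

    try:
        body = str(text)                 # implicit ASCII encode -> UnicodeEncodeError
    except UnicodeEncodeError as exc:
        print exc

    body = text.encode("utf-8")          # explicit encode avoids the crash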
1 | 0 | I need to do a lot of HTML parsing / scraping / search engine / crawling work.
There are many libraries currently, like Scrapy, Beautiful Soup, lxml, lxml2, requests, and pyquery.
Now I don't want to try each of these and then decide. Basically I want to settle on one, study it in detail, and then use it most often.
So which library should I go for that can perform all the functions mentioned above? There may be different solutions for different problems, but I want one library that can do everything, even if it takes more time to code, as long as it is possible.
Is it possible to do indexing in lxml? Is PyQuery the same as lxml or is it different? | true | 6,248,424 | 1.2 | 0 | 0 | 1 | I'm using Beautiful Soup and am very happy with it. So far it has answered all my scraping needs. Two main benefits:
It's pretty good at handling non-perfect HTML. Since browsers are quite lax, many HTML documents aren't 100% well-formed
In addition to high-level access APIs, it has low-level APIs which make it extendible if some specific scraping need isn't directly provided | 0 | 162 | 0 | 1 | 2011-06-06T06:21:00.000 | python,parsing,search,web-crawler | If i have to choose only one html scraping library for python, which should i choose | 1 | 2 | 2 | 6,248,603 | 0 |
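For a feel of the Beautiful Soup workflow this answer recommends, a minimal sketch using the bs4 package; the URL is a placeholder:
    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    html = urlopen("https://example.com/").read()
    soup = BeautifulSoup(html, "html.parser")       # tolerant of imperfect HTML

    print(soup.title.string)                        # text of the <title> tag
    for a in soup.find_all("a", href=True):         # every link that has an href
        print(a["href"], a.get_text(strip=True))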
1 | 0 | I need to do a lot of HTML parsing / scraping / search engine / crawling work.
There are many libraries currently, like Scrapy, Beautiful Soup, lxml, lxml2, requests, and pyquery.
Now I don't want to try each of these and then decide. Basically I want to settle on one, study it in detail, and then use it most often.
So which library should I go for that can perform all the functions mentioned above? There may be different solutions for different problems, but I want one library that can do everything, even if it takes more time to code, as long as it is possible.
Is it possible to do indexing in lxml? Is PyQuery the same as lxml or is it different? | false | 6,248,424 | 0.099668 | 0 | 0 | 1 | Since lots of HTML documents are not well-formed but rather a bunch of tags (sometimes not even properly nested), you probably want to go with BeautifulSoup instead of one of the XML-based parsers. | 0 | 162 | 0 | 1 | 2011-06-06T06:21:00.000 | python,parsing,search,web-crawler | If i have to choose only one html scraping library for python, which should i choose | 1 | 2 | 2 | 6,248,605 | 0
0 | 0 | I want to find whether two web pages are similar or not. Can someone suggest whether Python NLTK with WordNet similarity functions would be helpful, and how? What is the best similarity function to be used in this case? | false | 6,252,236 | 0.099668 | 0 | 0 | 1 | Consider implementing SpotSigs. | 0 | 6,475 | 0 | 7 | 2011-06-06T12:47:00.000 | python,nlp,nltk,wordnet | using python nltk to find similarity between two web pages? | 1 | 1 | 2 | 6,254,429 | 0
1 | 0 | I am using Beautiful Soup for parsing web pages.
Are there any functions in BS which I can use in making a search engine or crawling a website to index it in a database? | true | 6,260,277 | 1.2 | 0 | 0 | 1 | No, BeautifulSoup is not a search engine. It is also not a Swiss Army knife, nor can it make you a sandwich. You will need a database (preferably one that's optimized for search, like Sphinx or Lucene) to do that. | 0 | 262 | 0 | 0 | 2011-06-07T03:17:00.000 | python,linux,search,beautifulsoup | Is it possible to code search engine in beautiful Soup | 1 | 1 | 1 | 6,260,471 | 0
0 | 0 | Now I use lxml to parse HTML in Python. But I haven't found any API to get the font information of a text node. Is there any library to do that?
Thanks very much! | false | 6,273,635 | 0 | 1 | 0 | 0 | You can't get this information from the text nodes in the HTML, because it isn't there. | 0 | 203 | 0 | 0 | 2011-06-08T02:38:00.000 | python,html | How to get font of a HTML node with python? | 1 | 1 | 1 | 6,274,197 | 0 |
1 | 0 | I am trying to use the Facebook Graph API to publish a swf file on my wall, and was wondering if there is any way I can control the height of the swf file. It looks like Facebook sets the height to 259px automatically. Any help would be really appreciated!
Thanks. | true | 6,286,449 | 1.2 | 0 | 0 | 0 | The answer was to use the still not completely deprecated REST api, which has that functionality. | 0 | 570 | 0 | 2 | 2011-06-08T23:23:00.000 | php,python,facebook,facebook-graph-api | Controlling size of swf file posted to wall using graph api | 1 | 2 | 2 | 6,337,169 | 0 |
1 | 0 | I am trying to use the Facebook Graph API to publish a swf file on my wall, and was wondering if there is any way I can control the height of the swf file. It looks like Facebook sets the height to 259px automatically. Any help would be really appreciated!
Thanks. | false | 6,286,449 | 0.099668 | 0 | 0 | 1 | Facebook will automatically set the size to width:398px;height:259px.
The movie will be stretched to accommodate this.
If your movie is a different size, the best thing to do is to make sure it is at least the same aspect ratio. | 0 | 570 | 0 | 2 | 2011-06-08T23:23:00.000 | php,python,facebook,facebook-graph-api | Controlling size of swf file posted to wall using graph api | 1 | 2 | 2 | 6,292,361 | 0 |
1 | 0 | I'm a Python programmer specializing in web scraping, and I had to ask this question as I found nothing relevant.
I want to know which popular, well-documented frameworks are available for Python for scraping pure JavaScript-based sites. Currently I know Mechanize and Beautiful Soup, but they do not interact with JavaScript, so I'm looking for something different. I would prefer something that would be as elegant and simple as mechanize.
I've done a bit of research and so far I've heard about Selenium, Selenium 2 and Windmill.
Right now I'm trying to choose among one these three and I do not know of any others.
So can anyone point out the features of these frameworks and what makes them different? I heard that Selenium uses a separate server to do all its tasks and it seems to be feature-rich. Also, what is the core difference between Selenium and Selenium 2? Please enlighten me if I'm wrong, and if you know of any other frameworks do mention their features and other details.
Thanks. | false | 6,321,696 | 0 | 0 | 0 | 0 | Before using tools like Selenium that are designed for front end testing and not for scraping, you should have a look at where the data on the site comes from. Find out what XHR requests are made, what parameters they take and what the result is.
For example the site you mentioned in your comment does a POST request with lots of parameters in JavaScript and displays the result. You probably only need to use the result of this POST request to get your data. | 0 | 1,041 | 0 | 2 | 2011-06-12T11:32:00.000 | python,selenium,web-scraping,selenium-webdriver,windmill | Choosing a Python webscraping framework for handling pure Javascript based sites | 1 | 1 | 1 | 6,322,502 | 0 |
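A hedged sketch of that approach: find the XHR endpoint in the browser's network tab and replay it directly. The endpoint URL and parameter names below are placeholders, and a real site may additionally require cookies or other headers:
    import json
    import urllib.parse
    import urllib.request

    data = urllib.parse.urlencode({"query": "something", "page": 1}).encode("utf-8")
    req = urllib.request.Request(
        "https://example.com/ajax/search",                  # endpoint seen in the network tab
        data=data,
        headers={"X-Requested-With": "XMLHttpRequest"},     # some sites check for this header
    )
    payload = json.loads(urllib.request.urlopen(req, timeout=10).read().decode("utf-8"))
    print(payload)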
0 | 0 | I'm writing a python script that scrapes a website, where the website uses OpenID auth to identify me via google.
Is there a python library that will handle do this for me, or do I need to find out and replicate the steps that my browser already does?
Otherwise, is there some standard way of doing this in some other language? | true | 6,330,335 | 1.2 | 0 | 0 | 1 | From the client's perspective, an OpenID login is very similar to any other web-based login. There isn't a defined protocol for the client; it is an ordinary web session that varies based on your OpenID provider. For this reason, I doubt that any such libraries exist. You will probably have to code it yourself. | 0 | 698 | 0 | 1 | 2011-06-13T12:21:00.000 | python,authentication,openid,google-openid | Python - How to request pages from website that uses OpenID | 1 | 1 | 1 | 6,330,374 | 0 |
0 | 0 | I want to transfer files from, say, server A to server B directly. The script performing this operation resides on some other server, say C. How can this be achieved without saving files temporarily on server C or the local system? | false | 6,333,590 | 0.197375 | 0 | 0 | 1 | Create ssh key pairs for each server, use ssh-copy-id to copy public keys from server A to Server B, and from Server C to Server A.
All you have to do then is to tell your script to ssh to remote server A and then execute scp to copy files over to Server B.
Edit: You have to set up your ssh keys without a passphrase! (Or use ssh-agent on server C and server A.) | 0 | 529 | 0 | 0 | 2011-06-13T16:53:00.000 | python,paramiko | establish connection with multiple servers simultaneously using python paramiko package | 1 | 1 | 1 | 6,343,363 | 0
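A minimal sketch of driving this from the script on server C with paramiko: connect to server A and run scp there, so the file goes straight from A to B. Hostnames, the user, and the paths are placeholders, and the A-to-B key from the answer is assumed to be in place already:
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("server-a.example.com", username="deploy")    # uses the C-to-A key or agent

    # The copy itself runs on A, so the data never touches C.
    cmd = "scp /data/report.csv deploy@server-b.example.com:/data/"
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.channel.recv_exit_status())                      # 0 on success
    client.close()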
1 | 0 | Is there any way I can add a link to a flash file I post on my wall using the REST/Graph API, such that when a user clicks on the flash file playing, it takes them to my app?
Thanks. | true | 6,339,541 | 1.2 | 0 | 0 | 0 | I suppose so... just set a click event handler inside the Flash movie to redirect the user to wherever you want. | 0 | 262 | 0 | 1 | 2011-06-14T05:42:00.000 | python,flash,facebook,facebook-graph-api | Link within flash file posted on facebook wall possible? | 1 | 2 | 2 | 6,349,413 | 0 |
1 | 0 | Is there any way I can add a link to a flash file I post on my wall using the REST/Graph API, such that when a user clicks on the flash file playing, it takes them to my app?
Thanks. | false | 6,339,541 | 0.099668 | 0 | 0 | 1 | I don't know if you still require this, but here is the solution to what you are seeking:
Use the Feed dialog, and specify in the link parameter the URL of your app, and in the source parameter the URL of the flash file you want to post. | 0 | 262 | 0 | 1 | 2011-06-14T05:42:00.000 | python,flash,facebook,facebook-graph-api | Link within flash file posted on facebook wall possible? | 1 | 2 | 2 | 15,536,720 | 0
0 | 0 | Is recv in Python a blocking function or not? I learned C at university, and there we had blocking and non-blocking sockets. So I just want to ask whether in Python the recv function is a blocking function or not. | false | 6,340,739 | 0.099668 | 0 | 0 | 1 | The socket in Python is blocking by default, unless you change it. | 0 | 653 | 0 | 3 | 2011-06-14T08:07:00.000 | python,recv | Python tcp send receive functions | 1 | 1 | 2 | 6,340,763 | 0
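A short sketch of the two ways to change that default, socket.settimeout() and socket.setblocking(); the host and port are placeholders:
    import socket

    sock = socket.create_connection(("example.com", 80))

    sock.settimeout(5.0)          # recv() now raises socket.timeout after 5 s with no data
    # sock.setblocking(False)     # or: recv() raises BlockingIOError when nothing is buffered

    try:
        data = sock.recv(4096)    # with the defaults this call would block indefinitely
    except socket.timeout:
        data = b""
    sock.close()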
0 | 0 | I'm a Python beginner, and I'm curious how I can send a dictionary through TCP. | false | 6,341,823 | 0.244919 | 0 | 0 | 5 | Pickle is considered insecure for sending data structures across connections, as the object can never be reconstructed in a trustworthy way. This is why YAML, JSON, or any other format is considered preferable. | 0 | 22,945 | 0 | 9 | 2011-06-14T09:52:00.000 | python,dictionary,tcp | Python sending dictionary through TCP | 1 | 2 | 4 | 6,341,947 | 0
0 | 0 | I'm a Python beginner, and I'm curious how I can send a dictionary through TCP. | false | 6,341,823 | 0 | 0 | 0 | 0 | Yes, it might be insecure, but you can use HTTPS and send it through a POST method. For the GET method, consider encrypting the pickle before sending it. | 0 | 22,945 | 0 | 9 | 2011-06-14T09:52:00.000 | python,dictionary,tcp | Python sending dictionary through TCP | 1 | 2 | 4 | 46,536,775 | 0
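Following the JSON suggestion in the first answer above, a minimal sketch of one way to ship a dict over a TCP socket, newline-delimited so the receiver knows where the message ends; the host and port are placeholders:
    import json
    import socket

    payload = {"user": "alice", "score": 42}

    # Sender side: serialize, terminate with a newline, send.
    sock = socket.create_connection(("127.0.0.1", 9000))
    sock.sendall((json.dumps(payload) + "\n").encode("utf-8"))
    sock.close()

    # The receiver would read until it sees the newline, e.g.:
    #   buf = b""
    #   while not buf.endswith(b"\n"):
    #       buf += conn.recv(4096)
    #   data = json.loads(buf.decode("utf-8"))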
0 | 0 | Is there a way to "skip" a line using a SAX XML parser?
I've got a non-conforming XML document which is a concatenation of valid XML documents, and thus the <?xml ...?> declaration appears for each document. Also note I need to use a SAX parser since the input documents are huge.
I tried crafting a "custom stream" class as feeder for the parser but quickly realized that SAX uses the read method and thus reads stuff in "byte arrays" thereby exploding the complexity of this project.
thanks!
UPDATE: I know there is a way around this using csplit, but I am after a Python-based solution if at all possible within reasonable limits.
Update 2: Maybe I should have said "skipping to the next document"; that would have made more sense. Anyhow, that's what I need: a way of parsing multiple documents from a single input stream. | false | 6,344,070 | 0 | 0 | 0 | 0 | When you are concatenating the documents together, just replace the beginning <? and ?> with <!-- and -->; this will comment out the XML declarations. | 0 | 494 | 0 | 3 | 2011-06-14T13:17:00.000 | python,xml,sax | python sax parser skipping over exception | 1 | 1 | 1 | 6,657,346 | 0
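Since the real need is parsing multiple documents from one stream, one workable sketch is to split the stream lazily on the <?xml declarations and hand each chunk to xml.sax separately. This reads line by line, so the whole concatenation never has to be loaded at once, though it does assume each declaration starts on its own line and that a single document fits in memory:
    import xml.sax
    from xml.sax.handler import ContentHandler

    def documents(stream):
        # Yield one complete XML document (as bytes) per <?xml ...?> declaration.
        chunk = []
        for line in stream:
            if line.lstrip().startswith(b"<?xml") and chunk:
                yield b"".join(chunk)
                chunk = []
            chunk.append(line)
        if chunk:
            yield b"".join(chunk)

    handler = ContentHandler()        # replace with your own handler subclass
    with open("concatenated.xml", "rb") as stream:
        for doc in documents(stream):
            xml.sax.parseString(doc, handler)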
0 | 0 | I'm trying to download multiple images concurrently using Python over the internet, and I've looked at several options, but none of them seem satisfactory.
I've considered pyCurl, but don't really understand the example code, and it seems to be way overkill for a task as simple as this.
urlgrabber seems to be a good choice, but the documentation says that the batch download feature is still in development.
I can't find anything in the documentation for urllib2.
Is there an option that actually works and is simpler to implement? Thanks. | true | 6,363,111 | 1.2 | 0 | 0 | 1 | It's not fancy, but you can use urllib.urlretrieve, and a pool of threads or processes running it.
Because they're waiting on network IO, you can get multiple threads running concurrently - stick the URLs and destination filenames in a Queue.Queue, and have each thread suck them up.
If you use multiprocessing, it's even easier - just create a Pool of processes, and call mypool.map with the function and iterable of arguments. There isn't a thread pool in the standard library, but you can get a third party module if you need to avoid launching separate processes. | 1 | 500 | 0 | 0 | 2011-06-15T19:24:00.000 | python,curl,concurrency,download,urllib2 | Downloading images from multiple websites concurrently using Python | 1 | 1 | 1 | 6,363,541 | 0 |
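The modern standard-library spelling of the same idea (threads blocked on network I/O downloading in parallel) is concurrent.futures; a sketch, with placeholder URLs:
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlretrieve

    urls = [
        "https://example.com/a.jpg",
        "https://example.com/b.jpg",
    ]

    def fetch(url):
        filename = url.rsplit("/", 1)[-1]
        urlretrieve(url, filename)      # blocks on network I/O, so threads overlap nicely
        return filename

    with ThreadPoolExecutor(max_workers=8) as pool:
        for name in pool.map(fetch, urls):
            print("saved", name)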
0 | 0 | I have a web page that uses a Python cgi script to store requested information for later retrieval by me. As an example, the web page has a text box that asks "What is your name?" When the user inputs his name and hits the submit button, the web page calls the Python cgi script which writes the user's name to mytextfile.txt on the web site. The problem is that if anyone goes to www.mydomain.com/mytextfile.txt, they can see all of the information written to the text file. Is there a solution to this? Or am I using the wrong tool? Thanks for your time. | true | 6,371,097 | 1.2 | 1 | 0 | 2 | Definitely the wrong tool. Multiple times.
Store the file outside of the document root.
Store a key to the file in the user's session.
Use a web framework.
Use WSGI. | 0 | 104 | 0 | 1 | 2011-06-16T11:30:00.000 | python,cgi | Python CGI how to save requested information securely? | 1 | 2 | 2 | 6,371,127 | 0 |
0 | 0 | I have a web page that uses a Python cgi script to store requested information for later retrieval by me. As an example, the web page has a text box that asks "What is your name?" When the user inputs his name and hits the submit button, the web page calls the Python cgi script which writes the user's name to mytextfile.txt on the web site. The problem is that if anyone goes to www.mydomain.com/mytextfile.txt, they can see all of the information written to the text file. Is there a solution to this? Or am I using the wrong tool? Thanks for your time. | false | 6,371,097 | 0 | 1 | 0 | 0 | Store it outside the document root. | 0 | 104 | 0 | 1 | 2011-06-16T11:30:00.000 | python,cgi | Python CGI how to save requested information securely? | 1 | 2 | 2 | 6,371,124 | 0 |
0 | 0 | I want to access my LinkedIn account from the command prompt and then send mails from my account using a command.
Also, I need the delivery reports of the mails.
Does anyone know how I can do that? | true | 6,373,779 | 1.2 | 1 | 0 | 4 | The Member to Member API will return a 2xx status code if your message is accepted by LinkedIn, and a 4xx status code if there's an error.
This means the message was put into the LinkedIn system, not that it has been opened, read, emailed, etc. You cannot get that via the API. | 0 | 1,130 | 0 | 0 | 2011-06-16T14:43:00.000 | python,command,linkedin | How to access linkedin from python command | 1 | 1 | 2 | 6,393,078 | 0 |
0 | 0 | I'm using the function dpkt.http.Request(), but sometimes the http flow is not a request.
Is there a quick way in python or dpkt to know if my request is GET or POST? | false | 6,401,232 | 0.099668 | 0 | 0 | 1 | Try parsing it as a HTTP request and catch dpkt.UnpackError so your program doesn't die if it's not a HTTP request.
If no exception was thrown, you can use .method of the Request object to get the method that was used. | 0 | 901 | 0 | 1 | 2011-06-19T07:53:00.000 | python,http | Best way to know if http request is GET or POST with DPKT? | 1 | 1 | 2 | 6,401,242 | 0 |
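A short sketch of that try/except pattern; the payload is assumed to come from wherever you are already iterating packets (for example a pcap loop handing you tcp.data):
    import dpkt

    def http_method(payload):
        # Returns 'GET', 'POST', ... or None if the bytes are not an HTTP request.
        try:
            request = dpkt.http.Request(payload)
        except (dpkt.UnpackError, dpkt.NeedData):
            return None
        return request.method

    print(http_method(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))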
0 | 0 | I am using selenium2 RC with the python client (selenium.py) and I need to get the version of the selenium on the server. (for example "2rc2","2rc3" etc.)
Is there any command I can send to the server to get its version? | false | 6,424,975 | 0.462117 | 0 | 0 | 5 | I know this was already answered, but it may help someone else. Another way to get your Selenium server's version is to right-click the selenium-server.jar and open it with any file archiver software such as 7-Zip or WinRAR. There you should find a file called VERSION.txt which will tell you your server's version. | 0 | 10,529 | 0 | 2 | 2011-06-21T12:08:00.000 | python,python-2.7,selenium-webdriver,selenium-rc | how to get the version of Selenium RC server | 1 | 1 | 2 | 7,946,600 | 0
0 | 0 | What is the preferred XML processor to use with Python?
Some choices are
minidom
PyXML
ElementTree
...
EDIT: I will need to be able to read in documents and manipulate them. I also require pretty-print functionality. | false | 6,432,826 | 0 | 0 | 0 | 0 | I can vouch for ElementTree - it's not a particularly complete XML implementation. Its main strength is the simplicity of use of the DOM tree objects. They behave like regular Pythonic objects (sequences and dicts) even though their actual implementation is somewhat more complex than appearances might suggest. Of all the XML frameworks ET is the one that you can use to accomplish basic tasks quickly.
On the other hand if your XML is mostly quite conventional stuff it can do a good job of reading and formatting pretty much any document you throw at it.
Annoying limitations (which appeared not to have been fixed four months ago) are its wonky support for XML namespaces and its lack of XPath.
In summary it's fine for basic uses. It will let you get up to speed very quickly. XML gurus will find it lacking. | 0 | 404 | 0 | 3 | 2011-06-21T22:34:00.000 | python,xml | Preferred Python XML processor | 1 | 1 | 4 | 6,433,113 | 0 |
0 | 0 | I am using Python's socket.py to create a connection to an FTP server. Now I want to reset the connection (send an RST flag) and listen to the response of the FTP server. (FYI: using socket.send('','R') does not work, as the OS sends a FIN flag instead of RST.) | false | 6,439,790 | 0.132549 | 0 | 0 | 2 | To send an RST on a TCP connection, set the SO_LINGER option to true with a zero timeout, then close the socket. This resets the connection. I have no idea how to do that in Python, or indeed whether you can even do it. | 0 | 25,931 | 0 | 25 | 2011-06-22T12:26:00.000 | python,sockets,networking,reset | Sending a reset in TCP/IP Socket connection | 1 | 1 | 3 | 6,440,026 | 0
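It can in fact be done from Python; a sketch of the SO_LINGER trick the answer describes (linger enabled with a zero timeout, then close, which sends an RST on most platforms; the host and port are placeholders):
    import socket
    import struct

    sock = socket.create_connection(("ftp.example.com", 21))

    # l_onoff = 1, l_linger = 0  ->  close() aborts the connection with an RST
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    sock.close()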
1 | 0 | I have a Python script that can run in command line/console which works with the Google Calendar Data API to do some tasks like retrieve calendars and modify or update events. I want to turn it into a web application/interface, but was not sure what would be the optimal or simplest way going about it.
Some precursor information: I tried rewriting the application as .html files that used Javascript and its respective Google Calendar Data API. I ran into a few problems with that and found that it wasn't working as well as my Python script. It could possibly be because I am using a business gmail domain but I'm not entirely sure. It does however work fine with Python, so I've decided to stick with that.
I've only worked with Python scripts (and I'd only call myself a beginner), so I'm not sure what would be an ideal or optimal solution. I'd preferably (re: if even possible) like to have the Python script act as a backend/web-service and interface with a website through JSON, or use a Python webframework to develop it. I hope I got the bulk of my terminology right, my apologies if anything is unclear.
Any advice is appreciated, thanks! | true | 6,448,814 | 1.2 | 0 | 0 | 3 | Go check out Google App Engine. There's a Python API. It works well with other Google services, like Calendar. Probably the fastest way to get where you want to go. | 0 | 947 | 0 | 2 | 2011-06-23T02:48:00.000 | python,web-services,web-applications | Python - from Script to Web App? | 1 | 1 | 3 | 6,448,835 | 0 |
0 | 0 | So I finally managed to get my script to log in to a website and download a file... however, in some instances I will have a URL like "http://www.test.com/index.php?act=Attach&type=post&id=3345". Firefox finds the filename OK... so I should be able to.
I am unable to find the "Content-Disposition" header via something like remotefile.info()['Content-Disposition']
Also, remotefile.geturl() returns the same url.
What am I missing? How do I get the actual filename? I would prefer using the built-in libraries. | true | 6,463,179 | 1.2 | 0 | 0 | 2 | It is the task of the remote server/Service to provide the content-disposition header.
There is nothing you can do unless the remote server/service is under your own control.. | 0 | 580 | 0 | 0 | 2011-06-24T04:00:00.000 | python,url,login,download,urllib2 | Using python (urllib) to download a file, how to get the real filename? | 1 | 1 | 1 | 6,463,190 | 0 |
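For completeness, a sketch of checking for the header and falling back to the last path segment of the final URL when the server does not send one (Python 3 urllib; the header parsing here is deliberately naive):
    import os
    import urllib.request
    from urllib.parse import urlparse

    response = urllib.request.urlopen(
        "http://www.test.com/index.php?act=Attach&type=post&id=3345"
    )

    disposition = response.headers.get("Content-Disposition", "")
    if "filename=" in disposition:
        filename = disposition.split("filename=")[-1].strip('" ')
    else:
        # Fall back to whatever the (possibly redirected) URL ends with.
        filename = os.path.basename(urlparse(response.geturl()).path) or "download.bin"

    print(filename)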
0 | 0 | I have to query ~10000 game servers over a UDP protocol every 15 minutes to check whether they are online. My code is working, but servers that are offline block threads, slowing progress down enormously. I use 20 threads; more will cause Python's UDP sockets to slow down to a crawl.
Currently I'm using a five second timeout before deciding that the server is offline. Can this limit be further reduced, or must it even be upped?
Please don't suggest using heartbeats, my server is an unofficial masterserver for a game which has to leech and doesn't receive most of the heartbeats. | true | 6,478,288 | 1.2 | 0 | 0 | 1 | You don't have to use synchronous communication (i.e. send packet, block and wait for results), especially not if you're using UDP. Just have one thread send out pings, and another one constantly receiving pongs on the same socket. If you have to do complicated processing with the results, you may want to use another one for that.
The challenge will be in the sending logic - you don't want to overwhelm your own internet connection, so I'd suggest a configurable rate of packets. Also, UDP packets can get lost in the network, so resend at least once or twice before giving up. As a timeout, I'd suggest about 2 seconds, because if a ping to game(i.e. highly delay-sensitive) server takes any longer than that, it's probably not usable anyway. | 0 | 593 | 0 | 4 | 2011-06-25T14:14:00.000 | python,udp,settimeout | What is a reasonable timeout for checking if a server is online? | 1 | 1 | 1 | 6,478,330 | 0 |
1 | 0 | I want to do some screen-scraping with Python 2.7, and I have no context for the differences between HTMLParser, SGMLParser, or Beautiful Soup.
Are these all trying to solve the same problem, or do they exist for different reasons? Which is simplest, which is most robust, and which (if any) is the default choice?
Also, please let me know if I have overlooked a significant option.
Edit: I should mention that I'm not particularly experienced in HTML parsing, and I'm particularly interested in which will get me moving the quickest, with the goal of parsing HTML on one particular site. | false | 6,494,199 | -1 | 0 | 0 | -4 | Well, software is like cars... different flavors, but they all drive!
Go with BeautifulSoup (4). | 0 | 6,435 | 0 | 16 | 2011-06-27T14:11:00.000 | python,html,parsing,beautifulsoup,html-parsing | Parsing HTML with Python 2.7 - HTMLParser, SGMLParser, or Beautiful Soup? | 1 | 2 | 4 | 6,494,288 | 0 |
1 | 0 | I want to do some screen-scraping with Python 2.7, and I have no context for the differences between HTMLParser, SGMLParser, or Beautiful Soup.
Are these all trying to solve the same problem, or do they exist for different reasons? Which is simplest, which is most robust, and which (if any) is the default choice?
Also, please let me know if I have overlooked a significant option.
Edit: I should mention that I'm not particularly experienced in HTML parsing, and I'm particularly interested in which will get me moving the quickest, with the goal of parsing HTML on one particular site. | false | 6,494,199 | 1 | 0 | 0 | 6 | BeautifulSoup in particular is for dirty HTML as found in the wild. It will parse any old thing, but is slow.
A very popular choice these days is lxml.html, which is fast, and can use BeautifulSoup if needed. | 0 | 6,435 | 0 | 16 | 2011-06-27T14:11:00.000 | python,html,parsing,beautifulsoup,html-parsing | Parsing HTML with Python 2.7 - HTMLParser, SGMLParser, or Beautiful Soup? | 1 | 2 | 4 | 6,494,502 | 0 |
1 | 0 | I am running an automated test to test a webform for my company. They have just installed a zipcode service which automatically adds the Street and City/Region when you fill in the Address and housenumber.
This autofill appears when you deselect the last form element (e.g. that of the housenumber).
This is the order of the fields I'm using;
form:zipcode
form:housenumber
form:addition (optional)
form:street (gets filled in by service after zipcode and housenumber are provided)
form:city (the other autofill field)
When you fill this form out manually, the address appears as soon as you click or tab into the addition field (as it is optional), but when it's done automatically it doesn't work.
I have tried several things like;
focus('form:addition') or
select('form:addition') but these don't work. I have tried
type('\t') to tab to the form field, and
type('form:addition', ' ') to type a space into the add. field and even
type('form:addition', "") to leave it empty. None of these attempts have worked so far.
Is there anyone that can help me with this? | false | 6,504,482 | 0 | 0 | 0 | 0 | Hi, I think I have a solution for this.
The problem is with generating the user interactions for the addition field.
Use these statements:
focus("form:addition");
keyPressNative("10") //this is ENTER command
it should work | 0 | 385 | 0 | 0 | 2011-06-28T09:39:00.000 | python,forms,selenium-rc,field,autofill | Selecting the next formfield with Selenium RC and python | 1 | 2 | 2 | 6,510,614 | 0 |
1 | 0 | I am running an automated test to test a webform for my company. They have just installed a zipcode service which automatically adds the Street and City/Region when you fill in the Address and housenumber.
This autofill appears when you deselect the last form element (e.g. that of the housenumber).
This is the order of the fields I'm using;
form:zipcode
form:housenumber
form:addition (optional)
form:street (gets filled in by service after zipcode and housenumber are provided)
form:city (the other autofill field)
When you fill this form out manually, the address appears as soon as you click or tab into the addition field (as it is optional), but when it's done automatically it doesn't work.
I have tried several things like;
focus('form:addition') or
select('form:addition') but these don't work. I have tried
type('\t') to tab to the form field, and
type('form:addition', ' ') to type a space into the add. field and even
type('form:addition', "") to leave it empty. None of these attempts have worked so far.
Is there anyone that can help me with this? | false | 6,504,482 | 0 | 0 | 0 | 0 | Yesterday I found out that the zipcode service uses an Ajax call to retrieve the information. This call is executed when the zipcode and housenumber fields are both 'out of focus' (i.e. deselected).
The right statement I found to use this to my advantage is this:
selenium.fireEvent('form:number', 'blur') which deselects the last field where data was entered. | 0 | 385 | 0 | 0 | 2011-06-28T09:39:00.000 | python,forms,selenium-rc,field,autofill | Selecting the next formfield with Selenium RC and python | 1 | 2 | 2 | 6,531,214 | 0 |
1 | 0 | I'm working through a Selenium test where I want to assert a particular HTML node is an exact match as far as what attributes are present and their values (order is unimportant) and also that no other attributes are present. For example given the following fragment:
<input name="test" value="something"/>
I am trying to come up with a good way of asserting its presence in the HTML output, such that the following (arbitrary) examples would not match:
<input name="test" value="something" onlick="doSomething()"/>
<input name="test" value="something" maxlength="75"/>
<input name="test" value="something" extraneous="a" unwanted="b"/>
I believe I can write an XPath statement as follows to find all of these, for example:
//input[@value='something' and @name='test']
But I haven't figured out how to write it in such a way that it excludes non-exact matches in a generalized fashion. Note, it doesn't have to be an XPath solution, but that struck me as the most likely elegant possibility. | false | 6,506,372 | 0 | 0 | 0 | 0 | There is no way to exclude unexpected attributes with XPath.
So you must find a safer way to locate elements you want. Things that you should consider:
In a form, each input should have a distinct name. The same is true for the form itself. So you can try //form[@name='...']/input[@name='...']
Add a class to the fields that you care about. Classes don't have to be mentioned in any stylesheet. In fact, I used this for form field validation by using classes like decimal number or alpha number. | 0 | 146 | 0 | 1 | 2011-06-28T12:28:00.000 | python,xpath,selenium,beautifulsoup,lxml | Finding Only HTML Nodes Whose Attributes Match Exactly | 1 | 1 | 3 | 6,506,453 | 0
0 | 0 | What is the recommended library for python to do everything that is Amazon EC2 related?
I came across boto and libcloud. Which one is easier to use? does libcloud offer the same functionality as boto? | true | 6,507,708 | 1.2 | 1 | 0 | 10 | The big advantage of libcloud is that it provides a unified interface to multiple providers, which is a big plus in my mind. You won't have to rewrite everything if you plan to migrate some instances to Rackspace later, or mix and match, etc. I haven't used it extensively but it looks fairly complete as far as EC2 goes. In boto's favor it has support for nearly all of Amazon's web services, so if you plan to be Amazon-centric and use other services you'll probably want to use boto.
That said, try both packages and see which you prefer. | 0 | 933 | 0 | 9 | 2011-06-28T14:04:00.000 | python,amazon-ec2 | Python & Amazon EC2 -- Recommended Library? | 1 | 1 | 1 | 6,508,616 | 0 |
0 | 0 | To use OAuth with python-twitter I need to register a web app, website, etc. The program I have is for personal use, though. Since basic auth is now defunct and OAuth is not an option due to the requirements, is there a workaround to log in to Twitter using a script?
I can't imagine that Twitter would alienate everyone who does not have a website/web app from logging in to twitter from a script that is for personal use. | false | 6,511,633 | 0 | 1 | 0 | 0 | If you want to access resources at Twitter, even if they are your own, and even if it is just for a "personal script" -> you have to use oAuth. | 0 | 881 | 0 | 1 | 2011-06-28T18:53:00.000 | python,authentication,twitter,oauth | How to log in to twitter without Oauth for a script? | 1 | 1 | 2 | 6,533,687 | 0 |
0 | 0 | I am accessing image data on Javascript. Now I'd like to pass this to Python process through Selenium API in the most efficient possible manner.
Passing canvas data is easy with canvas.toDataURL() method, but the downside is that the image is encoded and decoded to PNG, adding substantial overhead to the process.
I was just wondering if I could pass the raw array data from Javascript to Python via Selenium, so that
Either passing data in the native format (unsigned integer data)
Convert raw pixel data to base64 encoding, in some kind of toDataURL() manner or simply doing the processing yourself in Javascript (hopefully JIT'ed loop)
Looks like canvasContext.getImageData(0, 0, w, h).data object type is Uint8ClampedArray. What would be the best way to convert this data to some format which can be easily passed through Selenium to Python?
Selenium 2.0 RC, any Firefox version can be used. | true | 6,514,812 | 1.2 | 0 | 0 | 1 | Since your communication between Selenium and the browser through getEval is string based, I think that there is no escape from base64 encoding the image data. sending raw binary data will probably not be possible.
You can probably devise your own string encoding scheme, but it'll probably be as effective as the built in methods. | 0 | 1,163 | 0 | 3 | 2011-06-29T01:19:00.000 | javascript,python,firefox,canvas,selenium | Firefox, Selenium, toDataURL, Uint8ClampedArray and Python | 1 | 1 | 1 | 6,520,796 | 0 |
0 | 0 | I checked the documentation for Python-Twitter and I couldn't find any methods for OAuthentication. There is a methods for Basic Auth, but I obviously can't use that any more.
If I get a separate module for Oauth can I still use methods in Python-Twitter that require Oauth or does the Oauth only support methods from the same module I authenticated in. | false | 6,515,477 | 0 | 1 | 0 | 0 | OAuth is a generic protocol; any well-behaved library that implements it should work with any site's API (this is the whole point of having a standard...) | 0 | 66 | 0 | 0 | 2011-06-29T03:33:00.000 | python,twitter,module | Do I need a specific Python module Oauthentication? | 1 | 1 | 2 | 6,515,489 | 0 |
0 | 0 | I've written a python script using selenium to imitate a browser logging in and buying some stuff from a website. Therefore, the python script contains log-in information along with payment information (checking account info, etc). If i configure my apache webserver to be able to execute python scripts, so that when a client presses a button it runs my purchasing script, is there anyway that the client could see the contents of the python script (thereby gaining access to sensitive login and payment info)?
I remember reading that if an error occurs, the script would show up in plain text in the browser? Should I prevent this by using try and except blocks or is there a better method I'm not aware of?
Thanks for all your help in advance. | true | 6,538,018 | 1.2 | 0 | 0 | 0 | It is usually a good idea to put such information in an external config file which can't be served by the webserver directly, and read this file in your script. In case of a configuration error the client might see your source code but not the sensitive information. | 0 | 189 | 0 | 0 | 2011-06-30T16:29:00.000 | python,security,apache,server-side | Security question relating to executing server-side script | 1 | 1 | 1 | 6,538,060 | 0
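A small sketch of the external-config-file idea: keep the credentials in a file that lives outside the document root (so Apache can never serve it) and read it at runtime; the path, section, and key names are assumptions:
    import configparser

    config = configparser.ConfigParser()
    # Outside the web root and readable only by the account the script runs as,
    # e.g. chmod 600 /etc/myapp/secrets.ini
    config.read("/etc/myapp/secrets.ini")

    login_user = config["site"]["username"]
    login_password = config["site"]["password"]
    account_number = config["payment"]["account_number"]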
0 | 0 | I need to make a few changes to existing XML files, while preserving formatting and comments - everything except the minor changes I do should be untouched. I've tried xml.etree and lxml.etree with no success.
The XML is generated by my IDE, but its editor is lacking in functionality, so I have to make a few manual changes. I want to keep the formatting so the diffs are pretty and not polluting my history.
With the multitude of python XML libraries I thought I'd ask here if anyone has done something similar. | false | 6,539,891 | -0.099668 | 0 | 0 | -1 | How many and what kind of changes do you need to make? It sounds to me like you may be better served simply using a standalone XML editor (just Google for them, there are lots). Frankly, I'm kind of surprised your IDE doesn't have adequate search-and-replace for your needs. (Most of the ones I've seen even have regex facilities for this.) If you really do need to write a program to make the XML changes, and you don't want to mess up the formatting and comments, your best bet may be to open the XML as a regular text file in Python, and use its regular expression library (import re) to do the search-and-replace. | 0 | 539 | 0 | 7 | 2011-06-30T19:13:00.000 | python,xml,dom | Python library for editing XML preserving formatting and comments | 1 | 1 | 2 | 6,540,130 | 0 |
1 | 0 | I know there is lxml and BeautifulSoup, but that won't work for my project, because I don't know in advance what the HTML format of the site I am trying to scrape an article off of will be. Is there a python-type module similar to Readability that does a pretty good job at finding the content of an article and returning it? | false | 6,543,599 | 0 | 0 | 0 | 0 | Extracting the real content from a content-page can not be done automatically - at least not with the standard tools. You have to define/identify where the real content is stored (by specifying the related CSS ID or class in your own HTML extraction code). | 0 | 323 | 0 | 0 | 2011-07-01T04:31:00.000 | python,algorithm,screen-scraping,beautifulsoup,lxml | Python- is there a module that will automatically scrape the content of an article off a webpage? | 1 | 2 | 3 | 6,543,634 | 0 |