Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, 28 to 6.1k chars) | is_accepted (bool, 2 classes) | Q_Id (int64, 337 to 51.9M) | Score (float64, -1 to 1.2) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Users Score (int64, -8 to 412) | Answer (string, 14 to 7k chars) | Python Basics and Environment (int64, 0 to 1) | ViewCount (int64, 13 to 1.34M) | System Administration and DevOps (int64, 0 to 1) | Q_Score (int64, 0 to 1.53k) | CreationDate (string, 23 chars) | Tags (string, 6 to 90 chars) | Title (string, 15 to 149 chars) | Networking and APIs (int64, 1 to 1) | Available Count (int64, 1 to 12) | AnswerCount (int64, 1 to 28) | A_Id (int64, 635 to 72.5M) | GUI and Desktop Applications (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I need to build a client-server application; the client will be made with python-gtk,
and all procedures will run on the server side to free the client of this workload.
So I searched Google for client-server protocols and found that CORBA and RPC are closest to what I had in mind, BUT I also want to make this app ready to accept web and mobile clients, so I found REST and SOAP.
From all that reading I am left with this doubt: should I implement two different protocols, one for the GTK client (like RPC or CORBA) and another for web and mobile (REST or SOAP)?
Or can I use REST or SOAP for all of them? | true | 5,124,408 | 1.2 | 0 | 0 | 4 | I've implemented webservices using SOAP/XMLRPC (it was easy to support both, the framework I was using at the time made it pretty trivial) before; I had thought about using standard HTTP without the SOAP/XMLRPC layer (before I was aware that REST had a name :) but decided against it in the end because "I didn't want to write client-side code to handle the datastructures". (The Perl client also had easy SOAP/XMLRPC APIs.)
In the end, I regretted the decision I made: I could have written the code to handle the datastructures myself in an afternoon (or at the most a day) -- or if I had chosen to use JSON, probably two hours. But the burden of the SOAP/XMLRPC API and library dependencies lives on, years after I saved a few hours of developing, and will continue to be a burden for future development of the product.
So I recommend giving REST a really good try before going with an RPC framework. | 0 | 490 | 0 | 3 | 2011-02-26T01:17:00.000 | python,rest,soap,rpc,corba | What protocol to use in client-server app communication with python? | 1 | 2 | 2 | 5,124,630 | 0 |
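To give a feel for how little client code the REST-plus-JSON route recommended above actually needs, here is a minimal Python 2 sketch against a hypothetical endpoint (the URL and field names are made up):

```python
import json
import urllib2

# Plain HTTP GET against a hypothetical REST endpoint that returns JSON.
response = urllib2.urlopen('http://example.com/api/orders/42')
order = json.loads(response.read())   # handling the datastructure is a single call
print order['status']

# Sending data back is just as small: POST a JSON body to the same resource.
request = urllib2.Request('http://example.com/api/orders/42',
                          data=json.dumps({'status': 'shipped'}),
                          headers={'Content-Type': 'application/json'})
urllib2.urlopen(request)
```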
0 | 0 | I have a unicode string like '%C3%A7%C3%B6asd+fjkls%25asd' and I want to decode this string.
I used urllib.unquote_plus(str) but it gives the wrong result.
expected : çöasd+fjkls%asd
result : çöasd fjkls%asd
The percent-encoded UTF-8 characters (%C3%A7 and %C3%B6) are decoded incorrectly.
My python version is 2.7 under a linux distro.
What is the best way to get expected result? | false | 5,139,249 | 0 | 0 | 0 | 0 | '%C3%A7%C3%B6asd+fjkls%25asd' - this is not a unicode string.
This is a url-encoded string. Use urllib2.unquote() instead. | 0 | 15,321 | 0 | 12 | 2011-02-28T07:29:00.000 | url-encoding,python-unicode | python url unquote followed by unicode decode | 1 | 1 | 6 | 5,139,350 | 0 |
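A small sketch of the unquote-then-decode route the answer above points at, assuming the percent-escapes are UTF-8 bytes (Python 2):

```python
import urllib

encoded = '%C3%A7%C3%B6asd+fjkls%25asd'

# unquote() (unlike unquote_plus) leaves '+' alone, so it survives as a literal plus.
raw_bytes = urllib.unquote(encoded)   # byte string: '\xc3\xa7\xc3\xb6asd+fjkls%asd'
text = raw_bytes.decode('utf-8')      # unicode string: u'\xe7\xf6asd+fjkls%asd'
print text                            # prints: çöasd+fjkls%asd
```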
0 | 0 | Are there libraries for email message parsing (particularly, from Gmail's IMAP server) in Python (except for the standard email library)?
Or if not, maybe there is some C++ library for this purpose (and I'll be able to use it through SWIG)? | false | 5,156,422 | 0 | 1 | 0 | 0 | This question does not make sense. You can access GMail either using IMAP or POP3.
And for parsing retrieved emails you use the 'email' module of Python. Python provides support for all this out-of-the-box "all batteries included". | 0 | 909 | 0 | 0 | 2011-03-01T15:17:00.000 | python,email | python: libraries for parsing email | 1 | 1 | 2 | 5,156,532 | 0 |
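To make the IMAP-plus-email suggestion above concrete, here is a minimal sketch that fetches and parses one message from Gmail with only the standard library (the credentials are placeholders):

```python
import imaplib
import email

conn = imaplib.IMAP4_SSL('imap.gmail.com')
conn.login('you@example.com', 'app-password')   # placeholder credentials
conn.select('INBOX')

# Grab the newest message and hand the raw bytes to the stdlib parser.
typ, data = conn.search(None, 'ALL')
latest_id = data[0].split()[-1]
typ, msg_data = conn.fetch(latest_id, '(RFC822)')
msg = email.message_from_string(msg_data[0][1])

print msg['From'], msg['Subject']
conn.logout()
```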
0 | 0 | urllib fetches data from URLs, right? Is there a Python library that can do the reverse of that and send data to URLs instead (for example, to a site you are managing)? And if so, is that library compatible with Apache?
thanks. | true | 5,159,351 | 1.2 | 0 | 0 | 6 | What does sending data to a URL mean? The usual way to do that is just via an HTTP POST, and urllib (and urllib2) handle that just fine. | 0 | 174 | 0 | 0 | 2011-03-01T19:30:00.000 | python,urllib2,urllib | python library that is the inverse of urllib? | 1 | 1 | 2 | 5,159,377 | 0 |
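A bare-bones sketch of the HTTP POST described in the answer above, using only urllib/urllib2; the target URL and form fields are hypothetical:

```python
import urllib
import urllib2

# Encoding a dict as form data and passing it as `data` turns the request into a POST.
form = urllib.urlencode({'title': 'Hello', 'body': 'Posted from Python'})
response = urllib2.urlopen('http://example.com/submit', data=form)
print response.read()
```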
0 | 0 | Say I have a paragraph in a site I am managing and I want a Python program to change the contents of that paragraph. Is this plausible with urllib? | false | 5,159,674 | 0 | 0 | 0 | 0 | If you have access to any server-side scripting language, it's easy. | 1 | 54 | 0 | 0 | 2011-03-01T20:02:00.000 | python | Filling out paragraph text via urllib? | 1 | 2 | 2 | 5,159,749 | 0 |
0 | 0 | Say I have a paragraph in a site I am managing and I want a python program to change the contents of that paragraph. Is this plausible with urllib? | true | 5,159,674 | 1.2 | 0 | 0 | 0 | Quite possibly; it depends on how the site is designed.
If the site is just a collection of static pages (ie .html files) then you would have to get a copy of the page, modify it, and upload the new version - most likely using sftp or WebDAV.
If the site is running a content management system (like Wordpress, Drupal, Joomla, etc) then it gets quite a bit simpler - your script can simply post new page content.
If it is static content maintained through a template system (ie Dreamweaver) then life gets quite a bit nastier again - because any changes you make will not be reflected in the template files, and will likely get overwritten and disappear the next time you update the site. | 1 | 54 | 0 | 0 | 2011-03-01T20:02:00.000 | python | Filling out paragraph text via urllib? | 1 | 2 | 2 | 5,160,286 | 0 |
1 | 0 | I'm running an android app on the emulator. This app tries to load a html file using the webview api.
I also have a simple http server running on the same computer under the directory where I want to serve the request using the following python command:
python -m SimpleHTTPServer 800
However, I couldn't access this link through either the app or the browser on the emulator:
http://localhost:800/demo.html
Please let me know if I'm missing something. | true | 5,185,016 | 1.2 | 0 | 0 | 16 | Use address 10.0.2.2 instead of localhost. | 0 | 3,711 | 0 | 6 | 2011-03-03T18:39:00.000 | python,android,android-emulator,webview | How can I let android emulator talk to the localhost? | 1 | 4 | 4 | 5,185,046 | 0 |
1 | 0 | I'm running an android app on the emulator. This app tries to load a html file using the webview api.
I also have a simple http server running on the same computer under the directory where I want to serve the request using the following python command:
python -m SimpleHTTPServer 800
However, I couldn't access this link through either the app or the browser on the emulator:
http://localhost:800/demo.html
Please let me know if I'm missing something. | false | 5,185,016 | 0 | 0 | 0 | 0 | Actually, localhost refers to the emulator device itself.
Use your system's IP to access the link. | 0 | 3,711 | 0 | 6 | 2011-03-03T18:39:00.000 | python,android,android-emulator,webview | How can I let android emulator talk to the localhost? | 1 | 4 | 4 | 5,185,075 | 0 |
1 | 0 | I'm running an android app on the emulator. This app tries to load a html file using the webview api.
I also have a simple http server running on the same computer under the directory where I want to serve the request using the following python command:
python -m SimpleHTTPServer 800
However, I couldn't access this link through either the app or the browser on the emulator:
http://localhost:800/demo.html
Please let me know if I'm missing something. | false | 5,185,016 | 0 | 0 | 0 | 0 | localhost is a short-cut to tell the "whatever" to talk to itself. So, you are telling the emulator to look for a web server running in the emulator.
Instead of trying to connect to localhost, look up the IP for your computer and use that instead. | 0 | 3,711 | 0 | 6 | 2011-03-03T18:39:00.000 | python,android,android-emulator,webview | How can I let android emulator talk to the localhost? | 1 | 4 | 4 | 5,185,051 | 0 |
1 | 0 | I'm running an android app on the emulator. This app tries to load a html file using the webview api.
I also have a simple http server running on the same computer under the directory where I want to serve the request using the following python command:
python -m SimpleHTTPServer 800
However, I couldn't access this link through either the app or the browser on the emulator:
http://localhost:800/demo.html
Please let me know if I'm missing something. | false | 5,185,016 | -0.099668 | 0 | 0 | -2 | The best solution is not to use the emulator at all. It's slow and full of bugs. Get your employer to buy a device or two. | 0 | 3,711 | 0 | 6 | 2011-03-03T18:39:00.000 | python,android,android-emulator,webview | How can I let android emulator talk to the localhost? | 1 | 4 | 4 | 5,186,087 | 0 |
0 | 0 | Hey I'm working on a python project using sockets. Basically I want to loop a connection to a host for user input. Here is what I'm trying:
while True:
    sock.connect((host, port))
    inputstring = " > "
    userInput = raw_input(inputstring)
    sock.send(userInput + '\r\n\r\n')
    recvdata = sock.recv(socksize)
    print(recvdata)
But when I loop the socket and try connecting to localhost on port 80 I get an error saying the transport endpoint is still connected or something like that. How can I eliminate that problem? | true | 5,201,388 | 1.2 | 0 | 0 | 2 | Call connect outside the while loop. You only need to connect once. | 0 | 2,005 | 0 | 0 | 2011-03-05T03:10:00.000 | python,sockets,loops,tcp,connection | Python keeping socket alive? | 1 | 2 | 3 | 5,201,409 | 0 |
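A corrected sketch of the loop from the question, with connect moved out of the loop as the accepted answer says (the host, port and socksize values are assumptions standing in for whatever the original script used):

```python
import socket

host, port, socksize = 'localhost', 80, 4096   # stand-ins for the values the question uses

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host, port))                     # connect exactly once, before the loop

while True:
    userInput = raw_input(" > ")
    sock.send(userInput + '\r\n\r\n')
    recvdata = sock.recv(socksize)
    print(recvdata)
```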
0 | 0 | Hey I'm working on a python project using sockets. Basically I want to loop a connection to a host for user input. Here is what I'm trying:
while True:
    sock.connect((host, port))
    inputstring = " > "
    userInput = raw_input(inputstring)
    sock.send(userInput + '\r\n\r\n')
    recvdata = sock.recv(socksize)
    print(recvdata)
But when I loop the socket and try connecting to localhost on port 80 I get an error saying the transport endpoint is still connected or something like that. How can I eliminate that problem? | false | 5,201,388 | 0.066568 | 0 | 0 | 1 | Put the sock.connect outside of your while True. | 0 | 2,005 | 0 | 0 | 2011-03-05T03:10:00.000 | python,sockets,loops,tcp,connection | Python keeping socket alive? | 1 | 2 | 3 | 5,201,407 | 0 |
0 | 0 | I have little working knowledge of python. I know that there is something called a Twitter search API, but I'm not really sure what I'm doing. I know what I need to do:
I need point data for a class. I thought I would just pull up a map of the world in a GIS application, select cities that have x population or larger, then export those selections to a new table. That table would have a key and city name.
Next I randomly select 100 of those cities. Then I perform a search for a certain term (in this case, Gaddafi) for each of those 100 cities. All I need to know is how many posts there were on a certain day (or over a few days, depending on the number of tweets).
I just have a feeling there is something that already exists that does this, and I'm having a hard time finding it. I've downloaded and installed python-twitter but have no idea how to get this search done. Anyone know where I can find or how I can make this tool? Any suggestions would really help. Thanks! | true | 5,209,003 | 1.2 | 1 | 0 | 0 | A tweet itself can come with a geo tag, but this is a new feature and the majority of tweets do not have it. So it is not possible to search for all tweets containing "Gaddafi" from a city given only the city name.
What you could do is the reverse: search for "Gaddafi" first (regardless of geo location) using the search API. Then, for each tweet, find the location of the poster (either through the RESTful API or with some sort of web scraping).
So basically you can classify the collected tweets according to the location of the poster.
I think only tweepy has access to both the Twitter search API and the RESTful API. | 0 | 268 | 0 | 0 | 2011-03-06T06:00:00.000 | python,api,twitter | Performing multiple searches of a term in Twitter | 1 | 1 | 1 | 5,209,588 | 0 |
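The answer mentions tweepy, but since the question already has python-twitter installed, here is a very rough sketch of the search-then-inspect-poster-location idea with that package; it assumes the GetSearch() call of older python-twitter releases and the free-text user.location field:

```python
import twitter

api = twitter.Api()                      # older releases allowed unauthenticated search
results = api.GetSearch(term='Gaddafi')  # assumed signature from older python-twitter

# Bucket the tweets by whatever free-text location each poster filled in.
counts = {}
for status in results:
    place = (status.user.location or 'unknown').lower()
    counts[place] = counts.get(place, 0) + 1

for place, n in sorted(counts.items()):
    print place, n
```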
0 | 0 | I would like to send email through a proxy.
My current implementation is as follows:
I connect to the smtp server with authentication. After I've successfully logged in, I send an email. It works fine but when I look at the email header I can see my host name. I would like to tunnel it through a proxy instead.
Any help will be highly appreciated. | false | 5,239,797 | 0 | 1 | 0 | 0 | This code worked for me, with two caveats:
1. The file must not be named email.py. Rename the file, for example to emailSend.py.
2. It is necessary to allow Google to accept logins from less secure apps (under the Google account's security settings). | 0 | 25,028 | 0 | 14 | 2011-03-08T23:51:00.000 | python,proxy,smtp,smtplib | Python smtplib proxy support | 1 | 1 | 8 | 44,179,874 | 0 |
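For the actual proxy part of the question (not covered by the answer above), one common approach is to wrap smtplib's sockets with the third-party SocksiPy/PySocks module; the calls below are from that module, so treat this as a sketch:

```python
import socks      # SocksiPy / PySocks, installed separately
import smtplib

# Route every socket smtplib opens through the SOCKS proxy at 127.0.0.1:1080.
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', 1080)
socks.wrapmodule(smtplib)

server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login('you@example.com', 'app-password')   # placeholder credentials
server.sendmail('you@example.com', ['friend@example.com'],
                'Subject: test\r\n\r\nhello through the proxy')
server.quit()
```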
1 | 0 | I have just started to learn Python and web2py. Because web2py development happens through its web interface, I am wondering how web2py can work with svn. If a team wants to build a website, how do they work together? How do they control revisions of the source code? | true | 5,240,646 | 1.2 | 0 | 0 | 5 | Yes, it works fine with svn, hg, whatever source control you need to use.
Sometimes people think that you have to code with web2py's admin interface, but that really is not the case, once you realize it can be edited with any of your regular tools, you will see that you don't have to treat it any differently when it comes to source control either.
If you use the source version of web2py, you'll have a single folder on disk that contains an entire web2py application server (that in turn contains your 'application' folders). Just check that whole folder into source control.
Now, on the machine that is running web2py, you can make changes either with web2py's web interface, or by just editing the python files directly with another editor (I use WingIDE for example). You'll have the normal svn update/modify/commit cycle at this point.
If multiple people are editing code using web2py's admin interface, all of their changes will be made on the machine running web2py... just periodically do a commit from that system and you are all set.
Using the admin interface to modify the source code is convenient, but for for bigger changes, each member of your team should have their own local copy of the svn branch. They make changes to their local files and commit them. Then from the server running web2py, just do an 'svn up' to get modifications from the rest of the team. | 0 | 742 | 0 | 1 | 2011-03-09T02:27:00.000 | python,web2py | Can web2py work with svn? | 1 | 1 | 1 | 5,240,964 | 0 |
0 | 0 | I'd like to periodically check whether my SOCKS server is working fine. In order to do that, I thought of pinging 8.8.8.8 (Google's DNS server) through the SOCKS server.
Is there other recommended way?
If it's optimal, how can I ping through SOCKS with python? | true | 5,274,934 | 1.2 | 0 | 0 | 13 | A SOCKS proxy provides a TCP proxy service (SOCKS 5 added UDP Support). You cannot perform an ICMP Echo "via" a SOCKS proxy service.
Even if you could, you would be testing ICMP Echo and Echo Reply, and not that your SOCKS server is running fine.
If you want to test that your SOCKS server is "running fine" I'd suggest that you open local listening port and then try to connect to it via the SOCKS service. | 0 | 18,202 | 1 | 6 | 2011-03-11T15:35:00.000 | python,socks,icmp | Use ping through SOCKS server? | 1 | 1 | 4 | 5,275,216 | 0 |
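A small sketch of the suggested check: open a local listening port and then try to reach it through the SOCKS service, here using the third-party SocksiPy/PySocks module (the setproxy call name differs between versions of that library):

```python
import socket
import socks   # SocksiPy / PySocks

# 1. Open a plain listening socket that we control.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 9999))
listener.listen(1)

# 2. Try to reach it *through* the SOCKS server under test.
s = socks.socksocket()
s.setproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', 1080)   # address of the SOCKS server
s.settimeout(5)
try:
    s.connect(('127.0.0.1', 9999))
    print "SOCKS server is forwarding connections"
except Exception, exc:
    print "SOCKS check failed:", exc
finally:
    s.close()
    listener.close()
```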
1 | 0 | I have written a module in python which performs some function.
I then created a Google Chrome extension which makes use of JSON and javascript.
Now when I click on the extension I want it to execute the python program which is stored on my hard disk and display the output on the browser again.
Is there a way in which I can do this?? | false | 5,327,485 | 0.132549 | 0 | 0 | 2 | Probably a late reply but a possible solution is to make your python script act as a server and let the browser plugin interact with it. | 1 | 17,271 | 0 | 4 | 2011-03-16T15:21:00.000 | javascript,python,google-chrome | How can I execute a Python script from Javascript? | 1 | 2 | 3 | 25,336,537 | 0 |
1 | 0 | I have written a module in python which performs some function.
I then created a Google Chrome extension which makes use of JSON and javascript.
Now when I click on the extension I want it to execute the python program which is stored on my hard disk and display the output on the browser again.
Is there a way in which I can do this?? | false | 5,327,485 | 0.066568 | 0 | 0 | 1 | Forgive me if I'm completely wrong here.
I believe that JavaScript is executed in a sandboxed/isolated environment. Therefore you cannot invoke a Python interpreter* or any other executable residing on the system.
*unless the interpreter itself were written in javascript. | 1 | 17,271 | 0 | 4 | 2011-03-16T15:21:00.000 | javascript,python,google-chrome | How can I execute a Python script from Javascript? | 1 | 2 | 3 | 5,328,172 | 0 |
1 | 0 | I'm trying to parse a bunch of xml files with the library xml.dom.minidom, to extract some data and put it in a text file. Most of the XMLs go well, but for some of them I get the following error when calling minidom.parsestring():
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 5189: ordinal not in range(128)
It happens for some other non-ascii characters too. My question is: what are my options here? Am I supposed to somehow strip/replace all those non-English characters before being able to parse the XML files? | false | 5,329,668 | 0.119427 | 0 | 0 | 3 | Minidom doesn't directly support parsing Unicode strings; it's something that has historically had poor support and standardisation. Many XML tools recognise only byte streams as something an XML parser can consume.
If you have plain files, you should either read them in as byte strings (not Unicode!) and pass that to parseString(), or just use parse() which will read a file directly. | 0 | 16,977 | 0 | 12 | 2011-03-16T18:02:00.000 | python,unicode,minidom | How to parse unicode strings with minidom? | 1 | 1 | 5 | 5,333,621 | 0 |
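To make the answer above concrete, a short sketch showing both routes: letting parse() read the file's bytes itself, or encoding a unicode string back to UTF-8 bytes before parseString() (the file name and content are placeholders):

```python
from xml.dom import minidom

# Route 1: let minidom read the raw bytes straight from the file.
doc = minidom.parse('data.xml')

# Route 2: if you already hold a unicode string, hand minidom UTF-8 bytes instead.
text = u'<root>\u2019</root>'
doc = minidom.parseString(text.encode('utf-8'))
print doc.documentElement.tagName
```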
1 | 0 | Can I send the url of the page from the javascript to the python program running on the client machine ??
Also can I redirect the output of the python program to the javascript to be displayed on the browser? | false | 5,329,942 | 0 | 0 | 0 | 0 | You can if you're running a server with Python.
Create a simple server with python and return json. The only requirement is that the python server must be started in order to except requests. | 0 | 73 | 0 | 0 | 2011-03-16T18:25:00.000 | javascript,python | Is it possible to send a URL from javascript running on the browser to the python program running on the client machine? | 1 | 1 | 2 | 5,330,014 | 0 |
1 | 0 | What's the easiest way to scrape just the text from a handful of webpages (using a list of URLs) using BeautifulSoup? Is it even possible?
Best,
Georgina | false | 5,331,266 | 0.066568 | 0 | 0 | 1 | It is perfectly possible. Easiest way is to iterate through list of URLs, load the content, find the URLs, add them to main list. Stop iteration when enough pages are found.
Just some tips:
urllib2.urlopen for fetching content
BeautifulSoup: findAll('a') for finding URLs | 0 | 6,124 | 0 | 6 | 2011-03-16T20:20:00.000 | python,screen-scraping,beautifulsoup,web-scraping | Python - Easiest way to scrape text from list of URLs using BeautifulSoup | 1 | 1 | 3 | 5,331,407 | 0 |
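A compact sketch of that loop: fetch each URL with urllib2 and let BeautifulSoup pull out the text and links (this assumes the classic BeautifulSoup 3 import; the URL list is a placeholder):

```python
import urllib2
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3.x

urls = ['http://example.com/a', 'http://example.com/b']   # your list of pages

for url in urls:
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html)
    text = ''.join(soup.findAll(text=True))              # all text nodes, tags stripped
    links = [a['href'] for a in soup.findAll('a', href=True)]
    print url, len(text), len(links)
```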
1 | 0 | I'm working in Python 3.2 (newbie) on a Windows machine (though I have Ubuntu 10.04 in VirtualBox if needed, but I prefer to work on the Windows machine).
Basically I'm able to work with the http and urllib modules to scrape web pages, but only those that don't have JavaScript document.write("<div....") and the like adding data that isn't there when I fetch the actual page (meaning pages without real AJAX scripts).
To process those kinds of sites as well, I'm pretty sure I need a browser JavaScript engine to process the page and give me output with the final result, hopefully as a dict or text.
I tried to compile python-spidermonkey, but I understand that it's not for Windows and it's not working with Python 3.x.
Any suggestions ? if anyone did something like that before i'll appreciate the help! | false | 5,338,979 | 0.066568 | 0 | 0 | 1 | Use Firebug to see exactly what is being called to get the data to display (a POST or GET url?). I suspect there's an AJAX call that's retrieving the data from the server either as XML or JSON. Just call the same AJAX call, and parse the data yourself.
Optionally, you can download Selenium for Firefox, start a Selenium server, download the page via Selenium, and get the DOM contents. MozRepl works as well, but doesn't have as much documentation since it's not widely used. | 0 | 8,915 | 0 | 0 | 2011-03-17T12:26:00.000 | javascript,python,python-3.x,web-scraping | Scraping a web page with java script in Python | 1 | 2 | 3 | 5,352,888 | 0 |
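Since the question is on Python 3.2, here is a rough sketch of the first suggestion above: skip the JavaScript entirely and call the underlying AJAX endpoint (found via Firebug) directly, assuming it returns JSON (the URL and keys are placeholders):

```python
import json
import urllib.request

# The URL Firebug showed being requested when the page filled in its content.
url = 'http://example.com/ajax/listing?page=1'
response = urllib.request.urlopen(url)
payload = json.loads(response.read().decode('utf-8'))

for item in payload.get('items', []):   # the structure depends on the site, of course
    print(item)
```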
1 | 0 | I'm working in Python 3.2 (newbie) on a Windows machine (though I have Ubuntu 10.04 in VirtualBox if needed, but I prefer to work on the Windows machine).
Basically I'm able to work with the http and urllib modules to scrape web pages, but only those that don't have JavaScript document.write("<div....") and the like adding data that isn't there when I fetch the actual page (meaning pages without real AJAX scripts).
To process those kinds of sites as well, I'm pretty sure I need a browser JavaScript engine to process the page and give me output with the final result, hopefully as a dict or text.
I tried to compile python-spidermonkey, but I understand that it's not for Windows and it's not working with Python 3.x.
Any suggestions ? if anyone did something like that before i'll appreciate the help! | false | 5,338,979 | 0 | 0 | 0 | 0 | document.write is usually used because you are generating the content on the fly, often by fetching data from a server. What you get are web apps that are more about javascript than HTML. "Scraping" is rather more a question of downloading HTML and processing it, but here there isn't any HTML to download. You are essentially trying to scrape a GUI program.
Most of these applications have some sort of API, often returning XML or JSON data, that you can use instead. If it doesn't, your should probably try to remote control a real webbrowser instead. | 0 | 8,915 | 0 | 0 | 2011-03-17T12:26:00.000 | javascript,python,python-3.x,web-scraping | Scraping a web page with java script in Python | 1 | 2 | 3 | 5,340,415 | 0 |
1 | 0 | I have a SOAP webservice built with soaplib,
but if the client sends a chunked request it fails on
length = req_env.get("CONTENT_LENGTH")
body = input.read(int(length))
because length is '' (empty string), any ideas how to fix soaplib? | true | 5,351,250 | 1.2 | 0 | 0 | 0 | a bit ugly, but looks like it works:
if '' != length:
    body = input.read(int(length))
elif req_env.get("HTTP_TRANSFER_ENCODING").lower() == 'chunked':
    body = ''
    chunk_size = int(input.readline(), 16)        # each chunk is preceded by its size as a hex line
    while chunk_size > 0:
        chunk_read_size = 0
        tmp = input.read(chunk_size)
        chunk_read_size += len(tmp)
        body += tmp
        while chunk_read_size < chunk_size:       # short read: keep reading the rest of the chunk (continuation assumed; the posted snippet was cut off here)
            tmp = input.read(chunk_size - chunk_read_size)
            chunk_read_size += len(tmp)
            body += tmp
        input.readline()                          # consume the CRLF that terminates the chunk
        chunk_size = int(input.readline(), 16)    # size of the next chunk; 0 ends the body | 0 | 279 | 0 | 0 | 2011-03-18T11:23:00.000 | python,web-services,soap,soaplib | fix soaplib to support chunked requests | 1 | 1 | 1 | 5,389,161 | 0 |
0 | 0 | I was looking at a book on Python network programming and I wanted to know what the benefits of learning Python network programming comprehensively would be. This would be in the context of being able to develop some really cool, ground-breaking web apps. I am a Python newbie, so all opinions would be appreciated.
Kind Regards
4 Years later:
This was 4 years ago; it's crazy how much I've grown as a developer. Regarding how it has helped: I've developed an email application and a chat application using Objective-C with Python Twisted on the server side, and it also helped with developing my APNs push notification pipeline. | false | 5,357,103 | 0.119427 | 1 | 0 | 3 | If you want to develop web apps, then you should rather focus on web frameworks like Django or Pylons.
0 | 0 | I was looking at a book on Python network programming and I wanted to know what the benefits of learning Python network programming comprehensively would be. This would be in the context of being able to develop some really cool, ground-breaking web apps. I am a Python newbie, so all opinions would be appreciated.
Kind Regards
4 Years later:
This was 4yrs ago, its crazy how much I've grown as a developer. Regarding how it has helped, I've developed an email application, a chat application using Objective C, python Twisted on the server side, it also helped with developing my apns push notification pipeline. | false | 5,357,103 | 0 | 1 | 0 | 0 | The network is and always will be the sexiest arena for a hacker. An attacker can do almost anything with simple network access, such as scan for hosts, inject packets, sniff data, remotely exploit hosts, and much more. But if you are an attacker who has worked your way into the deepest depths of an enterprise target, you may find yourself in a bit of a conundrum: you have no tools to execute network attacks. No netcat. No Wireshark. No compiler and no means to install one. However, you might be surprised to find that in many cases, you’ll find a Python install.
Hence, the one benefit of python network programming I see is that one has a chance of becoming a penetration tester or an Offensive security guy.
Edit (10-06-2020):
I have no knowledge of what I was thinking when I was writing this answer. It's clearly not helpful.
Regarding the question, making web apps is not equivalent to network programming. In network programming with python, you can start with lower level of the OSI model by using libraries like scapy. Here you can make raw packets and understand the various protocols with it. And then maybe move to application level with libraries like Scrapy which involves web scraping, making http requests, etc. But develping python web apps would use tools like flask, django, jinja, etc and the development of a web app can take the form totally different than that of a regular scripting tool. | 0 | 2,411 | 0 | 1 | 2011-03-18T19:49:00.000 | python,network-programming | benefits of learning python network programming? | 1 | 5 | 5 | 31,816,130 | 0 |
0 | 0 | I was looking at a book on Python network programming and I wanted to know what the benefits of learning Python network programming comprehensively would be. This would be in the context of being able to develop some really cool, ground-breaking web apps. I am a Python newbie, so all opinions would be appreciated.
Kind Regards
4 Years later:
This was 4yrs ago, its crazy how much I've grown as a developer. Regarding how it has helped, I've developed an email application, a chat application using Objective C, python Twisted on the server side, it also helped with developing my apns push notification pipeline. | true | 5,357,103 | 1.2 | 1 | 0 | 2 | "Network programming" isn't about "cool web apps". It's more about creating servers, and creating clients that talk to servers. It's about sockets, tcp/ip communication, XMLRPC, http protocols and all the technologies that let two computers talk to each other.
If all you're interested in is web apps, learning network programming won't benefit you a whole lot. | 0 | 2,411 | 0 | 1 | 2011-03-18T19:49:00.000 | python,network-programming | benefits of learning python network programming? | 1 | 5 | 5 | 5,357,140 | 0 |
0 | 0 | I was looking at a book on Python network programming and I wanted to know what the benefits of learning Python network programming comprehensively would be. This would be in the context of being able to develop some really cool, ground-breaking web apps. I am a Python newbie, so all opinions would be appreciated.
Kind Regards
4 Years later:
This was 4yrs ago, its crazy how much I've grown as a developer. Regarding how it has helped, I've developed an email application, a chat application using Objective C, python Twisted on the server side, it also helped with developing my apns push notification pipeline. | false | 5,357,103 | 0.039979 | 1 | 0 | 1 | "python network programming" isn't any special kind of network programming. It sounds like if you had a better grasp on network programming you would be able to see where python would fit in to your overall design. And instead of reading a generic book about it, you would dig through the python API's and go from there.
The cool thing about python is that it's a huge collection of libraries which are optimized to each task. At work we use python to do all our server-side heavy lifting. We then use jquery and the Objective-J based Cappuccino to present an interface to the user. | 0 | 2,411 | 0 | 1 | 2011-03-18T19:49:00.000 | python,network-programming | benefits of learning python network programming? | 1 | 5 | 5 | 5,359,594 | 0 |
0 | 0 | I was looking at a book on Python network programming and I wanted to know what the benefits of learning Python network programming comprehensively would be. This would be in the context of being able to develop some really cool, ground-breaking web apps. I am a Python newbie, so all opinions would be appreciated.
Kind Regards
4 Years later:
This was 4 years ago; it's crazy how much I've grown as a developer. Regarding how it has helped: I've developed an email application and a chat application using Objective-C with Python Twisted on the server side, and it also helped with developing my APNs push notification pipeline. | false | 5,357,103 | 0 | 1 | 0 | 0 | Python's strongest web tool is definitely Django.
That said, the biggest benefit of learning it is to achieve a pretty robust backend for your web project. | 0 | 2,411 | 0 | 1 | 2011-03-18T19:49:00.000 | python,network-programming | benefits of learning python network programming? | 1 | 5 | 5 | 62,314,047 | 0 |
0 | 0 | In an embedded device, What is a practical amount time to allow idle HTTP connections to stay open?
I know back in the olden days of the net circa 1999 internet chat rooms would sometimes just hold the connection open and send replies as they came in. Idle timeouts and session length of HTTP connections needed to be longer in those days...
How about today with ajax and such?
REASONING: I am writing a transparent proxy for an embedded system with low memory. I am looking for ways to prevent DoS attacks.
My guess would be 3 minutes, or 1 minute. The system has extremely limited RAM and it's okay if it breaks rare and unpopular sites. | false | 5,360,603 | 0.197375 | 0 | 0 | 2 | How about allowing idle HTTP connections to remain open unless another communication request comes in? If a connection is open and no one else is trying to communicate, the open connection won't hurt anything. If someone else does try to communicate, send a FIN+ACK to the first connection and open the second. Many http clients will attempt to receive multiple files using the same connection if possible, but can reconnect between files if necessary. | 0 | 3,087 | 0 | 1 | 2011-03-19T06:38:00.000 | http,memory-management,timeout,python-idle | In-practice ideal timeout length for idle HTTP connections | 1 | 2 | 2 | 5,398,935 | 0 |
0 | 0 | In an embedded device, What is a practical amount time to allow idle HTTP connections to stay open?
I know back in the olden days of the net circa 1999 internet chat rooms would sometimes just hold the connection open and send replies as they came in. Idle timeouts and session length of HTTP connections needed to be longer in those days...
How about today with ajax and such?
REASONING: I am writing a transparent proxy for an embedded system with low memory. I am looking for ways to prevent DoS attacks.
My guess would be 3 minutes, or 1 minute. The system has extremely limited RAM and it's okay if it breaks rare and unpopular sites. | true | 5,360,603 | 1.2 | 0 | 0 | 2 | In the old days (about 2000), an idle timeout was up to 5 minutes standard. These days it tends to be 5 seconds to 50 seconds. Apache's default is 5 seconds. With some special apps defaulting to 120 seconds.
So my assumption is, that with AJAX, long held-open HTTP connections are no longer needed. | 0 | 3,087 | 0 | 1 | 2011-03-19T06:38:00.000 | http,memory-management,timeout,python-idle | In-practice ideal timeout length for idle HTTP connections | 1 | 2 | 2 | 5,381,626 | 0 |
0 | 0 | When I execute multiple tests simultaneously, I don't want to keep the Firefox browser window visible. I can minimize it using selenium.minimizeWindow(), but I don't want to do that.
Is there any way to hide Firefox window? I am using FireFox WebDriver. | false | 5,370,762 | 0.057081 | 0 | 0 | 4 | If you're using Selenium RC or Remote WebDriver then you can run the browser instance on a remote, or virtual machine. This means that you shouldn't have to worry about hiding the browser windows as they won't be launching on your local machine. | 0 | 103,080 | 0 | 51 | 2011-03-20T19:10:00.000 | python,selenium,firefox,webdriver | How to hide Firefox window (Selenium WebDriver)? | 1 | 3 | 14 | 5,371,496 | 0 |
0 | 0 | When I execute multiple tests simultaneously, I don't want to keep the Firefox browser window visible. I can minimize it using selenium.minimizeWindow(), but I don't want to do that.
Is there any way to hide Firefox window? I am using FireFox WebDriver. | false | 5,370,762 | 0.042831 | 0 | 0 | 3 | If you are using the KDE desktop, you can make Firefox windows open minimized initially. That solved this problem for me. Just do the following:
Open Firefox
Click on the Firefox icon on the top left corner of the menu bar -> Advanced -> Special Application Settings...
Go to the "Size & Position" tab.
Click on "Minimized" and choose "Apply Initially" (YES).
These settings will apply for new Firefox windows from now on and you will not be bothered with pop-ups anymore when running tests with Webdriver. | 0 | 103,080 | 0 | 51 | 2011-03-20T19:10:00.000 | python,selenium,firefox,webdriver | How to hide Firefox window (Selenium WebDriver)? | 1 | 3 | 14 | 13,800,788 | 0 |
0 | 0 | When I execute multiple tests simultaneously, I don't want to keep the Firefox browser window visible. I can minimize it using selenium.minimizeWindow(), but I don't want to do that.
Is there any way to hide Firefox window? I am using FireFox WebDriver. | false | 5,370,762 | 0.014285 | 0 | 0 | 1 | In the browser options (FirefoxOptions / ChromeOptions),
set the boolean headless flag to true by calling the set_headless method. | 0 | 103,080 | 0 | 51 | 2011-03-20T19:10:00.000 | python,selenium,firefox,webdriver | How to hide Firefox window (Selenium WebDriver)? | 1 | 3 | 14 | 54,567,508 | 0 |
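A short sketch of the options-based approach above; exact keyword and method names vary between Selenium releases (and headless Firefox needs a reasonably recent geckodriver), so treat this as an approximation:

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.headless = True          # older Selenium releases spelled this options.set_headless(True)

driver = webdriver.Firefox(options=options)   # some 3.x releases take firefox_options= instead
driver.get('http://example.com')
print(driver.title)
driver.quit()
```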
0 | 0 | I'm using Python's SocketServer.ThreadingTCPServer. Now I want to know how many clients are connected at a certain moment.
How can I solve this? | false | 5,370,778 | 0 | 0 | 0 | 0 | In the thread that "serves" a client, use a global count that is increased when the client connects and decreased when it disconnects.
If you want to count at the OS level, use netstat -an with a proper grep filter and wc -l (on Windows, use ports of grep and wc). | 0 | 4,268 | 0 | 4 | 2011-03-20T19:12:00.000 | python,tcp,network-programming | How to count connected clients in TCPServer? | 1 | 1 | 2 | 5,370,827 | 0 |
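A minimal sketch of the global-counter idea from the first answer, with a lock since handlers run in separate threads (the port and echo behaviour are just for the example):

```python
import threading
import SocketServer

clients = 0
lock = threading.Lock()

class Handler(SocketServer.BaseRequestHandler):
    def setup(self):
        global clients
        with lock:
            clients += 1
        print "connected, now", clients, "client(s)"

    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data)      # trivial echo, just for the example

    def finish(self):
        global clients
        with lock:
            clients -= 1
        print "disconnected, now", clients, "client(s)"

server = SocketServer.ThreadingTCPServer(('127.0.0.1', 9000), Handler)
server.serve_forever()
```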
0 | 0 | There are many ways to read XML, both all-at-once (DOM) and one-bit-at-a-time (SAX). I have used SAX or lxml to iteratively read large XML files (e.g. wikipedia dump which is 6.5GB compressed).
However after doing some iterative processing (in python using ElementTree) of that XML file, I want to write out the (new) XML data to another file.
Are there any libraries to iteratively write out XML data? I could create the XML tree and then write it out, but that is not possible without oodles of ram. Is there anyway to write the XML tree to a file iteratively? One bit at a time?
I know I could generate the XML myself with print "<%s>" % tag_name, etc., but that seems a bit... hacky. | false | 5,377,980 | 0.049958 | 0 | 0 | 1 | If you're reading in XML dialect1, and have to write XML dialect2, wouldn't it be a good idea to write down the conversion process using xslt? You may not even need any source code that way. | 0 | 1,840 | 0 | 3 | 2011-03-21T13:06:00.000 | python,xml,memory | Iteratively write XML nodes in python | 1 | 2 | 4 | 5,378,220 | 0 |
0 | 0 | There are many ways to read XML, both all-at-once (DOM) and one-bit-at-a-time (SAX). I have used SAX or lxml to iteratively read large XML files (e.g. wikipedia dump which is 6.5GB compressed).
However after doing some iterative processing (in python using ElementTree) of that XML file, I want to write out the (new) XML data to another file.
Are there any libraries to iteratively write out XML data? I could create the XML tree and then write it out, but that is not possible without oodles of ram. Is there anyway to write the XML tree to a file iteratively? One bit at a time?
I know I could generate the XML myself with print "<%s>" % tag_name, etc., but that seems a bit... hacky. | false | 5,377,980 | 0.049958 | 0 | 0 | 1 | If you don't find anything else, what I'd prefer here is to inherit from ElementTree and create an "iterativeElementTree", adding a "file" attribute to it. I'd subclass the nodes to have a "start_tag_committed" attribute and a "commit" method. When called, this "commit" method would call the render method for a subtree, starting from the farthest parent where "start_tag_committed" is false. With the string in hand, I'd manually strip the closing tags of the current node's parents. The previously opened but not yet closed parent siblings need to be handled as well.
Then, I'd remove the committed node from the memory model.
You will need to annotate each node with its parent as well, since ElementTree does not do that.
(Write me if there are no better answers and you get stuck there; I could implement this.) | 0 | 1,840 | 0 | 3 | 2011-03-21T13:06:00.000 | python,xml,memory | Iteratively write XML nodes in python | 1 | 2 | 4 | 5,378,261 | 0 |
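Separate from the subclassing idea sketched above: if the goal is simply to stream elements out as they are produced, the standard library's XMLGenerator already writes nodes one at a time without holding a tree in memory. A rough sketch:

```python
from xml.sax.saxutils import XMLGenerator

with open('out.xml', 'w') as handle:
    gen = XMLGenerator(handle, 'utf-8')
    gen.startDocument()
    gen.startElement('records', {})
    for i in xrange(1000000):                 # nothing is kept in memory between records
        gen.startElement('record', {'id': str(i)})
        gen.characters('payload %d' % i)
        gen.endElement('record')
    gen.endElement('records')
    gen.endDocument()
```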
0 | 0 | When parsing an xml file into a Python ElementTree the attributes' order is mixed up because Python stores the attributes in a dictionary.
How can I change the order of the attributes in the dictionary? | false | 5,381,296 | 0 | 0 | 0 | 0 | XML does not define any ordering of attributes of a node. So the behavior is fine.
If you make assumptions about the ordering of attributes then your assumptions are wrong.
There is no ordering and you must not expect any kind of attribute ordering. So your question is invalid. | 0 | 4,248 | 0 | 2 | 2011-03-21T17:25:00.000 | python,xml,elementtree | How to order xml element attributes in Python? | 1 | 2 | 4 | 5,381,347 | 0 |
0 | 0 | When parsing an xml file into a Python ElementTree the attributes' order is mixed up because Python stores the attributes in a dictionary.
How can I change the order of the attributes in the dictionary? | false | 5,381,296 | 0 | 0 | 0 | 0 | You can not change the order of attributes internally in the dictionary. This is impossible unless you do some fancy hacking.
The solution therefore, is to manually access the attributes in the order you want them, or create a list of the keys/items and sort that the way you want. | 0 | 4,248 | 0 | 2 | 2011-03-21T17:25:00.000 | python,xml,elementtree | How to order xml element attributes in Python? | 1 | 2 | 4 | 5,399,912 | 0 |
0 | 0 | I have quite a lot of experience with Python and gst-python, but no experience with plain GStreamer.
Does anyone know (well, someone on earth probably does, but...) how to create a custom element? I got as far as
class MyElement(Element):
by intuition, but I have no idea what comes next...
What I was simply hoping for was a "replace this function with the thing you want to happen to every unit that this element is passed", but I am pretty certain that it will be FAR more complicated than that....
I second the recommendation to look at the Pitivi source code; it contains several GStreamer elements implemented in Python. | 0 | 1,272 | 0 | 0 | 2011-03-21T19:00:00.000 | python,gstreamer | How to make a custom element im gst-python | 1 | 1 | 1 | 8,164,493 | 0 |
0 | 0 | Greetings!
So I'm creating a Python script that will when finished be compiled with Shedskin. Currently we do a little FTP work in this script so we import the ftplib module. When I attempt to compile it with Shedskin we get the error back saying that there is no '_socket' file in our Python2.6 installation on Ubuntu. I've checked myself in the '/usr/lib/python2.6/lib-dynload' dir to confirm that yes there isn't any file entitled '_socket.so' present in that folder.
I've tried reinstalling the python2.6 package in Synaptic but to no avail.
What should I do? | false | 5,422,714 | 0 | 0 | 0 | 0 | Look for shedskin-supported library modules in
/usr/lib/python2.6/site-packages/shedskin/lib/.
My Fedora installation of shedskin 0.7 does not include ftplib. | 0 | 501 | 0 | 0 | 2011-03-24T16:57:00.000 | python,sockets,ftp | Python: _socket.so | 1 | 1 | 1 | 5,423,202 | 0 |
1 | 0 | Using Google Appengine running the Python GDATA setup. I'm a member of the volunteer programming team for DPAU, which runs on Google Apps Education and has a Google Appengine running Python with help from the GDATA library.
I'm using the create_site function in the SitesClient class. I know there is an input called uri= but when I pass it through it always comes back as Invalid Request URI.
Also, Google's docs suggest the URI field is intended to be used for adding a site to a different domain. I want it on my normal domain (dpau.org) but I want to specify the url of the site because that's important. www.dpau.org/IWantThisURL
entry = client.create_site(orgName, description=orgDescription, source_site='https://sites.google.com/feeds/site/dpau.org/org', uri='https://sites.google.com/feeds/site/dpau.org/title-for-my-site')
I shall be very grateful for any help you can provide to us. I'm a bit of a newbie at python :) | false | 5,426,967 | 0 | 0 | 0 | 0 | The initial site URL is determined by the site name, using a simple stripping algorithm (all lowercase, no spaces or special chars except - I believe)
Change this into a two-step process:
create the site using a munged name that corresponds to the URL you want
update the site title, which retains the URL but updates the pretty title
Done :) | 0 | 132 | 0 | 0 | 2011-03-24T23:46:00.000 | python,google-apps,gdata-api,gdata | Can a create a site with a custom URI in Google Sites with Python? | 1 | 1 | 1 | 6,580,459 | 0 |
0 | 0 | How can I load specific content from a website with Python? For example, I want to load some posts from a blog and display them on my own site. How can I do this? | false | 5,434,520 | 0.197375 | 0 | 0 | 2 | urllib and urllib2 will let you load the raw HTML. HTML parsers such as BeautifulSoup and lxml will let you parse the raw HTML so you can get at the sections you care about. Template engines such as Mako, Cheetah, etc. will let you generate HTML so that you can have web pages to display. | 0 | 4,342 | 0 | 1 | 2011-03-25T15:24:00.000 | python,load | Load website's content through python | 1 | 1 | 2 | 5,434,558 | 0 |
0 | 0 | Does anyone know how to write a live data sniffer in Python which extracts the originating IP address and the full URL that was being accessed? I have looked at pulling data from urlsnarf however IPv6 is not supported (and the connections will be to IPv6 hosts).
While I can pull data from tcpdump and greping for GET/POST that would leave me with simply the path on the webserver, and I would not obtain the associated FQDN. Unfortunately using SQUID w/ IPv6 TPROXY is not an option due to the configuration of the environment.
Does anyone have any ideas on how to do this with Python bindings for libpcap? Your help would be most appreciated :)
Thanks :) | true | 5,436,185 | 1.2 | 0 | 0 | 2 | Unfortunately, with IPv6 you are stuck doing your own TCP re-assembly. The good news that you are only concerned with URL data which should (generally) be in one or two packets.
You should be able to get away with using pylibpcap to do this. You'll want to use setfilter on your pcap object to make sure you are only looking at TCP traffic. As you move forward in your pcap loop you'll apply some HTTP regular expressions to the payload. If you have what looks like HTTP traffic go ahead and try to parse the header to get at the URL data. Hopefully, you'll get full URL with a line break before the end of the packet. If not, you are going to have to do some lightweight TCP reassembly.
Oh, and you'll want to use socket.inet_ntop and socket.getaddrinfo to print out info about the IPv6 host. | 0 | 1,843 | 0 | 5 | 2011-03-25T17:49:00.000 | python,url,ipv6,libpcap,sniffing | URL Sniffing in Python | 1 | 1 | 1 | 5,446,765 | 0 |
0 | 0 | I am working on a client-server simulation software. And I want the client to be implemented on the web, and also require that the client can do computations like matrix multiplication, random number generation etc., which framework can I use? And also I hope that the client side and server side communicate using simple socket, because the server code is implemented with c++. Any suggestions are really appreciated!!
Thanks
Simon | false | 5,447,104 | 0 | 0 | 0 | 0 | Matrix manipulation is in NumPy. Everything else listed is in the standard library. You may want to look into something like Twisted in order to mediate the network access though. | 0 | 396 | 0 | 0 | 2011-03-27T03:56:00.000 | c++,python | Python web client programming library | 1 | 1 | 2 | 5,447,117 | 0 |
0 | 0 | I am trying to send an email (through gmail) using python script that someone once wrote on this site, but I'm getting an error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe8 in position 2: invalid continuation byte
the script:
import smtplib
from email.mime.text import MIMEText
#mail setup
FROMMAIL = "[email protected]"
LOGIN = FROMMAIL
PASSWORD = "yyy"
SUBJECT = "test subject"
TOMAIL = "[email protected]"
msg = MIMEText('testcontent')
msg['Subject'] = 'test'
msg['From'] = FROMMAIL
msg['To'] = TOMAIL
server = smtplib.SMTP('smtp.gmail.com', 587)
server.set_debuglevel(1)
server.ehlo()
server.starttls()
server.login(LOGIN, PASSWORD)
server.sendmail(FROMMAIL, [TOMAIL], msg.as_string())
server.quit()
The stacktrace:
Traceback (most recent call last):
File "C:\Users\xxx\Desktop\test.py", line 11, in
server = smtplib.SMTP('smtp.gmail.com', 587)
File "C:\Program Files\Python31\lib\smtplib.py", line 248, in __init__
fqdn = socket.getfqdn()
File "C:\Program Files\Python31\lib\socket.py", line 290, in getfqdn
name = gethostname()
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe8 in position 2: invalid continuation byte
I am using python v3.1.3.
How to resolve this?
Thank you. | true | 5,449,084 | 1.2 | 1 | 0 | 0 | Use the 'email' module of Python in order to generate proper formatted emails.
Dealing yourself with encoding issues on the application level while composing emails directly through Python is not the way to go. | 0 | 548 | 0 | 0 | 2011-03-27T12:03:00.000 | python,email,unicode,gmail,decode | Python : sending mail through gmail issue | 1 | 1 | 1 | 5,449,108 | 0 |
0 | 0 | I wanted to know if this was possible- I want to use Python to retweet every tweet a person sends out. If yes then how can I implement this? | false | 5,449,091 | 0 | 1 | 0 | 0 | The newest version of python-twitter allows you to retweet with the command
api.PostRetweet(tweet_id)
where api is a logged-in api and tweet_id is the id of the tweet you want to retweet. | 0 | 4,449 | 0 | 4 | 2011-03-27T12:05:00.000 | python,twitter | Sending out twitter retweets with Python | 1 | 1 | 4 | 16,450,651 | 0 |
0 | 0 | In Python, can I load a module from a remote server onto the local machine?
The reason I want to do this is to protect my source code.
What should I do? Thanks. | false | 5,470,161 | 0 | 1 | 0 | 0 | Yes, you can import your code in creative ways.
No, it will not protect the code from being seen. Rethink your strategy, not tactics. | 0 | 5,926 | 0 | 1 | 2011-03-29T09:04:00.000 | python,module,load | python can load modules from remote server? | 1 | 1 | 3 | 5,471,112 | 0 |
0 | 0 | Is there a way of using webdriverbackedselenium in python, as in
selenium =
webdriverbackedselenium(driver,"http://www.google.com") | false | 5,475,154 | 0 | 0 | 0 | 0 | In the meantime, one could manually write wrapper class that routes Selenium RC calls to WebDriver equivalent. I did some work on that for PHP, it's not that hard, though not all functions can be ported, but a majority can. And the process is similar across languages in terms of code flow / algorithm to wrap the calls. | 0 | 636 | 0 | 1 | 2011-03-29T15:38:00.000 | python,webdriver,selenium-webdriver | webdriverbackedselenium in python | 1 | 1 | 1 | 8,529,382 | 0 |
0 | 0 | Is there a messaging solution out there (preferably supporting Python) that I can use like a mailbox, e.g. retrieve messages from any given queue without having to subscribe? I suppose message queues could work, but I would have to repeatedly subscribe, grab messages from the queue, then unsubscribe, which does not sound optimal. | true | 5,482,097 | 1.2 | 1 | 0 | 0 | Most (if not all) Messaging solutions support two modes of messaging
Publish/subscribe - that is, you need to subscribe to get the message.
Queuing - one party sends a message to the queue, the other reads the message from the queue; no subscription is needed, and the message is consumed when it's read.
Actually, standard queuing is more common than publish/subscribe: you have a better chance of finding a tool that supports queuing but not pub/sub than of finding one that supports pub/sub but not queuing.
You are probably looking for the 2nd mode | 0 | 423 | 0 | 0 | 2011-03-30T05:05:00.000 | python,email,scalability,message-queue,messaging | Protocol for retrieving and publishing messages (message queues without the pub/sub) | 1 | 1 | 3 | 5,483,282 | 0 |
0 | 0 | Is there any way of converting the python socket object to a void pointer. I am using ctypes and I have tried using cast and the conversion functions of python.
I am also using a DLL and there is a function which takes structure as its argument.
But when i pass the structure to the function I am getting invalid parameters as error.
Is there any drawback in using python for C DLL's created on C which take C type parameters? | true | 5,496,253 | 1.2 | 0 | 0 | 0 | There is no good reason to convert a socket to a pointer. You can use socket.fileno() to get the socket's file descriptor which should be usable in C code.
If your library expects a pointer, you might need to wrap it with a ctypes struct - but without knowing details we can't tell. | 0 | 203 | 0 | 0 | 2011-03-31T06:55:00.000 | python | PYTHON Sockets and structure | 1 | 1 | 1 | 5,496,296 | 0 |
1 | 0 | I have around 10 odd sites that I wish to scrape from. A couple of them are wordpress blogs and they follow the same html structure, albeit with different classes. The others are either forums or blogs of other formats.
The information I like to scrape is common - the post content, the timestamp, the author, title and the comments.
My question is, do i have to create one separate spider for each domain? If not, how can I create a generic spider that allows me scrape by loading options from a configuration file or something similar?
I figured I could load the xpath expressions from a file which location can be loaded via command line but there seems to be some difficulties when scraping for some domain requires that I use regex select(expression_here).re(regex) while some do not. | false | 5,497,268 | 0.033321 | 0 | 0 | 1 | You can use a empty allowed_domains attribute to instruct scrapy not to filter any offsite request. But in that case you must be careful and only return relevant requests from your spider. | 0 | 3,829 | 0 | 6 | 2011-03-31T08:44:00.000 | python,screen-scraping,scrapy | what is the best way to scrape multiple domains with scrapy? | 1 | 3 | 6 | 8,621,234 | 0 |
1 | 0 | I have around 10 odd sites that I wish to scrape from. A couple of them are wordpress blogs and they follow the same html structure, albeit with different classes. The others are either forums or blogs of other formats.
The information I like to scrape is common - the post content, the timestamp, the author, title and the comments.
My question is, do i have to create one separate spider for each domain? If not, how can I create a generic spider that allows me scrape by loading options from a configuration file or something similar?
I figured I could load the xpath expressions from a file which location can be loaded via command line but there seems to be some difficulties when scraping for some domain requires that I use regex select(expression_here).re(regex) while some do not. | false | 5,497,268 | 0 | 0 | 0 | 0 | You should use BeautifulSoup especially if you're using Python. It enables you to find elements in the page, and extract text using regular expressions. | 0 | 3,829 | 0 | 6 | 2011-03-31T08:44:00.000 | python,screen-scraping,scrapy | what is the best way to scrape multiple domains with scrapy? | 1 | 3 | 6 | 5,508,257 | 0 |
1 | 0 | I have around 10 odd sites that I wish to scrape from. A couple of them are wordpress blogs and they follow the same html structure, albeit with different classes. The others are either forums or blogs of other formats.
The information I like to scrape is common - the post content, the timestamp, the author, title and the comments.
My question is, do i have to create one separate spider for each domain? If not, how can I create a generic spider that allows me scrape by loading options from a configuration file or something similar?
I figured I could load the xpath expressions from a file which location can be loaded via command line but there seems to be some difficulties when scraping for some domain requires that I use regex select(expression_here).re(regex) while some do not. | false | 5,497,268 | 0.033321 | 0 | 0 | 1 | I do sort of the same thing using the following XPath expressions:
'/html/head/title/text()' for the title
//p[string-length(text()) > 150]/text() for the post content. | 0 | 3,829 | 0 | 6 | 2011-03-31T08:44:00.000 | python,screen-scraping,scrapy | what is the best way to scrape multiple domains with scrapy? | 1 | 3 | 6 | 6,232,758 | 0 |
1 | 0 | I've got a python crawler crawling a few webpages every few minutes. I'm now trying to implement a user interface to be accessed over the web and to display the data obtained by the crawler. I'm going to use php/html for the interface. Anyway, the user interface needs some sort of button which triggers the crawler to crawl a specific website straight away (and not wait for the next crawl iteration).
Now, is there a way of sending data from the php script to the running python script? I was thinking about standard input/output, but could not find a way this can be done (writing from one process to another process stdin). Then I was thinking about using a shared file which php writes into and python reads from. But then I would need some way to let the python script know, that new data has been written to the file and a way to let the php script know when the crawler has finished its task. Another way would be sockets - but then I think, this would be a bit over the top and not as simple as possible.
Do you have any suggestions to keep everything as simple as possible but still allowing me to send data from a php script to a running python process?
Thanks in advance for any ideas!
Edit: I should note, that the crawler saves the obtained data into a sql database, which php can access. So passing data from the python crawler to the php script is no problem. It's the other way round. | false | 5,499,558 | 0 | 1 | 0 | 0 | Since i don't know too much about how python works just treat this like a wild idea.
Create an XML file on your server which is accessible by both Python and PHP.
On the PHP side you can insert new nodes into this XML for new URLs, with a processed=false flag.
Python comes along, looks for unprocessed tasks, then fetches the data and puts the sources into your db.
After a successful fetch, toggle the processed flag.
The next time PHP touches this XML, delete the nodes with processed=true attributes.
Hope it helps you in some way. | 0 | 1,079 | 0 | 1 | 2011-03-31T12:06:00.000 | php,python,stdout,stdin,web-crawler | Passing Data to a Python Web Crawler from PHP Script | 1 | 2 | 3 | 5,499,681 | 0 |
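A minimal sketch of the Python side of this idea, assuming PHP appends url nodes with a processed="false" attribute to a shared queue.xml file; the file name, node layout, and the crawl() entry point are assumptions for illustration, not details from the original answer:

import xml.etree.ElementTree as ET

QUEUE_FILE = "queue.xml"  # assumed shared file, writable by both PHP and Python

def crawl(url):
    print("crawling", url)  # placeholder for the real crawler entry point

def process_pending():
    tree = ET.parse(QUEUE_FILE)
    root = tree.getroot()
    for node in root.findall("url"):
        if node.get("processed") == "false":
            crawl(node.text)               # do the actual work
            node.set("processed", "true")  # mark it so PHP can clean it up later
    tree.write(QUEUE_FILE)

if __name__ == "__main__":
    process_pending()  # e.g. call this from the crawler's periodic loop

Some file-locking scheme (or the database/message-queue approach from the other answer) would be needed in practice, since PHP and Python could touch the file at the same time.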
1 | 0 | I've got a python crawler crawling a few webpages every few minutes. I'm now trying to implement a user interface to be accessed over the web and to display the data obtained by the crawler. I'm going to use php/html for the interface. Anyway, the user interface needs some sort of button which triggers the crawler to crawl a specific website straight away (and not wait for the next crawl iteration).
Now, is there a way of sending data from the PHP script to the running Python script? I was thinking about standard input/output, but could not find a way to do this (writing from one process to another process's stdin). Then I was thinking about using a shared file which PHP writes into and Python reads from. But then I would need some way to let the Python script know that new data has been written to the file, and a way to let the PHP script know when the crawler has finished its task. Another way would be sockets - but then I think this would be a bit over the top and not as simple as possible.
Do you have any suggestions to keep everything as simple as possible but still allowing me to send data from a php script to a running python process?
Thanks in advance for any ideas!
Edit: I should note that the crawler saves the obtained data into an SQL database, which PHP can access. So passing data from the Python crawler to the PHP script is no problem. It's the other way round. | false | 5,499,558 | 0.066568 | 1 | 0 | 1 | The best way to remove the dependency of working across different languages is to use a message queuing library (like RabbitMQ or ActiveMQ).
By using one of these you can send messages directly from PHP to Python or vice versa...
If you want an easy way out, you need to modify your Python script (more along the lines of what fabrik said) to poll a database (or a file) for any new jobs, and process them as it finds them... | 0 | 1,079 | 0 | 1 | 2011-03-31T12:06:00.000 | php,python,stdout,stdin,web-crawler | Passing Data to a Python Web Crawler from PHP Script | 1 | 2 | 3 | 5,500,025 | 0
1 | 0 | How can I get plain text (stripped of HTML) from inside a any tag with the class name of calendardescription given a URL in python? Matching text in different tags should also be separated by a blank line. This is for text to speech purposes.
Thanks in advance. | false | 5,508,314 | 0.099668 | 0 | 0 | 1 | Look in the direction of Beautiful Soup. | 1 | 197 | 0 | 0 | 2011-04-01T01:30:00.000 | python,html | How can I get plain text from within a HTML class given a URL in python? | 1 | 1 | 2 | 5,508,324 | 0
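To make the Beautiful Soup suggestion concrete, here is a rough sketch written against the current bs4 API and Python 3 (the question predates both); the example URL is a placeholder:

from urllib.request import urlopen
from bs4 import BeautifulSoup

def calendar_descriptions(url):
    html = urlopen(url).read()
    soup = BeautifulSoup(html, "html.parser")
    # get_text() strips the markup; blank lines separate the matching tags,
    # as the question asks for
    blocks = [tag.get_text(" ", strip=True)
              for tag in soup.find_all(class_="calendardescription")]
    return "\n\n".join(blocks)

print(calendar_descriptions("http://example.com/calendar"))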
0 | 0 | In Python, is there a way to check if a string is valid JSON before trying to parse it?
For example, when working with things like the Facebook Graph API, sometimes it returns JSON and sometimes it could return an image file. | false | 5,508,509 | 0.07983 | 0 | 0 | 2 | I would say parsing it is the only way you can really entirely tell. An exception will be raised by Python's json.loads() function (almost certainly) if the string is not in the correct format. However, for the purposes of your example you can probably just check the first couple of non-whitespace characters...
I'm not familiar with the JSON that Facebook sends back, but most JSON strings from web apps will start with an opening square [ or curly { bracket. No image formats I know of start with those characters.
Conversely if you know what image formats might show up, you can check the start of the string for their signatures to identify images, and assume you have JSON if it's not an image.
Another simple hack to identify a graphic, rather than a text string, in the case you're looking for a graphic, is just to test for non-ASCII characters in the first couple of dozen characters of the string (assuming the JSON is ASCII). | 1 | 268,500 | 0 | 246 | 2011-04-01T02:16:00.000 | python,json | How do I check if a string is valid JSON in Python? | 1 | 1 | 5 | 5,508,597 | 0 |
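The "just try to parse it" approach from this answer can be written as a small sketch (nothing here is specific to the Facebook API):

import json

def is_json(text):
    try:
        json.loads(text)
    except ValueError:  # json.JSONDecodeError is a subclass of ValueError
        return False
    return True

print(is_json('{"id": 1}'))          # True
print(is_json('\x89PNG not json'))   # False - e.g. the start of an image payload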
0 | 0 | I was wondering if it is possible to implement a web based game in python. I have searched the internet and cannot seem to find any tutorial or document pointing to this. Any advice will be appreciated.. | false | 5,522,809 | 0 | 0 | 0 | 0 | Of course it is. But without more information on what type of game it is impossible to provide further guidance. | 0 | 10,562 | 0 | 1 | 2011-04-02T11:18:00.000 | python,web,client-side,server-side | python web based game | 1 | 2 | 5 | 5,522,817 | 0 |
0 | 0 | I was wondering if it is possible to implement a web based game in python. I have searched the internet and cannot seem to find any tutorial or document pointing to this. Any advice will be appreciated.. | false | 5,522,809 | 0 | 0 | 0 | 0 | Python can be used in a variety of ways for developing web games. In particular, it can be very useful for doing the back-end of a multiplayer game. For the front-end you will probably need to use a client-side technology like Flash, but there have been turn-based games that simply use static HTML as the front-end (for example, Urban Dead) and that could be implemented in Python alone without a separate client-side technology. | 0 | 10,562 | 0 | 1 | 2011-04-02T11:18:00.000 | python,web,client-side,server-side | python web based game | 1 | 2 | 5 | 5,558,715 | 0 |
1 | 0 | Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for javascript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request. | false | 5,532,541 | 0 | 0 | 0 | 0 | If you are just trying to simulate load, you might want to check out something like selenium, which runs through a browser and handles postbacks like a browser does. | 0 | 2,041 | 0 | 1 | 2011-04-03T21:09:00.000 | javascript,asp.net,python,screen-scraping | How can I scrape an ASP.NET site that does all interaction as postbacks? | 1 | 3 | 3 | 5,532,871 | 0 |
1 | 0 | Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for javascript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request. | true | 5,532,541 | 1.2 | 0 | 0 | 2 | Without knowing any specifics, my hunch is that you are using a hardcoded session id and the web server's app domain recycled and created new encryption/decryption keys, rendering your hardcoded session id (which was encrypted by the old keys) useless. | 0 | 2,041 | 0 | 1 | 2011-04-03T21:09:00.000 | javascript,asp.net,python,screen-scraping | How can I scrape an ASP.NET site that does all interaction as postbacks? | 1 | 3 | 3 | 5,532,579 | 0 |
1 | 0 | Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for javascript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request. | false | 5,532,541 | 0 | 0 | 0 | 0 | You could try using Firebugs NET tab to monitor all requests, browse around manually and then diff the requests that you generate with ones that your screen scraper is generating. | 0 | 2,041 | 0 | 1 | 2011-04-03T21:09:00.000 | javascript,asp.net,python,screen-scraping | How can I scrape an ASP.NET site that does all interaction as postbacks? | 1 | 3 | 3 | 5,532,627 | 0 |
0 | 0 | I made a little parser using HTMLParser and I would like to know where a link redirects to. I don't know how to explain this well, so please look at this example:
On my page I have a link in the source: http://www.myweb.com?out=147, which redirects to http://www.mylink.com. I can parse http://www.myweb.com?out=147 without any problems, but I don't know how to get http://www.mylink.com. | false | 5,538,280 | 0.291313 | 0 | 0 | 3 | You cannot get hold of the redirection URL by parsing the HTML source code.
Redirections are triggered by the server and NOT by the client. You need to perform an HTTP request to the URL in question and check the server's HTTP response - in particular, look for a redirect status code in the 3xx range (typically 301 or 302) and the new URL given in the Location header. | 0 | 8,180 | 0 | 6 | 2011-04-04T12:14:00.000 | python,parsing,redirect | Determining redirected URL in Python | 1 | 1 | 2 | 5,538,415 | 0
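A minimal sketch of asking the server directly, using the standard library and the example URL from the question (Python 3 names; in Python 2 the same pattern works with urllib2.urlopen):

from urllib.request import urlopen

resp = urlopen("http://www.myweb.com?out=147")  # example URL from the question
print(resp.geturl())  # final URL after the redirect, e.g. http://www.mylink.com

urlopen() follows the 3xx redirect for you, so geturl() reports where you ended up.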
0 | 0 | I am using the universal feed parser library in python to get an atom feed. This atom feed has been generated using google reader after bundling several subscriptions.
I am able to receive the latest feeds, however the feedparser.parse(url) returns a FeedParserDict which doesnot have the etag or modified values. I unable to just check for the latest feeds because of this.
Does google reader send an etag value? if yes why isn't the feedparser returning it?
~Vijay | true | 5,552,049 | 1.2 | 0 | 0 | 0 | The Google Reader API does not support ETags or If-Modified-Since. However, it does support an ot=<timestamp in seconds since the epoch> parameter which you can use to restrict fetched data to items since you last attempted a fetch. | 0 | 203 | 1 | 0 | 2011-04-05T12:46:00.000 | python,atom-feed,google-reader | getting etag from 'google reader bundle' feed using universal feed parser in python | 1 | 1 | 1 | 5,554,213 | 0 |
1 | 0 | I am trying to monitor day-to-day prices from an online catalogue.
The site uses HTTPS and generates the catalogue pages with JavaScript. How can I interface with the site and make it generate the pages I need?
I have done this with other sites where the HTML can easily be accessed; I have no problem parsing the HTML once it is generated.
I only know Python and Java.
Thanks in advance. | false | 5,561,950 | 0.066568 | 0 | 0 | 1 | If they've created a Web API that their JavaScript interfaces with, you might be able to scrape that directly, rather than trying to go the HTML route.
If they've obfuscated it or that option isn't available for some other reason, you'll basically need a Web browser to evaluate the JavaScript and then scrape the browser's DOM. Perhaps write a browser plugin? | 0 | 9,750 | 0 | 12 | 2011-04-06T05:41:00.000 | java,javascript,python,https,web-scraping | How to scrape HTTPS javascript web pages | 1 | 1 | 3 | 5,561,974 | 0
0 | 0 | Due to paranoid Sharepoint administrators we have the need clone an existing list over the SOAP API ourselves..Is there an (easy) way to accomplish this? We already have a Python based API for accessing the list items and field descriptions over SOAP - I am not sure if this is enough for create a 1:1 copy ourselves...is there a better more straight forward way? | false | 5,568,942 | 0 | 0 | 0 | 0 | There is not an "easy" way to do this. You would have to retrieve the schema from the web service, and modify it before instantiating it somewhere else. It depends a bit on where the clone operation is destined, but you will have to change the schema in some way to some degree, regardless of the destination.
Beyond that, it depends on how customized the list is. If you are using custom content types, you'll have to clone them as well, with all the same difficulties. If it has custom views, you'll have to do the same trick with them. If you have custom forms for your views, I think you're hosed. I don't know of any way to get those forms from the web service interface. | 0 | 242 | 0 | 1 | 2011-04-06T15:30:00.000 | python,sharepoint,soap,sharepoint-2010 | Cloning a list (definition and items) over SOAP? | 1 | 1 | 2 | 5,569,326 | 0 |
0 | 0 | For the work I am currently doing I need similar functionality as Bittorrent, only difference is I need to do some sort of extra analysis on every block received by client from peers. Though I am fairly new with Python, I found official Bittorrent client source code easy to understand (as compared to Transmission's C source code). But I can't seem to figure out the part in the source code where it deals/handles every block received.
It'd be great if anyone, who is acquainted with Bittorrent official client source code (or Transmission), can provide me some pointers for the same. | false | 5,579,047 | 0 | 1 | 0 | 0 | For Transmission, try looking at libtransmission/peer-mgr.c for code specific to each type of message received from a particular peer. This file represents the peer manager and all communication with it.
It uses libtransmission/peer-msgs.c for handling the exact messages. | 0 | 168 | 0 | 2 | 2011-04-07T09:42:00.000 | python,c,bittorrent | Block handling in bittorent | 1 | 1 | 2 | 5,582,728 | 0 |
0 | 0 | How can I make a PHP 5.3 webserver using Python?
I know how to make a simple HTTP server, but how can I include PHP?
Thanks. | false | 5,591,230 | -0.291313 | 1 | 0 | -3 | After some clarifications it at last became clear that your question was
"how to connect PHP to HTTP server"
So, actually you were interested in three letters: CGI.
However, I still doubt you will get much good out of it. | 0 | 1,908 | 0 | 1 | 2011-04-08T06:20:00.000 | php,python | PHP Webserver in Python | 1 | 1 | 2 | 5,591,381 | 0
0 | 0 | I know I can get Selenium 2's webdriver to run JavaScript and get return values but so much asynchronous stuff is happening I would like JavaScript to talk to Selenium instead of the other way around. I have done some searching and haven't found anything like this. Do people just generally use implicitly_wait? That seems likely to fail since it's not possible to time everything? Perfect example would be to let Selenium know when an XHR completed or an asynchronous animation with undetermined execution time.
Is this possible? We're using Selenium 2 with Python on Saucelabs. | false | 5,600,048 | 0 | 0 | 0 | 0 | Testing animation with selenium is opening a can of worms. The tests can be quite brittle and cause many false positives.
The problem is that the calls are asynchronous, and it is difficult to track the behaviour and the changes in state of the page.
In my experience the asynchronous call can be so quick that the spinner is never displayed, and the page may skip a state entirely (one that Selenium could otherwise have detected).
Waiting for the state of the page to transition can make the tests less brittle, however the false positives cannot be removed entirely.
I recommend manual testing for animation. | 0 | 3,988 | 0 | 8 | 2011-04-08T19:43:00.000 | javascript,python,asynchronous,selenium,selenium-webdriver | Can JavaScript talk to Selenium 2? | 1 | 3 | 4 | 16,048,919 | 0 |
0 | 0 | I know I can get Selenium 2's webdriver to run JavaScript and get return values but so much asynchronous stuff is happening I would like JavaScript to talk to Selenium instead of the other way around. I have done some searching and haven't found anything like this. Do people just generally use implicitly_wait? That seems likely to fail since it's not possible to time everything? Perfect example would be to let Selenium know when an XHR completed or an asynchronous animation with undetermined execution time.
Is this possible? We're using Selenium 2 with Python on Saucelabs. | false | 5,600,048 | 0.099668 | 0 | 0 | 2 | Not to be overly blunt, but if you want your App to talk to your Test Runner, then you're doing it wrong.
If you need to wait for an XHR to finish, you could try displaying a spinner and then test that the spinner has disappeared to indicate a successful request.
In regards to the animation, when the animation has completed, maybe its callback could add a class indicating that the animation has finished and then you could test for the existence of that class. | 0 | 3,988 | 0 | 8 | 2011-04-08T19:43:00.000 | javascript,python,asynchronous,selenium,selenium-webdriver | Can JavaScript talk to Selenium 2? | 1 | 3 | 4 | 5,600,229 | 0 |
0 | 0 | I know I can get Selenium 2's webdriver to run JavaScript and get return values but so much asynchronous stuff is happening I would like JavaScript to talk to Selenium instead of the other way around. I have done some searching and haven't found anything like this. Do people just generally use implicitly_wait? That seems likely to fail since it's not possible to time everything? Perfect example would be to let Selenium know when an XHR completed or an asynchronous animation with undetermined execution time.
Is this possible? We're using Selenium 2 with Python on Saucelabs. | false | 5,600,048 | 0.148885 | 0 | 0 | 3 | Theoretically it is possible, but I would advise against it.
The solution would probably have some jQuery running on the site that sets a variable to true when the JavaScript processing has finished.
Set selenium up to loop through a getEval until this variable becomes true and then do something in Selenium.
It would meet your requirements but it's a really bad idea. If for some reason your jQuery doesn't set the trigger variable to true (or whatever state you expect), Selenium will sit there indefinitely. You could put a really long timeout on it, but then how would that be different from just getting Selenium to do a getEval and wait for a specific element to appear?
It sounds like you are trying to overengineer your solution and it will cause you more pain in the future with very few additional benefits. | 0 | 3,988 | 0 | 8 | 2011-04-08T19:43:00.000 | javascript,python,asynchronous,selenium,selenium-webdriver | Can JavaScript talk to Selenium 2? | 1 | 3 | 4 | 5,627,924 | 0
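For what the polling idea looks like on the Python/WebDriver side, here is a rough sketch; the window.crawlDone flag name is made up for illustration and would have to be set by the page's own JavaScript when the XHR or animation finishes:

from selenium.webdriver.support.ui import WebDriverWait

def wait_for_page_flag(driver, timeout=30):
    # Poll the page until the (hypothetical) flag flips to true, or raise TimeoutException
    WebDriverWait(driver, timeout).until(
        lambda d: d.execute_script("return window.crawlDone === true;"))

The timeout is the safety net the answer warns about: if the flag is never set, the test fails after 30 seconds instead of hanging forever.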
0 | 0 | Braintree's transparent redirect works beautifully, I don't have to pass any credit card info through my servers, and I'd like to keep it this way. My question is what is the preferred method to allow returning customers to use vaulted credit card/billing information? Credit card token is a protected field, so it cannot be submitted by the customer via an option field. Instead, I need to specify credit card token before generating the transaction data field. The problem with this is twofold, 1. handling disabled javascript if I were to attempt some AJAX and 2. forcing a returning user through a separate page so they can choose their credit card/billing info is almost as much hassle as re-entering the info itself. | false | 5,607,535 | 0.291313 | 0 | 0 | 3 | You're right that using credit card tokens with transparent redirect is slightly difficult to deal with using the current Braintree API.
However, if you already have the user's credit card information stored in the vault, you can use a server-to-server request, since you won't have to capture any sensitive information. A simple HTML select for the credit card token field would work, and your HTML form would post to your own server.
To make this solution even more comprehensive, you can have the tr_data field included; if the user wants to enter a new card, you can submit the form to Braintree as a TR request.
If you have any more questions or want to work through this code together email Braintree support: [email protected]
I'm a developer at Braintree and would be happy to help you with any more technical questions. | 0 | 799 | 0 | 2 | 2011-04-09T20:03:00.000 | python,ajax,django,credit-card,braintree | Braintree python transparent redirect with vault option | 1 | 2 | 2 | 5,615,096 | 0 |
0 | 0 | Braintree's transparent redirect works beautifully, I don't have to pass any credit card info through my servers, and I'd like to keep it this way. My question is what is the preferred method to allow returning customers to use vaulted credit card/billing information? Credit card token is a protected field, so it cannot be submitted by the customer via an option field. Instead, I need to specify credit card token before generating the transaction data field. The problem with this is twofold, 1. handling disabled javascript if I were to attempt some AJAX and 2. forcing a returning user through a separate page so they can choose their credit card/billing info is almost as much hassle as re-entering the info itself. | false | 5,607,535 | 0 | 0 | 0 | 0 | I am a Python developer and just successfully integrated Django with Braintree.
I used almost the same approach as BenMills's description: using S2S API rather than TR for credit card switching while having the ability to create a new credit card on the same page!
But I think there might be a potential way to solve your problem: generate several TR forms in one single page with corresponding tr_data for each credit card under that user, so you don't have to worry about using AJAX to generate tr_data based on the user's choice. | 0 | 799 | 0 | 2 | 2011-04-09T20:03:00.000 | python,ajax,django,credit-card,braintree | Braintree python transparent redirect with vault option | 1 | 2 | 2 | 5,849,277 | 0
1 | 0 | I am somewhat new to RESTful APIs.
I'm trying to implement a python system that will control various tasks across multiple computers, with one computer acting as the controller.
I would like all these tasks to be divided amongst multiple users (ex. task foo runs as user foo, and task bar runs as user bar) while handling all requests with a central system. The central system should also act as a simple web server and be able to serve basic pages for status purposes.
Is it possible to have each user register a "page" with a central server for the API and have the server pass all requests to the programs (probably written in Python)? | false | 5,608,319 | 0 | 0 | 0 | 0 | Yes. Keep in mind that being RESTful is merely a way to organize your web application's URLs in a standard way. You can build your web application to do whatever you want. | 0 | 622 | 0 | 0 | 2011-04-09T22:19:00.000 | python,xml,linux,rest | RESTful API across multiple users | 1 | 1 | 2 | 5,609,144 | 0
1 | 0 | Is there a way to call a python function when a certain link is clicked within a html page?
Thanks | false | 5,615,228 | 1 | 0 | 0 | 7 | Yes, but not directly; you can set the onclick handler to invoke a JavaScript function that will construct an XMLHttpRequest object and send a request to a page on your server. That page on your server can, in turn, be implemented using Python and do whatever it would need to do. | 0 | 101,647 | 0 | 15 | 2011-04-10T22:47:00.000 | python,html | Call a python function within a html file | 1 | 1 | 5 | 5,615,253 | 0 |
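A minimal sketch of only the Python side of this answer, using the standard library's http.server; the /run-task path and my_function are invented names for illustration (the JavaScript onclick handler would just send an XMLHttpRequest to that path):

from http.server import BaseHTTPRequestHandler, HTTPServer

def my_function():
    return "hello from Python"   # whatever the link click should trigger

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/run-task":
            body = my_function().encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

HTTPServer(("localhost", 8000), Handler).serve_forever()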
0 | 0 | I'm using a Digi 3G router that can be programmed with python, and I want it to make periodic SSH connections to another device. I've read everything about paramiko, but don't know how to install it in the router.
I want to know if there is any other way of including paramiko in a device, apart from installing it (i.e. including some library), or if there is another option apart from paramiko for this particular case.
Thanks in advance. | false | 5,636,251 | 0 | 1 | 0 | 0 | The description for the Digi 3G states that it is capable of python scripting, using a custom development environment. To make this work, you would have to use the python source code from paramiko; the executable would not be installable directly on the router (since the executable is designed to be run on a computer, not a router).
The source code would probably have to be modified, since the APIs for this sort of thing are most likely different than those in a computer, and the router would have to be capable of what you are asking of it. | 0 | 202 | 0 | 0 | 2011-04-12T13:47:00.000 | python | SSH connection with python from a device (not computer) | 1 | 1 | 1 | 5,639,270 | 0 |
1 | 0 | I am trying to make a web application that needs to parse one specific Wikipedia page and extract some information which is stored in a table format on the page. The extracted data would then need to be stored in a database.
I haven't really done anything like this before. What scripting language should I use to do this? I have been reading a little & looks like Python (using urllib2 & BeautifulSoup) should do the job, but is it the best way of approaching the problem.
I know I could also use the WikiMedia api but is using python a good idea for general parsing problems?
Also the tabular data on the wikipedia page may change so I need to parse every day. How do I automate the script for this? Also any ideas for version control without external tools like svn so that updates can be easily reverted if need be? | true | 5,647,413 | 1.2 | 0 | 0 | 1 | What scripting language should I use to do this?
Python will do, as you've tagged your question.
looks like Python (using urllib2 & BeautifulSoup) should do the job, but is it the best way of approaching the problem.
It's workable. I'd use lxml.etree personally. An alternative is fetching the page in the raw format, then you have a different parsing task.
I know I could also use the WikiMedia api but is using python a good idea for general parsing problems?
This appears to be a statement and an unrelated argumentative question. Subjectively, if I was approaching the problem you're asking about, I'd use python.
Also the tabular data on the wikipedia page may change so I need to parse every day. How do I automate the script for this?
Unix cron job.
Also any ideas for version control without external tools like svn so that updates can be easily reverted if need be?
A Subversion repository can be run on the same machine as the script you've written. Alternatively you could use a distributed version control system, e.g. git.
Curiously, you've not mentioned what you're planning on doing with this data. | 0 | 923 | 0 | 2 | 2011-04-13T10:02:00.000 | python,parsing,screen-scraping | How to parse a specific wiki page & automate that? | 1 | 1 | 2 | 5,647,655 | 0 |
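To make the lxml.etree suggestion a bit more concrete, here is a rough sketch; the "wikitable" class and the User-Agent string are assumptions about the target page, not details from the answer:

import urllib.request
from lxml import etree

def scrape_wikipedia_table(url):
    req = urllib.request.Request(url, headers={"User-Agent": "table-scraper/0.1"})
    html = urllib.request.urlopen(req).read()
    tree = etree.HTML(html)
    rows = []
    # first table styled the usual Wikipedia way - adjust the XPath to your page
    for tr in tree.xpath('//table[contains(@class, "wikitable")][1]//tr'):
        cells = [c.xpath("string()").strip() for c in tr.xpath("./th|./td")]
        if cells:
            rows.append(cells)
    return rows

For the daily run, a crontab entry such as 0 6 * * * python scrape.py covers the "Unix cron job" part of the answer.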
0 | 0 | I'm venturing in unknown territory here...
I am trying to work out how hard it could be to implement an Email client using Python:
Email retrieval
Email sending
Email formatting
Email rendering
Also I'm wondering if all protocols are easy/hard to support e.g. SMTP, IMAP, POP3, ...
Hopefully someone could point me in the right direction :) | false | 5,647,487 | 0.099668 | 1 | 0 | 2 | If I were you, I'd check out the source code of existing email-clients to get an idea: thunderbird, sylpheed-claws, mutt...
Depending on the set of features you want to support, it is a big project. | 0 | 40,032 | 0 | 23 | 2011-04-13T10:08:00.000 | python,email,smtp,imap,email-client | How hard is it to build an Email client? - Python | 1 | 2 | 4 | 5,647,584 | 0 |
0 | 0 | I'm venturing in unknown territory here...
I am trying to work out how hard it could be to implement an Email client using Python:
Email retrieval
Email sending
Email formatting
Email rendering
Also I'm wondering if all protocols are easy/hard to support e.g. SMTP, IMAP, POP3, ...
Hopefully someone could point me in the right direction :) | false | 5,647,487 | 1 | 1 | 0 | 8 | I think you will find much of the clients important parts prepackaged:
Email retrieval - I think that is covered by many of the Python libraries.
Email sending - This would not be hard and it is most likely covered as well.
Email formatting - I know this is covered because I just used it to parse single and multipart emails for a client.
Email rendering - I would shoot for an HTML renderer of some sort. There is a Python interface to the renderer from the Mozilla project. I would guess there are other rendering engines that have python interfaces as well. I know wxWidgets has some simple HTML facilities and would be a lot lighter weight. Come to think about it the Mozilla engine may have a bunch of the other functions you would need as well. You would have to research each of the parts.
There is a lot more to it than what is listed above. Like anything worthwhile it won't be built in a day. I would lay out precisely what you want it to do. Then start putting together a prototype. Just build a simple framework that does basic things. Like only have it support the text part of a message with no HTML. Then build on that.
I am amazed at the wealth of coding modules available with Python. I needed to filter HTML email messages, parse stylesheets, embed styles, and a whole host of other things. I found just about every function I needed in a Python library somewhere. I was especially happy when I found out that, since some CSS sheets are gzipped, there was a module for that too!
So if you are serious about it, then dig in. You will learn a LOT. :) | 0 | 40,032 | 0 | 23 | 2011-04-13T10:08:00.000 | python,email,smtp,imap,email-client | How hard is it to build an Email client? - Python | 1 | 2 | 4 | 6,868,304 | 0
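To give a feel for the "retrieval" and "formatting" items in this list, here is a minimal sketch using the standard imaplib and email modules (Python 3 names; the server, user, and password are obviously placeholders):

import imaplib
import email

def latest_subjects(host, user, password, n=5):
    box = imaplib.IMAP4_SSL(host)
    box.login(user, password)
    box.select("INBOX")
    _, data = box.search(None, "ALL")
    subjects = []
    for msg_id in data[0].split()[-n:]:
        _, msg_data = box.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])  # full parsed Message object
        subjects.append(msg["Subject"])
    box.logout()
    return subjects

for subject in latest_subjects("imap.example.com", "me", "secret"):
    print(subject)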
0 | 0 | Freebase's Python API uses GUIDs that have a set prefix and a zero-padded suffix:
"guid":"#9202a8c04000641f8000000000211f52" (http://wiki.freebase.com/wiki/Guid)
"Freebase guids are represented with 32 hexidecimal characters, the first 17 are the graph prefix and the remaining 15 are the item suffix" (http://tinyify.freebaseapps.com/).
This format enables the GUID to compress down for short URLs.
How do you construct GUIDs like this? | false | 5,652,093 | 0 | 0 | 0 | 0 | You'd need to look at the source for Freebase where it generates that GUID. It's definitely not a standard RFC GUID of any sort. | 0 | 476 | 0 | 1 | 2011-04-13T15:55:00.000 | python,guid,uuid,tinyurl | How do you construct a GUID with a set prefix and zero-padded suffix? | 1 | 1 | 1 | 5,659,017 | 0 |
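If the goal is just to produce identifiers with the same shape (a fixed 17-character graph prefix plus a 15-character zero-padded hex item suffix), a tiny sketch is enough; the prefix below is the one quoted in the question:

PREFIX = "9202a8c04000641f8"  # 17-character graph prefix from the question

def make_guid(item_id):
    # zero-pad the item id to 15 hex characters, giving 32 hex characters in total
    return "#%s%015x" % (PREFIX, item_id)

print(make_guid(0x211f52))  # '#9202a8c04000641f8000000000211f52'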
1 | 0 | I have an application developed in Python-Zope where, only on some of the pages, I am getting a "page has expired" issue, and it does not happen every time. The issue appears when I click the "Back" or "Cancel" buttons, which use the browser history to redirect to earlier pages. I have reviewed my code and there is no code setting response headers to prevent page caching.
Also, the issue occurs with Internet Explorer only; the code works fine with Mozilla.
Is there a way I can prevent this message?
Thanks in advance. | true | 5,653,478 | 1.2 | 0 | 0 | 1 | Is your page served on HTTPS?
If so this is the expected behavior. By default IE will not cache a secured page on disk, nor will it automatically resubmit pages with POST data.
This is a security feature (it prevents cache sniffing, etc.) and is about the only thing IE does correctly. | 0 | 110 | 0 | 1 | 2011-04-13T17:56:00.000 | python,internet-explorer,zope | Inconsistent page has expired message in internet explorer | 1 | 1 | 1 | 5,653,835 | 0
0 | 0 | I have a very random request from a client wanting a desktop based program that can retrieve information from web based APIs such as Google Analytics, SEOMoz and other similar services. They then want parts or all of these results stored in a central location. They have said they DO NOT want a web application at all, but they do want the ability to have the data stored on a "server" (read, server version of the desktop client) on their LAN. The program should be able to run on Windows. Linux and OS X support would be nice.
I have some programming experience and am semi-familiar with Python so I was thinking about using it. My problem is that I am unsure about the communication between clients/servers on the LAN and the communication between the client and web based APIs.
1) Is this even possible? If so, what are good places to look for resources on the API integration and network communication/any good examples?
2) Any suggestions/tips on what to look out for (e.g, common mistakes, major security issues)?
3) Any ways that would be better than what I have outlined above?
4) Database, what would be good options? I would like to make this as independent as possible and more or less self contained (relying on a minimal amount of installed software).
Thanks for any and all input! | true | 5,657,800 | 1.2 | 0 | 0 | 2 | yes, urllib, urllib2 and urlparse are the core python libs for reading to/from urls. I would suggest PyQt for the gui framework (pyGTK, wxWidgets and tkinter would be fine also, they all run fine on the three os's you've mentioned). PyQt includes everything you would need, without reaching into any other library. It includes network access controls, QUrl for url parsing and construction and plenty of GUI tools for displaying the data.
nothing out of the norm, except see [1] in 4
See 1 about PyQt. But what you're going to end up doing is crafting the URLs you want to access (including all GET/POST parameters), sending the request, listening for the response, parsing the data, and placing it in the DB. The client would just query the DB and get results; filtering/ordering/management would be done in the GUI code. QtDeclarative might be a good choice for allowing users to craft their own queries (thought of but never implemented this yet), which you can execute too.
I would suggest postgres, mysql, or any DB, relational or not (see mongodb, though I don't think PyQt supports it yet, but you can use any Python lib with PyQt nicely). Then you can store the data there, have the clients query it, and have a client sit on the server that has extra functionality to query the sites and insert the data. Though this might be better served by a CLI tool that gets the data and inserts it into the DB (so you can run it from a cron job if wanted).
You would only need to develop the client code and database schema. [1] Remember DBs can have multiple clients, and users can connect directly to a database server. The only real caveat is configuring the server to listen only on the LAN for client connections, and making sure the users' permissions are set correctly. This is just cosmetically different from connecting a webapp to the DB. You're just granting more users and clients access.
Your biggest problem should be figuring out how to deal with the tediousness of GUI Programming :P | 0 | 2,666 | 0 | 2 | 2011-04-14T02:06:00.000 | python,api,desktop-application | Python Desktop Program Using Web APIs | 1 | 1 | 1 | 5,657,892 | 0 |
1 | 0 | I'm looking to be able to query a site for warranty information on a machine that this script would be running on. It should be able to fill out a form if needed ( like in the case of say HP's service site) and would then be able to retrieve the resulting web page.
I already have the bits in place to parse the resulting HTML that is reported back; I'm just having trouble with what needs to be done in order to POST the data that needs to be put in the fields, and then retrieve the resulting page. | false | 5,667,699 | 0 | 0 | 0 | 0 | I've only done a little bit of this, but:
You’ve got the HTML of the form page. Extract the name attribute for each form field you need to fill in.
Create a dictionary mapping the names of each form field to the values you want to submit.
Use urllib.urlencode to turn the dictionary into the body of your post request.
Include this encoded data as the second argument to urllib2.Request(), after the URL that the form should be submitted to.
The server will either return a resulting web page, or return a redirect to a resulting web page. If it does the latter, you’ll need to issue a GET request to the URL specified in the redirect response.
I hope that makes some sort of sense? | 0 | 24,175 | 0 | 9 | 2011-04-14T18:17:00.000 | python,forms,automation,urllib2,urllib | Python urllib2 automatic form filling and retrieval of results | 1 | 1 | 3 | 5,668,120 | 0 |
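A rough sketch of the steps above; the field names and URL are made up for illustration (the real ones come from the form's HTML), and the Python 3 names urllib.parse.urlencode / urllib.request.urlopen correspond to the urllib.urlencode / urllib2.Request calls mentioned in the answer:

from urllib.parse import urlencode
from urllib.request import urlopen

form = {"serial": "ABC123", "country": "US"}    # values keyed by the form fields' name attributes
data = urlencode(form).encode("ascii")           # turn the dictionary into the POST body
resp = urlopen("https://example.com/warranty", data)  # passing data makes this a POST request
html = resp.read()                               # the resulting page; redirects are followed for you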
1 | 0 | Need some direction on this.
I'm writing a chat room browser-application, however there is a subtle difference.
These are collaboration chats where one person types and the other person can see, live, every keystroke entered by the other person as they type.
Also the chat space is not a single line but a textarea space, like the one here (SO) to enter a question.
All keystrokes including tabs/spaces/enter should be visible live to the other person. And only one person can type at one time (I guess locking should be trivial)
I haven't written a multiple chatroom application. A simple client/server where both are communicating over a port is something I've written.
So here are the questions
1.) How is a multiple chatroom application written ? Is it also port based ?
2.) Showing the other persons every keystroke as they type is I guess possible through ajax. Is there any other mechanism available ?
Note : I'm going to use a python framework (web2py) but I don't think framework would matter here.
Any suggestions are welcome, thanks ! | false | 5,672,100 | 0.099668 | 0 | 0 | 1 | You could try doing something like IRC, where the current "room" is sent from the client to the server "before" the text (/PRIVMSG #room-name Hello World), delimited by a space. For example, you could send ROOMNAME Sample text from the browser to the server.
Using AJAX would be the most reasonable option. I've never used web2py, but I'm guessing you could just use JSON to parse the data between the browser and the server, if you wanted to be fancy. | 0 | 1,196 | 0 | 2 | 2011-04-15T03:30:00.000 | python,livechat | Multiple chat rooms - Is using ports the only way ? What if there are hundreds of rooms? | 1 | 1 | 2 | 5,672,152 | 0 |
1 | 0 | I am new to python and i don't know many things.I want to parse an html page,modify it and show it in my own page.I try use beautifulsoup in order to parse html but some errors are generated.I searched on the web but i don't know any specific way to do this.Can anyone help me? | false | 5,675,531 | 0.099668 | 0 | 0 | 1 | Read the BeautifulSoup documentation - and come back if you have real issues. | 0 | 606 | 0 | 1 | 2011-04-15T10:42:00.000 | python,html,beautifulsoup | Parsing html content with python and modifying it | 1 | 1 | 2 | 5,675,635 | 0 |
0 | 0 | I'm writing a TCP server that can take 15 seconds or more to begin generating the body of a response to certain requests. Some clients like to close the connection at their end if the response takes more than a few seconds to complete.
Since generating the response is very CPU-intensive, I'd prefer to halt the task the instant the client closes the connection. At present, I don't find this out until I send the first payload and receive various hang-up errors.
How can I detect that the peer has closed the connection without sending or receiving any data? That means for recv that all data remains in the kernel, or for send that no data is actually transmitted. | true | 5,686,490 | 1.2 | 0 | 0 | 18 | I've had a recurring problem communicating with equipment that had separate TCP links for send and receive. The basic problem is that the TCP stack doesn't generally tell you a socket is closed when you're just trying to read - you have to try and write to get told the other end of the link was dropped. Partly, that is just how TCP was designed (reading is passive).
I'm guessing Blair's answer works in the cases where the socket has been shut down nicely at the other end (i.e. they have sent the proper disconnection messages), but not in the case where the other end has impolitely just stopped listening.
Is there a fairly fixed-format header at the start of your message, that you can begin by sending, before the whole response is ready? e.g. an XML doctype? Also are you able to get away with sending some extra spaces at some points in the message - just some null data that you can output to be sure the socket is still open? | 0 | 35,496 | 1 | 27 | 2011-04-16T12:32:00.000 | python,c,linux,sockets,tcp | Detect socket hangup without sending or receiving? | 1 | 2 | 7 | 8,434,845 | 0 |
0 | 0 | I'm writing a TCP server that can take 15 seconds or more to begin generating the body of a response to certain requests. Some clients like to close the connection at their end if the response takes more than a few seconds to complete.
Since generating the response is very CPU-intensive, I'd prefer to halt the task the instant the client closes the connection. At present, I don't find this out until I send the first payload and receive various hang-up errors.
How can I detect that the peer has closed the connection without sending or receiving any data? That means for recv that all data remains in the kernel, or for send that no data is actually transmitted. | false | 5,686,490 | -0.028564 | 0 | 0 | -1 | You can select with a timeout of zero, and read with the MSG_PEEK flag.
I think you really should explain what you precisely mean by "not reading", and why the other answers are not satisfying. | 0 | 35,496 | 0 | 27 | 2011-04-16T12:32:00.000 | python,c,linux,sockets,tcp | Detect socket hangup without sending or receiving? | 1 | 2 | 7 | 8,386,973 | 0
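A sketch of the select-plus-MSG_PEEK idea from this answer: if the socket reports readable and a peeked recv returns zero bytes, the peer has performed an orderly shutdown (a peer that simply vanished without sending a FIN is not detected this way):

import select
import socket

def peer_closed(sock):
    # Poll with a zero timeout so the check never blocks the CPU-heavy work.
    readable, _, _ = select.select([sock], [], [], 0)
    if not readable:
        return False                         # nothing pending; connection still looks alive
    data = sock.recv(1, socket.MSG_PEEK)     # peek without consuming any data
    return len(data) == 0                    # readable but empty means the peer closed its end

Calling this periodically from the response-generation loop lets the server abandon the work as soon as the client hangs up.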
0 | 0 | I would like to implement a server in Python that streams music in MP3 format over HTTP. I would like it to broadcast the music such that a client can connect to the stream and start listening to whatever is currently playing, much like a radio station.
Previously, I've implemented my own HTTP server in Python using SocketServer.TCPServer (yes I know BaseHTTPServer exists, just wanted to write a mini HTTP stack myself), so how would a music streamer be different architecturally? What libraries would I need to look at on the network side and on the MP3 side? | false | 5,688,573 | 0.039979 | 0 | 0 | 1 | Since you already have good python experience (given you've already written an HTTP server) I can only provide a few pointers on how to extend the ground-work you've already done:
Prepare your server for dealing with Request Headers like: Accept-Encoding, Range, TE (Transfer Encoding), etc. An MP3-over-HTTP player (i.e. VLC) is nothing but an mp3 player that knows how to "speak" HTTP and "seek" to different positions in the file.
Use wireshark or tcpdump to sniff actual HTTP requests done by VLC when playing an mp3 over HTTP, so you know how what request headers you'll be receiving and implement them.
Good luck with your project! | 0 | 28,874 | 0 | 28 | 2011-04-16T18:21:00.000 | python,http,streaming,mp3,shoutcast | Writing a Python Music Streamer | 1 | 1 | 5 | 14,489,267 | 0 |
0 | 0 | I am looking for a Twitter Streaming API Python library with proxy support. I love tweepy, but unfortunately I haven't seen a way to use an HTTP proxy. Any ideas? | false | 5,700,302 | 0.197375 | 1 | 0 | 2 | Tweepy uses httplib internally which is too low level to have proxy settings. You have to change Stream._run() method to connect to proxy instead of target host and use full (with scheme and host) URL in request. | 0 | 1,810 | 0 | 2 | 2011-04-18T08:29:00.000 | python,api,twitter,proxy,streaming | Twitter Streaming API Python library with proxy support? | 1 | 1 | 2 | 5,755,092 | 0 |
0 | 0 | I have a simple client code using xmlrpclib.
try:
    Server.func1
    Server.func2
    .....
    Server.funcN
except:
    pass
where Server is a ServerProxy from xmlrpclib. How do I do this with Twisted?
I see this example:
from twisted.web.xmlrpc import Proxy
from twisted.internet import reactor

def printValue(value):
    print repr(value)
    reactor.stop()

def printError(error):
    print 'error', error
    reactor.stop()

Server = Proxy('http://advogato.org/XMLRPC')
Server.callRemote('func1',).addCallbacks(printValue, printError)
reactor.run()
but how do I chain several nested callRemote calls? | false | 5,705,420 | 0.197375 | 0 | 0 | 1 | You have code in the sample you pasted which takes an action when an XML-RPC call completes. printValue prints the result of a call and printError prints an error which occurs during a call.
If you want to make another call after one finishes, then maybe instead of just printing something in printValue, you could issue another Server.callRemote there. | 0 | 305 | 0 | 1 | 2011-04-18T15:41:00.000 | python,twisted | using several xmlrpc commands on twisted | 1 | 1 | 1 | 5,707,345 | 0 |
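To spell out what "issue another Server.callRemote there" looks like, here is a sketch built on the question's own example (kept in the question's Python 2 style); func1 and func2 are the question's placeholder method names, so treat this as illustration rather than a working call against advogato.org:

from twisted.web.xmlrpc import Proxy
from twisted.internet import reactor

proxy = Proxy('http://advogato.org/XMLRPC')

def call_func2(result1):
    print repr(result1)
    return proxy.callRemote('func2')   # returning the Deferred chains the next call

def done(result2):
    print repr(result2)
    reactor.stop()

def failed(error):
    print 'error', error
    reactor.stop()

d = proxy.callRemote('func1')
d.addCallback(call_func2)       # runs only after func1 has completed
d.addCallbacks(done, failed)    # handles func2's result, or an error from either call
reactor.run()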
1 | 0 | I made several tests in Selenium IDE and saved it as a test suite in the HTML format which works fine for importing back into selenium IDE. Now however I would like to expand these tests using python and when I click export test suite and choose python I get this error
Suite export not implemented for the chrome://selenium-ide/content/formats/python-rc formatter
How can I enable this option in selenium IDE?
Note I also have found an additional plugin for Firefox that allows batch conversion of tests but still does not allow export of the entire test suite as one file. I realize I could combine these files by hand but in the future I would like to have this option in my workflow.
Thank you.
p.s Running Firefox 3.6 on fedora 14 | true | 5,749,321 | 1.2 | 0 | 0 | 6 | What version of SIDE are you running? If you are running the latest version (1.9), go into options, check the box off for "Enable Experimental Features" and the formats menu should now list Python. | 0 | 16,458 | 0 | 10 | 2011-04-21T20:14:00.000 | python,firefox,selenium,selenium-ide | Export test as python from Selenium IDE | 1 | 1 | 3 | 15,879,812 | 0 |
0 | 0 | Does anybody know in which way the string 'Krummh%C3%B6rn' is encoded?
Plain text is "Krummhörn".
I need to decode strings like this one in Python and tried urllib.unquote('Krummh%C3%B6rn')
The result: 'Krummh\xc3\xb6rn' | false | 5,755,058 | 0.26052 | 0 | 0 | 4 | You're halfway there. Take that result and decode it as UTF-8. | 0 | 310 | 0 | 3 | 2011-04-22T11:42:00.000 | python,encoding,urldecode | Which encoding? | 1 | 1 | 3 | 5,755,083 | 0 |
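Putting the answer into code, using the Python 2 names that match the question:

# -*- coding: utf-8 -*-
# Python 2, as in the question: unquote gives UTF-8 encoded bytes, so decode them afterwards
import urllib
text = urllib.unquote('Krummh%C3%B6rn').decode('utf-8')
print text   # Krummhörn

In Python 3 the same result comes straight from urllib.parse.unquote('Krummh%C3%B6rn'), which assumes UTF-8 by default.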
1 | 0 | I am running python manage.py runserver from a machine A
and I am trying to access it from machine B. The URL I typed is http://A:8000/ .
I am getting an error like The system returned: (111) Connection refused | false | 5,768,797 | 0.066568 | 0 | 0 | 3 | in flask using flask.ext.script, you can do it like this:
python manage.py runserver -h 127.0.0.1 -p 8000 | 0 | 220,548 | 0 | 88 | 2011-04-24T05:20:00.000 | python,django | manage.py runserver | 1 | 3 | 9 | 34,825,606 | 0 |
1 | 0 | I am running python manage.py runserver from a machine A
and I am trying to access it from machine B. The URL I typed is http://A:8000/ .
I am getting an error like The system returned: (111) Connection refused | false | 5,768,797 | 0.088656 | 0 | 0 | 4 | Just in case any Windows users are having trouble, I thought I'd add my own experience. When running python manage.py runserver 0.0.0.0:8000, I could view urls using localhost:8000, but not my ip address 192.168.1.3:8000.
I ended up disabling ipv6 on my wireless adapter, and running ipconfig /renew. After this everything worked as expected. | 0 | 220,548 | 0 | 88 | 2011-04-24T05:20:00.000 | python,django | manage.py runserver | 1 | 3 | 9 | 34,215,336 | 0 |
1 | 0 | I am running python manage.py runserver from a machine A
and I am trying to access it from machine B. The URL I typed is http://A:8000/ .
I am getting an error like The system returned: (111) Connection refused | false | 5,768,797 | 0 | 0 | 0 | 0 | I had the same problem and here was my way to solve it:
First, you must know your IP address.
On my Windows PC, in the cmd window I run ipconfig and select my IPv4 address. In my case 192.168.0.13
Second, as mentioned above: runserver 192.168.0.13:8000
It worked for me.
The mistake I made that produced the message was using the gateway address instead of my PC's address. | 0 | 220,548 | 0 | 88 | 2011-04-24T05:20:00.000 | python,django | manage.py runserver | 1 | 3 | 9 | 32,757,797 | 0
0 | 0 | I'm planning to design a server that receives data from multiple clients; the server doesn't need to send anything back to the client, though a STATUS_OK is still cool but not necessary.
I know the basics of Python's socket module and the Twisted framework, but my question is: should I use UDP or TCP? The client that need to stay connected at all.
I hope you guys understand my question, thank you for your wonderful help here | false | 5,769,096 | 0.049958 | 0 | 0 | 1 | Can you afford to lose messages? If yes, use UDP. Otherwise use TCP. It's what they're designed for. | 0 | 3,415 | 0 | 3 | 2011-04-24T06:37:00.000 | python,sockets,tcp,udp,twisted | Python Socket programming (TCP vs. UDP) | 1 | 4 | 4 | 5,769,103 | 0 |
0 | 0 | I'm planning to design a server that receives data from multiple clients; the server doesn't need to send anything back to the client, though a STATUS_OK is still cool but not necessary.
I know the basics of Python's socket module and the Twisted framework, but my question is: should I use UDP or TCP? The client that need to stay connected at all.
I hope you guys understand my question, thank you for your wonderful help here | false | 5,769,096 | -0.099668 | 0 | 0 | -2 | For how long will one client be connected to the server? How many concurrent connections are you planning to handle? If there will be very short bursts of data for a lot of clients, then you should go with UDP. But chances are, TCP will do just fine initially. | 0 | 3,415 | 0 | 3 | 2011-04-24T06:37:00.000 | python,sockets,tcp,udp,twisted | Python Socket programming (TCP vs. UDP) | 1 | 4 | 4 | 5,769,114 | 0 |
0 | 0 | I'm planning to design a server that receives data from multiple clients; the server doesn't need to send anything back to the client, though a STATUS_OK is still cool but not necessary.
I know the basics of Python's socket module and the Twisted framework, but my question is: should I use UDP or TCP? The client that need to stay connected at all.
I hope you guys understand my question, thank you for your wonderful help here | false | 5,769,096 | 0.197375 | 0 | 0 | 4 | You should always use TCP until you have a performance problem that you know can be mitigated with UDP. TCP is easier to understand when it fails. | 0 | 3,415 | 0 | 3 | 2011-04-24T06:37:00.000 | python,sockets,tcp,udp,twisted | Python Socket programming (TCP vs. UDP) | 1 | 4 | 4 | 5,771,992 | 0 |
0 | 0 | I'm planning to design a server that receives data from multiple clients; the server doesn't need to send anything back to the client, though a STATUS_OK is still cool but not necessary.
I know the basics of Python's socket module and the Twisted framework, but my question is: should I use UDP or TCP? The client that need to stay connected at all.
I hope you guys understand my question, thank you for your wonderful help here | false | 5,769,096 | 0 | 0 | 0 | 0 | I would use TCP in your situation, but it's hard to tell what the specifics of your needs are. TCP is in most cases a better protocol because it's much more reliable. Data is very rarely lost in TCP, however this does slow it down a bit. Since you're not sending anything back to the client, the fact that TCP is a streaming protocol shouldn't really matter too much.
So I'd just go with TCP. | 0 | 3,415 | 0 | 3 | 2011-04-24T06:37:00.000 | python,sockets,tcp,udp,twisted | Python Socket programming (TCP vs. UDP) | 1 | 4 | 4 | 5,774,174 | 0 |