Dataset columns (dtype and observed minimum / maximum value, or string length for text fields):

Column                              Dtype           Min         Max
Web Development                     int64           0           1
Data Science and Machine Learning   int64           0           1
Question                            stringlengths   28          6.1k
is_accepted                         bool            2 classes
Q_Id                                int64           337         51.9M
Score                               float64         -1          1.2
Other                               int64           0           1
Database and SQL                    int64           0           1
Users Score                         int64           -8          412
Answer                              stringlengths   14          7k
Python Basics and Environment       int64           0           1
ViewCount                           int64           13          1.34M
System Administration and DevOps    int64           0           1
Q_Score                             int64           0           1.53k
CreationDate                        stringlengths   23          23
Tags                                stringlengths   6           90
Title                               stringlengths   15          149
Networking and APIs                 int64           1           1
Available Count                     int64           1           12
AnswerCount                         int64           1           28
A_Id                                int64           635         72.5M
GUI and Desktop Applications        int64           0           1

Sample rows follow, flattened to one field per line.
1
0
I know there is lxml and BeautifulSoup, but that won't work for my project, because I don't know in advance what the HTML format of the site I am trying to scrape an article off of will be. Is there a python-type module similar to Readability that does a pretty good job at finding the content of an article and returning it?
false
6,543,599
0
0
0
0
Using HTQL, the query is: &html_main_text
0
323
0
0
2011-07-01T04:31:00.000
python,algorithm,screen-scraping,beautifulsoup,lxml
Python- is there a module that will automatically scrape the content of an article off a webpage?
1
2
3
6,567,206
0
0
0
I need to perform some xml parsing using a machine that I may not have permission to install libraries in. So is it possible to include a python library like lxml with my source?
false
6,550,005
0.049958
0
0
1
For pure python libraries, you can do that. Unfortunately, lxml depends on a bunch of compiled c code, so it's not going to be generally portable in the way you want. Using virtualenv is a good alternative, since it helps isolate a project and the installed files. For lxml to work, you'll need the libraries necessary to compile code, which your system may not contain.
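As an illustration of the answer's suggestion about shipping pure-Python libraries alongside your own code, a minimal sketch (the vendor/ directory name and the bundled package are hypothetical):

    import os
    import sys

    # Add a "vendor" directory that ships with the script to the import path,
    # so bundled pure-Python packages are found without a system-wide install.
    VENDOR_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor")
    sys.path.insert(0, VENDOR_DIR)

    # A pure-Python package copied into vendor/ could now be imported, e.g.
    # "import mypureparser" (hypothetical name); compiled extensions such as
    # lxml cannot be bundled this way, as the answer notes.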
0
1,197
0
5
2011-07-01T15:30:00.000
python,lxml
Is it possible to include a library like lxml without installing it?
1
1
4
6,552,762
0
0
0
I am still a novice in these areas, so here is my question: I want to see the DNS requests sent out by my browser (say Chrome). I set up a UDP server in Python with host='' and port=21567 (it can be anything other than the privileged and reserved ones), made the server listen with udp.recvfrom(1024), and set the proxy in my browser to localhost and that port number. So my browser should send the request to my server when I type in a URL, right? If that is right, my server is not detecting a connection; if it is wrong, please explain the actual mechanism in technical detail. Thanks in advance
false
6,556,035
0.197375
0
0
1
Setting up a proxy in your browser tells it where to make TCP connections; it doesn't have anything to do with how it queries DNS, which is determined by your operating system's resolver. For Linux you'd just shut down bind, e.g. Debian /etc/init.d/bind9 stop; then your Python script would catch the traffic on port 53. And make sure nameserver 127.0.0.1 is at the top of /etc/resolv.conf. For Windows you'll need to set your DNS to the localhost (127.0.0.1), somewhere in the network settings.
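A minimal sketch of the kind of listener the answer implies, assuming the OS resolver has been pointed at 127.0.0.1; binding to port 53 normally requires root or administrator rights:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 53))   # queries arrive here once resolv.conf points at localhost

    while True:
        data, addr = sock.recvfrom(512)   # classic DNS messages fit in 512 bytes
        print("DNS query of %d bytes from %s:%d" % (len(data), addr[0], addr[1]))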
0
84
0
0
2011-07-02T08:55:00.000
python,dns
Web dev in python
1
1
1
6,556,075
0
1
0
I have split the code into multiple files and imported all the functions from the other files into admin.py. Let's say I want to call a function XYZ. If I give the path to the function as admin/XYZ it gives me an "invalid function" error, and instead I have to give the path as file_with_XYZ_function/XYZ. Is there a way to overcome this problem and simply call all the imported functions from one single file?
false
6,557,000
0
0
0
0
You can create Python files in the modules folder and import them just like you import Python libraries in your controllers. You do have to give the path to those files, e.g. from applications.myApp.modules.myModule import *; this is how I handle my wrappers. After that you can use your functions by calling them by name, e.g. myFunction.
0
3,186
0
1
2011-07-02T12:32:00.000
python,function,import,call,web2py
Function call using import in web2py
1
1
2
6,846,235
0
0
0
I'm trying to determine in Python whether a server socket was shut down gracefully or not. The program receives files, and if some error occurs during the send the file should be removed. I know your first suggestion will be to let the client send either the length of the file or some terminating data to let the server know that the file transfer is complete. Sadly, I cannot change the client; it just dumps the file over raw TCP and there is nothing I can do about it. So please, no "but if you could change it" or "your protocol is flawed"; I did not write the protocol and I will have to deal with it. The script should work on both OS X and Linux, so epoll is out. Any suggestions? I can add that I don't care what error occurs, just that an error did.
true
6,590,400
1.2
0
0
0
It turns out that all I needed to do was set the socket timeout to something lower, like 1 minute; the recv call will then throw an exception after 1 minute.
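A small sketch of the accepted fix, assuming conn is the accepted server-side socket for one transfer and 60 seconds is an arbitrary timeout:

    import socket

    def recv_chunk(conn, timeout=60.0, bufsize=4096):
        """Return one chunk, b"" on a clean close, or None if the peer went quiet."""
        conn.settimeout(timeout)          # recv() raises socket.timeout after this
        try:
            return conn.recv(bufsize)
        except socket.timeout:
            return None                   # treat the stall as an aborted transfer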
0
481
1
2
2011-07-06T00:33:00.000
python,sockets,serversocket
Detecting graceful shut downs of sockets in python
1
2
2
6,602,114
0
0
0
I'm trying to determine in Python whether a server socket was shut down gracefully or not. The program receives files, and if some error occurs during the send the file should be removed. I know your first suggestion will be to let the client send either the length of the file or some terminating data to let the server know that the file transfer is complete. Sadly, I cannot change the client; it just dumps the file over raw TCP and there is nothing I can do about it. So please, no "but if you could change it" or "your protocol is flawed"; I did not write the protocol and I will have to deal with it. The script should work on both OS X and Linux, so epoll is out. Any suggestions? I can add that I don't care what error occurs, just that an error did.
false
6,590,400
0.099668
0
0
1
If the other end has finished sending data, it should shutdown(2) the socket for writing. Then in your multiplexing loop you would be able to check whether the socket has been shut down for writing or has just been closed (writing and reading) by the remote end. This is a graceful shutdown and you can use select/poll/kqueue or whatever. If the client doesn't do that, doesn't send a terminating character and doesn't send the length or a checksum, it looks hard. You should give more detail about the protocol: does the client open a TCP socket for the sole purpose of sending a file? Does it close the socket when the transfer is complete? In the worst-case scenario you would have to check the integrity of the file; if you know its format you can try to check it at run time, or have a cron job do the cleaning. If you cannot, well, it will be hard to find a solution. Good luck...
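A rough sketch of the select-based idea in this answer: after select() reports the socket readable, an empty recv() means the peer sent an orderly FIN, while an exception usually indicates a reset. Here conn is an accepted connection and the timeout is an arbitrary choice:

    import select

    def peer_closed_gracefully(conn, timeout=60.0):
        try:
            readable, _, _ = select.select([conn], [], [], timeout)
            if not readable:
                return False               # nothing happened within the timeout
            return conn.recv(4096) == b""  # empty read == orderly shutdown by the peer
        except OSError:
            return False                   # reset or other socket error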
0
481
1
2
2011-07-06T00:33:00.000
python,sockets,serversocket
Detecting graceful shut downs of sockets in python
1
2
2
6,590,500
0
0
0
PycURL is a thin wrapper around the C libcurl API, I wonder if there's any more pythonic wrapper for libcurl which supports CurlMulti ?
true
6,591,238
1.2
0
0
2
As with most bindings made for C libraries, it is wise to keep a very thin layer on top of the library so as to expose most of it. That said, it generally makes sense to wrap that binding itself to offer higher-level features and more language-specific ways of doing things, rather than to write yet another binding. So no, I don't know of any such wrapper, but I would advise that such a wrapper be built on top of pycurl rather than on top of libcurl directly!
0
785
0
5
2011-07-06T03:25:00.000
python,libcurl,pycurl
PyCurl alternative, a pythonic wrapper for libcurl?
1
1
1
6,691,232
0
1
0
After several readings of the Scrapy docs I'm still not catching the difference between using CrawlSpider rules and implementing my own link-extraction mechanism in the callback method. I'm about to write a new web crawler using the latter approach, but just because I had a bad experience in a past project using rules. I'd really like to know exactly what I'm doing and why. Anyone familiar with this tool? Thanks for your help!
false
6,591,255
0.099668
0
0
1
If you want selective crawling, like fetching "Next" links for pagination etc., it's better to write your own crawler. But for general crawling, you should use crawlspider and filter out the links that you don't need to follow using Rules & process_links function. Take a look at the crawlspider code in \scrapy\contrib\spiders\crawl.py , it isn't too complicated.
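A minimal CrawlSpider sketch along the lines this answer describes, written against current Scrapy module paths (the 2011-era scrapy.contrib layout has since moved); the domain, URL patterns and CSS selector are placeholders:

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class ExampleSpider(CrawlSpider):
        name = "example"
        allowed_domains = ["example.com"]
        start_urls = ["http://example.com/"]

        rules = (
            Rule(LinkExtractor(allow=r"/page/\d+")),                      # follow pagination only
            Rule(LinkExtractor(allow=r"/item/"), callback="parse_item"),  # parse item pages
        )

        def parse_item(self, response):
            yield {"url": response.url, "title": response.css("title::text").get()}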
0
6,599
0
9
2011-07-06T03:27:00.000
python,web-crawler,scrapy
Following links, Scrapy web crawler framework
1
1
2
6,591,511
0
0
0
First off, sorry for the cryptic question. My team is currently using Selenium 2.0rc3 (with Python) for testing our web app with Chrome. When we used the 2.02b version of Selenium, our tests passed (they were a little slow and we had small hacks that we added to WebDriver). After we upgraded, the tests became extremely fast and started failing. After debugging, we found that most tests failed because WebDriver's click() function was not blocking successive calls. Currently we add a sleep()/timeout of 0.5 seconds after each click, and while this solves the immediate problem, it doesn't quite achieve our main goal (which is to speed up our tests).
false
6,602,151
0.066568
0
0
1
I've tried the selenium-2 rc3 python bindings for chrome. My experience was the opposite of what you're describing - after clicking, the driver didn't know that the page was ready for it to continue. So instead of speeding up the tests, they turned out very slow (because the driver was waiting for ages). However, the firefox driver seems pretty stable - maybe you should stick with it until the chrome driver gets baked a bit more.
0
2,381
0
4
2011-07-06T19:57:00.000
python,selenium,webdriver
Selenium 2.0rc3 click function too fast?
1
3
3
6,603,609
0
0
0
First off, sorry for the cryptic question. My team is currently using Selenium 2.0rc3 (with Python) for testing our web app with Chrome. When we used the 2.02b version of Selenium, our tests passed (they were a little slow and we had small hacks that we added to WebDriver). After we upgraded, the tests became extremely fast and started failing. After debugging, we found that most tests failed because WebDriver's click() function was not blocking successive calls. Currently we add a sleep()/timeout of 0.5 seconds after each click, and while this solves the immediate problem, it doesn't quite achieve our main goal (which is to speed up our tests).
false
6,602,151
0
0
0
0
If the click() executes ajax calls I would suggest you to use NicelyResynchronizingAjaxController
0
2,381
0
4
2011-07-06T19:57:00.000
python,selenium,webdriver
Selenium 2.0rc3 click function too fast?
1
3
3
6,602,262
0
0
0
First off, sorry for the cryptic question. My team is currently using Selenium 2.0rc3 (with Python) for testing our web app with Chrome. When we used the 2.02b version of Selenium, our tests passed (they were a little slow and we had small hacks that we added to WebDriver). After we upgraded, the tests became extremely fast and started failing. After debugging, we found that most tests failed because WebDriver's click() function was not blocking successive calls. Currently we add a sleep()/timeout of 0.5 seconds after each click, and while this solves the immediate problem, it doesn't quite achieve our main goal (which is to speed up our tests).
true
6,602,151
1.2
0
0
5
Your problem is not really that it's clicking too fast. Just that it's clicking before that element is present. There are two ways to get round this: Wait until the element is present before clicking Increase the implicit wait time I'm afraid I haven't used the WebDriver Python bindings. However, I can tell you how it is done in Java and hopefully you can find the Python equivalent yourself. To wait for an element, we have in Java a class called WebDriverWait. You would write a Function which you pass to the until() method which passes only when the element exists. One way you could do that is with driver.findElements( By... ) or wrap driver.findElement( By... ) in an exception handler. The Function is polled till it returns true or the timeout specified is hit. The second method is the preferred method for your case and in Java you can do driver.manage().timeouts().implicitlyWait( ... ).
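The Python equivalents of the two Java approaches in the accepted answer, sketched against the current Selenium Python bindings rather than the 2.0rc3 of the question; the element id is a placeholder:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.implicitly_wait(10)            # approach 2: implicit wait (seconds)
    driver.get("http://example.com/")

    # Approach 1: explicit wait for the element before clicking it.
    button = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "submit"))
    )
    button.click()
    driver.quit()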
0
2,381
0
4
2011-07-06T19:57:00.000
python,selenium,webdriver
Selenium 2.0rc3 click function too fast?
1
3
3
6,602,293
0
0
0
I know about gethostbyaddr in Python and that is somewhat useful for me. I would like to get even more info about an ip address like one can find at various websites such as who hosts that ip address, the country of origin, ..., etc. I need to accomplish this programmatically. Are there any built in commands for Python, or would I need access to some database which contains this type of information, or are there any Python APIs? Python is not my native language so I am not as familiar with how one would approach such a problem in Python.
false
6,611,298
0
1
0
0
Ok, here is my answer. I am going to work on cleaning up for public consumption a Python 3.x version of pywhois that I have on my machine and hopefully in the next week I will submit my code to the subversion repository. From the IP addresses I am using, I have about a 78% success rate for retrieving info first by applying gethostbyaddr to the IP address as phihag suggested and then putting that through pywhois. I will let the reader decide for themselves whether that rate is high enough for their particular application.
0
3,195
0
3
2011-07-07T13:24:00.000
python,reverse-dns,gethostbyaddr,pywhois
In Python, Getting More Info About An IP Address
1
1
2
6,644,900
0
1
0
Is there an option in feedparser to query only the entries newer than feed.updated? Or can you set a parameter to get only entries from a specific date/today/this week etc.? (Safari's RSS reader provides these options...)
false
6,642,570
0.379949
0
0
2
I don't understand the question as written. An RSS feed is an XML document. Feedparser retrieves and parses this entire document; it can't query just part of it. It's up to you to write code around feedparser to extract what you want (e.g., for each entry you can look at d.entries[0].date and compare it with another date/time stamp or range to determine whether you're interested in it or not). I don't know what you mean by looking for entries newer than feed.updated, since there shouldn't be any (the newest entries would have been added when the feed was last updated).
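A sketch of the client-side filtering this answer describes; the feed URL and the one-week cutoff are arbitrary, and entries without a parsed date are simply skipped:

    import time
    import feedparser

    CUTOFF = time.gmtime(time.time() - 7 * 24 * 3600)      # keep entries from the last week

    feed = feedparser.parse("http://example.com/feed.xml")
    recent = [
        entry for entry in feed.entries
        if getattr(entry, "published_parsed", None)         # a time.struct_time when present
        and entry.published_parsed > CUTOFF                 # struct_time compares field-by-field
    ]
    for entry in recent:
        print(entry.title)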
0
458
0
3
2011-07-10T17:16:00.000
python,rss,feedparser
Feedparser Date parameter/Time-specific query
1
1
1
6,685,581
0
0
0
I'm attempting to allow a subprocess sandboxed with Pypy to communicate, using a limited protocol, with the parent process. After reviewing the source code of the pypy/pypy/translator/sandbox/sandlib.py included with Pypy, it appears that there is a VirtualizedSocketProc that allows os.open calls to open sockets. I've changed some functionality of the code (for example, allowing TCP connections on limited ports), but very little has been changed. However, I'm unable to actually import Pypy's socket module because it requires a non-existent _socket module, which seems to be located in the interpreter-level parts of the code. Is what I'm trying to do feasible? If so, how do I import the socket module? If not, what else can I do?
true
6,655,258
1.2
0
0
4
I've investigated this further, and it appears that this is a fairly fundamental problem. The socket module, implemented at the library level (inside the lib directories), is essentially an empty shell around the _socket library, which is an interpreter-level module defined in the pypy/module directory. For those unfamiliar with PyPy, there are two types of modules that can be imported, roughly corresponding to the pure-Python and C libraries in CPython. Modules implemented at the library level can be included easily in the sandbox, and are in fact included in the "default" pypy_interact sandbox. However, modules written at the interpreter level are not available inside the sandbox. It seems that my approach was fundamentally flawed because of this critical distinction. Instead, there are a few other options to consider, should you run into the same problem. Use os.open directly with a filename beginning with tcp://; this actually works very well and is my favoured approach. Or implement your own socket library. This is certainly not preferable, but I believe it would be possible to create a relatively empty socket library that simply communicates with the sandbox controller as above, wrapping the socket functionality. It might even be possible to modify the default socket library to achieve this (without including _socket, for example).
0
673
0
6
2011-07-11T19:18:00.000
python,sockets,sandbox,pypy
Using the socket module in sandboxed Pypy
1
1
1
6,700,339
0
0
0
As a part of a research, I need to download freely available RDF (Resource Description Framework - *.rdf) files via web, as much as possible. What are the ideal libraries/frameworks available in Python for doing this? Are there any websites/search engines capable of doing this? I've tried Google filetype:RDF search. Initially, Google shows you 6,960,000 results. However, as you browse individual results pages, the results drastically drop down to 205 results. I wrote a script to screen-scrape and download files, but 205 is not enough for my research and I am sure there are more than 205 files in the web. So, I really need a file crawler. I'd like to know whether there are any online or offline tools that can be used for this purpose or frameworks/sample scripts in Python to achieve this. Any help in this regards is highly appreciated.
false
6,681,043
0
0
0
0
Here's one workaround: get the "Download Master" Chrome extension (or a similar program), search on Google or elsewhere for results, set Google to show 100 results per page, select "show all files", enter your file extension (.rdf), press enter, then press download. You can get 100 files per click, which is not bad.
0
2,777
0
3
2011-07-13T15:04:00.000
python,screen-scraping,web-crawler
Crawling web for specific file type
1
3
5
25,339,316
0
0
0
As a part of a research, I need to download freely available RDF (Resource Description Framework - *.rdf) files via web, as much as possible. What are the ideal libraries/frameworks available in Python for doing this? Are there any websites/search engines capable of doing this? I've tried Google filetype:RDF search. Initially, Google shows you 6,960,000 results. However, as you browse individual results pages, the results drastically drop down to 205 results. I wrote a script to screen-scrape and download files, but 205 is not enough for my research and I am sure there are more than 205 files in the web. So, I really need a file crawler. I'd like to know whether there are any online or offline tools that can be used for this purpose or frameworks/sample scripts in Python to achieve this. Any help in this regards is highly appreciated.
false
6,681,043
0
0
0
0
Did you notice the text something like "google has hidden similar results, click here to show all results" at the bottom of one page? Might help.
0
2,777
0
3
2011-07-13T15:04:00.000
python,screen-scraping,web-crawler
Crawling web for specific file type
1
3
5
6,681,208
0
0
0
As a part of a research, I need to download freely available RDF (Resource Description Framework - *.rdf) files via web, as much as possible. What are the ideal libraries/frameworks available in Python for doing this? Are there any websites/search engines capable of doing this? I've tried Google filetype:RDF search. Initially, Google shows you 6,960,000 results. However, as you browse individual results pages, the results drastically drop down to 205 results. I wrote a script to screen-scrape and download files, but 205 is not enough for my research and I am sure there are more than 205 files in the web. So, I really need a file crawler. I'd like to know whether there are any online or offline tools that can be used for this purpose or frameworks/sample scripts in Python to achieve this. Any help in this regards is highly appreciated.
false
6,681,043
0
0
0
0
Teleport Pro is another option. Although it maybe can't copy from Google itself (too big), it can probably handle proxy sites that return Google results, and I know for a fact that I could download 10,000 PDFs within a day if I wanted to. It has file-type specifiers and many options.
0
2,777
0
3
2011-07-13T15:04:00.000
python,screen-scraping,web-crawler
Crawling web for specific file type
1
3
5
25,339,126
0
0
0
I have a very simple CGI webserver running using python CGIHTTPServer class. This class spawns and executes a cgi-php script. If the webpage sends POST request, how can I access the POST request data in the php script?
true
6,712,092
1.2
1
0
1
When you say your Python process "spawns and executes" a cgi-php script, I believe what you mean is "it calls my PHP script by executing the PHP CLI executable, passing it the name of my script." Using the PHP CLI executable, HTTP-specific superglobals and environment values will not be set automatically. You would have to read in all HTTP request headers and GET/POST data in your Python server process, and then set them in the environment used by your PHP script. The whole experiment sounds interesting, but this is what mod_php does already.
0
1,199
0
0
2011-07-15T19:16:00.000
php,python,cgi,webserver,cgihttprequesthandler
Accessing POST request data from CGIHTTPServer (python)
1
1
1
6,712,146
0
0
0
I'm looking for a way to write an XMPP bot that would listen to a RabbitMQ queue and send messages to the XMPP channel notifying users of any new issues (I already have Nagios sending notifications to RabbitMQ). I tried using xmppy but it stopped working, and I stumbled across SleekXMPP, which looks rather better. I'm just wondering whether I can define an AMQP listener that automatically calls the XMPP "send" method in the bot, so it would be listening on both AMQP and XMPP at the same time. Thank you for your help! Edit: would BOSH be a much better solution here?
false
6,723,588
0.066568
1
0
1
This is really quite simple. I suggest that you start by writing an AMQP listener that simply prints out received messages. Once you get that working it should be obvious how to integrate that into an XMPP bot.
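A sketch of the "print what arrives" AMQP listener this answer suggests as a first step, using the current pika API; the broker address, queue name and durability flag are assumptions:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="nagios-notifications", durable=True)

    def on_message(ch, method, properties, body):
        print("Received: %r" % body)
        # A real bot would hand `body` to its XMPP send method here.

    channel.basic_consume(queue="nagios-notifications",
                          on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()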
0
571
0
0
2011-07-17T11:40:00.000
python,xmpp,amqp
Interconnecting AMQP and XMPP
1
1
3
6,724,730
0
0
0
Let me start with an example: there is a C++ program that runs on my server, called "Guess the Number". Every time it runs it first generates a random integer between 1 and 100; then the user guesses a number and passes it to me through a web page, form or something. I want to pass the number to the program, and the program will tell me whether the guess is bigger or smaller. Then I put that information on the web page so the user can make the next guess. I am able to write the program, and I know how to pass the first argument and get back the output, but I don't know how to interact in the next steps, i.e. how to pass arguments to the program in real time and get the output. To make this clearer: I use subprocess in Python to run the program with the first argument and get the output. The C++ program uses standard input and output, like while (!check(x)) scanf("%d",&x);, and in check(int x) I use if (x>rand_num) printf("too big\n"); for the output.
false
6,736,152
0
0
0
0
This sounds a lot like a homework question, but even with this list you have a lot of work ahead of you for a dubious reward, so here we go. Your C++ program should listen on a socket. Your Python program needs to listen for web requests and also keep a connection open to the C++ program through that socket; I'd suggest something like web.py for your web framework. Your web.py program is going to accept XMLHttpRequests at a URL, and your web page is going to submit requests through XMLHttpRequest and put the results back into the page. An easy way to do this on the frontend is to use jQuery ajax calls; they will hit your web.py URL, which will validate the input, send it off to the C++ socket, get a response and return it as the response to your jQuery request. Good luck.
0
233
1
1
2011-07-18T16:30:00.000
python,ajax,cgi,subprocess,pipe
How to INTERACT between a program on the server and the user via a web page REAL-TIME
1
1
2
6,737,020
0
0
0
I'm trying to get an XML report of the results of my behaviour tests running in the lettuce framework. According to the --help for lettuce, you should use the switch --with-xunit. I've done that (and also used --xunit-file) but no report is generated. I tried reinstalling lettuce but still no luck. How can I get it to generate this report?
true
6,739,856
1.2
1
0
1
This was actually something simple: the folder the tests were running in was owned by root, so I needed to run the tests with sudo in order for it to generate the report.
0
2,043
0
1
2011-07-18T21:49:00.000
python,lettuce
Python lettuce test results in XML
1
1
2
6,884,086
0
1
0
I am not much into gaming, but I am learning and doing some practical work with artificial intelligence algorithms. Since I am not building a full-fledged application, even if I learn various techniques I won't have anything to show in an interview. I have seen that AI techniques/algorithms are usually tested as simulations; I saw one video from Google where their AI techniques were shown in small games, with very small characters doing things based on their learning. So I think that by implementing them in small games I can demonstrate what I have learned and have a small practical application. But I want that on a website, so I want to know the best way to have simulations/games inside the browser.
false
6,742,589
0
0
0
0
As far as I know it is not possible to execute python scripts in the browser. What you can do is generate the set of actions to be taken on the server side using python and then send these commands to the browser and interpret them in javascript. Or if you don't want a server side, you can just write the game in actionscript, silverlight or javascript/html5.
0
1,443
0
0
2011-07-19T05:19:00.000
python,browser,artificial-intelligence,pygame
Can i code browser games in python in my website
1
1
2
6,742,643
0
0
0
I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites. My requirements for the solution are: Should be interruptable. Ctrl+C should immediately terminate all downloads. There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown. It should work on Linux and Windows too. It should retry downloads, be resilient against network errors and should timeout properly. It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way. It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time. Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files. Preferably it should take advantage of http keep-alive to maximize the transfer speed. Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution. I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously. Then the big problem is how to use the bandwidth in an as efficient manner as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways.. My isp limits the number of dns lookups you can do so some kind of dns caching would be nice.
false
6,750,619
1
0
0
7
You can try pycurl; though the interface is not easy at first, once you look at examples it's not hard to understand. I have used it to fetch thousands of web pages in parallel on a meagre Linux box. You don't have to deal with threads, so it terminates gracefully and there are no processes left behind. It provides options for timeouts and HTTP status handling, and it works on both Linux and Windows. The only problem is that it provides only basic infrastructure (essentially just a Python layer above the excellent curl library); you will have to write a few lines yourself to achieve the features you want.
0
6,882
0
19
2011-07-19T16:28:00.000
python,http,parallel-processing,download,feed
Library or tool to download multiple files in parallel
1
4
10
6,882,144
0
0
0
I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites. My requirements for the solution are: Should be interruptable. Ctrl+C should immediately terminate all downloads. There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown. It should work on Linux and Windows too. It should retry downloads, be resilient against network errors and should timeout properly. It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way. It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time. Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files. Preferably it should take advantage of http keep-alive to maximize the transfer speed. Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution. I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously. Then the big problem is how to use the bandwidth in an as efficient manner as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways.. My isp limits the number of dns lookups you can do so some kind of dns caching would be nice.
false
6,750,619
0.099668
0
0
5
There are lots of options, but it will be hard to find one that fits all your needs. In your case, try this approach: create a queue and put the URLs to download into it (or "config objects" which contain the URL and other data like the user name, the destination file, etc.); create a pool of threads, where each thread fetches a URL (or config object) from the queue and processes it; use another queue to collect the results in a separate thread. When the number of result objects equals the number of puts into the first queue, you're finished. Make sure that all communication goes via the queues or the config objects; avoiding data structures shared between threads should save you 99% of the problems.
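A bare-bones version of the queue/worker pattern described above, using only the standard library; the URL list, worker count and timeout are placeholders, and none of the question's caching or throttling requirements are addressed:

    import queue
    import threading
    import urllib.request

    NUM_WORKERS = 4
    url_queue, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            url = url_queue.get()
            if url is None:                              # sentinel: stop this worker
                break
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    results.put((url, resp.status, len(resp.read())))
            except Exception as exc:                     # report failures as results too
                results.put((url, None, exc))
            finally:
                url_queue.task_done()

    threads = [threading.Thread(target=worker, daemon=True) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()

    for u in ["http://example.com/feed%d.xml" % i for i in range(10)]:
        url_queue.put(u)
    url_queue.join()                                     # every URL has been processed
    for _ in threads:
        url_queue.put(None)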
0
6,882
0
19
2011-07-19T16:28:00.000
python,http,parallel-processing,download,feed
Library or tool to download multiple files in parallel
1
4
10
6,750,711
0
0
0
I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites. My requirements for the solution are: Should be interruptable. Ctrl+C should immediately terminate all downloads. There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown. It should work on Linux and Windows too. It should retry downloads, be resilient against network errors and should timeout properly. It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way. It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time. Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files. Preferably it should take advantage of http keep-alive to maximize the transfer speed. Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution. I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously. Then the big problem is how to use the bandwidth in an as efficient manner as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways.. My isp limits the number of dns lookups you can do so some kind of dns caching would be nice.
false
6,750,619
-0.019997
0
0
-1
Threading isn't "half-assed" unless you're a bad programmer. The best general approach to this problem is the producer / consumer model. You have one dedicated URL producer, and N dedicated download threads (or even processes if you use the multiprocessing model). As for all of your requirements, ALL of them CAN be done with the normal python threaded model (yes, even catching Ctrl+C -- I've done it).
0
6,882
0
19
2011-07-19T16:28:00.000
python,http,parallel-processing,download,feed
Library or tool to download multiple files in parallel
1
4
10
6,750,741
0
0
0
I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites. My requirements for the solution are: Should be interruptable. Ctrl+C should immediately terminate all downloads. There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown. It should work on Linux and Windows too. It should retry downloads, be resilient against network errors and should timeout properly. It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way. It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time. Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files. Preferably it should take advantage of http keep-alive to maximize the transfer speed. Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution. I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously. Then the big problem is how to use the bandwidth in an as efficient manner as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways.. My isp limits the number of dns lookups you can do so some kind of dns caching would be nice.
false
6,750,619
0
0
0
0
I used the standard libraries for that, urllib.urlretrieve to be precise. I downloaded podcasts this way, via a simple thread pool, each thread using its own retrieve call. I did about 10 simultaneous connections; more should not be a problem. Continuing an interrupted download, maybe not. Ctrl-C could be handled, I guess. It worked on Windows, and I installed a handler for progress bars. All in all, 2 screens of code, plus 2 screens for generating the URLs to retrieve.
0
6,882
0
19
2011-07-19T16:28:00.000
python,http,parallel-processing,download,feed
Library or tool to download multiple files in parallel
1
4
10
6,889,198
0
1
0
So, I have a FB app already, and when people connect I ask for the extended permission "email" and that is all well and good. I save the email and can get a list of their friends' ids and basic profiles. I want to know if there is a way for me to figure out which of their friends have allowed their email to be visible to people/friends/public (not my app) and then get that email so I can give back a list of people they can connect to. So, let's say you have 12 on facebook, and of those 12, 4 have allowed anyone to see their email. I want to give you those 4 emails because they have allowed them to be publicly/friend visible. Other than those, I'd have to set up a custom "request permission to know your email" kind of thing. It seems as though it's not possible, but I just wanted to make sure.
true
6,755,679
1.2
0
0
0
We don't allow access to friend email addresses for obvious privacy reasons. You have to explicitly ask for this from each user.
0
226
0
1
2011-07-20T00:21:00.000
python,facebook-graph-api
Facebook Graph API: Have extended permission, need something odd
1
1
1
6,770,444
0
0
0
I want to use Python in combination with tcpdump to put the stream generated by tcpdump into a more readable form. There are some fields in the stream with interesting values. I found some material here on Stack Overflow regarding Python and tcpdump, but in my mind the best way is to put it into an array. Once that is done I want to read some fields out of the array, and then it can be cleared and used for the next network frame. Can somebody give me some hints on how this can be done?
false
6,791,916
0
0
0
0
You can try using Scapy sniffing function sniff, append the captured packets to a list and do your extraction process.
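A small sketch of the Scapy approach this answer mentions; capturing requires root privileges (like tcpdump itself), and the filter and packet count are arbitrary:

    from scapy.all import sniff

    packets = sniff(filter="udp port 53", count=20)   # returns a list-like PacketList

    for pkt in packets:
        print(pkt.summary())                          # extract whatever fields you need here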
0
476
0
0
2011-07-22T15:00:00.000
python,stream
put stream from tcpdump into an array - which python version should i use?
1
1
2
17,312,308
0
0
0
I have written a custom assert function in user-extensions.js that uses a custom locator function - also implemented in user-extensions.js - to locate a particular element on a page. Without getting into details; I need the custom locator function because I'm trying to locate an element in a different namespace on the page - something Selenium doesn't seem to support natively. When calling the custom assert function from Selenium IDE, it uses the custom locator function "implicitly" to lookup the element I'm looking for, does the assert and everything works fine. With "implicitly" I mean that I call the assert function with a Target "abc=..." and Selenium IDE then knows that should use the locateElementByAbc locator function to lookup the particular element. However, when calling the same custom assert function from Selenium RC (Python), using the do_command function in Python, the custom locator function is apparently not called implicitly, and the element is not found. How do I make my Selenium RC Python script to use the locateElementByAbc function? Note that the user-extensions.js is loaded when starting the Selenium server, so that part is working. The assert function can also be called using the Selenium do_command function from Python. Thanks for your help, it is well appreciated!
false
6,805,180
0
0
0
0
The problem is most likely that your custom locator function (LocateElementByMyprefix()) is not being registered. Selenium RC does user-extension.js setup just a little differently from Selenium IDE, and the timing can get in the way. Try calling selenium.browserbot._registerAllLocatorFunctions() after your function is defined.
0
592
0
0
2011-07-24T05:53:00.000
python,selenium,selenium-rc,locate,user-extensions.js
Calling custom assert function from Selenium RC (Python) with a custom locator function in user-extensions.js
1
1
1
6,819,144
0
1
0
I'm using jQuery's $.ajax() to send Ajax request to server and bring back a large chunk of HTML content to replace the content of a div. Everything works fine except for the problem that while the div is being updated, the page is kind of frozen (even the vertical scroll bar is not draggable). It comes back to normal after the div is updated. Anyone knows if this is desired behavior? (yes, the HTML content is a little big but not super big) Thanks
false
6,824,062
0
0
0
0
Try to implement a loader with a freezing background.
0
510
0
0
2011-07-26T00:39:00.000
javascript,jquery,python,ajax
Page scroll bar freezes while div is updated by Ajax responseText
1
2
2
6,824,093
0
1
0
I'm using jQuery's $.ajax() to send Ajax request to server and bring back a large chunk of HTML content to replace the content of a div. Everything works fine except for the problem that while the div is being updated, the page is kind of frozen (even the vertical scroll bar is not draggable). It comes back to normal after the div is updated. Anyone knows if this is desired behavior? (yes, the HTML content is a little big but not super big) Thanks
false
6,824,062
0
0
0
0
Can you link your ajax code? I think you are doing a synchronous call; try setting async: true. Without the code, that's all I can think of.
0
510
0
0
2011-07-26T00:39:00.000
javascript,jquery,python,ajax
Page scroll bar freezes while div is updated by Ajax responseText
1
2
2
6,825,203
0
1
0
I have this problem: I need to scrape lots of different HTML data sources, each of which contains a table with lots of rows, for example country name, phone number, price per minute. I would like to build a semi-automatic scraper which will try to find the right table in the HTML page automatically (probably by searching the text for some sample data and trying to find the common HTML element that contains both), extract the rows (by looking at the above two elements and selecting the same pattern), identify which column contains what (by using some fuzzy algorithm to best-guess which column is which), and export it to some Python/other list, cleaning everything. Does this look like a good design? What tools would you choose to do it with if you program in Python?
false
6,852,061
0.379949
0
0
4
"Does this look like a good design?" No. "What tools would you choose to do it with if you program in Python?" Beautiful Soup. "Find the right table in the HTML page automatically, probably by searching the text for some sample data and trying to find the common HTML element that contains both." Bad idea. A better idea is to write a short script to find all tables and dump each table plus the XPath to it; a person then looks at the table and copies the XPath into a script. "Extract the rows by looking at the above two elements and selecting the same pattern." Bad idea. A better idea is to write a short script to find all tables and dump them with their headings; a person looks at the table and configures a short block of Python code to map the table columns to data elements in a namedtuple. "Identify which column contains what by using some fuzzy algorithm to best-guess which column is which." A person can do this trivially. "Export it to some Python/other list, cleaning everything." Almost a good idea: a person picks the right XPath to the table and writes a short snippet of code to map column names to a namedtuple; given these parameters, a Python script can then get the table, map the data and produce some useful output. Why include a person? Because web pages are filled with notoriously bad errors. After having spent the last three years doing this, I'm pretty sure that fuzzy logic and magical "trying to find" and "selecting the same pattern" aren't a good idea and don't work. It's easier to write a simple script to create a "data profile" of the page, and a simple script that reads a configuration file and does the processing.
0
1,205
0
1
2011-07-27T22:35:00.000
python
Smart Automatic Scraping
1
1
2
6,853,367
0
1
0
Is there a recommendable library to parse form data and/or multi-part requests with Python and WSGI?
true
6,869,107
1.2
0
0
1
You can use cgi.FieldStorage to parse form posts etc. You may be better off though using Werkzeug/Flask which has its own implementation which works a lot better, plus you get higher level stuff which makes things a lot easier.
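A minimal WSGI sketch of the cgi.FieldStorage approach from this answer; note that the cgi module is deprecated (and removed in the newest Python releases), and the "name" form field is a placeholder:

    import cgi

    def application(environ, start_response):
        # Parses both urlencoded and multipart/form-data POST bodies.
        form = cgi.FieldStorage(fp=environ["wsgi.input"],
                                environ=environ,
                                keep_blank_values=True)
        name = form.getfirst("name", "")
        start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
        return [("Hello, %s" % name).encode("utf-8")]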
0
1,493
0
2
2011-07-29T05:42:00.000
python,wsgi,multipartform-data,form-data
Parsing form data / multi-part requests with Pythons WSGI
1
1
1
6,909,531
0
1
0
I have some random HTML and I used BeautifulSoup to parse it, but in most of the cases (>70%) it chokes. I tried using Beautiful soup 3.0.8 and 3.2.0 (there were some problems with 3.1.0 upwards), but the results are almost same. I can recall several HTML parser options available in Python from the top of my head: BeautifulSoup lxml pyquery I intend to test all of these, but I wanted to know which one in your tests come as most forgiving and can even try to parse bad HTML.
false
6,870,446
0.049958
0
0
1
If BeautifulSoup doesn't fix your HTML problem, the next best solution would be regular expressions. lxml, elementtree and minidom are very strict in parsing, and actually they are right to be. Other tips: I feed the HTML to the lynx browser through the command prompt, take out the text version of the page/content and parse that with regexes. Converting HTML to text or HTML to Markdown strips all the HTML tags and leaves you with plain text, which is easy to parse.
0
1,484
0
4
2011-07-29T08:22:00.000
python,html-parsing,beautifulsoup,lxml,pyquery
What’s the most forgiving HTML parser in Python?
1
2
4
6,870,510
0
1
0
I have some random HTML and I used BeautifulSoup to parse it, but in most of the cases (>70%) it chokes. I tried using Beautiful soup 3.0.8 and 3.2.0 (there were some problems with 3.1.0 upwards), but the results are almost same. I can recall several HTML parser options available in Python from the top of my head: BeautifulSoup lxml pyquery I intend to test all of these, but I wanted to know which one in your tests come as most forgiving and can even try to parse bad HTML.
true
6,870,446
1.2
0
0
2
I ended up using BeautifulSoup 4.0 with html5lib for parsing, which is much more forgiving; with some modifications to my code it's now working considerably well. Thanks all for the suggestions.
0
1,484
0
4
2011-07-29T08:22:00.000
python,html-parsing,beautifulsoup,lxml,pyquery
What’s the most forgiving HTML parser in Python?
1
2
4
6,896,409
0
0
0
I'm using the Python Requests library to make HTTP requests. I obtain a cookie from the server as text. How do I turn that into a CookieJar with the cookie in it?
false
6,878,418
0.066568
0
0
3
Well, cookielib.LWPCookieJar has load and save methods on it. Look at the format and see if it matches the native cookie format. You may well be able to load your cookie straight into a cookie jar using StringIO. Alternatively, if Requests is using urllib2 under the hood, you could add a cookie handler to the default opener.
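One way to turn a raw cookie string into something a requests session will send, sketched with the standard library's cookie parser rather than the LWPCookieJar load/save route this answer mentions; the cookie text and domain are made up:

    from http.cookies import SimpleCookie
    import requests

    raw = "sessionid=abc123; Path=/; Domain=example.com"

    parsed = SimpleCookie()
    parsed.load(raw)

    session = requests.Session()
    for name, morsel in parsed.items():
        session.cookies.set(name, morsel.value,
                            domain=morsel["domain"] or "example.com",
                            path=morsel["path"] or "/")

    # Requests made through `session` now carry the cookie automatically.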
0
74,064
0
43
2011-07-29T20:02:00.000
python,cookies,python-requests,httprequest
Putting a `Cookie` in a `CookieJar`
1
1
9
6,929,808
0
0
0
I'm porting one of my projects from C# and am having trouble solving a multithreading issue in Python. The problem relates to a long-lived HTTP request, which is expected (the request will respond when a certain event occurs on the server). Here's the summary: I send the request using urllib2 on a separate thread. When the request returns or times out, the main thread is notified. This works fine. However, there are cases where I need to abort this outstanding request and switch to a different URL. There are four solutions that I can consider: Abort the outstanding request. C# has WebRequest.Abort(), which I can call cross-thread to abort the request. Python urllib2.Request appears to be a pure data class, in that instances only store request information; responses are not connected to Request objects. So I can't do this. Interrupt the thread. C# has Thread.Interrupt(), which will raise a ThreadInterruptedException in the thread if it is in a wait state, or the next time it enters such a state. (Waiting on a monitor and file/socket I/O are both waiting states.) Python doesn't seem to have anything comparable; there does not appear to be a way to wake up a thread that is blocked on I/O. Set a low timeout on the request. On a timeout, check an "aborted" flag. If it's false, restart the request. Similar to option 3, add an "aborted" flag to the state object so that when the request does finally end in one way or another, the thread knows that the response is no longer needed and just shuts itself down. Options 3 and 4 seem to be the only ones supported by Python, but option 3 is a horrible solution and 4 will keep open a connection I don't need. I am hoping to be a good netizen and close this connection when I no longer need it. Is there any way to actually abort the outstanding request, one way or another?
false
6,916,976
0.049958
0
0
1
Similar to Spike Gronim's answer, but even more heavy handed. Consider rewriting this in twisted. You probably would want to subclass twisted.web.http.HTTPClient, in particular implementing handleResponsePart to do your client interaction (or handleResponseEnd if you don't need to see it before the response ends). To close the connection early, you just call the loseConnection method on the client protocol.
0
799
0
4
2011-08-02T18:45:00.000
python,multithreading
Aborting HTTP request cross-thread
1
1
4
6,917,498
0
0
0
I've written a simple client and a server. They can both be configured to use SSL or not; you can set it up in the client and in the server. My problem is that if I try to connect without SSL to a server set up with SSL, the connection is made but gets stuck (of course, that is normal). How can my client know that it is trying to connect without SSL to a server using SSL, and vice versa? The best solution would be for my client to autodetect whether the server uses SSL or not and do the proper connect (TCP or SSL). Thanks in advance for any answer =)
true
6,925,122
1.2
0
0
5
Use a different port number for SSL connections. This is how HTTP / HTTPS are used. Or Define a command in your protocol to transform the connection into SSL e.g. STARTTLS, based on a capability negotiation. This is one way the same thing is achieved in SMTP.
0
623
0
1
2011-08-03T10:35:00.000
python,ssl,tcp,twisted
How to know if my server is using SSL or not (with Twisted)
1
2
2
6,925,249
0
0
0
I've written a simple client and a server. They can both be configured to use SSL or not; you can set it up in the client and in the server. My problem is that if I try to connect without SSL to a server set up with SSL, the connection is made but gets stuck (of course, that is normal). How can my client know that it is trying to connect without SSL to a server using SSL, and vice versa? The best solution would be for my client to autodetect whether the server uses SSL or not and do the proper connect (TCP or SSL). Thanks in advance for any answer =)
false
6,925,122
0
0
0
0
The best solution will be that my client autodetect if server use SSL or not and do the proper connect(TCP or SSL). Not possible. The first event in an SSL connection is a message from the client, not from the server.
0
623
0
1
2011-08-03T10:35:00.000
python,ssl,tcp,twisted
How to know if my server is using SSL or not (with Twisted)
1
2
2
6,925,332
0
0
0
I am new to Python and would like to write a script to change Windows proxy settings based on the network I am connected to. Is there any existing python module I can use? Appreciate your help. Thanks, Sethu
false
6,935,796
0
0
0
0
Can't you set the HTTP_PROXY environment variable in Windows (either manually or from within your program) for your application before sending the request? That should ensure that any request you send via urllib2 goes through the proxy.
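A sketch of the environment-variable route suggested above, shown with urllib from the standard library; the proxy address is a placeholder. On Windows, urllib's getproxies() also consults the registry's Internet Settings, which touches the original question about reading the system proxy configuration:

    import os
    import urllib.request

    os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"    # affects this process only
    os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

    print(urllib.request.getproxies())   # on Windows this also reflects registry settings

    # urlopen() picks the proxy up from the environment automatically:
    # urllib.request.urlopen("http://example.com/").read()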
0
1,602
0
3
2011-08-04T03:14:00.000
python,windows,proxy
Which Python module to use to access the Proxy setting of Windows 7?
1
1
2
6,935,860
0
0
0
Maybe a silly question, but I usually learn a lot from those. :) I'm working on software that deals a lot with XML, both as input and as output, and a lot of processing occurs in between. My first thought was to use a dict as the internal data structure and, from there, work my way through a process of reading and writing it. What do you guys think? Any better approach, Python-wise?
false
6,957,355
0
0
0
0
It depends completely on what type of data you have in XML, what kinds of processing you'll need to do with it, what sort of output you'll need to produce from it, etc.
1
95
0
1
2011-08-05T13:48:00.000
python
Ideal Data Structure to Deal with XML Data
1
1
2
6,957,382
0
0
0
Is there a way to access the screen of a stock version of a headless VirtualBox 4.x remotely using RDP with Python or access it using the VNC protocol? I want to be able to access the boot screen (F12), too, so I cannot boot a VNC server in the Guest as the Guest is not yet booted. Note that I already have an RFB version in pure Python, however stock VirtualBox seems not to support VNC style remote connections, OTOH I somehow was unable to find a Python RDP library, sadly. What I found so far but I do not want to use: A Java RDP client, however I do not want to switch horses, so I want to keep it Python VirtualBox API seems to provide Python with access to the framebuffer, but I am not completely sure. However this then is bound to VirtualBox only, an RDP library (or letting VB talk RFB) would be more generic. Notes: So what I need either is a way to add VNC/RFB support to an original VirtualBox (.vbox-extpack?) or find some RDP library written in pure Python. It must be available on at least all platforms for which VirtualBox is available. If neither is possible, I think I will try the VirtualBox API in Python.
false
6,975,995
0
0
0
0
Have you considered Jython, which ought to be able to integrate natively with the Java library you already have?
0
1,273
0
2
2011-08-07T21:46:00.000
python,virtualbox,rdp
How to connect Python to VirtualBox using RDP or RFB?
1
1
2
7,852,841
0
1
0
I'd like to: Send request from the browser to a python CGI script This request will start some very long operation The browser should check progress of that operation every 3 seconds How can I do that?
false
6,983,622
0.099668
1
0
1
You could do this: use JavaScript to check the progress every 3 seconds. Your Python script, hit at any time, should return the progress at that instant; it should essentially act as a state machine. You need a server here, or maybe the progress is stored in a file, but having a server is the way to go... Maybe the long-running process stores the progress in a MySQL table. Hope this helps...
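A sketch of the server side of that polling scheme: a tiny CGI script that reports progress written elsewhere by the long-running job (the /tmp/job_progress.txt location is made up; the browser would poll this URL every 3 seconds with JavaScript):

    #!/usr/bin/env python3
    import json

    try:
        with open("/tmp/job_progress.txt") as fh:    # the worker writes 0-100 here
            percent = int(fh.read().strip() or 0)
    except (OSError, ValueError):
        percent = 0

    print("Content-Type: application/json")
    print()                                          # blank line ends the CGI headers
    print(json.dumps({"progress": percent}))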
0
345
0
0
2011-08-08T14:25:00.000
python,ajax,cgi
Python with CGI - handle ajax
1
1
2
6,983,929
0
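To make the polling answer above concrete, here is a minimal sketch of a CGI progress endpoint. It is only an illustration: the progress-file path, the JSON shape, and the assumption that the long-running job writes a plain percentage to that file are all hypothetical, not taken from the question.

    #!/usr/bin/env python
    # Hypothetical progress endpoint for the Ajax polling described above.
    # The long-running job is assumed to write its percent-complete (an
    # integer) to /tmp/job_progress.txt; this script just reports it as JSON
    # so the browser can poll it every few seconds.
    import json

    def read_progress(path="/tmp/job_progress.txt"):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except (IOError, ValueError):
            return 0  # no progress recorded yet

    print("Content-Type: application/json")
    print("")
    print(json.dumps({"percent": read_progress()}))

The JavaScript side would simply setInterval() an XMLHttpRequest to this script every 3 seconds and update the page with the returned percentage.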
0
0
I have written an application which uses Twitter authentication. Authentication process: get the request token, authorize the token, get the access key. Now that I have the access key, I can use the Twitter API. I stored the access key pair in the database for further use. But the next time the user logs in via Twitter, since I already have the access key pair, I don't want to go through authorization again. How can I do that?
false
6,998,046
0.197375
1
0
1
You mean you're using the "log in via Twitter" hack? Well, if you want a "remember me" option you can build one just like normal (store some session info in a cookie; all security caveats apply). If they actually are logged out and there's no cookie, then you have to go through the whole process again anyway. The authorize step will most likely just redirect them right back, since they have already authorized your app.
0
151
0
1
2011-08-09T14:44:00.000
python,api,twitter
twitter authentication issue using oauth?
1
1
1
6,998,115
0
0
0
I'm primarily using Drupal and am considering moving away from CMS. If I were to build my own platform, could I integrate modules like commenting systems, user login, etc. through a PHP/Python API? What would be the proper steps/good places to look/good tutorials on this? Would I have to build all of my own tables manually to suit the needs of such custom modules? I'm wondering if this would even be possible without having to hard-code all of this by hand? Thank you.
false
7,006,013
0
1
0
0
First of all, if you are considering moving away from a CMS then you should consider using some sort of framework, but with time you will come to the conclusion that you need your own stack in order to be satisfied. Second, the subject you are trying to decipher is a little bit more complex than can be written down here. I would suggest you first think about what you need. What is your main goal, or what are you trying to accomplish? For example, for commenting, to be honest neither PHP nor Python is a masterpiece. Why not consider Node.js for that? I mean, the web is becoming more and more real-time. Nowadays we have scripts, or to be more precise pieces of art, such as Socket.IO which, with the help of Node.js, can handle a large amount of traffic without any problem, something neither Python nor PHP can do. Some stuff you will need to code by yourself, but most of the time you just need to code the "architecture link" between one versus many features, e.g. take some code and adjust it so it can be used from your own framework or whatever. As far as I see it, I like to do all the crucial parts myself, but for example there is Zend Framework and you could use its ACL + Auth libraries and start from there. Hope this makes some sense. Cheers!
0
152
0
1
2011-08-10T04:46:00.000
php,python
Are there comment system/user loggin type modules through PHP API/Python API outside of CMS?
1
1
3
7,006,100
0
1
0
I'm new to Python so I'm sorry if this is a newbie question. I'm trying to build a program involving webscraping and I've noticed that Python 3 seems to have significantly fewer web-scraping modules than the Python 2.x series. Beautiful Soup, mechanize, and scrapy -- the three modules recommended to me -- all seem to be incompatible. I'm wondering if anyone on this forum has a good option for webscraping using python 3. Any suggestions would be greatly appreciated. Thanks, Will
false
7,019,350
0.291313
0
0
3
lxml.html works on Python 3, and gets you html parsing, at least. BeautifulSoup 4, which is in the works, should support Python 3 (I've done some work on this).
0
1,521
0
5
2011-08-10T23:54:00.000
python-3.x,web-scraping
Python 3 web scraping options
1
1
2
7,033,832
0
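As a small illustration of the lxml.html suggestion above under Python 3 (the URL and the XPath expression are just examples, not taken from the question):

    # Fetch a page and extract data with lxml.html under Python 3.
    from urllib.request import urlopen
    import lxml.html

    html = urlopen("http://example.com/").read()
    doc = lxml.html.fromstring(html)
    links = doc.xpath("//a/@href")     # XPath queries work as usual
    print(links)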
0
0
Is it possible to retrieve messages from an SQS queue during their visibility timeout if you don't have the message id? I.e. something along the lines of "get invisible messages" or "get messages currently being processed". I'm working with large timeouts and sometimes I'd like to be able to inspect the queue to see what's left. I'd rather not have to wait for a 5 hour timeout to expire. I'm working with boto in python. Thanks.
false
7,044,319
0.379949
1
0
2
As far as I know there is no way of doing this. Seeing as your processing code should only take what it needs off the queue and nothing more it doesn't seem like you would ever need to do this. Do your jobs actually take 5hrs to complete? I assume based on "I'm working with large timeouts" they do but if not you can set the expiration to make it a shorter period.
0
1,141
0
3
2011-08-12T17:58:00.000
python,amazon-web-services,parallel-processing,boto,amazon-sqs
Is it possible to retrieve messages from SQS while they're invisible?
1
1
1
7,045,553
0
1
0
I want to be able to identify an audio sample (provided by the user) in an audio file I've got (mp3). The mp3 file is a radio stream that I've kept for testing purposes, and I have the pre-roll of the show. I want to identify it in the file and get the timestamp where it plays in the file. Note: the solution can be in any of the following programming languages: Java, Python or C++. I don't know how to analyze the audio file, and any reference on this subject will help.
false
7,052,169
0.197375
1
0
2
I'd start by computing the FFT spectrogram of both the haystack and needle files (so to speak). Then you could try and (fuzzily) match the spectrograms - if you format them as images, you could even use off-the-shelf algorithms for that. Not sure if that's the canonical or optimal way, but I feel like it should work.
0
573
0
2
2011-08-13T17:37:00.000
java,c++,python,audio,signal-processing
Identify audio sample in a file
1
1
2
7,052,252
0
0
0
I am very new to programming, and have had no formal training in it before so please bear with me if this is a vague question. I was just curious: how do different programs on the same computer communicate with each other? From my programming experience I believe it can be achieved by socket programming? Thanks
false
7,053,905
0.066568
0
0
1
Sockets, shared memory, events / signals, pipes, semaphores, message queues, mailslots. Just search the Internet for either.
1
650
0
6
2011-08-13T23:19:00.000
java,c++,python,communication
Communicating between applications?
1
1
3
7,053,927
0
0
0
I've been struggling along with sockets, making OK progress, but I keep running into problems, and feeling like I must be doing something wrong for things to be this hard. There are plenty of tutorials out there that implement a TCP client and server, usually where: The server runs in an infinite loop, listening for and echoing back data to clients. The client connects to the server, sends a message, receives the same thing back, and then quits. That I can handle. However, no one seems to go into the details of what you should and shouldn't be doing with sequential communication between the same two machines/processes. I'm after the general sequence of function calls for doing multiple messages, but for the sake of asking a real question, here are some constraints: Each event will be a single message client->server, and a single string response. The messages are pretty short, say 100 characters max. The events occur relatively slowly, max of say, 1 every 5 seconds, but usually less than half that speed. and some specific questions: Should the server be closing the connection after its response, or trying to hang on to the connection until the next communication? Likewise, should the client close the connection after it receives the response, or try to reuse the connection? Does a closed connection (either through close() or through some error) mean the end of the communication, or the end of the life of the entire object? Can I reuse the object by connecting again? Can I do so on the same port of the server? Or do I have reinstantiate another socket object with a fresh call to socket.socket()? What should I be doing to avoid getting 'address in use' errors? If a recv() times out, is the socket reusable, or should I throw it away? Again, can I start a new connection with the same socket object, or do I need a whole new socket?
true
7,062,317
1.2
0
0
2
If you know that you will communicate between the two processes again soon, there is no need to close the connection. If your server has to deal with other connections as well, you want to make it multithreaded, though. The same. You know that both have to do the same thing, right? You have to create a new socket on the client, and you also cannot reuse the socket on the server side: you have to use the new socket returned by the next (clientsocket, address) = serversocket.accept() call. You can use the same port. (Think of webservers: they always accept connections on the same port, from thousands of clients.) In both cases (closing or not closing), you should however have a message termination sign, for example a \n. Then you have to read from the socket until you have reached the sign. This usage is so common that Python has a construct for it: socket.makefile and file.readline. (A sketch of this pattern follows below.) UPDATE: Post the code. Probably you have not closed the connection correctly. You can call recv() again. UPDATE 2: You should never assume that the connection is reliable; include mechanisms to reconnect in case of errors. Therefore it is OK to try to use the same connection even if there are longer gaps. As for the errors you get: if you need specific help with your code, you should post small (but complete) examples.
0
643
0
3
2011-08-15T06:46:00.000
python,sockets
What is the correct procedure for multiple, sequential communications over a socket?
1
1
1
7,062,480
0
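A minimal sketch of the newline-terminated request/response pattern recommended in the accepted answer above. The host, port and message content are placeholders; the point is only the '\n' terminator and reading until it arrives.

    import socket

    HOST, PORT = "127.0.0.1", 9000          # placeholder address

    def send_request(sock, message):
        # '\n' marks the end of a message so the peer knows when to stop reading
        sock.sendall(message.encode("utf-8") + b"\n")

    def recv_reply(sock):
        chunks = []
        while True:
            data = sock.recv(1024)
            if not data:
                break                        # peer closed the connection
            chunks.append(data)
            if data.endswith(b"\n"):
                break                        # terminator reached
        return b"".join(chunks).rstrip(b"\n").decode("utf-8")

    if __name__ == "__main__":
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((HOST, PORT))
        send_request(s, "hello")
        print(recv_reply(s))
        s.close()

The same socket can be reused for the next request as long as neither side has closed it; on errors or timeouts, simply reconnect with a fresh socket object.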
1
0
Is there a way to set custom headers of responses to static file requests? E.g. I'd want to set Access-Control-Allow-Origin: * when serving static files.
false
7,074,662
0.099668
0
0
1
You can't; the only thing you can do is to stream these static files yourself, adding the Access-Control-Allow-Origin header to the response. (A sketch of such a handler follows below.)
0
692
1
4
2011-08-16T07:05:00.000
python,google-app-engine
App Engine, Python: setting Access-Control-Allow-Origin (or other headers) for static file responses
1
1
2
7,075,300
0
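A sketch of the "stream it yourself" workaround from the answer above, using a webapp2 handler so the header can be attached. The route, directory name and content type are assumptions for illustration, and the files served this way must not be mapped as static-only in app.yaml, since static files are served outside your code.

    import webapp2

    class CORSFile(webapp2.RequestHandler):
        def get(self, name):
            # Attach the CORS header before writing the body
            self.response.headers['Access-Control-Allow-Origin'] = '*'
            self.response.headers['Content-Type'] = 'application/javascript'
            # Read the file from the application directory and stream it out
            # (a real handler should sanitize 'name' against path traversal)
            with open('assets/' + name) as f:
                self.response.write(f.read())

    app = webapp2.WSGIApplication([(r'/cors/(.+)', CORSFile)])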
0
0
I am trying to write a Python program that POSTs a build request to a Jenkins server over HTTPS. I have tried PycURL, which works well, but am now trying to replace that with standard library facilities. The revised program receives a 404 response from the server however, so I would like to inspect both versions of the program's actual POST requests to the server (with and without PycURL) to see what's different. Which tool can I use to capture my program's POST requests and analyze them?
false
7,075,312
0
0
0
0
Use a program that dumps the traffic, such as tcpdump or Wireshark.
0
1,792
0
0
2011-08-16T08:13:00.000
python,http,networking,http-post
How do I debug HTTPS post requests from my commandline program?
1
1
3
7,075,385
0
1
0
I love node.js, socket.io, the templating engines, etc: as a web framework, it's amazing. A lot of my back-end work is with NLP, Machine Learning, and Data Mining, for which there exist hundreds of rock-solid Python libraries, but no Javascript libraries. If I were using Django, I'd just import the libraries and chug away. What's the recommended approach for handling these complex tasks with node.js? Should I stick with Python web frameworks, is there a convention to dealing with these libraries, or some solution I'm missing?
true
7,091,583
1.2
0
0
2
I think you have answered your own question. Python has greater maturity and by your own admission has the libraries you require. Could you narrow down your requirements a bit more?
0
360
0
0
2011-08-17T10:51:00.000
python,node.js
Complex server logic and node.js
1
1
1
7,092,538
0
0
0
Firstly, sorry, I'm just a beginner. Every time I try to run/open Python, an IDLE subprocess error message comes up saying 'socket error: connection refused'. I'm not sure what to do about it, and because of this I can't use Python. Could anyone help me please?
false
7,092,639
0
0
0
0
You've probably got some kind of firewall in place that prevents IDLE from opening a local network connection. What kind of firewall are you using? PS - in a pinch, you can just run python.exe from where it is installed and use a standard text editor (such as Notepad++) to create new scripts and modules. Do you know how to set the PATH environment variable in Windows? Where did you install Python?
1
946
0
0
2011-08-17T12:18:00.000
python,python-idle
Python won't run everytime i try to run it, why is this?
1
2
2
7,092,786
0
0
0
Firstly, sorry, I'm just a beginner. Every time I try to run/open Python, an IDLE subprocess error message comes up saying 'socket error: connection refused'. I'm not sure what to do about it, and because of this I can't use Python. Could anyone help me please?
false
7,092,639
0
0
0
0
On Windows, you have to create a firewall rule for IDLE, since it uses a network connection for certain tasks. Unfortunately I could not find any documentation on this topic, but I think it can be read in the IDLE README.
1
946
0
0
2011-08-17T12:18:00.000
python,python-idle
Python won't run everytime i try to run it, why is this?
1
2
2
7,092,770
0
0
0
I'm using Fabric for my build script. I just cloned one of my VMs and created a new server. The Fabric script (which uses Paramiko underneath) works fine on one server but not the other. Since it's a clone I don't know what could be different, but every time I run my Fabric script I get the error "Error reading SSH protocol banner". This script is connecting with the same user on both servers. The script works fine on all other servers except this new one that I just cloned. The only thing that is radically different is the IP address, which is in a totally different range. Any ideas on what could be causing this?
false
7,206,272
0.291313
1
0
3
Try changing the banner timeout from 15 seconds to 30 secs in the transport.py file. Also, it could be that the sshd daemon on the server is hung. Can you SSH into it manually?
0
29,345
0
6
2011-08-26T14:33:00.000
python,linux,ssh,fabric,paramiko
Paramiko Error: Error reading SSH protocol banner
1
2
2
7,207,845
0
0
0
I'm using Fabric for my build script. I just cloned one of my VMs and created a new server. The Fabric script (which uses Paramiko underneath) works fine on one server but not the other. Since it's a clone I don't know what could be different, but every time I run my Fabric script I get the error "Error reading SSH protocol banner". This script is connecting with the same user on both servers. The script works fine on all other servers except this new one that I just cloned. The only thing that is radically different is the IP address, which is in a totally different range. Any ideas on what could be causing this?
true
7,206,272
1.2
1
0
8
This issue didn't lie with Paramiko, Fabric or the SSH daemon. It was simply a firewall configuration in the ISP's internal network. For some reason, they don't allow communication between different subnets of theirs. We couldn't really fix the firewall configuration, so instead we switched all our IPs to be on the same subnet.
0
29,345
0
6
2011-08-26T14:33:00.000
python,linux,ssh,fabric,paramiko
Paramiko Error: Error reading SSH protocol banner
1
2
2
7,252,752
0
0
0
The site I am developing supports sign-in with the default SimpleOpenIDSelector providers (the same providers that are listed on stackoverflow login page). While it works for simple sign-ins, my AX-required requests remain unfulfilled. For example, Blogger does not disclose first/last/friendly name through AX. How can I ensure that I get a string that is the user's preferred name? What are the possible workarounds or alternatives or standard methods of dealing with this? I am using the latest python-openid library.
true
7,218,916
1.2
0
0
1
You can not. Since SREG and AX are extensions to OpenID, you can't expect everyone to use it, and therefore you can't be sure that you will get any data back from a provider. The standard method of handling this is to have a normal registration form with the missing fields. Simply use the SREG/AX data from the provider as a convenience for the user (as if he entered the data manually in your registration form), not as something you should rely on.
0
97
0
1
2011-08-28T03:38:00.000
openid,python-openid
Reliable method of getting username or realname from OpenID
1
1
1
7,226,744
0
0
0
When I use Selenium WebDriver with a custom Firefox profile I get the Firefox Add-ons pop-up showing 2 extensions: Firefox WebDriver 2.5.0 and Ubuntu Firefox Modifications 0.9rc2. How can I get rid of this pop-up? I looked in the server jar to see if I could find the extensions, no luck. Looked online for the extensions, no luck. When I run the code without using the custom profile there is no pop-up.
false
7,233,631
0
0
0
0
Open Firefox with this profile (with profile manager), go to Firefox preferences, turn off updates - this works for me.
0
1,461
0
4
2011-08-29T17:33:00.000
python,firefox,selenium
Using Selenium webdriver with a custom firefox profile - how to get rid of addins popup
1
1
2
7,267,936
0
0
0
I wrote a TCP-based server with the twisted.internet module. It's a high-concurrency environment. I usually send data through an instance of protocol.Protocol, and I have a problem with that. Some of the TCP connections may be closed because of a timeout, and it seems I cannot get any notification, so the data I have written to the closed connection may be lost. The data loss may also be caused in some other way. Is there any good way to handle this? (socket.send returns a status, but transport.write seems to have no return value.)
true
7,237,996
1.2
0
0
4
This problem is not specific to Twisted. Your protocol must have some acknowledgement that data was received, if you want to know that it was received. The result from send() does not tell you that the data was authoritatively received by the peer; it just says that it was queued by the kernel for transport. From your application's point of view, it really doesn't matter whether the data was queued by Twisted, or by your C runtime, or by your kernel, or an intermediary downstream switch, or the peer's kernel, or whatever. Maybe it's sent, maybe it's not. Put another way, transport.write() takes care of additional buffering that send() doesn't, guaranteeing that it always buffers all of your bytes, whereas send() will only buffer some. You absolutely need to have an application-level acknowledgement message if you care about whether a network peer has seen your data or not.
0
531
1
4
2011-08-30T02:10:00.000
python,tcp,twisted
data loss problem of tcp protocol in twisted
1
1
1
7,238,064
0
0
0
I want to have my application read a document using xml.sax.parse. Things work fine but when I move the executable to a Windows server 2008 machine things break down. I get an SAXReaderNotAvailable exception with "No parsers found" message. The setup I'm using to build the executable is: 64 bit windows 7 Python 2.7.2 32-bit PyInstaller 1.5.1
true
7,241,240
1.2
0
0
0
The executable turned out to be fine. For some reason or other there were wrong versions of the needed DLLs in PATH, and the executable ended up trying to use those.
0
446
1
0
2011-08-30T09:36:00.000
python
How can I use xml.sax module on an executable made with PyInstaller?
1
1
2
7,305,695
0
1
0
I am trying to take updating weather data from a website that isn't mine and put a chunk of it into a generic text file every 30 minutes. The text file should not have any HTML tags or anything, but could be delimited by commas, periods or tabs. The website generating the data puts the data in a table with no class or id. What I need is the text from one tag and each of its individual tags within. The tag is on the same line number every time regardless of the updated data. This seems a bit of a silly challenge, as the method for getting the data doesn't seem ideal. I'm open to suggestions for different methods of getting an updated (hourly to twice-daily-ish) temperature/dewpoint/time/etc. data point and for it to be put in a text file. With regards to automating it every 30 minutes or so, I have an automation program that can download webpages at any time interval. I hope I was specific enough with this rather weird (to me at least) challenge. I'm not even sure where to start. I have lots of experience with HTML and basic knowledge of Python, JavaScript, PHP, and SQL, but I am open to taking code or learning the syntax of other languages.
false
7,249,412
0
0
0
0
For Python: for timed tasks every N minutes, create a UNIX cron job (or the Windows equivalent) which runs your .py script regularly. Download the weather data using the urllib2 module in the .py script. Parse the HTML using the BeautifulSoup or lxml libraries. Select the relevant bits of HTML using XPath selectors or CSS selectors (lxml). Process the data and write it to a text file. (A rough sketch follows below.) The actual implementation is left as an exercise to the reader :)
0
74
0
1
2011-08-30T20:30:00.000
php,python,html
Saving select updating data points from an external webpage to a text file
1
1
3
7,249,558
0
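A rough sketch of the pipeline from the answer above. The URL and the XPath expression are placeholders (the real page has an anonymous table, so the expression would be adjusted after inspecting the page source); the output is one tab-delimited line appended per run, and cron or the existing automation tool handles the 30-minute schedule.

    import urllib2
    import lxml.html

    URL = "http://example.com/weather"        # hypothetical page

    def fetch_row():
        html = urllib2.urlopen(URL).read()
        doc = lxml.html.fromstring(html)
        # e.g. take every cell of the second table on the page
        cells = doc.xpath("//table[2]//td/text()")
        return [c.strip() for c in cells]

    if __name__ == "__main__":
        with open("weather_log.txt", "a") as out:
            out.write("\t".join(fetch_row()) + "\n")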
0
0
I want to rename all files in a folder and add a .xml extension. I am using Unix. How can I do that?
false
7,253,198
0.099668
1
0
2
In Python: use os.listdir to find the names of all files in a directory. If you need to recursively find all files in sub-directories as well, use os.walk instead. Its API is more complex than os.listdir, but it provides powerful ways to recursively walk directories. Then use os.rename to rename the files. (A minimal sketch follows below.)
0
13,442
1
11
2011-08-31T06:11:00.000
php,python,linux,shell,unix
How to add .xml extension to all files in a folder in Unix/Linux
1
1
4
7,253,246
0
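The Python variant of the rename step described in the answer above; the directory path is a placeholder.

    import os

    folder = "/path/to/folder"                 # placeholder directory
    for name in os.listdir(folder):
        old = os.path.join(folder, name)
        # skip sub-directories and files that already have the extension
        if os.path.isfile(old) and not name.endswith(".xml"):
            os.rename(old, old + ".xml")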
0
0
I am trying to get some locations in New York using FourSquare API using the following API call: https://api.foursquare.com/v2/venues/search?ll=40.7,-74&limit=50 What I don't understand is that if the call imposes a limit of 50 search results (which is the maximum), how can I get more locations? When using Facebook API, the results being returned were random so I could issue multiple calls to get more results but FourSquare seems to be returning the same result set. Is there a good way to get more locations? EDIT: Ok. So there was a comment saying that I could be breaking a contractual agreement and I am not sure why this would be the case. I would gladly accept a reasoning for this. My doubt is this: Let us say that hypothetically, the location I am searching for is not in the 50 results returned? In that case, shouldn't there be a pagination mechanism somewhere?
false
7,303,309
0.099668
0
0
1
The current API limits results to 50. You should try altering your coordinates to be more precise to avoid not finding your venue. Pagination would be nice but 50 is a lot of venues for a search.
0
2,222
0
1
2011-09-05T02:02:00.000
javascript,python,http,foursquare
How do I get more locations?
1
1
2
7,354,603
0
1
0
StAX seems to be a pulling parser (like SAX but without inversion of control). But I didn't find the equivalent of python's expandNode which is what I was interested in in the first place, I don't care about inversion of control. For those who don't know pulldom, it's a S(t)AX parser where at any point you can obtain the current subtree as a DOM Node.
false
7,307,423
0.197375
0
0
1
The Java approach seems to be that you get either a streaming parser or a DOM parser, but not both, while Python allows mixing the two.
0
180
0
0
2011-09-05T11:30:00.000
java,python,xml,sax,stax
Is there an equivalent of python's pulldom for java?
1
1
1
7,309,549
0
0
0
I'm creating the front end for a web service, and another company is building the back end. I need a good, simple and easily understandable way of documenting the API calls so that we can collaborate on and edit the document together without confusing one another. Are there any good specs/examples etc. of project API documentation so this doesn't end up in a huge mess with many rewrites?
false
7,333,852
0
0
0
0
For small APIs I've begun to use Google Docs. Its collaboration features are awesome and you can see a list of all changes made to the document.
0
1,413
0
2
2011-09-07T12:24:00.000
javascript,python,ajax,api,specifications
Good examples/templates/best practices API documentation
1
1
2
7,333,988
0
0
0
I have an application which communicates over the local area network. However, I want to instead make it communicate over the internet. To do this I propose making an intermediate program which will read the network traffic generated from the application on one computer and send it to the application on another computer. This involves: Reading the outgoing network traffic of the application Sending a copy of this traffic over the internet to another computer Giving this copy to the application on the other computer Instead of this: Application on computer A <-LAN-> Application on computer B I want to achieve this: Application on A <--> My Program on A <-INTERNET-> My program on B <--> Application on B I can accomplish (2), but with (1) and (3) my problem is that I have very little experience with networking and I do not know where to start. I can program in python but would be willing to use c++ to accomplish this. (Hamachi does not work for this application, I do not know why.) In response to comments I do not intend to manipulate any data unless it is necessary to make the connection work. I have no control over the application itself and it does not provide me with any methods to configure the connection with the exception of a port number. TCP and UDP are both used on the port 6112. The IP addresses used are first 255.255.255.255 for a generic broadcast used to discover other applications on the LAN (with UDP), then a TCP connection is established.
false
7,337,966
0.099668
0
0
1
Why re-engineer the wheel? Why not just use OpenVPN, n2n or vtun etc etc.
0
1,131
0
0
2011-09-07T17:23:00.000
c++,python,proxy,network-programming
How to create a generic network proxy using Python or C++?
1
1
2
7,338,903
0
0
0
I would like to ask how I can get the list of URLs which are open in my web browser, for example in Firefox. I need it in Python. Thanks
false
7,358,224
0
0
0
0
First I'd check if the browser has some kind of command-line argument which could print such information. I only checked Opera and it doesn't have one. What you could do is parse the session file. I'd bet that every browser stores the list of opened tabs/windows on disk (so it can recover after a crash). Opera has this information in ~/.opera/sessions/autosave.win. It's a pretty straightforward text file. Find other browsers' session files in .mozilla, .google, etc., or if you are on Windows in the user profile directories. There might be commands to ask a running instance for its working directory (as you can specify it on startup and it doesn't have to be the default one). That's the way I'd go. Might be the wrong one.
0
1,872
0
1
2011-09-09T07:05:00.000
python
Get url from webbrowser in python
1
1
2
7,358,989
0
0
0
Is there any way to find out, in Python, whether an IP address connecting to the server is a proxy? I tried scanning the most common ports, but I don't want to ban all IPs with port 80 open, because they don't have to be proxies. Is there any way to do this in Python? I would prefer that over using external/paid services.
false
7,371,442
0.197375
1
0
1
If it's HTTP traffic, you can scan for headers like X-Forwarded-For, but whatever you do it will always be only a heuristic. (A tiny sketch follows below.)
0
328
0
2
2011-09-10T11:39:00.000
python,sockets,proxy
Python - Determine if ip is proxy or not
1
1
1
7,378,232
0
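To illustrate the header heuristic mentioned in the answer above (and only as a heuristic): the key names below are the usual CGI/WSGI spellings and would be adapted to however your framework exposes request headers.

    # Many, but by no means all, proxies add headers such as X-Forwarded-For or Via.
    PROXY_HEADERS = ("HTTP_X_FORWARDED_FOR", "HTTP_VIA", "HTTP_PROXY_CONNECTION")

    def looks_like_proxy(environ):
        """Return True if the request carries typical proxy headers."""
        return any(h in environ for h in PROXY_HEADERS)

    # e.g. in a CGI script: looks_like_proxy(os.environ)
    # or in WSGI:           looks_like_proxy(environ)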
0
0
conn = httplib.HTTPConnection(self.proxy), where self.proxy holds the destination IP and port. I want to make multiple connections from multiple source IP addresses to the destination. How do I specify the source IP when making the connect request? Please help me out. Thanks in advance.
false
7,387,190
0
0
0
0
I assume that you have multiple network connections on the same computer (i.e. a wired and a wireless connection) and you want to make sure that your connect() goes over a specific interface. In general you cannot do this. How your traffic is sent to a specific IP address, and therefore what source IP address it shows, is determined by your operating system's routing tables. As you haven't specified what operating system this is, I can't go into more detail. You may be able to do this using some of the more advanced routing configuration, but that's an operating-system-level problem and can't be done through Python.
0
2,170
0
0
2011-09-12T11:31:00.000
python,http,connect
Http connect request from multiple IP address to destination in python
1
2
2
7,387,467
0
0
0
conn = httplib.HTTPConnection(self.proxy), where self.proxy holds the destination IP and port. I want to make multiple connections from multiple source IP addresses to the destination. How do I specify the source IP when making the connect request? Please help me out. Thanks in advance.
false
7,387,190
0
0
0
0
I got a solution, but not 100%. Requirement: send requests from 10 IP addresses to one destination. Achieved this through the following API: class httplib.HTTPConnection(host[, port[, strict[, timeout[, source_address]]]]). Here we can use the last parameter, source_address, to set the source IP, e.g. httplib.HTTPConnection(dest_ip, dest_port, source_address=(src_ip, 0)). I created the connections in a for loop for 10 unique source IP addresses. Output: connected to the destination on 10 different port numbers but with the same IP address; I don't know why it happens like this. Problem solved. Thanks to all. (A hedged sketch follows below.)
0
2,170
0
0
2011-09-12T11:31:00.000
python,http,connect
Http connect request from multiple IP address to destination in python
1
2
2
7,399,178
0
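A hedged sketch of the source_address approach from the answer above (available in Python 2.7+ httplib). Note that source_address must be a (host, port) tuple and is safest passed as a keyword argument, since the third positional parameter of HTTPConnection is strict, not the source IP. All addresses are placeholders, and each local IP must actually be assigned to an interface on the machine for the bind to succeed.

    import httplib

    DEST_HOST, DEST_PORT = "192.168.1.5", 8080          # placeholder destination
    local_ips = ["192.168.1.101", "192.168.1.102"]       # placeholder source IPs

    connections = []
    for ip in local_ips:
        # Port 0 lets the OS pick an ephemeral local port for each connection
        conn = httplib.HTTPConnection(DEST_HOST, DEST_PORT,
                                      timeout=10, source_address=(ip, 0))
        conn.request("GET", "/")
        connections.append(conn)

    for conn in connections:
        print(conn.getresponse().status)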
0
0
I am basically new to this kind of work.I am programming my application in C# in VS2010.I have a crystal report that is working fine and it basically gets populated with some xml data. That XMl data is coming from other application that is written in Python on another machine. That Python script generates some data and that data is put on the memory stream. I basically have to read that memory stream and write my xml which is used to populate my crystal report. So my supervisor wants me to use remote procedure call. I have never done any remote procedure calling. But as I have researched and understood. I majorly have to develop a web or WCF service I guess. I don't know how should I do it. We are planning to use the http protocol. So, this is how it is supposed to work. I give them the url of my service and they would call that service and my service should try to read the data they put on the memory stream. After reading the data I should use part of the data to write my xml and this xml is used to populate my crystal report. The other part of the data ( other than the data used to write the xml) should be sent to a database on the SQl server. This is my complete problem definition. I need ideas and links that will help me in solving this problem.
false
7,392,676
0
1
0
0
As John wrote, you're quite late if it's urgent, and your description is quite vague. There are 1001 RPC techniques and the choice depends on the details. But taking into account that you seem to just exchange some XML data, you probably don't need a full RPC implementation. You can write an HTTP server in Python with just a few lines of code. If it needs to be a bit more stable and long-running, have a look at Twisted. Then just use plain HTTP and the WebClient class on the C# side. Not a perfect solution, but it has worked out quite well for me more than once. And you said it's urgent! ;-)
0
779
0
0
2011-09-12T19:05:00.000
c#,python,web-services,rpc
Remote procedure call in C#
1
1
1
7,392,759
0
0
0
Most of my job is on a Citrix ICA app. I work in a Windows environment. Among other things, I have to print 300 reports from my app weekly, and I am trying to automate this task. I was using a screenshot automation tool called Sikuli, but it is not portable from station to station. I thought I might be able to inject packets and send the commands at that level, but I was not able to read the packets I captured with Wireshark or do anything sensible with them. I have experience with Python, and if I get pointed in the right direction I am pretty sure I can pull something off. Does anyone have any ideas on how to do this (I am leaning towards packet injection at the moment, but am open to ideas)? Thanks for the help, Sam
false
7,398,343
0
1
0
0
After a lot of research: it can't be done at the packet level. Some manipulation is possible, like changing window focus, with the ICA COM object.
0
978
0
0
2011-09-13T07:34:00.000
python,automation,citrix,packet-injection
citrix GUI automation or packet injection?
1
1
1
10,010,649
0
1
0
I'm trying to use CherryPy for a simple website, having never done Python web programming before. I'm stuck trying to allow the download of a file that is dynamically created. I can create a file and return it from the handler, or call serve_fileobj() on the file, but in either case the contents of the file are simply rendered to the screen, rather than downloaded. Does CherryPy offer any useful methods here? How can this be accomplished?
true
7,400,601
1.2
0
0
2
Add a 'Content-Disposition: attachment; filename="<file>"' header to the response. (A minimal example follows below.)
0
1,404
0
1
2011-09-13T10:47:00.000
python,cherrypy
How to allow dynamically created file to be downloaded in CherryPy?
1
1
3
7,401,164
0
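A minimal CherryPy handler illustrating the Content-Disposition answer above. The file name and CSV content are made up; the real handler would build whatever file the application generates.

    import cherrypy

    class Root(object):
        @cherrypy.expose
        def report(self):
            data = "generated,on,the,fly\n1,2,3,4\n"       # build content here
            cherrypy.response.headers['Content-Type'] = 'text/csv'
            cherrypy.response.headers['Content-Disposition'] = (
                'attachment; filename="report.csv"')
            return data

    if __name__ == "__main__":
        cherrypy.quickstart(Root())

The attachment disposition is what makes the browser download the body instead of rendering it in the window.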
0
0
I am using Buildbot (a system for CI) and have one problem: how can I send the parameters of a Change to all builders? I want to use the comments and who properties of the Change object. Thanks
true
7,417,518
1.2
1
0
0
I found the answer: inherit from BuildStep and use self.build.allChanges(), and to set a property use self.setProperty().
0
587
0
1
2011-09-14T13:56:00.000
python,continuous-integration,buildbot
Buildbot properties from changes to all build
1
1
1
7,442,406
0
1
0
I'm trying to parse a ODF-document with xml.dom.minidom. I would like to get all elements that are text:p OR text:h. Seems like there would be a way to add a wildcard in the getElementsByTagName method. Or is it? Is there a better way to parse a odf-document without uno?
false
7,421,351
0
0
0
0
As getElementsByTagName returns a DOM element list, you could simply concatenate the two lists. (A small example follows below.) Alternatively, XPath supports and/or operators, so you could use that; it would require using the ElementTree or lxml modules instead.
0
897
0
0
2011-09-14T18:42:00.000
python
Wildcards in getElementsByTagName (xml.dom.minidom)
1
1
2
7,421,454
0
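A small example of the list-concatenation suggestion above, reading content.xml straight out of the ODF package (an .odt file is a zip archive); the file name is a placeholder.

    import zipfile
    from xml.dom import minidom

    with zipfile.ZipFile("document.odt") as odf:          # placeholder file
        doc = minidom.parse(odf.open("content.xml"))      # the document body

    # getElementsByTagName has no wildcard, so query twice and concatenate
    blocks = list(doc.getElementsByTagName("text:p")) + \
             list(doc.getElementsByTagName("text:h"))
    for node in blocks:
        print(node.toxml())

Note that the concatenated result is grouped by tag, not in document order; if interleaved order matters, walking the tree (or using lxml with an XPath union) would be the way to go.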
0
0
I'm working with mobile, so I expect network loss to be common. I'm doing payments, so each request matters. I would like to be able to test my server to see precisely how it will behave with client network loss at different points in the request cycle -- specifically between any given packet send/receive during the entire network communication. I suspect that the server will behave slightly differently if the communication is lost while sending the response vs. while waiting for a FIN-ACK, and I want to know which timings of disconnections I can distinguish. I tried simulating an http request using scapy, and stopping communication between each TCP packet. (I.e.: first send SYN then disappear; then send SYN and receive SYN-ACK and then disappear; then send SYN and receive SYN-ACK and send ACK and then disappear; etc.) However, I quickly got bogged down in the details of trying to reproduce a functional TCP stack. Is there a good existing tool to automate/enable this kind of testing?
true
7,424,349
1.2
0
0
2
Unless your application is actually responding to and generating its own IP packets (which would be incredibly silly), you probably don't need to do testing at that layer. Simply testing at the TCP layer (e.g, connect(), send(), recv(), shutdown()) will probably be sufficient, as those events are the only ones which your server will be aware of.
0
435
1
3
2011-09-14T23:54:00.000
python,http,tcp,network-programming,scapy
How to test server behavior under network loss at every possible packet
1
1
1
7,424,772
0
1
0
here is the sample code : #!/usr/bin/env python # Sample Python client accessing JIRA via SOAP. By default, accesses # http://jira.atlassian.com with a public account. Methods requiring # more than basic user-level access are commented out. Change the URL # and project/issue details for local testing. # # Note: This Python client only works with JIRA 3.3.1 and above (see # http://jira.atlassian.com/browse/JRA-7321) # # Refer to the SOAP Javadoc to see what calls are available: import SOAPpy, getpass, datetime soap = SOAPpy.WSDL.Proxy('http://jira.company.com:8080/rpc/soap/jirasoapservice-v2?wsdl') jirauser='username' passwd='password' # This prints available methods, but the WSDL doesn't include argument # names so its fairly useless. Refer to the Javadoc URL above instead #print 'Available methods: ', soap.methods.keys() def listSOAPmethods(): for key in soap.methods.keys(): print key, ': ' for param in soap.methods[key].inparams: print '\t', param.name.ljust(10), param.type for param in soap.methods[key].outparams: print '\tOut: ', param.name.ljust(10), param.type auth = soap.login(jirauser, passwd) issue = soap.getIssue(auth, 'QA-79') print "Retrieved issue:", issue print "Done!" The complete error is as follows , in order to provide the complete context: IMPORT: http://service.soap.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://jira.mycompany.com:8080/rpc/soap/jirasoapservice-v2 no schemaLocation attribute in import IMPORT: http://exception.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://schemas.xmlsoap.org/soap/encoding/ no schemaLocation attribute in import /usr/local/lib/python2.6/dist-packages/wstools-0.3-py2.6.egg/wstools/XMLSchema.py:3107: DeprecationWarning: object.__init__() takes no parameters tuple.__init__(self, args) IMPORT: http://service.soap.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://beans.soap.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://jira.mycompany.com:8080/rpc/soap/jirasoapservice-v2 no schemaLocation attribute in import IMPORT: http://schemas.xmlsoap.org/soap/encoding/ no schemaLocation attribute in import IMPORT: http://service.soap.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://beans.soap.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://exception.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://schemas.xmlsoap.org/soap/encoding/ no schemaLocation attribute in import IMPORT: http://beans.soap.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://jira.mycompany.com:8080/rpc/soap/jirasoapservice-v2 no schemaLocation attribute in import IMPORT: http://exception.rpc.jira.atlassian.com no schemaLocation attribute in import IMPORT: http://schemas.xmlsoap.org/soap/encoding/ no schemaLocation attribute in import
true
7,434,578
1.2
0
0
0
I changed the JIRA Python CLI code to use suds instead of SOAPpy a while ago and haven't looked back. SOAPpy is pretty old and seems unsupported now.
0
549
0
1
2011-09-15T16:53:00.000
python,soap,jira,soappy
i am getting error "no schemaLocation attribute in import" when using Python client accessing JIRA via SOAP
1
1
1
7,438,706
0
0
0
I'm trying to code my own Python 3 HTTP library to learn more about sockets and the HTTP protocol. My question is: if I do a recv(bytesToRead) on my socket, how can I get only the header, and then, with the Content-Length information, continue receiving the page content? Isn't that the purpose of the Content-Length header? Thanks in advance
true
7,437,147
1.2
0
0
2
In the past, to accomplish this, I would read a portion of the socket data into memory, and then read from that buffer until a "\r\n\r\n" sequence was encountered (you could use a state machine to do this or simply use the string find() function). Once you reach that sequence you know all of the headers have been read, you can do some parsing of the headers, and then read the entire content length. You may need to be prepared to read a response that does not include a Content-Length header, since not all responses contain it. If you run out of buffer before seeing that sequence, simply read more data from the socket into your buffer and continue processing. (A sketch in Python follows below.) I can post a C# example if you would like to look at it.
0
739
0
0
2011-09-15T20:36:00.000
python,http-headers
Http protocol, Content-Length, get page content Python
1
1
1
7,437,186
0
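A sketch of the buffering approach described in the answer above: receive until the blank line that ends the headers, then use Content-Length (when present) to know how many body bytes are still outstanding. Chunked responses and other framing rules are ignored here for brevity.

    def read_response(sock):
        buf = b""
        # 1. Accumulate until the header/body separator appears
        while b"\r\n\r\n" not in buf:
            chunk = sock.recv(4096)
            if not chunk:
                return buf, b""               # connection closed early
            buf += chunk
        head, body = buf.split(b"\r\n\r\n", 1)

        # 2. Look for Content-Length among the header lines (skip status line)
        length = None
        for line in head.split(b"\r\n")[1:]:
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value.strip().decode("ascii"))

        # 3. Keep reading until the announced body length has arrived
        if length is not None:
            while len(body) < length:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                body += chunk
        return head, body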
0
0
I am using an API key in some Python code which I am looking to distribute. This API key is for Google Maps. Are there any security issues with regards to distributing this API key and if so, how is it best to hide this?
false
7,458,227
0
0
0
0
The Google Maps API key is only for version 2, which has been officially deprecated since May 2010. Strongly suggest you use Version 3 of the API instead, which is much better, and has no need for an API key.
1
3,290
0
4
2011-09-17T22:21:00.000
python,api,google-maps,google-maps-api-3
Hiding API key in code (Python)
1
4
5
7,459,980
0
0
0
I am using an API key in some Python code which I am looking to distribute. This API key is for Google Maps. Are there any security issues with regards to distributing this API key and if so, how is it best to hide this?
false
7,458,227
0
0
0
0
if you're providing a tool for "power users" to use google maps, then it's reasonable to expect them to supply their own Google API key. If that's not an option for your users, you will need to have a web-service that your application accesses to act as a deputy so that your private key is not exposed. You will still have to devise a means of authenticating users, if that is applicable.
1
3,290
0
4
2011-09-17T22:21:00.000
python,api,google-maps,google-maps-api-3
Hiding API key in code (Python)
1
4
5
7,458,255
0
0
0
I am using an API key in some Python code which I am looking to distribute. This API key is for Google Maps. Are there any security issues with regards to distributing this API key and if so, how is it best to hide this?
false
7,458,227
0.039979
0
0
1
You cannot hide this. Your program needs to access it and a hacker will simply use a tool like a debugger, a virtual machine or a modified Python implementation if he/she really wants to know the API key. I don't think it's necessary to hide a Google Maps API key anyway, as a web page will also have this in its source code when the API is in use. You should refer to the documentation or the page where you obtained the key to see if it's a private key.
1
3,290
0
4
2011-09-17T22:21:00.000
python,api,google-maps,google-maps-api-3
Hiding API key in code (Python)
1
4
5
7,458,232
0
0
0
I am using an API key in some Python code which I am looking to distribute. This API key is for Google Maps. Are there any security issues with regards to distributing this API key and if so, how is it best to hide this?
false
7,458,227
0
0
0
0
You could obfuscate the key in various ways, but it's not worth the effort. Obfuscation is a weak way to protect information, and in this case your information's security isn't especially critical anyway. The point of the API key is largely so that the API vendor (Google, here) can monitor, throttle, and revoke your application's use of their service. It is meant to be private, and you shouldn't share it carelessly or intentionally, but it isn't the end of the world if somebody else gets their hands on it.
1
3,290
0
4
2011-09-17T22:21:00.000
python,api,google-maps,google-maps-api-3
Hiding API key in code (Python)
1
4
5
7,458,257
0
0
0
I'm doing some threaded asynchronous networking experiment in python, using UDP. I'd like to understand polling and the select python module, I've never used them in C/C++. What are those for ? I kind of understand a little select, but does it block while watching a resource ? What is the purpose of polling ?
true
7,459,408
1.2
0
0
12
If you do read or recv, you're waiting on only one connection. If you have multiple connections, you will have to create multiple processes or threads, a waste of system resources. With select or poll or epoll, you can monitor multiple connections with only one thread, and get notified when any of them has data available; then you call read or recv on the corresponding connection. It may block indefinitely, block for a given time, or not block at all, depending on the arguments. (A small example follows below.)
0
13,909
0
13
2011-09-18T04:06:00.000
python,multithreading,sockets,polling,epoll
I can't understand polling/select in python
1
1
3
7,460,395
0
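A tiny select() loop in the spirit of the answer above, watching several UDP sockets from a single thread; the ports are arbitrary. select.select blocks until at least one socket is readable or the optional timeout expires.

    import select
    import socket

    socks = []
    for port in (9001, 9002, 9003):                 # arbitrary ports
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", port))
        socks.append(s)

    while True:
        # Wait up to 5 seconds for any of the sockets to become readable
        readable, _, _ = select.select(socks, [], [], 5.0)
        if not readable:
            continue                                # timed out, nothing arrived
        for s in readable:
            data, addr = s.recvfrom(2048)
            print("got %r from %s" % (data, addr))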
0
0
I'm trying to figure out how to make a server that can accept multiple clients at one time. While doing so, I need the client to be able to send and receive data from the server at the same time. Would I have to make a threaded server, with one thread listening for data and another thread sending information to the client? And on the client side, do I need to use threads to send/receive info?
false
7,476,072
0.049958
0
0
1
You don't need threads for either the client or the server; you can instead use select() to multiplex all the I/O inside a single thread.
1
4,923
0
2
2011-09-19T19:15:00.000
python,multithreading,sockets,tcp
simultaneously sending/receiving info from a server, in python?
1
1
4
7,496,206
0
1
0
I know the title is a bit off, but what I'm asking is whether it is possible to have a Python script on my website that can detect if my Android phone is connected to the computer I'm using to view the page. I don't know if this is possible since I think Python is server-side, but maybe this is possible to do with JavaScript? I'm fairly new to programming so I may not be as smart as you guys out there, but if someone could just lead me in the right direction I would be grateful.
false
7,478,590
0.099668
0
0
1
What you're looking for isn't possible and should not be possible for security concerns. Do you want someone knowing what devices you have connected to your computer? You're essentially wanting a device sniffer but a website would not be capable of accessing the client's machine in this manner to access the desired information.
0
271
0
0
2011-09-19T23:46:00.000
javascript,android,python,django
Python on website to connect to phone connected to computer
1
2
2
7,478,626
0
1
0
I know the title is a bit off, but what I'm asking is whether it is possible to have a Python script on my website that can detect if my Android phone is connected to the computer I'm using to view the page. I don't know if this is possible since I think Python is server-side, but maybe this is possible to do with JavaScript? I'm fairly new to programming so I may not be as smart as you guys out there, but if someone could just lead me in the right direction I would be grateful.
true
7,478,590
1.2
0
0
1
What you're looking for is not TOTALLY ABSOLUTELY impossible, but you may need some help from the computer you connect the device to, from the browser you're visiting your site with or from the device itself. Some possible options: have a program running on your computer which checks if the device is connected and pinging a certain URL on your website if it is. write a browser plugin which checks if the device is connected and exposing the information via some JS API - the JS code on your site will be able to use it. have a program running on your device which pings your site each time the device is connected to a computer. Admittedly, all the solutions are from the "tricky" category :)
0
271
0
0
2011-09-19T23:46:00.000
javascript,android,python,django
Python on website to connect to phone connected to computer
1
2
2
7,479,646
0
0
0
I have a folder called raw_files. Very large files (~100GB files) from several sources will be uploaded to this folder. I need to get file information from videos that have finished uploading to the folder. What is the best way to determine if a file is currently being downloaded to the folder (pass) or if the video has finished downloading (run script)? Thank you.
true
7,500,004
1.2
0
0
0
If you check those files, store the size of each file somewhere. When you are in the next round and the file size is still the same, you can pretty much consider the file finished (depending on how much time there is between the first and second check). The time interval could, e.g., be set to the timeout interval of your uploading service (FTP, whatever). There is no special sign or content showing that a file is complete. (A small sketch of this check follows below.)
0
2,501
0
4
2011-09-21T12:48:00.000
python
How to determine if a file has finished downloading in python
1
2
2
7,500,125
0
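One possible implementation of the "size unchanged since the last check" heuristic from the answer above; the directory name and the 30-second interval are arbitrary.

    import os
    import time

    RAW_DIR = "raw_files"                    # the upload folder

    def wait_until_stable(path, interval=30):
        last = -1
        while True:
            size = os.path.getsize(path)
            if size == last:
                return                       # no growth during the interval
            last = size
            time.sleep(interval)

    for name in os.listdir(RAW_DIR):
        wait_until_stable(os.path.join(RAW_DIR, name))
        print("%s looks complete" % name)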
0
0
I have a folder called raw_files. Very large files (~100GB files) from several sources will be uploaded to this folder. I need to get file information from videos that have finished uploading to the folder. What is the best way to determine if a file is currently being downloaded to the folder (pass) or if the video has finished downloading (run script)? Thank you.
false
7,500,004
0.197375
0
0
2
The most reliable way is to modify the uploading software if you can. A typical scheme would be to first upload each file into a temporary directory on the same filesystem, and move to the final location when the upload is finished. Such a "move" operation is cheap and atomic. A variation on this theme is to upload each file under a temporary name (e.g. file.dat.incomplete instead of file.dat) and then rename. You script will simply need to skip files called *.incomplete.
0
2,501
0
4
2011-09-21T12:48:00.000
python
How to determine if a file has finished downloading in python
1
2
2
7,500,205
0
1
0
BrowserID currently uses a Javascript shim, while browsers are still (hopefully) developing support for it. Is it possible to use BrowserID for clients that don't run javascript? I could read the 600 line JS shim, and figure out what navigator.id.getVerifiedEmail is meant to do, then replicate it on a server, but I was hoping there's an easier way. And even then, I don't think it would really work. OK, digging a bit deeper, this seems to be peripheral to what BrowserID is meant to do, and might require some kind custom BrowserID validator, but I'm hoping there's an easier way.
false
7,508,016
-0.066568
0
0
-1
One solution, use OpenID or hand-rolled email verification, but then I have 2 problems. :(
0
490
0
3
2011-09-21T23:50:00.000
python,graceful-degradation,noscript,browserid
BrowserID without Javascript (preferably in Python) - is it possible?
1
1
3
7,508,068
0
1
0
I have some test code (as part of a webapp) that uses urllib2 to perform an operation I would usually perform via a browser: log in to a remote website, move to another page, perform a POST by filling in a form. I've created 4 separate, clean virtualenvs (with --no-site-packages) on 3 different machines, all with different versions of Python but the exact same packages (via a pip requirements file), and the code only works on the two virtualenvs on my local development machine (2.6.1 and 2.7.2); it won't work on either of my production VPSs. In the failing cases, I can log in successfully and move to the correct page, but when I submit the form, the remote server replies telling me that there has been an error; it's an application server error page ('we couldn't complete your request') and not a webserver error. Because I can successfully log in and maneuver to a second page, this doesn't seem to be a session or a cookie problem; it's particular to the final POST. Because I can perform the operation on a particular machine with the EXACT same headers and data, this doesn't seem to be a problem with what I am requesting/posting. Because I am trying the code on two separate VPSs rented from different companies, this doesn't seem to be a problem with the VPS physical environment. Because the code works on 2 different Python versions, I can't imagine it being an incompatibility problem. I'm completely lost at this stage as to why this wouldn't work. I've even 'turned it off and turned it on again' because I just can't see what the problem could be. I think it has to be something to do with the final POST coming from a VPS that the remote server doesn't like, but I can't figure out what that could be. I feel like there is something going on under the hood of urllib that is causing the remote server to dislike the request. EDIT: I've installed the exact same Python version (2.6.1) on the VPS as is on my working local copy and it doesn't work remotely, so it must be something to do with originating from a VPS. How could this affect the HTTP request? Is it something lower level?
true
7,508,686
1.2
0
0
0
Well, it looks like I know why the problem was happening, but I'm not 100% sure of the reason for it. I simply had to make my code wait (time.sleep()) after it sent the 2nd request (move to another page) before doing the 3rd request (perform a POST by filling in a form). I don't know if it is because of some condition on the 3rd-party server, or if it's some sort of odd issue with urllib. The reason it seemed to work on my development machine is presumably because it was slower than the server at running the code?
0
397
0
5
2011-09-22T01:51:00.000
python,http,cookies,urllib2,mechanize
Inexplicable Urllib2 problem between virtualenv's.
1
2
4
7,681,165
0
1
0
I have some test code (as part of a webapp) that uses urllib2 to perform an operation I would usually perform via a browser: log in to a remote website, move to another page, perform a POST by filling in a form. I've created 4 separate, clean virtualenvs (with --no-site-packages) on 3 different machines, all with different versions of Python but the exact same packages (via a pip requirements file), and the code only works on the two virtualenvs on my local development machine (2.6.1 and 2.7.2); it won't work on either of my production VPSs. In the failing cases, I can log in successfully and move to the correct page, but when I submit the form, the remote server replies telling me that there has been an error; it's an application server error page ('we couldn't complete your request') and not a webserver error. Because I can successfully log in and maneuver to a second page, this doesn't seem to be a session or a cookie problem; it's particular to the final POST. Because I can perform the operation on a particular machine with the EXACT same headers and data, this doesn't seem to be a problem with what I am requesting/posting. Because I am trying the code on two separate VPSs rented from different companies, this doesn't seem to be a problem with the VPS physical environment. Because the code works on 2 different Python versions, I can't imagine it being an incompatibility problem. I'm completely lost at this stage as to why this wouldn't work. I've even 'turned it off and turned it on again' because I just can't see what the problem could be. I think it has to be something to do with the final POST coming from a VPS that the remote server doesn't like, but I can't figure out what that could be. I feel like there is something going on under the hood of urllib that is causing the remote server to dislike the request. EDIT: I've installed the exact same Python version (2.6.1) on the VPS as is on my working local copy and it doesn't work remotely, so it must be something to do with originating from a VPS. How could this affect the HTTP request? Is it something lower level?
false
7,508,686
0.049958
0
0
1
This is a total shot in the dark, but are your VPSs 64-bit and your home computer 32-bit, or vice versa? Maybe a difference in default sizes or accuracies of something could be freaking out the server. Barring that, can you try to find out any information on the software stack the web server is using?
0
397
0
5
2011-09-22T01:51:00.000
python,http,cookies,urllib2,mechanize
Inexplicable Urllib2 problem between virtualenv's.
1
2
4
7,550,771
0
0
0
I have been looking for a way to open a new default browser window from inside Python code. According to the documentation webbrowser.open_new(url) Should do that. Unfortunately in case Chrome is the default browser it only opens a new tab. Is there any way to open the default browser (without knowing what that browser is)?
false
7,521,729
0.099668
0
0
2
I have a feeling it's not Python's fault. Firefox and Chrome (and probably IE) all intercept calls to open new windows and change them to new tabs. Check the settings in your browser for interpreting those calls.
0
5,789
0
8
2011-09-22T21:27:00.000
python,browser
How to open a new default browser window in Python when the default is Chrome
1
1
4
7,521,789
0
1
0
What is the best way to set up a system that checks for events daily and sends messages via email, Twitter, SMS, and possibly Facebook? Keep in mind, that I do not have access to a web server with root access (Using Rackspace Cloud). Would PHP have a solution for this? Would there be any drawbacks to using Google App Engine and Python?
false
7,535,544
0.197375
1
0
1
If you are using Google App Engine with Python you could use "Cron" to schedule a task to automatically run each day. GAE also allows you to send emails, just a little tip: make sure that you 'invite' the email address used to send mail to the application as an administrator so that you can programatically send emails etc.
0
150
0
0
2011-09-23T22:41:00.000
php,python,google-app-engine
Check for Event Daily, and Send Notification Messages
1
1
1
8,426,091
0
0
0
I am using the xmpppy library to write an XMPP IM robot. I want to act on disconnects, but I don't know how to detect them. A disconnect could happen if the Jabber server crashes or if I lose my internet connection. I found the callback RegisterDisconnectHandler(self, DisconnectHandler), but it doesn't fire on a network failure; it only works when I explicitly call the disconnect method. How do I detect a network failure or server crash?
false
7,536,732
0
0
0
0
Did you try waiting 30 minutes after the network failure? Depending on your network stack's settings, it could take this long to detect. However, if you're not periodically sending on the socket, you may never detect the outage. This is why many XMPP stacks periodically send a single space character, using an algorithm like: set a timer to N seconds; on sending a stanza, reset the timer to N; when the timer fires, send a space.
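A rough Python sketch of that whitespace-keepalive algorithm, assuming client is a connected xmpppy xmpp.Client whose send() accepts a raw string and that the server tolerates stray whitespace on the stream; the interval and the handle_disconnect() hook are placeholders:

```python
import threading

KEEPALIVE_SECONDS = 60          # the "N" from the algorithm above; arbitrary
_keepalive_timer = None

def handle_disconnect():
    # Hypothetical placeholder: put your reconnect/alert logic here.
    print 'connection appears to be down'

def _send_keepalive(client):
    try:
        client.send(' ')        # a single space keeps the stream active
    except IOError:
        handle_disconnect()
        return
    reset_keepalive(client)

def reset_keepalive(client):
    # Call this once after connecting and again after every stanza you send.
    global _keepalive_timer
    if _keepalive_timer is not None:
        _keepalive_timer.cancel()
    _keepalive_timer = threading.Timer(KEEPALIVE_SECONDS, _send_keepalive, [client])
    _keepalive_timer.daemon = True
    _keepalive_timer.start()
```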
0
392
0
0
2011-09-24T03:08:00.000
python,xmpp,bots,xmpppy
An IM robot script with xmpppy in Python: how to detect network failure?
1
1
1
7,558,640
0
0
0
I just got started with ZMQ. I am designing an app whose workflow is: one of many clients (which have random PULL addresses) PUSHes a request to a server at 5555; the server is forever waiting for client PUSHes; when one comes, a worker process is spawned for that particular request (yes, worker processes can exist concurrently); when that process completes its task, it PUSHes the result to the client. I assume that the PUSH/PULL architecture is suited for this - please correct me if not. But how do I handle these scenarios? (1) client_receiver.recv() will wait for an infinite time when the server fails to respond; (2) the client may send a request but fail immediately after, so a worker process will remain stuck at server_sender.send() forever. So how do I set up something like a timeout in the PUSH/PULL model? EDIT: Thanks to user938949's suggestions, I got a working answer and I am sharing it for posterity.
false
7,538,988
1
0
0
10
The send won't block if you use ZMQ_NOBLOCK, but if you then try to close the socket and context, that step will block the program from exiting. The reason is that the socket waits for a peer so that the outgoing messages are guaranteed to get queued. To close the socket immediately and flush the outgoing messages from the buffer, use ZMQ_LINGER and set it to 0.
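A sketch of those two pieces with pyzmq - a polled receive so recv() can't wait forever, and a non-blocking send plus LINGER 0 so shutdown doesn't hang. The addresses and the 5-second timeout are arbitrary choices, not values from the question:

```python
import zmq

context = zmq.Context()

# Receiving side: poll with a timeout instead of blocking forever in recv().
receiver = context.socket(zmq.PULL)
receiver.connect('tcp://localhost:5556')
poller = zmq.Poller()
poller.register(receiver, zmq.POLLIN)
if poller.poll(5000):                       # timeout in milliseconds
    reply = receiver.recv()
else:
    print 'no reply within 5 seconds, giving up'

# Sending side: do not queue forever, and do not hang on shutdown.
sender = context.socket(zmq.PUSH)
sender.setsockopt(zmq.LINGER, 0)            # discard unsent messages on close()
sender.connect('tcp://localhost:5555')
try:
    sender.send('work', zmq.NOBLOCK)        # raises zmq.ZMQError if it cannot queue
except zmq.ZMQError:
    print 'peer not reachable, message not sent'

receiver.close()
sender.close()
context.term()
```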
0
82,000
1
77
2011-09-24T12:25:00.000
python,zeromq
zeromq: how to prevent infinite wait?
1
1
4
10,846,438
0
0
0
I am writing an IRC bot in Python using the Twisted library. To test my bot I need to connect several times to an IRC network, since the bot requires a restart each time a change is made. As a result I am often "banned" from these networks for a couple of minutes for making too many connections, which makes testing and writing the bot annoying. Does anyone know of a better way to test the bot, or of a network that isn't as restrictive about the number of connections as QuakeNet?
false
7,546,026
0
1
0
0
Freenode is good - you can create channels for yourself to test in. Also check out the supybot project, which is good for Python bots.
0
3,842
0
2
2011-09-25T14:09:00.000
python,testing,connection,irc,bots
Testing an IRC bot
1
1
3
7,546,076
0
1
0
I'm using a Python service that uses pickled messages as part of its protocol. I'd like to query this service from Java, but to do so, I need to pickle my message on the client (Java). Are there any implementations of pickle that run on the JVM (ideally with minimal dependencies)? Clarification: Modifying the server side is not an option, so while alternate serializations would be convenient, they won't solve the problem here.
false
7,558,389
0.099668
0
0
1
You can use a Java JSON serializer like Gson or Jackson to serialize quite easily, and a Python JSON library such as jsonpickle to deserialize.
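To illustrate the suggestion, here is a sketch of the Python side, assuming the service could be taught to accept JSON (the question notes the existing pickle protocol can't be changed, so this only applies where that constraint can be relaxed); jsonpickle is optional and the stdlib json module is the fallback:

```python
import json

try:
    import jsonpickle           # optional; reconstructs typed/nested objects
except ImportError:
    jsonpickle = None

def decode_message(raw_bytes):
    """Decode a JSON payload sent by the Java client (hypothetical format)."""
    text = raw_bytes.decode('utf-8')
    if jsonpickle is not None:
        return jsonpickle.decode(text)
    return json.loads(text)
```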
0
8,413
0
10
2011-09-26T16:47:00.000
java,python,serialization
How do I serialize a Java object such that it can be deserialized by pickle (Python)?
1
1
2
7,558,455
0
0
0
I need to fetch historical Twitter data for a given set of keywords. The Twitter Search API returns tweets that are no more than 9 days old, so that will not do. I'm currently using the Tweepy library (http://code.google.com/p/tweepy/) to call the Streaming API and it is working fine, except that it is too slow. For example, when I run a search for "$GOOG", sometimes it takes more than an hour between two results. There are definitely tweets containing that keyword, but it isn't returning results fast enough. What can be the problem? Is the Streaming API slow, or is there some problem in my method of accessing it? Is there any better way to get that data free of cost?
true
7,564,100
1.2
1
0
1
How far back do you need? To fetch historical data, you might want to keep the stream on indefinitely (the stream API allows for this) and store the stream locally, then retrieve historical data from your db. I also use Tweepy for live Stream/Filtering and it works well. The latency is typically < 1s and Tweepy is able to handle large volume streams.
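A sketch of that keep-the-stream-open approach, assuming a pre-4.0 Tweepy (later versions renamed these classes) and placeholder credentials; storage is reduced to a print for brevity:

```python
import tweepy

class StoreListener(tweepy.StreamListener):
    def on_status(self, status):
        # Persist each tweet locally (file, database, ...) for later queries;
        # printing stands in for real storage here.
        print status.id, status.text.encode('utf-8')
        return True

    def on_error(self, status_code):
        print 'stream error:', status_code
        return True                          # keep the stream alive

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')

stream = tweepy.Stream(auth, StoreListener())
stream.filter(track=['$GOOG'])               # blocks and runs indefinitely
```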
0
1,072
0
0
2011-09-27T04:06:00.000
python,api,twitter,streaming,tweepy
Is there any better way to access twitter streaming api through python?
1
2
2
7,640,150
0
0
0
I need to fetch historical Twitter data for a given set of keywords. The Twitter Search API returns tweets that are no more than 9 days old, so that will not do. I'm currently using the Tweepy library (http://code.google.com/p/tweepy/) to call the Streaming API and it is working fine, except that it is too slow. For example, when I run a search for "$GOOG", sometimes it takes more than an hour between two results. There are definitely tweets containing that keyword, but it isn't returning results fast enough. What can be the problem? Is the Streaming API slow, or is there some problem in my method of accessing it? Is there any better way to get that data free of cost?
false
7,564,100
0
1
0
0
The Streaming API is very fast - you get a message as soon as it is posted (we use twitter4j). But the streamer only delivers current messages, so if you are not listening on the streamer at the moment a tweet is sent, that message is lost.
0
1,072
0
0
2011-09-27T04:06:00.000
python,api,twitter,streaming,tweepy
Is there any better way to access twitter streaming api through python?
1
2
2
7,569,606
0