Column | Type | Range / Lengths
---|---|---
Web Development | int64 | 0 to 1
Data Science and Machine Learning | int64 | 0 to 1
Question | string | lengths 28 to 6.1k
is_accepted | bool | 2 classes
Q_Id | int64 | 337 to 51.9M
Score | float64 | -1 to 1.2
Other | int64 | 0 to 1
Database and SQL | int64 | 0 to 1
Users Score | int64 | -8 to 412
Answer | string | lengths 14 to 7k
Python Basics and Environment | int64 | 0 to 1
ViewCount | int64 | 13 to 1.34M
System Administration and DevOps | int64 | 0 to 1
Q_Score | int64 | 0 to 1.53k
CreationDate | string | lengths 23 to 23
Tags | string | lengths 6 to 90
Title | string | lengths 15 to 149
Networking and APIs | int64 | 1 to 1
Available Count | int64 | 1 to 12
AnswerCount | int64 | 1 to 28
A_Id | int64 | 635 to 72.5M
GUI and Desktop Applications | int64 | 0 to 1

Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I want to send and receive messages between two Python programs using sockets. I can do this using the private IPs when the computers are connected to the same router, but how do I do it when there are 2 NATs separating them?
Thanks (my first SO question) | false | 12,014,203 | 0 | 0 | 0 | 0 | Redis could work, but it does not offer exactly the same functionality. | 1 | 668 | 0 | 6 | 2012-08-17T23:03:00.000 | python,sockets,nat | How do I communicate between 2 Python programs using sockets that are on separate NATs? | 1 | 1 | 3 | 12,020,661 | 0 |
1 | 0 | I'd like to write a script, preferably Python code, to fill text areas in web pages and then click certain buttons. I've come across some solutions for this but none worked, mainly because cookies were not stored properly; for example, there was a Python script to log in to Facebook, which did seem to get it right in the shell screen, but when I opened Facebook in the browser it was logged out like nothing happened. Also, the code was hard coded for Facebook and I'm asking for something more general. So, please, if anyone has been successful with this kind of thing, your advice is much needed. Open a web page, fill text in specified text elements, click a specified button, save cookies, that's all. Many thanks. | false | 12,022,570 | -0.132549 | 0 | 0 | -2 | You can also take a look at IEC, which uses the Windows API to run an instance of Internet Explorer and give commands to it. It may not be good for large-scale automation, but it is very easy to use. | 0 | 4,599 | 0 | 1 | 2012-08-18T22:05:00.000 | python,facebook,text,login,fill | Script to open web pages, fill texts and click buttons | 1 | 1 | 3 | 28,718,663 | 0 |
0 | 0 | I want to find a link by its text but it's written in non-English characters (Hebrew to be precise, if that matters). The "find_element_by_link_text('link_text')" method would have otherwise suited my needs, but here it fails. Any idea how I can do that? Thanks. | false | 12,023,402 | 0 | 1 | 0 | 0 | In the future you need to pastebin a representative snippet of your code, and certainly a traceback. I'm going to assume that when you say "the code does not compile" that you mean that you get an exception telling you you haven't declared an encoding.
You need a line at the top of your file that looks like # -*- coding: utf-8 -*- or whatever encoding the literals you've put in your file are in. | 0 | 258 | 0 | 1 | 2012-08-19T00:55:00.000 | python,selenium,hyperlink | Selenium in Python: how to click non-English link? | 1 | 1 | 2 | 12,023,574 | 0 |
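A minimal sketch of what the answer above describes, assuming the Python 2 Selenium bindings and a hypothetical Hebrew link text; the declared encoding must match how the file is actually saved:

```python
# -*- coding: utf-8 -*-
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com")  # hypothetical page containing a Hebrew link

# With the encoding declared above, a non-ASCII literal can be used directly;
# the u'' prefix keeps it a unicode string on Python 2.
link = driver.find_element_by_link_text(u"עברית")
link.click()
```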
0 | 0 | I'm sorry if my question is too elementary. I have some Python code which makes the machine act as a transparent proxy server using the "twisted" library. Basically I want my own transparent proxy OUTSIDE my internal network and, since I want to be able to monitor traffic, I need to have my own server. So I need a machine that runs my script 24/7 and listens for HTTP connections. What kind of server/host do I need? Any host provider suggestions? | true | 12,027,815 | 1.2 | 0 | 0 | 1 | Go for an Amazon EC2 instance with an Ubuntu server. If your process does not consume much memory, you can go with a Micro instance (617 MB RAM, 8 GB HD), which is free for the first year. Or you could go with a Small instance (1.7 GB RAM and 8 GB HD), which might cost you a little more.
To set up the Python code to run 24/7, you can create a daemon process on the instance. You can also put the twisted library or any other library on it. It should not take much time if you have worked with Amazon AWS. | 0 | 223 | 1 | 0 | 2012-08-19T15:49:00.000 | python,webserver,hosting | Web server to run python code | 1 | 1 | 2 | 12,027,902 | 0 |
0 | 0 | Is there any way to log http requests/responses using Selenium Webdriver (firefox)?
I guess it's possible to drive web traffic through a proxy and log it, but maybe there is a simpler "internal" Selenium solution?
Asked this question on #selenium channel:
you will need to proxy it to capture the requests
So it looks like the only way is to set up a proxy for it. | false | 12,034,013 | 0.291313 | 0 | 0 | 3 | No, WebDriver doesn't have any methods to examine or modify the HTTP traffic occurring between the browser and the website. The information you've already gotten from the Selenium IRC channel (likely even from a Selenium committer) is correct. A proxy is the correct approach here. | 0 | 1,705 | 0 | 3 | 2012-08-20T07:53:00.000 | python,selenium,webdriver | Is there any way to log http requests/responses using Selenium Webdriver (firefox)? | 1 | 1 | 2 | 12,036,058 | 0 |
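A hedged sketch of that proxy approach: point the WebDriver-controlled Firefox at a local capturing proxy (the proxy itself, assumed here to be listening on localhost:8080, runs separately and does the actual logging):

```python
from selenium import webdriver

# Route the browser's traffic through a locally running capturing proxy.
profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)        # manual proxy configuration
profile.set_preference("network.proxy.http", "localhost")
profile.set_preference("network.proxy.http_port", 8080)
profile.set_preference("network.proxy.ssl", "localhost")
profile.set_preference("network.proxy.ssl_port", 8080)

driver = webdriver.Firefox(firefox_profile=profile)
driver.get("http://example.com")   # requests/responses now show up in the proxy's log
driver.quit()
```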
1 | 0 | I have to make an HTML page with CSS and JavaScript in which I enter a URL in a form. With this URL, I have to get some information from the HTML of the page with a Python 3.2 script.
I started learning Python some days ago and I have some questions:
Do I need CherryPy/Django to do that? (I'm asking because I executed a script to get the entire HTML without using CherryPy/Django and it works - no interaction with the browser)
CherryPy examples have the HTML built into the Python code. Must I write the HTML in the Python script, or can I have an HTML page that calls the script with Ajax (or anything else)?
If I can use Ajax, is XmlHttpRequest a good choice?
Thank you! :D | false | 12,043,333 | 0.197375 | 0 | 0 | 1 | No, you don't need a web framework, but in general it's a good idea. Django seems like brutal overkill for this. CherryPy or Pyramid or some micro framework seems better.
You can have an HTML page that calls the CherryPy server, but since this page obviously is a part of the system/service you are building, serving it from the server makes more sense.
Sure, why not. | 0 | 1,126 | 0 | 0 | 2012-08-20T18:49:00.000 | html,ajax,django,python-3.x,cherrypy | Call python script from html page | 1 | 1 | 1 | 12,043,550 | 0 |
0 | 0 | I've been using Paramiko today to work with a Python SSH connection, and it is useful.
However one thing I'd really like to be able to do over the SSH is to utilise some Pythonic sugar. As far as I can tell I can only use the inbuilt Paramiko functions, and if I want to anything using Python on the remote side I would need to use a script which I have placed on there, and call it.
Is there a way I can send Python commands over the SSH connection rather than having to make do only with the limitations of the Paramiko SSH connection? Since I am running the SSH connection through Paramiko within a Python script, it would only seem right that I could, but I can't see a way to do so. | true | 12,044,262 | 1.2 | 1 | 0 | 0 | Well, that is what SSH was created for - to be a secure shell where the commands are executed on the remote machine (you can think of it as if you were sitting at the remote computer itself; even then, that doesn't mean you can execute Python commands in a shell just because you're physically interacting with the machine).
You can't send Python commands, simply because Python does not have commands; it executes Python scripts.
So everything you can do is a "thing" that performs the following steps:
Wrap a piece of Python code into a file.
scp it to the remote machine.
Execute it there.
Remove the script (or cache it for further execution).
Basically, shell commands are the remote machine's own programs, so you can think of those scripts as shell extensions (e.g. Python programs with command-line parameters). | 0 | 705 | 0 | 0 | 2012-08-20T19:57:00.000 | python,ssh,paramiko | using python commands within paramiko | 1 | 1 | 2 | 12,044,350 | 0 |
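A hedged sketch of that workflow using Paramiko's exec_command to hand a small piece of Python to the remote interpreter (host, credentials and the snippet are placeholders; for anything longer, copy a script over and run it as described above):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote.example.com", username="user", password="secret")

# Run a one-off snippet through the remote Python interpreter via `python -c`.
snippet = "import platform; print(platform.uname())"
stdin, stdout, stderr = client.exec_command('python -c "%s"' % snippet)
print(stdout.read())

client.close()
```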
1 | 0 | Have been using ActivePython on windows7 and lxml seems working without an issue..
There were a lot of other third party packages I had & they were working too..
Until I wanted to use it inside Web2Py.
All the others seem to be working if I copy them directly inside c:/web2py/applications/myApp/modules
With lxml, seems I need to copy something else..
I have a third party module, which imports lxml like this : from lxml.etree import tostring
It ends up throwing - No module named lxml.etree
My test program outside web2py runs without an issue with both these modules.
When I do a pypm files lxml I see this :
%APPDATA%\Python\Python27\site-packages\lxml-2.3-py2.7.egg-info
What else should I copy along with the lxml directory into the modules directory ?
Pretty sure it's me doing something wrong instead of Web2py, but can't put a finger on..
web2py version = Version 1.99.7 (2012-03-04 22:12:08) stable | true | 12,046,683 | 1.2 | 0 | 0 | 1 | If you are using the Windows binary version of web2py, it comes with its own Python 2.5 interpreter and is self-contained, so it won't use your system's Python 2.7 nor see any of its modules. Instead, you should switch to running web2py from source. It's just as easy as the binary version -- just download the zip file and unzip it. You can then import lxml without moving anything to the application's /modules folder. | 0 | 277 | 0 | 0 | 2012-08-20T23:41:00.000 | python,lxml,web2py,activepython | python - web2py - can't seem to find lxml - ActivePython - windows7 | 1 | 1 | 1 | 12,047,285 | 0 |
0 | 0 | I have no way to wake up a thread that is blocked by the poll.poll() function. Could someone help me? | false | 12,050,072 | 1 | 0 | 0 | 7 | The way to handle this is to have an extra file descriptor included in the list of descriptors passed to poll(). For that descriptor, wait for a read to be ready. Have any other thread which wants to awaken the thread waiting on poll() write to that extra descriptor. At that point, the thread which called poll() resumes execution, sees that the extra descriptor is the one which awakened it, and does whatever.
The normal way to get this extra file descriptor initially is to open an unnamed pipe with pipe(). That way you have two descriptors: the one whose reads you wait on in poll(), and the other which you write to in order to awaken the thread waiting on poll(). | 1 | 4,068 | 0 | 5 | 2012-08-21T07:24:00.000 | python,sockets | How to wake up a thread being blocked by select.poll.poll() function from another thread in socket programming in python? | 1 | 2 | 3 | 17,333,359 | 0 |
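A small self-contained sketch of that self-pipe trick with the standard-library os, select and threading modules:

```python
import os
import select
import threading

# Self-pipe trick: register the read end with poll(); any other thread
# writes a byte to the write end to wake the poller up.
wake_r, wake_w = os.pipe()

poller = select.poll()
poller.register(wake_r, select.POLLIN)
# ... register the real sockets here too, e.g. poller.register(sock, select.POLLIN)

def poll_loop():
    while True:
        for fd, event in poller.poll():     # blocks until some fd is ready
            if fd == wake_r:
                os.read(wake_r, 1)          # drain the wake-up byte
                print("woken up by another thread")
                return
            # ... otherwise handle the real socket fds

t = threading.Thread(target=poll_loop)
t.start()

os.write(wake_w, b"x")                      # wake the blocked poll() from any thread
t.join()
```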
0 | 0 | I have no way to wake up a thread that is blocked by the poll.poll() function. Could someone help me? | false | 12,050,072 | -0.066568 | 0 | 0 | -1 | Use a timeout in your poll call, so it doesn't block indefinitely. N.B.: the timeout value is in milliseconds. | 1 | 4,068 | 0 | 5 | 2012-08-21T07:24:00.000 | python,sockets | How to wake up a thread being blocked by select.poll.poll() function from another thread in socket programming in python? | 1 | 2 | 3 | 12,050,224 | 0 |
0 | 0 | This is the problem I'm trying to solve,
I want to write an application that will read outbound HTTP request packets on the same machine's network card. This would then be able to extract the GET URL from it. Based on this information, I want to be able to stop the packet, redirect it, or let it pass.
However I want my application to be running in promiscuous mode (like wireshark does), and yet be able to eat up (stop) the outbound packet.
I have searched around a bit on this..
libpcap / pcap.h allows me to read packets at the network card, however I haven't yet been able to figure out a way to stop these packets or inject new ones into the network.
Certain stuff like twisted or scapy in Python allows me to set up a server that is listening on some local port; I can then configure my browser to connect to it using proxy configurations. This app can then do the stuff.. but my main purpose of being promiscuous is defeated here..
Any help on how I could achieve this would be greatly appreciated .. | false | 12,072,506 | 0 | 0 | 0 | 0 | Using pcap you cannot stop the packets; if you are under Windows you must go down to the driver level... but even then you can only stop packets that your own machine sends.
A solution is to act as a pipe to the destination machine: you need two network interfaces (possibly without addresses). When you get a packet on the source network card that you do not find interesting, you simply send it out on the destination network card. If the packet is interesting, you do not forward it, so you act as a filter. I have done this for multimedia performance testing (adding jitter, noise, etc. to video streaming) | 0 | 1,109 | 0 | 0 | 2012-08-22T11:53:00.000 | python,c,networking | Stop packets at the network card | 1 | 1 | 4 | 12,075,452 | 0 |
0 | 0 | I have a question. I have read various RFCs and a lot of info on the internet.
I read that DNS over UDP has a 512-byte limit. I want to write a Python program that uses this maximum size to create a well-formed DNS request. It is very important to use UDP and not the TCP DNS implementation.
I have tried using public libraries but they did not use the full 512 bytes that can be used as the RFC says.
It is also very important to use the ~512 bytes to send as much data as possible in a single request.
Thank you for your help!
Let's make it happens!! ;) | true | 12,083,628 | 1.2 | 0 | 0 | -1 | The 512 bytes limit is for a dns message, not specifically for a request, and the limitation is only valid for responses, which can contain Resource Records.
For a request you are limited to the 253 bytes of the domain name.
You might be able to manually create a query containing Resource Records, but it will probably be dropped by your local dns server. | 0 | 988 | 0 | 0 | 2012-08-23T01:37:00.000 | python,dns,size | Make a 512 UDP bytes DNS request | 1 | 1 | 2 | 12,087,906 | 0 |
0 | 0 | I need a Python script that can sniff and decode SIP messages in order to check their correctness.
As a base for this script I use the python-libpcap library for packet sniffing. I can catch a UDP packet and extract the SIP payload from it, but I don't know how to decode it. Does Python have any libraries for packet decoding? I've found only dpkt, but as I understood, it can't decode SIP.
If there are no such libraries, how can I do this by hand?
Thank you in advance! | true | 12,094,148 | 1.2 | 0 | 0 | 4 | Finally I did this with help of pyshark from the sharktools (http://www.mit.edu/~armenb/sharktools/). In order to sniff IP packages I used scapy instead of libpcap. | 0 | 5,300 | 0 | 3 | 2012-08-23T14:38:00.000 | python,sip,decode,pcap | Decode SIP messages with Python | 1 | 1 | 3 | 12,708,904 | 0 |
1 | 0 | I'm building an Android IM chat app for fun. I can develop the Android stuff well but i'm not so good with the networking side so I am looking at using XMPP on AWS servers to run the actual IM side. I've looked at OpenFire and ejabberd which i could use. Does anyone have any experience with them or know a better one? I'm mostly looking for sending direct IM between friends and group IM with friends. | false | 12,095,507 | 0 | 1 | 0 | 0 | As an employee of ProcessOne, the makers of ejabberd, I can tell you we run a lot of services over AWS, including mobile chat apps. We have industrialized our procedures. | 0 | 365 | 0 | 0 | 2012-08-23T15:48:00.000 | python,ruby,amazon-ec2,amazon-web-services,xmpp | Running XMPP on Amazon for a chat app | 1 | 2 | 2 | 12,095,630 | 0 |
1 | 0 | I'm building an Android IM chat app for fun. I can develop the Android stuff well but i'm not so good with the networking side so I am looking at using XMPP on AWS servers to run the actual IM side. I've looked at OpenFire and ejabberd which i could use. Does anyone have any experience with them or know a better one? I'm mostly looking for sending direct IM between friends and group IM with friends. | false | 12,095,507 | 0.099668 | 1 | 0 | 1 | Try to explore more about Amazon SQS( Simple Queuing Service) . It might come handy for your requirement. | 0 | 365 | 0 | 0 | 2012-08-23T15:48:00.000 | python,ruby,amazon-ec2,amazon-web-services,xmpp | Running XMPP on Amazon for a chat app | 1 | 2 | 2 | 12,095,743 | 0 |
1 | 0 | Trying to debug a website in IE9. I am running it via Python.
In chrome, safari, firefox, and opera, the site loads immediately, but in IE9 it seems to hang and never actually loads.
Could this possibly be an issue with http pipelining? Or something else? And how might I fix this? | false | 12,098,732 | 0.099668 | 0 | 0 | 1 | You need to specify what Python "Web server" you're using (e.g. bottle? Maybe Tornado? CherryPy?), but more important, you need to supply what request headers and what HTTP response go in and out when IE9 is involved.
You may lift them off the wire using e.g. ngrep, or I think you can use Developers Tools in IE9 (F12 key).
The most common quirks with IE9 that often do not bother Web browsers are mismatches in Content-Length (well, this DID bother Safari last time I looked), possibly Content-Type (this acts in reverse - IE9 sometimes correctly gleans HTML mimetype even if the Content-Type is wrong), Connection: Close.
So yes, it could be a problem with HTTP pipelining: specifically if you pipeline a request with invalid Content-Length and not even chunked-transfer-encoding, IE might wait for the request to "finish". This would happen in Web browsers too; but it could then be that this behavior, in IE, overrides the connection being flushed and closed, while in Web browsers it does not. These two hypotheses might match your observed symptoms.
To fix that, you either switch to chunked transfer encoding, which replaces Content-Length in a way, or correctly compute its value. How to do this depends on the server.
To verify quickly, you could issue a Content-Length surely too short (e.g. 100 bytes?) to see whether this results in IE un-hanging and displaying a partial web page. | 0 | 408 | 0 | 0 | 2012-08-23T19:30:00.000 | python,windows,debugging,internet-explorer-9,pipeline | IE9 and Python issues? | 1 | 1 | 2 | 12,098,951 | 0 |
0 | 0 | I have a script that is intended to be run by multiple users on multiple computers, and they don't all have their Dropbox folders in their respective home directories. I'd hate to have to hard code paths in the script. I'd much rather figure out the path programatically.
Any suggestions welcome.
EDIT:
I am not using the Dropbox API in the script, the script simply reads files in a specific Dropbox folder shared between the users. The only thing I need is the path to the Dropbox folder, as I of course already know the relative path within the Dropbox file structure.
EDIT:
If it matters, I am using Windows 7. | false | 12,118,162 | 0 | 0 | 0 | 0 | One option is you could go searching for the .dropbox.cache directory which (at least on Mac and Linux) is a hidden folder in the Dropbox directory.
I am fairly certain that Dropbox stores its preferences in an encrypted .dbx container, so extracting it using the same method that Dropbox uses is not trivial. | 0 | 12,977 | 0 | 20 | 2012-08-25T00:18:00.000 | python,directory,dropbox | How to determine the Dropbox folder location programmatically? | 1 | 1 | 8 | 12,118,287 | 0 |
0 | 0 | Is SNMP really required to manage devices?
I'd like to script something with Python to manage devices (mainly servers), such as disk usage, process lists, etc.
I'm learning how to do this, and many articles speak about the SNMP protocol.
Can't I use, for example, the psutil, subprocess or os modules, and send the information via UDP?
Thanks a lot | true | 12,147,224 | 1.2 | 0 | 0 | 1 | SNMP is a standard monitoring (and configuration) tool used widely in managing network devices (but not only). I don't understand your question fully - is it a problem that you cannot use SNMP because the device does not support it (what does it support then?) To script anything you have to know what interface is exposed to you (if not MIB files, then what?). Did you read about NETCONF? | 0 | 197 | 1 | 0 | 2012-08-27T18:11:00.000 | python,monitoring,snmp | is snmp required | 1 | 2 | 2 | 12,147,545 | 0 |
0 | 0 | Is SNMP really required to manage devices?
I'd like to script something with Python to manage devices (mainly servers), such as disk usage, process lists, etc.
I'm learning how to do this, and many articles speak about the SNMP protocol.
Can't I use, for example, the psutil, subprocess or os modules, and send the information via UDP?
Thanks a lot | false | 12,147,224 | 0.099668 | 0 | 0 | 1 | No, it's not required, but your question is sort of like asking if you're required to use http to serve web pages. Technically you don't need it, but if you don't use it you're giving up interoperability with a lot of existing client software. | 0 | 197 | 1 | 0 | 2012-08-27T18:11:00.000 | python,monitoring,snmp | is snmp required | 1 | 2 | 2 | 12,148,746 | 0 |
0 | 0 | I need to know which mechanism is more efficient (less RAM/CPU) for reading and writing files, especially writing. Possibly a JSON data structure. The idea is to perform these operations in the context of WebSockets (client -> server -> read/write a file with data for the current session -> response to client)... Is the best idea to store the data in temporary variables and destroy the vars when they are no longer useful?
Any idea? | false | 12,154,854 | 0 | 0 | 0 | 0 | They will probably both be about the same. I/O is generally a lot slower than CPU, so the entire process of reading and writing files will depend on how fast your disk can handle the requests.
It also will depend on the data-processing approach you take. If you opt to read the whole file in at once, then of course it will use more memory than if you choose to read the file piece-by-piece.
So, the answer is: the performance will only (very minimally) depend on your choice of language. Choice of algorithm and I/O performance will easily account for the majority of CPU or RAM usage. | 1 | 335 | 0 | 0 | 2012-08-28T07:44:00.000 | python | Write/Read files: NodeJS vs Python | 1 | 1 | 1 | 12,154,925 | 0 |
1 | 0 | I have a project which requires a great deal of data scraping to be done.
I've been looking at Scrapy which so far I am very impressed with but I am looking for the best approach to do the following:
1) I want to scrape multiple URL's and pass in the same variable for each URL to be scraped, for example, lets assume I am wanting to return the top result for the keyword "python" from Bing, Google and Yahoo.
I would want to scrape http://www.google.co.uk/q=python, http://www.yahoo.com?q=python and http://www.bing.com/?q=python (not the actual URLs but you get the idea)
I can't find a way to specify dynamic URLs using the keyword, the only option I can think of is to generate a file in PHP or other which builds the URL and specify scrapy to crawl the links in the URL.
2) Obviously each search engine would have its own mark-up so I would need to differentiate between each result to find the corresponding XPath to extract the relevant data from
3) Lastly, I would like to write the results of the scraped Item to a database (probably redis), but only when all 3 URLs have finished scraping, essentially I am wanting to build up a "profile" from the 3 search engines and save the outputted result in one transaction.
If anyone has any thoughts on any of these points I would be very grateful.
Thank you | false | 12,160,673 | 0.197375 | 0 | 0 | 3 | 1) In the BaseSpider, there is an __init__ method that can be overridden in subclasses. This is where the start_urls and allowed_domains variables are declared. If you have a list of urls in mind prior to running the spider, then you can insert them dynamically here.
For example, in a few of the spiders I have built, I pull in preformatted groups of URLs from MongoDB, and insert them into the start_urls list in one bulk insert.
2) This might be a little bit more tricky, but you can easily see the crawled URL by looking in the response object (response.url). You should be able to check whether the url contains 'google', 'bing', or 'yahoo', and then use the prespecified selectors for a url of that type.
3) I am not so sure that #3 is possible, or at least not without some difficulty. As far as I know, the urls in the start_urls list are not crawled in order, and they each arrive in the pipeline independently. I am not sure that, without some serious core hacking, you will be able to collect a group of response objects and pass them into a pipeline together.
However, you might consider serializing the data to disk temporarily, and then bulk-saving the data later on to your database. One of the crawlers I built receives groups of URLs that are around 10000 in number. Rather than making 10000 single-item database insertions, I store the urls (and collected data) in BSON, and then insert them into MongoDB later. | 0 | 2,710 | 0 | 1 | 2012-08-28T13:47:00.000 | python,scrapy | Scrapy approach to scraping multiple URLs | 1 | 1 | 3 | 12,161,314 | 0 |
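A rough sketch of points 1 and 2 above, assuming the old (2012-era) scrapy.spider.BaseSpider; the keyword, URL templates and per-engine extraction are placeholders:

```python
from scrapy.spider import BaseSpider

class SearchSpider(BaseSpider):
    name = "search"

    def __init__(self, keyword="python", *args, **kwargs):
        super(SearchSpider, self).__init__(*args, **kwargs)
        # Point 1: build the start URLs dynamically from the keyword.
        self.start_urls = [
            "http://www.google.co.uk/search?q=%s" % keyword,
            "http://www.bing.com/search?q=%s" % keyword,
            "http://search.yahoo.com/search?p=%s" % keyword,
        ]

    def parse(self, response):
        # Point 2: choose engine-specific extraction based on the crawled URL.
        if "google" in response.url:
            pass  # apply Google-specific XPaths here
        elif "bing" in response.url:
            pass  # apply Bing-specific XPaths here
        elif "yahoo" in response.url:
            pass  # apply Yahoo-specific XPaths here
```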
1 | 0 | Is there any way to generate various events like:
filling an input field
submitting a form
clicking a link
Handling redirection etc
via python beautiful soup library. If not what's the best way to do above (basic functionality). | true | 12,161,140 | 1.2 | 0 | 0 | 1 | BeautifulSoup is a tool for parsing and analyzing HTML. It cannot talk to web servers, so you'd need another library to do that, like urllib2 (builtin, but low-level) or requests (high-level, handles cookies, redirection, https etc. out of the box). Alternatively, you can look at mechanize or windmill, or if you also require JavaScript code to be executed, phantomjs. | 0 | 102 | 0 | 0 | 2012-08-28T14:10:00.000 | python,web-scraping,beautifulsoup | Generate various events in 'Web scraping with beautiful soup' | 1 | 1 | 1 | 12,163,450 | 0 |
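A minimal sketch of one of the combinations suggested above - requests for the HTTP side (form submission, cookies, redirects) and BeautifulSoup only for parsing; the URL and field names are placeholders:

```python
import requests
from bs4 import BeautifulSoup

session = requests.Session()          # keeps cookies across requests

# "Fill in" the form fields and "click" submit by POSTing the same data
# the browser would send.
resp = session.post(
    "http://example.com/login",
    data={"username": "alice", "password": "secret"},
    allow_redirects=True,             # follow whatever redirect the form triggers
)

# BeautifulSoup then only analyzes the resulting HTML.
soup = BeautifulSoup(resp.text, "html.parser")
print(soup.title)
```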
0 | 0 | Which should be used and for what?
Is there any advantage to one over the other? | true | 12,184,372 | 1.2 | 0 | 0 | 5 | It is only a matter of abstraction levels. In most cases, you will want to use the highest level API.
Layer1 API is a direct mapping of Amazon's API
The layer2 API adds some nice abstractions, like a generator for scan and query results, as well as answer cleaning.
When you call layer2, it calls layer1 which ends up generating HTTP calls. | 0 | 271 | 0 | 3 | 2012-08-29T18:35:00.000 | python,boto,amazon-dynamodb | In Python Boto's API for DynamoDB, what are the differences between Layer1 and Layer2? | 1 | 1 | 1 | 12,184,769 | 0 |
0 | 0 | Is it possible for a client to establish a SSL connection to a server using the server's certificate already exchanged through other means?
The point would be to encrypt the connection using the certificate already with the client and not have to rely on the server to provide it. The server would still have the private key for the certificate the client uses.
This question isn't language specific, but answers specific to python and twisted are appreciated. | false | 12,195,063 | 0 | 0 | 0 | 0 | In TLS, the server (the side which listens for connections) always needs a certificate. Client-side certificates may be used only for peer authentication, but not for channel encryption.
Keep in mind also, that you can't simply "encrypt" a connection without some infrastructure to verify the certificates in some way (using certification authorities, or trust databases for example). Encryption without certificate validity verification does not hold against an active adversary (google for 'man in the middle attack' for more details on this). | 0 | 2,594 | 0 | 3 | 2012-08-30T10:42:00.000 | python,encryption,twisted,ssl | SSL connection with client-side certificate | 1 | 1 | 2 | 12,195,181 | 0 |
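Concretely, "the server's certificate already exchanged through other means" maps to certificate pinning: the client trusts only the copy it already has. A hedged sketch with the standard-library ssl module (file name and host are placeholders; the question mentions Twisted, which has its own SSL support, but the idea is the same):

```python
import socket
import ssl

sock = socket.create_connection(("server.example.com", 4433))

# Trust only the certificate delivered out-of-band: use it as the CA file
# and require verification, so no other certificate will be accepted.
tls = ssl.wrap_socket(
    sock,
    ca_certs="server_cert.pem",       # the pre-shared server certificate
    cert_reqs=ssl.CERT_REQUIRED,
)
tls.sendall(b"hello over a pinned TLS connection")
tls.close()
```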
0 | 0 | Is it possible to change the file name of a file with selenium in python before/after it downloads? I do not want to use the os module. Thanks!
For example, what if my file was being downloaded to C:\foo\bar, and its name is foo.csv, could I change it to bar.csv? | false | 12,201,074 | 0.53705 | 0 | 0 | 3 | No, this is not possible with any Selenium library.
Use the normal Python method of renaming a file. | 0 | 1,061 | 0 | 0 | 2012-08-30T16:05:00.000 | python,selenium | changing the name of a downloaded file selenium | 1 | 1 | 1 | 12,201,383 | 0 |
1 | 0 | I just have a simple question here. I'm making a total of 10 calls to the Twitch TV API and indexing them, which is rather slow (15 seconds - 25 seconds slow).
Whenever I make these calls browser side (i.e. throw them into my url), they load rather quickly. Since I am coding in python, is there any way I could fetch/index multiple URL's using say, jinja2?
If not, is there anything else I could do?
Thank you! | true | 12,202,815 | 1.2 | 0 | 0 | 1 | If you don't expect them to change constantly, you can cache the results in memcache and only hit the real API when necessary.
On top of that, if you think that the API calls are predictable, you can do this using a backend, and memcache the results (basically scraping), so that users can get at the cached results rather than having to hit the real API. | 0 | 159 | 0 | 1 | 2012-08-30T18:00:00.000 | python,api,google-app-engine,jinja2 | API Call Is Extremely Slow Server Side on GAE but Fast Browser Side | 1 | 1 | 1 | 12,203,940 | 0 |
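A small sketch of the memcache idea on App Engine (the URL handling and cache lifetime are placeholders):

```python
from google.appengine.api import memcache, urlfetch

def get_twitch_data(url, expiry_seconds=300):
    # Serve from memcache when possible; only hit the real API on a cache miss.
    data = memcache.get(url)
    if data is None:
        data = urlfetch.fetch(url).content
        memcache.set(url, data, time=expiry_seconds)
    return data
```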
1 | 0 | Are there any generally accepted practices to get around this? Specifically, for user-submitted images uploaded to a web service. My application is running in Python.
Some hacked solutions that came to mind:
Display the uploaded image from a local directory until the S3 image is ready, then "hand it off" and update the database to reflect the change.
Display a "waiting" progress indicator as a background gif and the image will just appear when it's ready (w/ JavaScript) | false | 12,241,945 | 0.197375 | 0 | 1 | 1 | I'd save time and not do anything. The wait times are pretty fast.
If you wanted to stall the end-user, you could just show a 'success' page without the image. If the image isn't available, most regular users will just hit reload.
If you really felt like you had to... I'd probably go with a javascript solution like this:
have a 'timestamp uploaded' column in your data store
if the upload time is under 1 minute, instead of rendering an img src tag... render some javascript that polls the s3 bucket at 15s intervals
Again, chances are most users will never experience this - and if they do, they won't really care. The UX expectations of user generated content are pretty low ( just look at Facebook ); if this is an admin backend for an 'enterprise' service that would make workflow better, you may want to invest time on the 'optimal' solution. For a public facing website though, i'd just forget about it. | 0 | 323 | 0 | 0 | 2012-09-03T04:22:00.000 | python,amazon-s3,amazon-web-services | What are some ways to work with Amazon S3 not offering read-after-write consistency in US Standard? | 1 | 1 | 1 | 12,242,133 | 0 |
0 | 0 | I have a CherryPy application running successfully using the built-in digest authentication tool and no session support. Now, I would like to expose additional features to certain users. Is it possible to obtain the currently-authenticated user from the authorization system? | false | 12,251,490 | 0.066568 | 0 | 0 | 1 | Found the user name encoded in the HTTP request header Authorization. I am able to parse it from there. If there's a "better" place to obtain the username, I'm open to improvements! | 0 | 2,411 | 0 | 2 | 2012-09-03T16:37:00.000 | python,http,authentication,cherrypy | How to get username with CherryPy digest authentication | 1 | 1 | 3 | 12,255,276 | 0 |
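A hedged sketch of parsing the username out of the Digest Authorization header inside a CherryPy handler, as the answer describes (purely illustrative; the username="..." field layout comes from RFC 2617):

```python
import re
import cherrypy

def current_digest_username():
    # The Digest Authorization header carries username="..." among its fields.
    auth = cherrypy.request.headers.get("Authorization", "")
    match = re.search(r'username="([^"]+)"', auth)
    return match.group(1) if match else None
```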
1 | 0 | I'm trying to use Selenium for some app testing, and I need it to plug a variable in when filling a form instead of a hardcoded string. IE:
this works
name_element.send_keys("John Doe")
but this doesnt
name_element.send_keys(username)
Does anyone know how I can accomplish this? Pretty big Python noob, but used Google extensively to try and find out. | false | 12,289,700 | 0 | 0 | 0 | 0 | I think username might be a variable in the python library you are using. Try calling it something else like Username1 and see if it works?? | 0 | 9,992 | 0 | 7 | 2012-09-05T21:00:00.000 | python,selenium | Passing variables through Selenium send.keys instead of strings | 1 | 2 | 5 | 60,626,053 | 0 |
1 | 0 | I'm trying to use Selenium for some app testing, and I need it to plug a variable in when filling a form instead of a hardcoded string. IE:
this works
name_element.send_keys("John Doe")
but this doesnt
name_element.send_keys(username)
Does anyone know how I can accomplish this? Pretty big Python noob, but used Google extensively to try and find out. | false | 12,289,700 | 0.039979 | 0 | 0 | 1 | Try this.
username = r'John Doe'
name_element.send_keys(username)
I was able to pass the string without casting it just fine in my test. | 0 | 9,992 | 0 | 7 | 2012-09-05T21:00:00.000 | python,selenium | Passing variables through Selenium send.keys instead of strings | 1 | 2 | 5 | 47,662,083 | 0 |
0 | 0 | Are there any Tools / Chrome plugins like "Advanced REST Plugin" for Chrome that I can use to Test Twisted RPC JSON calls?
Or do I have to write some Python code to do this? | false | 12,292,723 | 0.379949 | 0 | 0 | 2 | When it comes to testing, you should always use code, so that if you change something in the future or if you have to fix a bug, testing your changes will be a matter of seconds:
python mytestsuite.py | 0 | 76 | 0 | 0 | 2012-09-06T03:53:00.000 | python,django,testing,twisted,twisted.web | Tools to Test RPC twisted calls? | 1 | 1 | 1 | 12,486,382 | 0 |
1 | 0 | I need to export a website(.html page) to a XML file. The website contains a table with some data which i require for using in my web project. The table in the website is formed using some javascript, so i cannot get the data by getting the page source. Please tell me how I can export the table in the website to a XML file using php/python/javascript/perl. | true | 12,295,834 | 1.2 | 0 | 0 | 4 | You could try to reverse engineer the javascript code. Maybe it's making an ajax request to a service, that delivers the data as json. Use your browsers developer tools/network tab to see what's going on. | 0 | 2,036 | 0 | 1 | 2012-09-06T08:20:00.000 | php,javascript,python,xml,perl | Export a website to an XML Page | 1 | 1 | 1 | 12,295,930 | 0 |
0 | 0 | I need nodes in different shapes in NetworkX, some rectangle and triangle nodes. Does anybody know if it is possible?
regards | true | 12,317,388 | 1.2 | 0 | 0 | 4 | Found it, there is a property node_shape that allows changing shape. | 0 | 1,795 | 0 | 4 | 2012-09-07T11:38:00.000 | python,networkx | changing node shape in NetworkX | 1 | 1 | 1 | 12,317,478 | 0 |
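For reference, node_shape accepts a matplotlib marker code; a tiny sketch drawing some nodes as squares and others as triangles (graph and layout are placeholders):

```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.path_graph(4)
pos = nx.spring_layout(G)

# 's' = square, '^' = triangle (matplotlib marker codes).
nx.draw_networkx_nodes(G, pos, nodelist=[0, 1], node_shape='s')
nx.draw_networkx_nodes(G, pos, nodelist=[2, 3], node_shape='^')
nx.draw_networkx_edges(G, pos)
plt.show()
```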
0 | 0 | In selenium i am trying to enter text into the tinymce text area, but i am having trouble selecting the text area to input the text. Is there a way to select the text area behind tinymce or anyway to select tinymce so i can enter text. thanks | false | 12,320,273 | 0.066568 | 0 | 0 | 1 | Use command: runScript
Target: tinyMCE.get('text_area_id').setContent('Your text here')
or you can use tinyMCE.activeEditor.setContent('Your text here') which will select either the first or the last mceEditor, I forget.. | 0 | 525 | 0 | 1 | 2012-09-07T14:39:00.000 | python,selenium,tinymce | Having trouble inputting into tinymce via selenium | 1 | 1 | 3 | 22,871,270 | 0 |
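The runScript command above comes from Selenium IDE; from the Python WebDriver bindings the same call can be issued with execute_script (the page URL and editor id are placeholders):

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-tinymce")

# Drive TinyMCE through its own JavaScript API instead of typing into
# the hidden textarea behind it.
driver.execute_script("tinyMCE.get('text_area_id').setContent('Your text here');")
```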
0 | 0 | I have read this example from AutobahnPython: https://github.com/tavendo/AutobahnPython/tree/master/examples/websocket/broadcast
It looks pretty easy to understand and practice. But I want to add a little more. Members who submit the correct secret string can send the messages, anyone else can only view the information transmitted. Any idea?
Thanks! | false | 12,321,301 | 0.099668 | 1 | 0 | 1 | Well, it is purely your logic in the code. When you receive the message you are simply broadcasting it; what you have to do is pass it to a custom function and, there, do a check:
Create a temporary array that contains the list of active authenticated users. When a user logs on, they should send this special string; match it, and if it is OK, add this user to the active user list array, if not, don't add them. Later, call the broadcast function, but rather than taking all online users, use this custom array instead.
That is all that you have to do.
Make sure that when someone logs out, you remove them from this array. | 0 | 1,310 | 0 | 1 | 2012-09-07T15:39:00.000 | python,websocket,broadcasting,autobahn | Python - Broadcasting with WebSocket using AutobahnPython | 1 | 1 | 2 | 14,835,403 | 0 |
0 | 0 | I currently have the problem that I have a server script running on one computer as localhost:12123. I can connect to it using the same computer but using another computer in the same network does not connect to it (says it does not exist). Firewall is disabled.
Does it have to do with permissions?
The socket is created by a python file using BaseHTTPServer. | false | 12,333,831 | 0.099668 | 0 | 0 | 1 | Bind to 0.0.0.0 or the outside IP address instead, obviously. | 0 | 1,991 | 0 | 0 | 2012-09-08T19:34:00.000 | python,sockets,port | How to make HTTP server with port not 80 available for network? | 1 | 1 | 2 | 12,333,846 | 0 |
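With BaseHTTPServer, that means binding to '0.0.0.0' (or the machine's LAN address) instead of 'localhost'; a minimal Python 2 sketch:

```python
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write("hello from the LAN\n")

# '0.0.0.0' binds to all interfaces, so other machines on the network can
# reach port 12123; binding to 'localhost' restricts it to this host only.
HTTPServer(("0.0.0.0", 12123), Handler).serve_forever()
```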
0 | 0 | I created a program which listens on a particular socket in Python; however, I Ctrl+C'd the script, which meant .close() was never called. How can I free the socket now? | false | 12,336,611 | 0.197375 | 0 | 0 | 2 | The socket is closed when the process exits. The port it was using may hang around for a couple of minutes, that's normal, then it will disappear. If you need to re-use the port immediately, set SO_REUSEADDR before binding or connecting. | 0 | 142 | 0 | 0 | 2012-09-09T04:57:00.000 | python,sockets | closing a previously opened socket | 1 | 1 | 2 | 12,336,669 | 0 |
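For reference, setting the option the answer mentions looks like this with the standard-library socket module:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow rebinding to a port that is still in TIME_WAIT from a previous run.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 12345))
sock.listen(5)
```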
1 | 0 | I want to scrape a webpage that changes its content via a <select> tag. When I select a different option, the content of the page dynamically changes. I want to know if there is a way that I can change the option from a python script so I can get the content from all different pages of all different options in <select> tag. | false | 12,349,295 | 0 | 0 | 0 | 0 | I assume you use some library like urllib to do the scraping. You already know the website's content changes dynamically. I also assume that the dynamic content uses server-side interaction. This means, using javascript (ajax) the browser requests new data from the server, based on the value from the selection).
If so, then you could try to emulate the ajax call to the server in your web scraping library.
First, using a browser debugging tool, find out the URL on the server that is being invoked.
Split out the parameter parts of the ajax call.
Perform the same call to look up the options in the select tag. | 0 | 2,430 | 0 | 3 | 2012-09-10T09:57:00.000 | python,web-scraping | How to scrape a webpage that changes content from tag | 1 | 2 | 2 | 12,349,430 | 0 |
1 | 0 | I want to scrape a webpage that changes its content via a <select> tag. When I select a different option, the content of the page dynamically changes. I want to know if there is a way that I can change the option from a python script so I can get the content from all different pages of all different options in <select> tag. | false | 12,349,295 | 0 | 0 | 0 | 0 | As @Tichodroma said, when the select is changed, either:
Some content previously hidden on the page is made visible, or:
An ajax call is made to retrieve some additional content and add it to the DOM
In both cases, JavaScript is involved. Have a look at it, and depending on what is happening (case #1 or #2), you should:
Scrape the whole page, since all the content you want is already in it, or:
Make several calls to the file usually called using ajax to retrieve the content you want for each value of the <select> | 0 | 2,430 | 0 | 3 | 2012-09-10T09:57:00.000 | python,web-scraping | How to scrape a webpage that changes content from tag | 1 | 2 | 2 | 12,349,434 | 0 |
1 | 0 | Backdrop: Am building a shopify app using a test store provided by shopify. #Python #Django-
Problem: I have setup shopify webhooks for my test store using the python API for the topics "products/update" and "products/delete". But my endpoints are not called by shopify when I manually update or delete a product on my test store.
My detective work so far: I have checked the following:
I have confirmed that the webhooks were successfully created using the API. I simply listed all the existing webhooks using the API for the store and mine are there.
The address/URL I specified in the webhook for shopify to call in the event of a product update or delete is a public url, as in it is not on my localhost. (not 127.0.0.1:8000 etc.)
My webhook endpoint is fine. When I manually call my endpoint in a test case, it does what it should.
I contacted the shopify apps support guys, and I was asked to post this issue here.
Another minor issue is that I cannot find in the shopify API docs exactly what JSON/XML the webhook will POST to my URL in the event it should. So I do not know what that JSON will look like...
Any help would be appreciated! | true | 12,354,189 | 1.2 | 0 | 0 | 1 | Thanks for the answers guys, but I found out that the issue was something else.
I forgot to make a CSRF exemption for the POST request URL that Shopify calls and also forgot to add a trailing slash '/' at the end of the URL I told the webhook to call.
I guess I would have caught these errors if I had used something like postcatcher.in as suggested in the comments above. I didn't bother doing that as it looked like too much of a hassle. | 0 | 2,272 | 0 | 2 | 2012-09-10T14:50:00.000 | python,django,shopify,webhooks | Shopify webhook not working when product updated/deleted | 1 | 2 | 2 | 12,423,594 | 0 |
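For reference, those two fixes translate to something like this in Django (view and URL names are placeholders for illustration):

```python
# views.py
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt                         # Shopify cannot send Django's CSRF token
def product_update_webhook(request):
    payload = request.body          # raw JSON body POSTed by Shopify
    # ... verify and process the webhook payload here ...
    return HttpResponse(status=200)

# urls.py -- note the trailing slash, matching the address registered with
# the webhook, so Django doesn't answer the POST with a redirect:
# url(r'^webhooks/products/update/$', product_update_webhook),
```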
1 | 0 | Backdrop: Am building a shopify app using a test store provided by shopify. #Python #Django-
Problem: I have setup shopify webhooks for my test store using the python API for the topics "products/update" and "products/delete". But my endpoints are not called by shopify when I manually update or delete a product on my test store.
My detective work so far: I have checked the following:
I have confirmed that the webhooks were successfully created using the API. I simply listed all the existing webhooks using the API for the store and mine are there.
The address/URL I specified in the webhook for shopify to call in the event of a product update or delete is a public url, as in it is not on my localhost. (not 127.0.0.1:8000 etc.)
My webhook endpoint is fine. When I manually call my endpoint in a test case, it does what it should.
I contacted the shopify apps support guys, and I was asked to post this issue here.
Another minor issue is that I cannot find in the shopify API docs exactly what JSON/XML the webhook will POST to my URL in the event it should. So I do not know what that JSON will look like...
Any help would be appreciated! | false | 12,354,189 | 0.099668 | 0 | 0 | 1 | I don't have the creds to comment apparently, so I'll put this in an "answer" - to use the term very loosely - instead. I ran into something similar with the Python API, but soon realized that I was doing it wrong. In my case, it was toggling the fulfillment status, which then fires off an email notifying customers of a download location for media.
What I was doing wrong was this: I was modifying the fulfillment attribute of the order object directly. Instead, the correct method was to fetch / create a fulfillment object, modify that, point the order attribute to this object, then save() it. This worked.
I don't know if this is your issue as there's no code posted, but I hope this helps.
--Matt | 0 | 2,272 | 0 | 2 | 2012-09-10T14:50:00.000 | python,django,shopify,webhooks | Shopify webhook not working when product updated/deleted | 1 | 2 | 2 | 12,389,770 | 0 |
0 | 0 | Is there any way to count the number of currently connected clients in a zeromq socket? If that's not possible, is there any way to determine whether the socket has no client connected to it?
Thanks. | true | 12,368,120 | 1.2 | 0 | 0 | 2 | You could simply implement an counter with a second socket.
Each time you have an active client or you close your socket, send a message on your "socket counter".
ZeroMQ is made to combine sockets. | 0 | 1,053 | 0 | 3 | 2012-09-11T10:54:00.000 | python,zeromq | Counting The Number of Connection in a ZeroMQ Socket | 1 | 1 | 1 | 12,396,881 | 0 |
1 | 0 | I'm thinking if a user submits a message and they click a 'suggest tags' button, their message would be analyzed and a form field populated with random words from their post.
Is it possible to do this on a scalable level? Would JavaScript be able to handle it, or would it be better to Ajax back to Python?
I'm thinking certain common words would be excluded (a, the, and, etc) and maybe the 10 longest words or just random not common words would be added to a form field like "tag1, tag2, tag3" | false | 12,372,258 | 0 | 0 | 0 | 0 | Agree with @unwind , it depends on the content length of the text and your algorithm to grab the tags(scalability) | 0 | 60 | 0 | 0 | 2012-09-11T14:39:00.000 | javascript,python,tags | Is it possible to automatically pull random "tags" from a long string of text? | 1 | 2 | 2 | 12,372,353 | 0 |
1 | 0 | I'm thinking if a user submits a message and they click a 'suggest tags' button, their message would be analyzed and a form field populated with random words from their post.
Is it possible to do this on a scalable level? Would JavaScript be able to handle it, or would it be better to Ajax back to Python?
I'm thinking certain common words would be excluded (a, the, and, etc) and maybe the 10 longest words or just random not common words would be added to a form field like "tag1, tag2, tag3" | false | 12,372,258 | 0 | 0 | 0 | 0 | Of course it's possible, you pretty much described the algorithm to test, and it doesn't seem to contain any obviously non-computable steps:
Split the message into words
Filter out the common words
Sort the words by length
Pick the top ten and present them as tags
Not sure what you mean by "scalable level", this sounds client-side to me. Unless the messages are very long, i.e. not typed in by a human, I don't think there will be any problems just doing it. | 0 | 60 | 0 | 0 | 2012-09-11T14:39:00.000 | javascript,python,tags | Is it possible to automatically pull random "tags" from a long string of text? | 1 | 2 | 2 | 12,372,295 | 0 |
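A plain-Python sketch of the four steps listed in the answer above (the stop-word set here is a tiny illustrative subset, not a complete one):

```python
import re

COMMON_WORDS = {"a", "an", "and", "the", "of", "to", "in", "is", "it", "for"}

def suggest_tags(message, count=10):
    # 1. split into words, 2. drop common words, 3. sort by length,
    # 4. keep the longest `count` unique words as suggested tags.
    words = re.findall(r"[a-zA-Z']+", message.lower())
    candidates = {w for w in words if w not in COMMON_WORDS}
    return sorted(candidates, key=len, reverse=True)[:count]

print(", ".join(suggest_tags("I want to send and receive messages between two Python programs")))
```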
0 | 0 | I am trying to download emails using imaplib with Python. I have tested the script using my own email account, but I am having trouble doing it for my corporate gmail (I don't know how the corporate gmail works, but I go to gmail.companyname.com to sign in). When I try running the script with imaplib.IMAP4_SSL("imap.gmail.companyname.com", 993), I get an error gaierror name or service not known. Does anybody know how to connect to a my company gmail with imaplib? | false | 12,375,113 | 0.197375 | 1 | 0 | 2 | IMAP server is still imap.gmail.com -- try with that? | 0 | 321 | 0 | 1 | 2012-09-11T17:43:00.000 | python,gmail,gmail-imap,imaplib | IMAP in Corporate Gmail | 1 | 1 | 2 | 12,375,120 | 0 |
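As the answer says, Google-hosted corporate mail still uses the regular Gmail IMAP endpoint; a minimal sketch (address and password are placeholders):

```python
import imaplib

# Corporate Google-hosted mail still talks to imap.gmail.com,
# not imap.gmail.companyname.com.
conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
conn.login("me@companyname.com", "app-password")
conn.select("INBOX")
typ, data = conn.search(None, "ALL")
print(typ, len(data[0].split()), "messages")
conn.logout()
```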
0 | 0 | I would like to know why Python 2.7 doesn't drop blocking operations when Ctrl+C is pressed; I am unable to kill my threaded application, and there are several socket waits, semaphore waits and so on. In Python 3, Ctrl+C dropped every blocking operation and garbage-collected everything, released all the sockets and so on... Is there (I am convinced there is, I just don't know how yet) a way to accomplish this? A signal handler? Thanks guys | true | 12,382,229 | 1.2 | 0 | 0 | 0 | I guess you are launching the threads and then the main thread is waiting to join them on termination.
You should catch the exception generated by Ctrl-C in the main thread, in order to signal the spawned threads to terminate (changing a flag in each thread, for instance). In this manner, all the child threads will terminate and the main thread will complete the join call, reaching the bottom of your main. | 0 | 487 | 0 | 0 | 2012-09-12T06:19:00.000 | python,multithreading,python-2.7,exit,kill | Terminate python application waiting on semaphore | 1 | 1 | 1 | 12,390,276 | 0 |
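A small sketch of that flag-based shutdown on Python 2.7 (the worker logic is a placeholder):

```python
import threading
import time

stop = threading.Event()

def worker():
    while not stop.is_set():
        # do a bounded amount of work, then re-check the flag
        time.sleep(0.1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

try:
    # Keep the main thread in an interruptible wait instead of a bare join().
    while any(t.is_alive() for t in threads):
        time.sleep(0.2)
except KeyboardInterrupt:
    stop.set()          # tell every worker to finish up
    for t in threads:
        t.join()
```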
1 | 0 | I am trying to fetch the HTML content of a website using urllib2. The site has a body onload event that submits a form on this site, and hence it goes to a destination site and renders the details I need.
response = urllib2.urlopen('www.xyz.com?var=999-999')
www.xyz.com contains a form that is posted to "www.abc.com", this
action value varies depending upon the content in url 'var=999-999'
which means action value will change if the var value changes to
'888-888'
response.read()
this still gives me the html content of "www.xyz.com", but I want
that of the resulting action url. Any suggestions for fetching the html
content from the final page?
Thanks in advance | false | 12,384,056 | 0.197375 | 0 | 0 | 1 | You have to figure out the call to that second page, including the parameters sent, so you can make that call yourself from your Python code. The best way is to navigate the first page with the Google Chrome page inspector open, then go to the Network tab, where the POST call will be captured and you can see the parameters sent. Then just recreate that same POST call with urllib2. | 0 | 296 | 0 | 0 | 2012-09-12T08:29:00.000 | python,urllib2 | Fetch html content from a destination url that is on onload of the first site in urllib2 | 1 | 1 | 1 | 12,384,339 | 0 |
0 | 0 | I want to parse application layer protocols from network trace using Google protocol buffer and replay the trace (I am using python). I need suggestions to automatically generate protocol message description (in .proto file) from a network trace. | false | 12,398,517 | 0 | 0 | 0 | 0 | So you want to reconstruct what .proto messages were being passed over the application-layer protocol?
This isn't as easy as it sounds. First, .proto messages can't be sent raw over the wire, as the receiver needs to know how long they are. They need to be encapsulated somehow, maybe in an HTTP POST or with a raw 4-byte size prepended. I don't know what it would be for your application, but you'll need to deal with that.
Second, you can't reconstruct the full .proto from the messages alone. You only get tag numbers and types, not names. In addition, you will lose information about submessages - submessages and plain strings are encoded identically (you could probably tell which is which by eyeballing them, but I don't think you could do it automatically). You also will never know about optional items that never got sent. But you could parse the buffer without the proto and get some reasonable data (ints, repeated strings, and such).
Third, you need to reconstruct the application byte stream from the pcap log. I'm not sure how to do that, but I suspect there are tools that would do that for you. | 0 | 455 | 0 | 0 | 2012-09-13T02:12:00.000 | python,protocol-buffers,pcap | Google protocol buffer for parsing Text and Binary protocol messages in network trace (PCAP) | 1 | 1 | 1 | 12,399,100 | 0 |
0 | 0 | I have a consumer which listens for messages, if the flow of messages is more than the consumer can handle I want to start another instance of this consumer.
But I also want to be able to poll for information from the consumer(s); my thought was that I could use RPC to request this information from the producers by using a fanout exchange so all the producers get the RPC call.
My question is, first of all, is this possible and, secondly, is it reasonable? | true | 12,407,485 | 1.2 | 1 | 0 | 1 | After some research it seems that this is not possible. If you look at the tutorial on RabbitMQ.com you see that there is an id for the call which, as far as I understand, gets consumed.
I've chosen to go another way, which is reading the log files and aggregating the data. | 0 | 2,271 | 0 | 3 | 2012-09-13T13:31:00.000 | python,rabbitmq,messaging,pika | RPC calls to multiple consumers | 1 | 1 | 2 | 12,478,098 | 0 |
1 | 0 | I am building a screen clipping app.
So far:
I can get the html mark up of the part of the web page the user has selected including images and videos.
I then send them to a server to process the html with BeautifulSoup to sanitize the html and convert all relative paths if any to absolute paths
Now I need to render the part of the page. But I have no way to render the styling. Is there any library to help me in this matter or any other way in python ?
One way would be to fetch the whole webpage with urllib2 and remove the parts of the body I don't need and then render it.
But there must be a more pythonic way :)
Note: I don't want a screenshot. I am trying to render proper html with styling.
Thanks :) | true | 12,436,551 | 1.2 | 0 | 0 | 1 | Download the complete webpage, extract the style elements and the stylesheet link elements, and download the files referenced by the latter. That should give you the CSS used on the page. | 0 | 118 | 0 | 0 | 2012-09-15T10:24:00.000 | python,web-scraping | Python : Rendering part of webpage with proper styling from server | 1 | 1 | 1 | 12,437,002 | 0 |
1 | 0 | Scenario: User loads a page, image is being generated, show loading bar, notification event sent to browser.
I am using python code to generate the image. Would it be ideal to have a web server that launches the script or embed a webserver code into the python script? Once the image is finished rendering, the client should receive a message saying it's successful and display the image.
How can this be architected to support concurrent users as well? Would simply launching the python script for each new user that navigates to the web page suffice?
Would it be overkill to have real-time web application for this scenario? Trying to decide whether simple jQuery AJAX will suffice or Socket.io should be used to have a persistent connection between server and client.
Any libraries out there that fit my needs? | false | 12,498,694 | 0.099668 | 0 | 0 | 1 | I personally love Socket.IO and I would do it with it.
Because it would be a simpler way. But that may be a bit too much work to set up just for that, especially since it is not that simple in Python from what I heard, compared to Node where it really is about 10 lines server side.
Without Socket.IO you could do long polling to get the status of the image processing, and get the URL at the end (or the image in base64, if that is what you want). | 0 | 88 | 0 | 0 | 2012-09-19T16:11:00.000 | php,javascript,python | Javascript-Python: serve dynamically generated images to client browser? | 1 | 1 | 2 | 12,498,821 | 0 |
1 | 0 | I have written many scrapers but I am not really sure how to handle infinite scrollers. These days most websites, e.g. Facebook and Pinterest, have infinite scrollers. | false | 12,519,074 | 0.066568 | 0 | 0 | 1 | Finding the url of the ajax source will be the best option but it can be cumbersome for certain sites. Alternatively you could use a headless browser like QWebKit from PyQt and send keyboard events while reading the data from the DOM tree. QWebKit has a nice and simple API. | 0 | 29,452 | 0 | 31 | 2012-09-20T18:56:00.000 | python,screen-scraping,scraper | scrape websites with infinite scrolling | 1 | 1 | 3 | 12,529,766 | 0 |
1 | 0 | Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks | false | 12,570,465 | 0 | 0 | 1 | 0 | Given that encryption at rest is a much desired data standard now, smart_open does not support this afaik | 0 | 52,339 | 0 | 38 | 2012-09-24T18:09:00.000 | python,amazon-s3,amazon | How to upload a file to S3 without creating a temporary local file | 1 | 2 | 12 | 56,126,467 | 0 |
1 | 0 | Is there any feasible way to upload a file which is generated dynamically to amazon s3 directly without first create a local file and then upload to the s3 server? I use python. Thanks | false | 12,570,465 | 0.033321 | 0 | 1 | 2 | I assume you're using boto. boto's Bucket.set_contents_from_file() will accept a StringIO object, and any code you have written to write data to a file should be easily adaptable to write to a StringIO object. Or if you generate a string, you can use set_contents_from_string(). | 0 | 52,339 | 0 | 38 | 2012-09-24T18:09:00.000 | python,amazon-s3,amazon | How to upload a file to S3 without creating a temporary local file | 1 | 2 | 12 | 12,570,568 | 0 |
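A hedged sketch with boto (the pre-boto3 library the answer refers to); the bucket name, key and content are placeholders:

```python
from cStringIO import StringIO

import boto

conn = boto.connect_s3()                      # reads AWS creds from env/boto config
bucket = conn.get_bucket("my-bucket")
key = bucket.new_key("generated/report.csv")

# Build the content in memory and upload it without touching the local disk.
buf = StringIO()
buf.write("col_a,col_b\n1,2\n")
buf.seek(0)
key.set_contents_from_file(buf)               # or: key.set_contents_from_string(data)
```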
0 | 0 | What is the best way to communicate between a Python 3.x and a Python 2.x program?
We're writing a web app whose front end servers will be written in Python 3 (CherryPy + uWSGI), primarily because it is a unicode-heavy app and Python 3.x has cleaner support for unicode.
But we need to use systems like Redis and Boto (AWS client) which don't yet have Python 3 support.
Hence we need to create a system in which we can communicate between Python 3.x and 2.x programs.
What do you think is the best way to do this? | false | 12,597,394 | 0.379949 | 1 | 0 | 2 | The best way? Write everything in Python 2.x. It's a simple question: can I do everything in Python 2.x? Yes! Can I do everything in Python 3.x? No. What's your problem then?
But if you really, really have to use two different Python versions ( why not two different languages for example? ) then you will probably have to create two different servers ( which will be clients at the same time ) which will communicate via TCP/UDP or whatever protocol you want. This might actually be quite handy if you think about scaling the application in the future. Although let me warn you: it won't be easy at all. | 0 | 1,491 | 0 | 3 | 2012-09-26T08:15:00.000 | python,python-3.x,python-2.x | communication between Python 3 and Python 2 | 1 | 1 | 1 | 12,599,590 | 0 |
0 | 0 | Any web server might have to handle a lot of requests at the same time. As python interpreter actually has GIL constraint, how concurrency is implemented?
Do they use multiple processes and use IPC for state sharing? | false | 12,603,678 | 0.132549 | 0 | 0 | 2 | You usually have many workers(i.e. gunicorn), each being dispatched with independent requests. Everything else(concurrency related) is handled by the database so it is abstracted from you.
You don't need IPC, you just need a "single source of truth", which will be the RDBMS, a cache server(redis, memcached), etc. | 1 | 2,640 | 0 | 16 | 2012-09-26T14:11:00.000 | python,django,concurrency,webserver | How does a python web server overcomes GIL | 1 | 2 | 3 | 12,604,317 | 0 |
0 | 0 | Any web server might have to handle a lot of requests at the same time. As python interpreter actually has GIL constraint, how concurrency is implemented?
Do they use multiple processes and use IPC for state sharing? | false | 12,603,678 | 0.066568 | 0 | 0 | 1 | As normal. Web serving is mostly I/O-bound, and the GIL is released during I/O operations. So either threading is used without any special accommodations, or an event loop (such as Twisted) is used. | 1 | 2,640 | 0 | 16 | 2012-09-26T14:11:00.000 | python,django,concurrency,webserver | How does a python web server overcomes GIL | 1 | 2 | 3 | 12,603,848 | 0 |
0 | 0 | I've got an XML file I want to parse with python. What is best way to do this? Taking into memory the entire document would be disastrous, I need to somehow read it a single node at a time.
Existing XML solutions I know of:
element tree
minixml
but I'm afraid they aren't quite going to work because of the problem I mentioned. Also I can't open it in a text editor - any good tips in generao for working with giant text files? | false | 12,612,229 | 0.197375 | 0 | 0 | 2 | The best solution will depend in part on what you are trying to do, and how free your system resources are. Converting it to a postgresql or similar database might not be a bad first goal; on the other hand, if you just need to pull data out once, it's probably not needed. When I have to parse large XML files, especially when the goal is to process the data for graphs or the like, I usually convert the xml to S-expressions, and then use an S-expression interpreter (implemented in python) to analyse the tags in order and build the tabulated data. Since it can read the file in a line at a time, the length of the file doesn't matter, so long as the resulting tabulated data all fits in memory. | 0 | 3,511 | 0 | 3 | 2012-09-27T00:04:00.000 | python,xml,xml-parsing,large-files | Parsing a large (~40GB) XML text file in python | 1 | 1 | 2 | 12,613,046 | 0 |
0 | 0 | I'm writing some XML with element tree.
I'm giving the code an empty template file that starts with the XML declaration:<?xml version= "1.0"?> when ET has finished making its changes and writes the completed XML its stripping out the declarion and starting with the root tag. How can I stop this?
Write call:
ET.ElementTree(root).write(noteFile) | false | 12,612,648 | 1 | 0 | 0 | 6 | There are different versions of ElementTree.
Some of them accept the xml_declaration argument, some do not.
The one I happen to have does not. It emits the declaration if and only if encoding != 'utf-8'. So, to get the declaration, I call write(filename, encoding='UTF-8'). | 0 | 13,577 | 0 | 10 | 2012-09-27T01:06:00.000 | python,xml,elementtree | Python - Element Tree is removing the XML declaration | 1 | 1 | 2 | 19,738,566 | 0 |
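A small sketch of the workaround described above (the output file name is hypothetical; on ElementTree versions that accept it, the xml_declaration argument can be passed instead):

import xml.etree.ElementTree as ET

root = ET.Element('notes')
ET.SubElement(root, 'note').text = 'hello'
tree = ET.ElementTree(root)
# Passing an explicit encoding makes this ElementTree version emit the <?xml ...?> declaration.
tree.write('notes.xml', encoding='UTF-8')
# On versions that support it, the declaration can be forced explicitly:
# tree.write('notes.xml', encoding='UTF-8', xml_declaration=True)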
0 | 0 | Here's what I need to do:
I need to copy files over the network. The files to be copied is in the one machine and I need to send it to the remote machines. It should be automated and it should be made using python. I am quite familiar with os.popen and subprocess.Popen of python. I could use this to copy the files, BUT, the problem is once I have run the one-liner command (like the one shown below)
scp xxx@localhost:file1.txt yyy@192.168.104.XXX:file2.txt
it will definitely ask for something like
Are you sure you want to connect (yes/no)?
Password :
And if im not mistaken., once I have sent this command (assuming that I code this in python)
conn.modules.os.popen("scp xxx@localhost:file1.txt yyy@192.168.104.XXX:file2.txt")
and followed by this command
conn.modules.os.popen("yes")
The output (and I'm quite sure that it would give me errors) would be the different comparing it to the output if I manually type it in in the terminal.
Do you know how to code this in python? Or could you tell me something (a command etc.) that would solve my problem
Note: I am using RPyC to connect to other remote machines and all machines are running on CentOS | false | 12,613,552 | 0 | 0 | 0 | 0 | per my experience, use sftp the first time will prompt user to accept host public key, such as
The authenticity of host 'xxxx' can't be established.
RSA key fingerprint is xxxx. Are you sure you want to continue connecting
(yes/no)?
Once you input yes, the public key will be saved in ~/.ssh/known_hosts, and next time you will not get such a prompt/alert.
To avoid this prompt/alert in a batch script, you can turn strict host key checking off with
scp -Bqpo StrictHostKeyChecking=no
but then you are vulnerable to man-in-the-middle attacks.
You can also choose to connect to the target server manually and save the host public key before deploying your batch script. | 0 | 1,825 | 1 | 0 | 2012-09-27T03:14:00.000 | python,shell,terminal,centos | How to automate the sending of files over the network using python? | 1 | 1 | 3 | 12,613,729 | 0
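A minimal sketch of running the non-interactive scp call described above from Python (host, user and file names are hypothetical; this still assumes key-based authentication so no password prompt appears):

import subprocess

cmd = [
    'scp',
    '-o', 'StrictHostKeyChecking=no',   # skip the yes/no host key prompt (man-in-the-middle risk applies)
    '-B',                                # batch mode: never ask for a password
    'file1.txt',
    'yyy@192.168.104.10:file2.txt',      # hypothetical target host and path
]
result = subprocess.call(cmd)
if result != 0:
    print('scp failed with exit code %d' % result)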
0 | 0 | I am finding Neo4j slow to add nodes and relationships/arcs/edges when using the REST API via py2neo for Python. I understand that this is due to each REST API call executing as a single self-contained transaction.
Specifically, adding a few hundred pairs of nodes with relationships between them takes a number of seconds, running on localhost.
What is the best approach to significantly improve performance whilst staying with Python?
Would using bulbflow and Gremlin be a way of constructing a bulk insert transaction?
Thanks! | false | 12,643,662 | 0.07983 | 0 | 1 | 2 | Well, I myself needed massive performance from neo4j. I ended up doing the following things to improve graph performance.
Ditched py2neo, since there were a lot of issues with it. Besides, it is very convenient to use the REST endpoint provided by neo4j; just make sure to use request sessions.
Use raw Cypher queries for bulk inserts, instead of any OGM (Object-Graph Mapper). That is crucial if you need a high-performance system.
Performance was still not enough for my needs, so I ended up writing a custom system that merges 6-10 queries together using WITH * and UNION clauses. That improved performance by a factor of 3 to 5 times.
Use a larger transaction size with at least 1000 queries. | 0 | 12,651 | 0 | 18 | 2012-09-28T16:15:00.000 | python,neo4j,py2neo | Fastest way to perform bulk add/insert in Neo4j with Python? | 1 | 1 | 5 | 31,026,259 | 0
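A rough sketch of the "raw Cypher over REST with a requests session" idea from the answer above (it assumes the Neo4j 2.x transactional HTTP endpoint at the default localhost address and the old {param} parameter syntax; node labels and properties are hypothetical):

import json
import requests

session = requests.Session()                                 # reuse the TCP connection between calls
url = 'http://localhost:7474/db/data/transaction/commit'     # Neo4j 2.x transactional endpoint
headers = {'Content-Type': 'application/json'}

def bulk_create(pairs):
    # One parameterised statement creates many nodes and relationships in a single transaction.
    statement = ('UNWIND {pairs} AS p '
                 'CREATE (a:Item {name: p.a})-[:LINKS_TO]->(b:Item {name: p.b})')
    payload = {'statements': [{'statement': statement, 'parameters': {'pairs': pairs}}]}
    r = session.post(url, data=json.dumps(payload), headers=headers)
    r.raise_for_status()
    return r.json()

bulk_create([{'a': 'n%d' % i, 'b': 'm%d' % i} for i in range(1000)])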
0 | 0 | I've been inspecting two similar solutions for supporting web sockets via sockJS using an independent Python server, and so far I found two solutions.
I need to write a complex, scalable web socket based web application, and I'm afraid it will be hard to scale Tornado, and it seems Vertx is better with horizontal scaling of web sockets.
I also understand that Redis can be used in conjunction with Tornado for scaling a pub/sub system horizontally, and HAproxy for scaling the SockJS requests.
Between Vertx and Tornado, what is the preferred solution for writing a scalable system which supports SockJS? | false | 12,652,336 | 0.291313 | 0 | 0 | 3 | Vertx has build-in clustering support. I haven't tried it with many nodes, but it seemed to work well with a few. Internally it uses hazelcast to organise the nodes.
Vertx also runs on a JVM, which has already many monitoring/admin tools which might be useful. So Vertx seems to me like the "batteries included" solution. | 0 | 1,306 | 1 | 2 | 2012-09-29T11:32:00.000 | python,tornado,vert.x,sockjs | Vertx SockJS server vs sockjs-tornado | 1 | 1 | 2 | 13,562,205 | 0 |
1 | 0 | I currently have been assigned to create a web crawler to automate some reporting tasks I do. This web crawler would have to login with my credentials, search specific things in different fields (some in respect to the the current date), download CSVs that contain the data if there is any data available, parse the CSVs quickly to get a quick number count, create an email with the CSVs attached and send it.
I currently know C++ and Python very well, am in the process of learning C, but I was told that Ruby or Ruby on Rails was a great way to do this. Is Ruby on Rails solely for creating web apps, and if so, does my task fit the description of a web app, or can I just make a standalone program that runs and does it all?
I would like to know which language would be the easiest to code with (has easy to use modules), has a good library/module relative to these tasks. What would I need to take into account before undergoing this task? I have till the end of December to make this, and I only work here for around 12 hours per week (I'm a student, and this is for my internship). Is this feasible?
Thanks. | false | 12,693,054 | 0.066568 | 0 | 0 | 1 | Adding to mechanize:
if your page has a javascript component that mechanize can't handle, selenium drives an actual web browser. If you're hellbent on using ruby, you can also use WATIR, but selenium has both ruby and python bindings. | 0 | 321 | 0 | 3 | 2012-10-02T15:08:00.000 | c++,python,ruby-on-rails,web-applications,web-crawler | Easiest way to tackle this web crawling task? | 1 | 2 | 3 | 12,697,280 | 0
1 | 0 | I currently have been assigned to create a web crawler to automate some reporting tasks I do. This web crawler would have to login with my credentials, search specific things in different fields (some in respect to the the current date), download CSVs that contain the data if there is any data available, parse the CSVs quickly to get a quick number count, create an email with the CSVs attached and send it.
I currently know C++ and Python very well, am in the process of learning C, but I was told that Ruby or Ruby on Rails was a great way to do this. Is Ruby on Rails solely for creating web apps, and if so, does my task fit the description of a web app, or can I just make a standalone program that runs and does it all?
I would like to know which language would be the easiest to code with (has easy to use modules), has a good library/module relative to these tasks. What would I need to take into account before undergoing this task? I have till the end of December to make this, and I only work here for around 12 hours per week (I'm a student, and this is for my internship). Is this feasible?
Thanks. | false | 12,693,054 | 0 | 0 | 0 | 0 | While this is not a great Stackoverflow question, since you are a student and it's for an internship, it seems like it would be in poor form to flag it, or down-vote it. :)
Basically, you can pretty much accomplish this task with any of the languages you listed. If you want learning Ruby as a part of your experience for your internship, then this might be a great project and a way of learning it. But, python would work great, also (you could probably use Mechanize). I should probably disclose that I'm a Python developer and I love it. I think it's a great language with great support and tools. I'm sure the Ruby guys feel the same about their language. Again, I think it's what you want to try to accomplish during your internship. What experience do you want to take away, etc. Best of luck. | 0 | 321 | 0 | 3 | 2012-10-02T15:08:00.000 | c++,python,ruby-on-rails,web-applications,web-crawler | Easiest way to tackle this web crawling task? | 1 | 2 | 3 | 12,693,218 | 0 |
0 | 0 | my network does not support the ipv6 hence i have no access to ipv6 servers, is there any solution to connect to them using sockets that uses 'AF_INET' domain? or any kind of other solutions? is there any server on the Internet that does such a convert for free?
I can read Python and C++. | false | 12,698,862 | 0.291313 | 0 | 0 | 3 | No; you cannot connect to an IPv6 server without some form of IPv6 transit.
Depending on your network, you may be able to set up a 6to4 gateway. This is a server configuration change, though, and is outside the scope of Stack Overflow. | 0 | 557 | 0 | 2 | 2012-10-02T21:47:00.000 | c++,python,sockets,networking,network-programming | can i connect to a ipv6 address via a AF_INET domain socket? | 1 | 1 | 2 | 12,698,891 | 0 |
0 | 0 | I'm writing up an IRC bot from scratch in Python and it's coming along fine.
One thing I can't seem to track down is how to get the bot to send a message to a user that is private (only viewable to them) but within a channel and not a separate PM/conversation.
I know that it must be there somewhere but I can't find it in the docs.
I don't need the full function, just the command keyword to invoke the action from the server (eg PRIVMSG).
Thanks folks. | true | 12,707,239 | 1.2 | 1 | 0 | 2 | Are you looking for /notice ? (see irchelp.org/irchelp/misc/ccosmos.html#Heading227) | 0 | 2,449 | 0 | 0 | 2012-10-03T11:08:00.000 | python,protocols,irc | IRC msg to send to server to send a "whisper" message to a user in channel | 1 | 1 | 1 | 12,721,513 | 0 |
0 | 0 | I'm trying to track several keywords at once, with the following url:
https://stream.twitter.com/1.1/statuses/filter.json?track=twitter%2C%20whatever%2C%20streamingd%2C%20
But the stream only returns results for the first keyword?! What am I doing wrong? | true | 12,708,573 | 1.2 | 1 | 0 | 0 | Try without spaces (ie. the %20). Doh! | 0 | 163 | 0 | 0 | 2012-10-03T12:34:00.000 | python,twitter,urlencode | Twitter Public Stream URL when several track keywords? | 1 | 1 | 1 | 12,708,663 | 0 |
1 | 0 | I have a server which files get uploaded to, I want to be able to forward these on to s3 using boto, I have to do some processing on the data basically as it gets uploaded to s3.
The problem I have is the way they get uploaded I need to provide a writable stream that incoming data gets written to and to upload to boto I need a readable stream. So it's like I have two ends that don't connect. Is there a way to upload to s3 with a writable stream? If so it would be easy and I could pass upload stream to s3 and it the execution would chain along.
If there isn't I have two loose ends which I need something in between with a sort of buffer, that can read from the upload to keep that moving, and expose a read method that I can give to boto so that can read. But doing this I'm sure I'd need to thread the s3 upload part which I'd rather avoid as I'm using twisted.
I have a feeling I'm way over complicating things but I can't come up with a simple solution. This has to be a common-ish problem, I'm just not sure how to put it into words very well to search it | true | 12,714,965 | 1.2 | 0 | 1 | 3 | boto is a Python library with a blocking API. This means you'll have to use threads to use it while maintaining the concurrence operation that Twisted provides you with (just as you would have to use threads to have any concurrency when using boto ''without'' Twisted; ie, Twisted does not help make boto non-blocking or concurrent).
Instead, you could use txAWS, a Twisted-oriented library for interacting with AWS. txaws.s3.client provides methods for interacting with S3. If you're familiar with boto or AWS, some of these should already look familiar. For example, create_bucket or put_object.
txAWS would be better if it provided a streaming API so you could upload to S3 as the file is being uploaded to you. I think that this is currently in development (based on the new HTTP client in Twisted, twisted.web.client.Agent) but perhaps not yet available in a release. | 0 | 636 | 1 | 4 | 2012-10-03T18:54:00.000 | python,stream,twisted,boto | Boto reverse the stream | 1 | 1 | 2 | 12,716,129 | 0 |
0 | 0 | I am trying to send commands to a server via a python script. I can see the socket connection being established on the server. But the commands I am sending across , do not seem to make it through(server does a read on the socket).
The server currently supports a telnet command interpreter. ie: you telnet to the command address and port, and you can start sending
string commands.
My question is , is there anything fundamentally different from sending strings over a tcp socket, as opposed to using telnet.
I have used both raw sockets as well as the Twisted framework. | true | 12,730,293 | 1.2 | 0 | 0 | 23 | Telnet is a way of passing control information about the communication channel. It defines line-buffering, character echo, etc, and is done through a series of will/wont/do/dont messages when the connection starts (and, on rare occasions, during the session).
That's probably not what your server documentation means. Instead, it probably means that you can open a TCP socket to the port using a program like "Telnet" and interact with a command interpreter on the server.
When the Telnet program connects, it typically listens for these control messages before responding in kind and so will work with TCP/socket connections that don't actually use the telnet protocol, reverting to a simple raw pipe. The server must do all character echo, line buffering, etc.
So in your case, the server is likely using a raw TCP stream with no telnet escape sequences and thus there is no difference. | 0 | 30,360 | 0 | 22 | 2012-10-04T15:06:00.000 | python,sockets,network-programming,telnet | How does telnet differ from a raw tcp connection | 1 | 1 | 3 | 12,730,703 | 0 |
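A tiny sketch of sending plain string commands over a raw TCP socket, which is all a "telnet command interpreter" of the kind described above really needs (the host, port and command are hypothetical; in Python 3 the sent data must be bytes):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.1.50', 2323))   # hypothetical server address and port
s.sendall('STATUS\r\n')              # many line-based servers expect CR/LF terminated commands
reply = s.recv(4096)                 # read whatever the interpreter sends back
print(reply)
s.close()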
1 | 0 | I would like to translate a few hundred words for an application I'm writing. This is a simple, one-off project, so I'm not willing to pay for the the google translate API.
Is there another web service which will do this?
Another idea is to just send a search to Google, and scrape the result from the first result. For example, google 'translate food to spanish'.
However, the page is a mess of obfuscated javascript, and I would need help scraping the result.
I think python would be good for this, but any language will do. | false | 12,730,426 | 0 | 0 | 0 | 0 | Or go to MicrosoftTranslator.com, and paste your text in one box, have it translate and cut and paste the result?
Failing that, the MS Translator API can be used for up to 2 million characters a month for free...so maybe use that? | 0 | 184 | 0 | 0 | 2012-10-04T15:14:00.000 | python,web-scraping,translation | Automate translation for personal use | 1 | 1 | 3 | 16,074,293 | 0 |
0 | 0 | I'm unit testing a URL fetcher, and I need a test url which always causes urllib2.urlopen() (Python) to time out. I've tried making a php page with just sleep(10000) in it, but that causes 500 internal server error.
How would I make a resource that causes a connection timeout in the client whenever it is requested? | true | 12,753,527 | 1.2 | 1 | 0 | 0 | While there have been some good answers here, I found that a simple php sleep() call with an override to Apache's timeout was all I needed.
I know that unit tests should be in isolation, but the server this endpoint is hosted on is not going anywhere. | 0 | 615 | 0 | 3 | 2012-10-05T20:22:00.000 | php,python,apache,url,timeout | How should I create a test resource which always times out | 1 | 2 | 5 | 12,941,867 | 0
0 | 0 | I'm unit testing a URL fetcher, and I need a test url which always causes urllib2.urlopen() (Python) to time out. I've tried making a php page with just sleep(10000) in it, but that causes 500 internal server error.
How would I make a resource that causes a connection timeout in the client whenever it is requested? | false | 12,753,527 | 0.039979 | 1 | 0 | 1 | Connection timeout? Use, for example, netcat. Listen on some port (nc -l), and then try to download data from that port.. http://localhost:port/. It will open connection, which will never reply. | 0 | 615 | 0 | 3 | 2012-10-05T20:22:00.000 | php,python,apache,url,timeout | How should I create a test resource which always times out | 1 | 2 | 5 | 12,753,554 | 0 |
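A tiny sketch of a "never replies" endpoint like the netcat trick above, written in Python so it can live inside the test suite (the port is hypothetical; the client's connect succeeds but its read then hangs until its own timeout fires):

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 8099))     # hypothetical port used by the test
server.listen(5)

held = []                            # keep accepted sockets alive so they are never closed
while True:
    conn, addr = server.accept()     # accept the TCP connection but never send any bytes
    held.append(conn)                # the client now blocks on read until it times out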
0 | 0 | I have a list for followers:
lof = [31536003, 15066760, 75862029]
I can get the follower count for each follower in the list:
user = tweepy.api.get_user(31536003)
print user.followers_count
However, I am trying to write a list comprehension that can return a list in python. The list should be a list of the follower count of my
lof = [31536003, 15066760, 75862029] and will look something like [100,200,300]
which means user 31536003 has 100 followers, user 15066760 has 200 followers and so on.
How to accomplish this using list comprehension? | true | 12,755,611 | 1.2 | 0 | 0 | 5 | Found the answer....
loc = [tweepy.api.get_user(friend).followers_count for friend in lof] | 0 | 1,505 | 0 | 2 | 2012-10-06T00:21:00.000 | twitter,python-2.7 | Find the number of follower using tweepy | 1 | 1 | 1 | 12,755,702 | 0 |
1 | 0 | I just deployed a Flask app on Webfaction and I've noticed that request.remote_addr is always 127.0.0.1. which is of course isn't of much use.
How can I get the real IP address of the user in Flask on Webfaction?
Thanks! | false | 12,770,950 | 0.158649 | 0 | 0 | 4 | The problem is there's probably some kind of proxy in front of Flask. In this case the "real" IP address can often be found in request.headers['X-Forwarded-For']. | 0 | 42,164 | 0 | 33 | 2012-10-07T17:09:00.000 | python,flask,webfaction | Flask request.remote_addr is wrong on webfaction and not showing real user IP | 1 | 1 | 5 | 12,770,986 | 0 |
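A minimal sketch of reading the forwarded address as suggested above (it falls back to remote_addr when the header is missing; trusting this header is only safe behind a proxy you control, which is the case on WebFaction):

from flask import Flask, request

app = Flask(__name__)

@app.route('/ip')
def client_ip():
    # The frontend proxy puts the original client address in X-Forwarded-For.
    forwarded = request.headers.get('X-Forwarded-For', request.remote_addr)
    # The header may contain a comma-separated chain; the client is the first entry.
    return forwarded.split(',')[0].strip()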
I want to be able to access the elements of a webpage with python. I use Python 2.5 with Windows XP. Currently, I am using pywinauto to try to control IE, but I could switch to a different browser or module if it is needed. I need to be able to click the buttons the page has and type in text boxes. So far the best I've gotten is I was able to bring IE to the page I wanted. If it is possible without actually clicking the coords of the buttons and text boxes on the page, please tell me. Thanks! | false | 12,780,635 | 0.197375 | 0 | 0 | 2 | I think that for interacting with the webserver it is better to use cURL. All webserver functions are just responses to GET or POST requests (or both). In order to call them, simply call the URLs that the buttons are linked to and/or send POST data, attaching that data to the appropriate request object before calling the send method. cURL can retrieve and process the webserver response (the page source) without displaying it, which tells you the URLs contained in the web site that are called when clicking certain buttons, and also the HTML fields which carry the POST data, so you can get their names.
Python has a curl library, which I hope is as powerful as PHP's curl, the tool I used and described here.
Hope you are on the right track now.
BR
Marek | 0 | 694 | 0 | 0 | 2012-10-08T11:17:00.000 | python,webpage,pywinauto | How to control and interact with elements on a webpage displayed with IE or a different browser with pywinauto | 1 | 1 | 2 | 13,868,001 | 0
0 | 0 | I'm using Python to write a simple client to Move users, Reset password, Extend user account using Tim Golden's active_directory module.
Currently I'm using the module with the default domain that I logged in with, and it works perfectly. But now I can't find any way to connect to another domain using the same module, when I use active_directory.AD("DC_name") it simply returns "pywintypes.com_error: (-2147463168, 'OLE error 0x80005000', None, None)"
I guess this have to do with authentication, because we have to do it when we access our AD (using ADExplorer). Can I do this with active_directory module, or with win32com API in general.
I know another python-ldap module that did it beautifully, but it can't move user from one OU to another. Any submission to use another module that does the job are welcome :)
Thanks | true | 12,796,443 | 1.2 | 0 | 0 | 0 | It seems that active_directory is using default Win32 API, which doesn't support user/pass binding to a different DC.
You may have to use ldap module and find a workaround | 0 | 913 | 0 | 0 | 2012-10-09T09:05:00.000 | python,active-directory | Connect to AD domain with authentication using Python | 1 | 1 | 1 | 14,350,855 | 0 |
0 | 0 | I am trying to run the functional tests in parallel with multiprocess plugin which gives me random TimeoutException sometimes
my tests are really simple, each of them just goes to a webpage and check if certain element exists.
does anybody know what might be the cause?
thanks | false | 12,808,978 | 1 | 0 | 0 | 7 | Try running nosetests with the --process-timeout value set to something higher than your tests would reasonably take:
nosetests --processes=2 --process-timeout=120 | 0 | 1,972 | 0 | 4 | 2012-10-09T21:52:00.000 | python,selenium,multiprocessing,nosetests,parallel-testing | parallel testing with selenium + nose | 1 | 1 | 1 | 14,650,568 | 0 |
0 | 0 | I am uploading videos to YouTube by using YouTube Data API (Python client library). Is it possible to set monetizing for that video from API rather than going to my account on the YouTube website and manually setting monetization for that uploaded video? If yes, then how can I do it from API? I am unable to find it in the API documentation, and Googling doesn't help either. | false | 12,840,696 | 0.379949 | 0 | 0 | 2 | That's not something that's supported as part of the public YouTube Data API. | 0 | 1,037 | 0 | 7 | 2012-10-11T13:29:00.000 | python,youtube-api | Enable Monetization on YouTube video using YouTube API | 1 | 1 | 1 | 12,842,420 | 0 |
0 | 0 | I have developed my own program, but I would like to be able to dynamically tell the user that there is an update available for the program. This program is designed to be cross-platform, and is written in python. There will be two types of updates:
1) data updates (an xml file that includes the information required to run the program was updates) - This type of update will occur much more often
2) System updates (the actual executable [a python program compiled] is updated - This type of update will not occur as often
I need to be able to connect to an online database and compare the version posted there to the current program version to see if there is any update. If there is an update from the server, the program needs to be able to download and install that update with minimal user input. My question is how would I tell my program to systematically check for the update and then download and install it without crashing the program or asking the user to manually download and install it? | false | 12,842,693 | 0.462117 | 0 | 0 | 5 | Set up a simple web service that returns the current version number of the xml file and the executable. Have your program dial home to this web service on startup, and if the web service returns a newer version number then what the program has in its local copy then display a message to the user saying they could update (or start an automated update process). | 1 | 7,942 | 0 | 5 | 2012-10-11T15:03:00.000 | python,xml | python - Check for updates for a program | 1 | 1 | 2 | 12,842,752 | 0 |
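A small sketch of the "dial home on startup" idea described above (the URL and version format are hypothetical; the endpoint is assumed to return a plain version string such as "1.2.3"):

import urllib2

LOCAL_VERSION = '1.2.0'   # version baked into this build

def update_available():
    try:
        remote = urllib2.urlopen('http://example.com/myapp/latest-version.txt', timeout=5).read().strip()
    except Exception:
        return False                      # no network: silently skip the check
    def to_tuple(v):
        return tuple(int(x) for x in v.split('.'))
    # Compare as integer tuples so that 1.10.0 sorts above 1.9.0
    return to_tuple(remote) > to_tuple(LOCAL_VERSION)

if update_available():
    print('A newer version is available for download.')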
0 | 0 | How can a client both react to user input and data received from the server?
I created a UDP server, which can handle multiple clients and can react to the data received from each clients. So far the clients only react to user input.
Is it possible, that the clients check for both user input and data on a specific port simultaneously? | false | 12,851,908 | 0 | 0 | 0 | 0 | Consider using threads. Python threading is restricted; only one thread runs at a time within the interpreter, but if a thread is waiting for I/O (or a 'sleep') then other threads can run.
You still need to use queues and semaphores and so forth. See the 'threading' module in the library. | 0 | 90 | 0 | 0 | 2012-10-12T03:44:00.000 | python,sockets,input,udp,port | Socketprogramming python | 1 | 1 | 2 | 12,851,967 | 0 |
0 | 0 | I am trying to run a python file from the telnet session
Steps:
Dailyscript.py
Telnetting in to montavista
from the telnet session I am trying to run another python file "python sample.py"
sample.py
Importing TestLib (in this file)
But, when I run directly form my linux box, it is running fine.
Is there any thing I need? | false | 12,862,260 | 0 | 1 | 0 | 0 | Most likely the problem is that TestLib.py isn't in your working directory. Make sure your Dailyscript.py sets its directory to wherever you ran it from (over SSH) before executing python sample.py.
Also, if you have SSH access, why aren't you just using SSH? | 1 | 581 | 0 | 0 | 2012-10-12T15:22:00.000 | python | Import Error: No Module found named TestLIb | 1 | 1 | 1 | 12,863,073 | 0 |
0 | 0 | Hello from what i understand that using Python i can dynamically send kml data to Google earth client. i was wondering is this possible for collada files? i need it in combination to load models in Google earth. so far i can do it manually. | false | 12,874,266 | 0 | 0 | 0 | 0 | You can do this, you would probably do it by creating a KML NetworkLink file that loads a KMZ that contains a KML file that points to a .dae file that is collada. That can all be autogenerated from Python. There are libraries to generate it or you can roll your own. Just search generate collada python and you'll find some resources that will work for you. | 0 | 356 | 0 | 0 | 2012-10-13T14:55:00.000 | python,kml,google-earth,collada | Dynamically generate collada files | 1 | 1 | 1 | 12,878,019 | 0 |
0 | 0 | I'm writing a client-server app with Python. The idea is to have a main server and thousands of clients that will connect with it. The server will send randomly small files to the clients to be processed and clients must do the work and update its status to the server every minute. My problem with this is that for the moment I only have an small and old home server so I think that it can't handle so many connections. Maybe you could help me with this:
How can increase the number of connections in my server?
How can I balance the load from client-side?
How could I improve the communication? I mean, I need to have a list of clients on the server with its status (maybe in a DB?) and this updates will be received time to time, so I don't need a permanent connection. Is a good idea to use UDP to send the updates? If not, do I have to create a new thread every time that I receive an update?
EDIT: I updated the question to explain a little better the problem but mainly to be clear enough for people with the same problems. There is actually a good solution in @TimMcNamara answer. | false | 12,877,643 | 0.099668 | 0 | 0 | 1 | There is generally speaking no problem with having even tens of thousands of sockets at once on even a very modest home server.
Just make sure you do not create a new thread or process for each connection. | 0 | 3,588 | 0 | 5 | 2012-10-13T22:36:00.000 | python,sockets,client-server | Handle multiple socket connections | 1 | 1 | 2 | 12,877,653 | 0 |
1 | 0 | I want to catch the 1 thousand most viewed youtube videos through gdata youtube api.
However, only the first 84 are being returned.
If I use the following query, only 34 records are returned (plus the first 50).
Anyone knows what is wrong?
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=1"
returns 25 records
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=26"
returns 25 records
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=51"
returns 25 records
"http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?start-index=76"
returns 9 records | false | 12,879,754 | 0.197375 | 0 | 0 | 1 | YouTube will not provide this to you. They intentionally rate limit their feeds to prevent abuse. | 0 | 164 | 0 | 0 | 2012-10-14T06:11:00.000 | python,youtube,gdata | Youtube GData 2 thousand most viewed | 1 | 1 | 1 | 12,892,498 | 0 |
1 | 0 | I have a remote method created via Python web2py. How do I test and invoke the method from Java?
I was able to test if the method implements @service.xmlrpc but how do i test if the method implements @service.run? | false | 12,890,137 | 0.049958 | 1 | 0 | 1 | I'd be astonished if you could do it at all. Java RMI requires Java peers. | 0 | 2,053 | 0 | 0 | 2012-10-15T06:06:00.000 | java,python,rmi,rpc,web2py | Using Java RMI to invoke Python method | 1 | 1 | 4 | 12,890,526 | 0 |
1 | 0 | I've started to learn python the past couple of days. I want to know the equivalent way of writing crawlers in python.
so In ruby I use:
nokogiri for crawling html and getting content through css tags
Net::HTTP and Net::HTTP::Get.new(uri.request_uri).body for getting JSON data from a url
what are equivalents of these in python? | false | 12,890,897 | 0.148885 | 0 | 0 | 3 | Between lxml and beautiful soup, lxml is more equivalent to nokogiri
because it is based on libxml2 and it has xpath/css support.
The equivalent of net/http is urllib2 | 0 | 5,706 | 0 | 2 | 2012-10-15T07:18:00.000 | python,ruby,web-crawler | Going from Ruby to Python : Crawlers | 1 | 1 | 4 | 12,891,191 | 0 |
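A short sketch pairing urllib2 with lxml as the answer above suggests (the URL and CSS selector are hypothetical; depending on the lxml version, cssselect support may require the separate cssselect package):

import urllib2
from lxml import html

page = urllib2.urlopen('http://example.com/articles').read()
doc = html.fromstring(page)
# CSS-style selection, similar to nokogiri's doc.css('h2.title a')
for link in doc.cssselect('h2.title a'):
    print(link.get('href'), link.text_content())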
I want to access, with Selenium (through Python), a URL that demands authentication.
When I visit the URL manually, a new authentication window pops up, in which I need to fill in a username and password. Only after clicking on “OK” does this window disappear and I return to the original site.
I want to visit this URL at regular intervals to download information, so I want to automate this process in Python.
In my current effort I use Selenium, but none of the examples that I found seem to do what I need. Things I tried that do not work are:
driver.get("https://username:[email protected]/")
selenium.FireEvent("OK", "click")
driver.find_element_by_id("UserName")
I do not know the actual element id’s
What I did manage is to load my Firefox profile that stores the authentication information, but I still need to confirm the authentication by clicking “ok”.
Is there any way to prevent this screen to pop up?
If not how to access this button on the authentication form, from which I cannot obtain id-information? | false | 12,893,264 | 0.197375 | 0 | 0 | 1 | Using driver.get("https://username:[email protected]/") should directly log you in, without the popup being displayed,
What about this did not work for you?
EDIT
I am not sure this will work but after
driver.get("https://username:[email protected]/")
Try accepting alert.
For the alert - @driver.switch_to.alert.accept in Ruby or driver.switchTo().alert().accept(); in Java | 0 | 3,449 | 0 | 2 | 2012-10-15T09:57:00.000 | python,firefox,authentication,selenium,form-submit | How to Submit Https authentication with Selenium in python | 1 | 1 | 1 | 12,893,410 | 0 |
1 | 0 | I have 10000 files in a s3 bucket.When I list all the files it takes 10 minutes. I want to implement a search module using BOTO (Python interface to AWS) which searches files based on user input. Is there a way I can search specific files with less time? | false | 12,904,326 | 0.291313 | 0 | 1 | 3 | There are two ways to implement the search...
Case 1. As suggested by John, you can specify a prefix for the S3 key in your list method. That will return only the S3 keys which start with the given prefix.
Case 2. If you want to search for S3 keys that end with a specific suffix (an extension, say), then you can specify the suffix in the delimiter. Remember that it will only give you correct results if the suffix you give really is the ending of the keys you are searching for.
Otherwise the delimiter is used as a path separator.
I would suggest Case 1, but if you want a faster search on a specific suffix then you can try Case 2 | 0 | 5,534 | 0 | 2 | 2012-10-15T21:29:00.000 | python,amazon-s3,boto | Search files(key) in s3 bucket takes longer time | 1 | 1 | 2 | 12,907,767 | 0
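A minimal sketch of the prefix-based listing described in Case 1 above (boto 2.x; the bucket name and prefix are hypothetical):

from boto.s3.connection import S3Connection

conn = S3Connection()                          # credentials from the environment/boto config
bucket = conn.get_bucket('my-bucket')          # hypothetical bucket name
# Only keys starting with the prefix are returned, so the listing stays small and fast.
for key in bucket.list(prefix='reports/2012/'):
    if key.name.endswith('.csv'):              # optional extra suffix filter, Case 2 style
        print(key.name)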
0 | 0 | As the title says, I'm curious if the functionality of editing a sent chat message can be replicated programatically using the Skype4Py api. I'm not sure when Skype added this feature, and the API, as far as I know, hasn't been maintained in some years, so I was just curious.
I don't see anything that looks like it would do it in the docs, but,I figured I check here before I give up. | false | 12,908,271 | 0 | 0 | 0 | 0 | Looking at the docs, each chat message has a body (the text of the message) and a IsEditable property. So long as IsEditable=True you should be able to do m.Body = "some new text" | 0 | 667 | 0 | 0 | 2012-10-16T05:44:00.000 | python,skype4py | Is it possible to edit a previously sent chat message using the Skype4Py api? | 1 | 1 | 1 | 15,720,075 | 0 |
0 | 0 | I am writing a python script to copy python(say ABC.py) files from one directory to another
directory with the same folder name(say ABC) as script name excluding .py.
In the local system it works fine and copying the files from one directory to others by
creating the same name folder.
But actually I want copy these files from my local system (windows XP) to the remote
system(Linux) located in other country on which I execute my script. But I am getting
the error as "Destination Path not found" means I am not able to connect to remote
that's why.
I use SSH Secure client.
I use an IP Address and Port number to connect to the remote server.
Then it asks for user id and password.
But I am not able to connect to the remote server by my python script.
Can Any one help me out how can I do this?? | false | 12,909,334 | -0.099668 | 1 | 0 | -1 | I used the same script, but my host failed to respond. My host is in different network.
WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond | 0 | 9,452 | 1 | 2 | 2012-10-16T07:14:00.000 | python | How to Transfer Files from Client to Server Computer by using python script? | 1 | 1 | 2 | 66,757,126 | 0 |
0 | 0 | I'm working on a suite of tests using selenium webdriver (written in Python). The page being tested contains a form that changes its displayed fields based upon what value is selected in one of its select boxes. This select box has about 250 options. I have a test (running via nose, though that's probably irrelevant) that iterates through all the options in the select box, verifying that the form has the correct fields displayed for each selected option.
The problem is that for each option, it calls through selenium:
click for the option to choose
find_element and is_displayed for 7 fields
find_elements for the items in the select box
get_attribute and text for each option in the select box
So that comes out to (roughly) 250 * (7 * 2 + 1 + 2 * 250), or 128,750 distinct requests to the webdriver server the test is running on, all within about 10 or 15 minutes. This is leading to client port exhaustion on the machine running the test in some instances. This is all running through a test framework that abstracts away things like how the select box is parsed, when new page objects are created, and a few other things, so optimizations in the test code either mean hacking that all to hell or throwing away the framework for this test and doing everything manually (which, for maintainability of our test code, is a bad idea).
Some ideas I've had for solutions are:
Trying to somehow pool or reuse the connection to the webdriver server
Somehow tweaking the configuration of urllib2 or httplib at runtime so that the connections opened by selenium timeout or are killed more quickly
System independent (or at least implementable for all systems with an OS switch or some such) mechanism for actively tracking and closing the ports being opened by selenium
As I mentioned above, I can't do much to tweak the way the page is parsed or handled, but I do have control over subclassing/tweaking WebDriver or RemoteConnection any way I please. Does anyone have any suggestions on how to approach any of the above ideas, or any ideas that I haven't come up with? | false | 12,936,533 | 0.132549 | 0 | 0 | 2 | In as much as a small amount of plastic explosive solves the problem of forgetting your house keys, I implemented a solution. I created a class that tracks a list of resources and the time they were added to it, blocks when a limit is reached, and removes entries when their timestamp passes beyond a timeout value. I then created an instance of this class setup with a limit of 32768 resources and a timeout of 240 seconds, and had my test framework add an entry to the list every time webdriver.execute() is called or a few other actions (db queries, REST calls) are performed. It's not particularly elegant and it's quite arbitrary, but at least so far it seems to be keeping my tests from triggering port exhaustion while not noticeably slowing tests that weren't causing port exhaustion before. | 0 | 2,907 | 0 | 2 | 2012-10-17T14:23:00.000 | python,selenium,webdriver,urllib2,tcp-ip | tcp/ip port exhaustion with selenium webdriver | 1 | 1 | 3 | 12,940,958 | 0 |
0 | 0 | I am trying to download only files modified in the last 30 minutes from a URL. Can you please guide me how to proceed with this. Please let me know if I should use shell scripting or python scripting for this. | false | 12,940,223 | 0.379949 | 1 | 0 | 2 | If the server supports if-modified-since, you could send the request with If-Modified-Since: (T-30 minutes) and ignore the 304 responses. | 0 | 170 | 0 | 0 | 2012-10-17T17:45:00.000 | python,bash,shell | Download files modified in last 30 minutes from a URL | 1 | 1 | 1 | 12,940,327 | 0 |
0 | 0 | Basically I want a Java, Python, or C++ script running on a server, listening for player instances to: join, call, bet, fold, draw cards, etc and also have a timeout for when players leave or get disconnected.
Basically I want each of these actions to be a small request, so that players could either be processes on same machine talking to a game server, or machines across network.
Security of messaging is not an issue, this is for learning/research/fun.
My priorities:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand.
Speed. I'm going to be playing millions of these hands as fast as I can.
Run on a shared server instance (I may have limited access to ports or things that need root)
My questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
Any good frameworks to work off of?
Message types? I'm thinking JSON or Protocol Buffers.
How to make it FAST?
Thanks guys - just looking for some pointers and suggestions. I think it is a cool problem with a lot of neat things to learn doing it. | false | 12,945,278 | 0.132549 | 1 | 0 | 2 | Anything else? Maybe a cup of coffee to go with your question :-)
Answering your question from the ground up would require several books worth of text with topics ranging from basic TCP/IP networking to scalable architectures, but I'll try to give you some direction nevertheless.
Questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
I would venture that if you're not clear on the definition of each of these, then designing and implementing a service that will "be playing millions of these hands as fast as I can" is, hmm, a bit over-reaching? But don't let that stop you; as they say, "ignorance is bliss."
Any good frameworks to work off of?
I think your project is a good candidate for Node.js. The main reason is that Node.js is relatively scalable and it is good at hiding the complexity required for that scalability. There are downsides to Node.js; just Google for 'Node.js scalability criticism'.
The main point against Node.js, as opposed to using a more general-purpose framework, is that scalability is difficult, there is no way around it, and Node.js, being so high level and specific, provides fewer options for solving tough problems.
The other drawback is Node.js is Javascript not Java or Phyton as you prefer.
Message types? I'm thinking JSON or Protocol Buffers.
I don't think there's going to be a lot of traffic between client and server so it doesn't really matter I'd go with JSON just because it is more prevalent.
How to make it FAST?
The real question is how to make it scalable. Running human vs human card games is not computationally intensive, so you're probably going to run out of I/O capacity before you reach any computational limit.
Overcoming these limitations is done by spreading the load across machines. The common way to do in multi-player games is to have a list server that provides links to identical game servers with each server having a predefined number of slots available for players.
This is a variation of a broker-workers architecture where the broker machine assigns a worker machine to clients based on how busy each worker is. In gaming, users want to be able to select their server so they can play with their friends.
Related:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand.
Since this is on human time scales (seconds as opposed to milliseconds), the client should send keepalives every 10 seconds or so, with, say, a 30-second session timeout.
The keepalives would be JSON messages in your application protocol not HTTP which is lower level and handled by the framework.
The framework itself should provide you with HTTP 1.1 connection management/pooling which allows several http sessions (request/response) to go through the same connection, but do not require the client to be always connected. This is a good compromise between reliability and speed and should be good enough for turn based card games. | 0 | 2,321 | 0 | 2 | 2012-10-18T00:04:00.000 | java,python,optimization,webserver,multiplayer | Multiplayer card game on server using RPC | 1 | 2 | 3 | 12,946,896 | 0 |
0 | 0 | Basically I want a Java, Python, or C++ script running on a server, listening for player instances to: join, call, bet, fold, draw cards, etc and also have a timeout for when players leave or get disconnected.
Basically I want each of these actions to be a small request, so that players could either be processes on same machine talking to a game server, or machines across network.
Security of messaging is not an issue, this is for learning/research/fun.
My priorities:
Have a good scheme for detecting when players disconnect, but also be able to account for network latencies, etc before booting/causing to lose hand.
Speed. I'm going to be playing millions of these hands as fast as I can.
Run on a shared server instance (I may have limited access to ports or things that need root)
My questions:
Listen on ports or use sockets or HTTP port 80 apache listening script? (I'm a bit hazy on the differences between these).
Any good frameworks to work off of?
Message types? I'm thinking JSON or Protocol Buffers.
How to make it FAST?
Thanks guys - just looking for some pointers and suggestions. I think it is a cool problem with a lot of neat things to learn doing it. | false | 12,945,278 | 0.066568 | 1 | 0 | 1 | Honestly, I'd start with classic LAMP. Take a stock Apache server, and a mysql database, and put your Python scripts in the cgi-bin directory. The fact that they're sending and receiving JSON instead of HTTP doesn't make much difference.
This is obviously not going to be the most flexible or scalable solution, of course, but it forces you to confront the actual problems as early as possible.
The first problem you're going to run into is game state. You claim there is no shared state, but that's not right—the cards in the deck, the bets on the table, whose turn it is—that's all state, shared between multiple players, managed on the server. How else could any of those commands work? So, you need some way to share state between separate instances of the CGI script. The classic solution is to store the state in the database.
Of course you also need to deal with user sessions in the first place. The details depend on which session-management scheme you pick, but the big problem is how to propagate a disconnect/timeout from the lower level up to the application level. What happens if someone puts $20 on the table and then disconnects? You have to think through all of the possible use cases.
Next, you need to think about scalability. You want millions of games? Well, if there's a single database with all the game state, you can have as many web servers in front of it as you want—John Doe may be on server1 while Joe Schmoe is on server2, but they can be in the same game. On the other hand, you can have a separate database for each server, as long as you have some way to force people in the same game to meet on the same server. Which one makes more sense? Either way, how do you load-balance between the servers? (You not only want to keep them all busy, you want to avoid the situation where 4 players are all ready to go, but they're on 3 different servers, so they can't play each other…).
The end result of this process is going to be a huge mess of a server that runs at 1% of the capacity you hoped for, that you have no idea how to maintain. But you'll have thought through your problem space in more detail, and you'll also have learned the basics of server development, both of which are probably more important in the long run.
If you've got the time, I'd next throw the whole thing out and rewrite everything from scratch by designing a custom TCP protocol, implementing a server for it in something like Twisted, keeping game state in memory, and writing a simple custom broker instead of a standard load balancer. | 0 | 2,321 | 0 | 2 | 2012-10-18T00:04:00.000 | java,python,optimization,webserver,multiplayer | Multiplayer card game on server using RPC | 1 | 2 | 3 | 12,963,229 | 0 |
1 | 0 | I like to use Python's SimpleHTTPServer for local development of all kinds of web applications which require loading resources via Ajax calls etc.
When I use query strings in my URLs, the server always redirects to the same URL with a slash appended.
For example /folder/?id=1 redirects to /folder/?id=1/ using a HTTP 301 response.
I simply start the server using python -m SimpleHTTPServer.
Any idea how I could get rid of the redirecting behaviour? This is Python 2.7.2. | false | 12,953,542 | 0.321513 | 0 | 0 | 5 | The right way to do this, to ensure that the query parameters remain as they should, is to make sure you do a request to the filename directly instead of letting SimpleHTTPServer redirect to your index.html
For example http://localhost:8000/?param1=1 does a redirect (301) and changes the url to http://localhost:8000/?param=1/ which messes with the query parameter.
However http://localhost:8000/index.html?param1=1 (making the index file explicit) loads correctly.
So just not letting SimpleHTTPServer do a url redirection solves the problem. | 0 | 6,427 | 0 | 13 | 2012-10-18T11:26:00.000 | python,simplehttpserver,webdev.webserver | Why does SimpleHTTPServer redirect to ?querystring/ when I request ?querystring? | 1 | 1 | 3 | 25,786,569 | 0 |
0 | 0 | For the last few days I have been trying ti install the Native Client SDK for chrome in Windows and/or Ubuntu.
I'm behind a corporate network, and the only internet access is through an HTTP proxy with authentication involved.
When I run "naclsdk update" in Ubuntu, it shows
"urlopen error Tunnel connection failed: 407 Proxy Authentication Required"
Can anyone please help ? | true | 12,964,666 | 1.2 | 0 | 0 | 0 | I got a solution-
not a direct one, though.
managed to use a program to redirect the HTTPS traffic through the HTTP proxy.
I used the program called "proxifier". Works great. | 0 | 1,326 | 1 | 1 | 2012-10-18T22:21:00.000 | python,google-nativeclient | Installing Chrome Native Client SDK | 1 | 1 | 2 | 13,452,816 | 0 |
1 | 0 | I'm using the python framework Boto to interact with AWS DynamoDB. But when I use "item_count" it returns the wrong value. What's the best way to retrieve the number of items in a table? I know it would be possible using the Scan operation, but this is very expensive on resources and can take a long time if the table is quiet large. | true | 12,999,262 | 1.2 | 0 | 0 | 5 | The item_count value is only updated every six hours or so. So, I think boto is returning you the value as it is returned by the service but that value is probably not up to date. | 0 | 592 | 0 | 3 | 2012-10-21T15:33:00.000 | python,amazon-web-services,boto,amazon-dynamodb | Boto's DynamoDB API returns wrong value when using item_count | 1 | 1 | 1 | 13,003,481 | 0 |
1 | 0 | I have a cherrypy web server that needs to be able to receive large files over http post. I have something working at the moment, but it fails once the files being sent gets too big (around 200mb). I'm using curl to send test post requests, and when I try to send a file that's too big, curl spits out "The entity sent with the request exceeds the maximum allowed bytes." Searching around, this seems to be an error from cherrypy.
So I'm guessing that the file being sent needs to be sent in chunks? I tried something with mmap, but I couldn't get it too work. Does the method that handles the file upload need to be able to accept the data in chunks too? | false | 13,002,676 | 0 | 0 | 0 | 0 | Huge file uploads always problematic. What would you do when connection closes in the middle of uploading? Use chunked file upload method instead. | 0 | 4,776 | 0 | 3 | 2012-10-21T22:07:00.000 | python,http,post,upload,cherrypy | Python: sending and receiving large files over POST using cherrypy | 1 | 1 | 2 | 26,299,500 | 0 |
1 | 0 | I'm extremely new to Python, read about half a beginner book for Python3. I figure doing this will get me going and learning with something I actually want to do instead of going through some "boring" exercises.
I'm wanting to build an application that will scrape Reddit for the top URL's and then post these onto my own page. It would only check a couple times a day so no hammering at all here.
I want to parse the Reddit json (http://www.reddit.com/.json) and other subreddits json into URL's that I can organize into my own top list and have my own categories as well on my page so I don't have to keep visiting Reddit.
The website will be a Wordpress template with the DB hosted on it's own server (mysql). I will be hosting this on AWS using RDS, ELB, Auto-scaling, and EC2 instances for the webservers.
My questions are:
-Would it make sense to keep the Python scraper application running on it's own server, which then writes the scraped URL's to the database?
-I heard it may make sense to split the application and one does the reading while the other does the writing, whats this about?
-What would the flow of the Python code look like? I can fumble my way around writing it but I just am not entirely sure on how it should flow.
-What else am I not thinking of here, any tips? | true | 13,040,048 | 1.2 | 0 | 0 | 2 | Would it make sense to keep the Python scraper application running on
it's own server, which then writes the scraped URL's to the database?
Yes, that is a good idea. I would set up a cron job to run the program every so often. Depending on the load you're expecting, it doesn't necessarily need to be on its own server. I would have it as its own application.
I heard it may make sense to split the application and one does the
reading while the other does the writing, whats this about?
I am assuming the person who said this meant that you should have an application to write to your database (your python script) and an application to read URLs from the database (your WordPress wrapper, or perhaps another Python script to write something WordPress can understand).
What would the flow of the Python code look like? I can fumble my way
around writing it but I just am not entirely sure on how it should
flow.
This is a somewhat religious matter among programmers. However I feel that your program should be simple enough. I would simply grab the JSON and have a query that inserts into the database if the entry doesn't exist yet.
What else am I not thinking of here, any tips?
I personally would use urllib2 and MySQLdb modules for the Python script. Good luck! | 0 | 570 | 0 | 2 | 2012-10-23T22:04:00.000 | python,json,reddit | Scraping news sites with Python | 1 | 1 | 1 | 13,040,391 | 0 |
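A rough sketch of the fetch-and-insert-if-missing loop described above (using urllib2 and MySQLdb as the answer suggests; the table layout and database credentials are hypothetical, and INSERT IGNORE assumes a UNIQUE index on reddit_id):

import json
import urllib2
import MySQLdb

db = MySQLdb.connect(host='localhost', user='reddit', passwd='secret', db='toplinks')
cur = db.cursor()

req = urllib2.Request('http://www.reddit.com/.json', headers={'User-Agent': 'toplinks-bot/0.1'})
listing = json.load(urllib2.urlopen(req))

for child in listing['data']['children']:
    post = child['data']
    # Existing entries are skipped thanks to the unique index on reddit_id.
    cur.execute(
        'INSERT IGNORE INTO links (reddit_id, title, url, score) VALUES (%s, %s, %s, %s)',
        (post['id'], post['title'], post['url'], post['score']))
db.commit()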
0 | 0 | I have a file which contains raw IP packets in binary form. The data in the file contains a full IP header, TCP\UDP header, and data. I would like to use any language (preferably python) to read this file and dump the data onto the line.
In Linux I know you can write to some devices directly (echo "DATA" > /dev/device_handle). Would using python to do an open on /dev/eth1 achieve the same effect (i.e. could I do echo "DATA" > /dev/eth1) | false | 13,040,834 | 0.197375 | 0 | 0 | 2 | No; there is no /dev/eth1 device node -- network devices are in a different namespace from character/block devices like terminals and hard drives. You must create an AF_PACKET socket to send raw IP packets. | 0 | 2,540 | 1 | 3 | 2012-10-23T23:25:00.000 | python,linux,networking,packet | Writing raw IP data to an interface (linux) | 1 | 1 | 2 | 13,040,908 | 0 |
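A rough sketch of the AF_PACKET approach mentioned above (Linux only, requires root; the interface name and framing are hypothetical, and the sketch assumes the file already contains complete link-layer frames; if it only holds bare IP packets, an Ethernet header must be prepended first):

import socket

ETH_P_ALL = 0x0003
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind(('eth1', 0))                  # hypothetical interface name

with open('packets.bin', 'rb') as f:
    frame = f.read(1514)             # naive fixed-size framing: real capture files need proper parsing
    while frame:
        s.send(frame)                # writes the raw frame onto the wire
        frame = f.read(1514)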