Web Development (int64: 0 to 1) | Data Science and Machine Learning (int64: 0 to 1) | Question (string: 28 to 6.1k chars) | is_accepted (bool: 2 classes) | Q_Id (int64: 337 to 51.9M) | Score (float64: -1 to 1.2) | Other (int64: 0 to 1) | Database and SQL (int64: 0 to 1) | Users Score (int64: -8 to 412) | Answer (string: 14 to 7k chars) | Python Basics and Environment (int64: 0 to 1) | ViewCount (int64: 13 to 1.34M) | System Administration and DevOps (int64: 0 to 1) | Q_Score (int64: 0 to 1.53k) | CreationDate (string: 23 chars) | Tags (string: 6 to 90 chars) | Title (string: 15 to 149 chars) | Networking and APIs (int64: 1 to 1) | Available Count (int64: 1 to 12) | AnswerCount (int64: 1 to 28) | A_Id (int64: 635 to 72.5M) | GUI and Desktop Applications (int64: 0 to 1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | Which SNMP library offers an API to get single properties of captured (tcpdump) SNMP packets like the request-ID or the protocol version?
I found that pySNMP offers such a low-level API, but only for the v1/v2c versions; I need both v2c and v3. | false | 13,055,699 | 0 | 0 | 0 | 0 | What exactly do you want to know? Whether:
the Python net-snmp API supports v3 or not,
or
the OID for finding the request ID of an SNMP packet | 0 | 624 | 0 | 1 | 2012-10-24T18:46:00.000 | python,snmp | How to get the request ID of a SNMP packet in Python | 1 | 1 | 1 | 13,191,755 | 0
0 | 0 | Is there a way to dynamically download and install a package like AWS API from a PHP or Python script at runtime?
Thanks. | false | 13,070,759 | 0.197375 | 1 | 0 | 1 | Not at runtime - this would make no sense due to the overheads involved and the risk of the download failing. | 0 | 42 | 0 | 0 | 2012-10-25T14:24:00.000 | php,python | Python/PHP - Downloading and installing AWS API | 1 | 1 | 1 | 13,070,806 | 0 |
1 | 0 | I am not sure if this is possible, but I was wondering if it would be possible to write a script or program that would automatically open up my web browser, go to a certain site, fill out information, and click "send"? And if so, where would I even begin? Here's a more detailed overview of what I need:
Open browser
Go to website
Fill out a series of forms
Click OK
Fill out more forms
Click OK
Thank you all in advance. | true | 13,073,147 | 1.2 | 0 | 0 | 1 | There are a number of tools out there for this purpose. For example, Selenium, which even has a package on PyPI with Python bindings for it, will do the job. | 0 | 1,062 | 0 | 1 | 2012-10-25T16:31:00.000 | python | script to open web browser and enter data | 1 | 1 | 2 | 13,073,177 | 0 |
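A minimal sketch of that Selenium approach, using the Selenium 2-era Python bindings; the URL and field names below are made up and would need to match the real forms:

```python
from selenium import webdriver

# Hypothetical site and field names -- adjust to the real forms.
driver = webdriver.Firefox()                      # opens a browser window
driver.get("http://example.com/signup")           # go to the website
driver.find_element_by_name("first_name").send_keys("Jane")
driver.find_element_by_name("email").send_keys("jane@example.com")
driver.find_element_by_name("ok_button").click()  # "Click OK"
# ... repeat find/send_keys/click for the next set of forms ...
driver.quit()
```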
0 | 0 | I am not sure if this is the right forum to ask, but I give it a try.
A device is sending an e-mail to my code, in which I am trying to receive the e-mail via a socket in Python and to decode it with Message.get_payload() calls. However, I always have a \n.\n at the end of the message.
If the same device sends the same message to a genuine email client (e.g. Gmail), I get the correct original message without the \n.\n.
I would like to know what it is with this closing set of special characters in SMTP/E-Mail handling/sending, and how to encode it away. | false | 13,108,615 | 0 | 1 | 0 | 0 | These are simply newline characters. In GMail they'll be processed and "displayed" so you don't see them. But they are still part of the email text message so it makes sense that get_payload() returns them. | 0 | 92 | 0 | 0 | 2012-10-28T11:56:00.000 | python,sockets,smtp | Mysterious characters at the end of E-Mail, received with socket in python | 1 | 1 | 2 | 13,108,630 | 0 |
1 | 0 | There are several components involved in auth and the discovery based service api.
How can one test request handlers wrapped with decorators used from oauth2client (eg oauth_required, etc), httplib2, services and uploads?
Are there any commonly available mocks or stubs? | true | 13,115,599 | 1.2 | 1 | 0 | 1 | There are the mock http and request classes that the apiclient package uses for its own testing. They are in apiclient/http.py and you can see how to use them throughout the test suite. | 0 | 259 | 0 | 2 | 2012-10-29T03:42:00.000 | python,google-app-engine,google-drive-api,google-api-python-client | How can one test appengine/drive/google api based applications? | 1 | 1 | 1 | 13,125,588 | 0 |
0 | 0 | I am parsing a large file (>9GB) and am using iterparse of lxml in Python to parse the file while clearing as I go forward. I was wondering, is there a way to parse backwards while clearing? I could see how I would implement this independently of lxml, but it would be nice to use this package.
Thank you in advance! | true | 13,140,185 | 1.2 | 0 | 0 | 0 | iterparse() is strictly forward-only, I'm afraid. If you want to read a tree in reverse, you'll have to read it forward, while writing it to some intermediate store (be it in memory or on disc) in some form that's easier for you to parse backwards, and then read that. I'm not aware of any stream parsers that allow XML to be parsed back-to-front.
Off the top of my head, you could use two files, one containing the data and the other an index of offsets to the records in the data file. That would make reading backwards relatively easy once it's been written. | 1 | 1,552 | 0 | 1 | 2012-10-30T13:34:00.000 | python,lxml | lxml, parsing in reverse | 1 | 1 | 2 | 13,355,413 | 0 |
1 | 0 | I'm trying to automate a process that collects data on one (or more) AWS instance(s), uploads the data to S3 hourly, to be retrieved by a decoupled process for parsing and further action. As a first step, I whipped up some crontab-initiated shell script (running in Ubuntu 12.04 LTS) that calls the boto utility s3multiput.
For the most part, this works fine, but very occasionally (maybe once a week) the file fails to appear in the s3 bucket, and I can't see any error or exception thrown to track down why.
I'm using the s3multiput utility included with boto 2.6.0. Python 2.7.3 is the default python on the instance. I have an IAM Role assigned to the instance to provide AWS credentials to boto.
I have a crontab calling a script that calls a wrapper that calls s3multiput. I included the -d 1 flag on the s3multiput call, and redirected all output on the crontab job with 2>&1 but the report for the hour that's missing data looks just like the report for the hour before and the hour after, each of which succeeded.
So, 99% of the time this works, but when it fails I don't know why and I'm having trouble figuring where to look. I only find out about the failure later when the parser job tries to pull the data from the bucket and it's not there. The data is safe and sound in the directory it should have uploaded from, so I can do it manually, but would rather not have to.
I'm happy to post the ~30-40 lines of related code if helpful, but wondered if anybody else had run into this and it sounded familiar.
Some grand day I'll come back to this part of the pipeline and rewrite it in python to obviate s3multiput, but we just don't have dev time for that yet.
How can I investigate what's going wrong here with the s3multiput upload? | false | 13,203,745 | 0.197375 | 1 | 0 | 1 | First, I would try updating boto; a commit to the development branch mentions logging when a multipart upload fails. Note that doing so will require using s3put instead, as s3multiput is being folded into s3put. | 0 | 515 | 0 | 3 | 2012-11-02T22:13:00.000 | python,crontab,boto | Silent failure of s3multiput (boto) upload to s3 from EC2 instance | 1 | 1 | 1 | 13,203,938 | 0 |
1 | 0 | I'm developing an application that displays a users/friends photos. For the most part I can pull photos from the album, however for user/album cover photos, all that is given is the object ID for which the following URL provides the image:
https://graph.facebook.com/1015146931380136/picture?access_token=ABCDEFG&type=picture
Which when viewed redirects the user to the image file itself such as:
https://fbcdn-photos-a.akamaihd.net/hphotos-ak-ash4/420455_1015146931380136_78924167_s.jpg
My question is, is there a Pythonic or GraphAPI method to resolve the final image path an avoid sending the Access Token to the end user? | false | 13,228,847 | 0.066568 | 0 | 0 | 1 | Make a Graph API call like this and you get the real URL:
https://graph.facebook.com/[fbid]?fields=picture
Btw, you don't need an access token for this. | 0 | 872 | 0 | 0 | 2012-11-05T09:02:00.000 | python,django,facebook-graph-api | Resolve FB GraphAPI picture call to the final URL | 1 | 1 | 3 | 13,228,977 | 0
0 | 0 | In Python 2.7 is there a way to get information on all open sockets similar to what netstat/ss does in linux?
I am interested in writing a small program (similar to EtherApe) that tracks when my computer opens a connection to a server. | false | 13,245,772 | 0 | 0 | 0 | 0 | Sockets are handled and controlled by OS. Any programming language what do is just put the data to buffer in OS. So in order to check the open sockets you just have to read them by operating system. | 0 | 3,146 | 0 | 0 | 2012-11-06T06:43:00.000 | python,sockets | Show all open sockets in Python | 1 | 1 | 3 | 13,245,863 | 0 |
0 | 0 | Problem
I need a way to store and collect JSON data in an entirely offline(!) web application, hosted on a local (shared) machine. Several people will access the app but it will never actually be online.
I'd like the app to:
Read and write JSON data continuously and programmatically (i.e. not using a file-upload type schema)
Preferably not require any software installation other than the browser, specifically I'd like not to use local server. (edit: I may be willing to learn a bit of Python if that helps)
The amount of data I need to store is small so it is very much overkill to use some sort of database.
Solution?
My first thought was to let the html5 file API, just read/parse and write my JSON object to a local txt file, but this appears not to be possible?!
Local storage is not applicable here right, when several people - each with their own browser - need to access the html?
Any ideas?
note
I know this topic is not entirely novel, but I think my situation may be slightly different than in other threads. And I've spent the better part of the last couple hours googling this and I'm none the wiser.. | false | 13,284,566 | 0 | 0 | 0 | 0 | My suggestion would be something like WampServer (Windows, Apache, MySQL, PHP). I've seen a few tutorials about adding Python to that mix.
You would have access to reading and writing JSON data to the local storage or placing your data in a local database. | 1 | 2,518 | 0 | 1 | 2012-11-08T07:38:00.000 | javascript,python,json,storage,offlineapps | How to read and write JSON offline on local machine? | 1 | 2 | 6 | 13,284,691 | 0 |
0 | 0 | Problem
I need a way to store and collect JSON data in an entirely offline(!) web application, hosted on a local (shared) machine. Several people will access the app but it will never actually be online.
I'd like the app to:
Read and write JSON data continuously and programmatically (i.e. not using a file-upload type schema)
Preferably not require any software installation other than the browser, specifically I'd like not to use local server. (edit: I may be willing to learn a bit of Python if that helps)
The amount of data I need to store is small so it is very much overkill to use some sort of database.
Solution?
My first thought was to let the html5 file API, just read/parse and write my JSON object to a local txt file, but this appears not to be possible?!
Local storage is not applicable here right, when several people - each with their own browser - need to access the html?
Any ideas?
note
I know this topic is not entirely novel, but I think my situation may be slightly different than in other threads. And I've spent the better part of the last couple hours googling this and I'm none the wiser.. | false | 13,284,566 | 0 | 0 | 0 | 0 | I know you said you don't want to opt for local server, but nodejs could be the solution. If you know JavaScript, then it's very simple to set one server up and let everybody access to the server from any browser. Since it's entirely JavaScript you don't even have conversion issues with the JSON format.
For storing the JSON you can use the FileSystem built-in library of nodejs which lets you read and write from a file, so you don't even need a database. | 1 | 2,518 | 0 | 1 | 2012-11-08T07:38:00.000 | javascript,python,json,storage,offlineapps | How to read and write JSON offline on local machine? | 1 | 2 | 6 | 13,285,068 | 0 |
1 | 0 | Subject: I want to get the page (nesting) level in Scrapy for each page (url, request) in the spider; is there any way to do that? | true | 13,286,049 | 1.2 | 0 | 0 | 1 | After some time we found the solution - response.meta['depth'] | 0 | 294 | 0 | 0 | 2012-11-08T09:22:00.000 | python,scrapy,web-crawler | Get page (nested) level scrapy on each page(url, request) in spider | 1 | 1 | 2 | 13,641,429 | 0
1 | 0 | I am trying to scrape a website which serves different page depending upon the geolocation of the IP sending the request. I am using an amazon EC2 located in US(which means it serves up a page meant for US) but I want the page that will be served in India. Does scrapy provide a way to work around this somehow? | false | 13,298,788 | 0.379949 | 0 | 0 | 2 | If the site you are scraping does IP based detection, your only option is going to be to change your IP somehow. This means either using a different server (I don't believe EC2 operates in India) or proxying your server requests. Perhaps you can find an Indian proxy service? | 0 | 722 | 0 | 1 | 2012-11-08T22:16:00.000 | python,web-crawler,scrapy | fake geolocation with scrapy crawler | 1 | 1 | 1 | 13,299,332 | 0 |
0 | 0 | I am playing around with parsing RSS feeds looking for references to countries. At the moment I am using Python, but I think this question is fairly language agnostic (in theory).
Let's say I have three lists (all related)
Countries - Nouns (i.e. England, Norway, France )
Countries - Adjectives (i.e. English, Norwegian, French)
Cities (i.e. London, Newcastle, Birmingham)
My aim is to begin by parsing the feeds for these strings.
So for example if 'London' was found, the country would be 'England', if 'Norwegian' was found it would be 'Norway' etc.
What would be the optimal method for working with this data? Would it be jason and pulling it all in to create nested dictionaries? sets? or some type of database?
At the moment this is only intended to be used on a local machine. | false | 13,331,665 | 0 | 0 | 0 | 0 | I would suggest to merge the 3 lists of data into one dictionary, which maps names to country names, e.g., it maps "England" -> "England", "English" -> "England", "London" -> "England". It can be easily stored in a database or a file and retrieved.
Then I would search for the keys in the dictionary, and label the item with the value from the dictionary. | 1 | 55 | 0 | 0 | 2012-11-11T13:44:00.000 | python,language-agnostic | Data storage for query | 1 | 2 | 2 | 13,441,013 | 0 |
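A minimal sketch of that merged-dictionary lookup (the entries are just examples):

```python
# Map every noun, adjective and city to its country.
place_to_country = {
    "England": "England", "English": "England", "London": "England",
    "Norway": "Norway", "Norwegian": "Norway",
    "France": "France", "French": "France",
}

def countries_in(text):
    # Return the set of countries referenced in a piece of feed text.
    return {country for word, country in place_to_country.items() if word in text}

print(countries_in("The Norwegian team arrived in London today."))
# {'Norway', 'England'}
```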
0 | 0 | I am playing around with parsing RSS feeds looking for references to countries. At the moment I am using Python, but I think this question is fairly language agnostic (in theory).
Let's say I have three lists (all related)
Countries - Nouns (i.e. England, Norway, France )
Countries - Adjectives (i.e. English, Norwegian, French)
Cities (i.e. London, Newcastle, Birmingham)
My aim is to begin by parsing the feeds for these strings.
So for example if 'London' was found, the country would be 'England', if 'Norwegian' was found it would be 'Norway' etc.
What would be the optimal method for working with this data? Would it be jason and pulling it all in to create nested dictionaries? sets? or some type of database?
At the moment this is only intended to be used on a local machine. | true | 13,331,665 | 1.2 | 0 | 0 | 0 | It is a very debatable question. There can be multiple solutions for this. If I were you, I would simply a small DB in Mongodb with three tables like these
Country:
Columns: id, name
Country-adj:
Columns: id, name, country_id
Cities:
Columns: id, name, country_id
then simple queries would give your desired results. | 1 | 55 | 0 | 0 | 2012-11-11T13:44:00.000 | python,language-agnostic | Data storage for query | 1 | 2 | 2 | 13,331,819 | 0 |
0 | 0 | I have the following line of code using imaplib
M = imaplib.IMAP4('smtp.gmail.com', 587)
I get the following error from imaplib:
abort: unexpected response: '220 mx.google.com ESMTP o13sm12303588vde.21'
However from reading elsewhere, it seems that that response is the correct response demonstrating that the connection was made to the server successfully at that port.
Why is imaplib giving this error? | false | 13,385,981 | 0.197375 | 1 | 0 | 2 | You are connecting to the wrong port. 587 is authenticated SMTP, not IMAP; the IMAP designated port number is 143 (or 993 for IMAPS). | 0 | 2,034 | 0 | 2 | 2012-11-14T19:33:00.000 | python,email,response,imaplib | python imaplib unexpected response 220 | 1 | 2 | 2 | 13,387,035 | 0 |
0 | 0 | I have the following line of code using imaplib
M = imaplib.IMAP4('smtp.gmail.com', 587)
I get the following error from imaplib:
abort: unexpected response: '220 mx.google.com ESMTP o13sm12303588vde.21'
However from reading elsewhere, it seems that that response is the correct response demonstrating that the connection was made to the server successfully at that port.
Why is imaplib giving this error? | true | 13,385,981 | 1.2 | 1 | 0 | 2 | I realized I needed to do IMAP4_SSL() - has to be SSL for IMAP and for using IMAP I need the IMAP server for gmail which is imap.googlemail.com. I ultimately got it work without specifying a port. So, final code is:
M = imaplib.IMAP4_SSL('imap.googlemail.com') | 0 | 2,034 | 0 | 2 | 2012-11-14T19:33:00.000 | python,email,response,imaplib | python imaplib unexpected response 220 | 1 | 2 | 2 | 13,399,991 | 0 |
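A minimal sketch of the working setup described in this answer; the credentials are placeholders:

```python
import imaplib

# IMAP over SSL; the port defaults to 993 (IMAPS), so none needs to be given.
M = imaplib.IMAP4_SSL('imap.googlemail.com')
M.login('user@gmail.com', 'password')   # placeholder credentials
M.select('INBOX')                       # open a mailbox
typ, data = M.search(None, 'ALL')       # e.g. list message numbers
M.logout()
```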
0 | 0 | I'm trying to make a chat server thing. Basically, I want more than one client to be able to connect at the same time.
I want it so it's always listening. Whenever anyone tries to connect, it instantly accepts them and adds them to a list of connections.
I could have a listen(1) then a timeout, and keep appending them to a list then closing the socket, and making a new one, then listening with a timeout, etc. Although, this seems very slow, and I'm not even sure it would work
Keep in mind, it does not HAVE to be socket. If there is any other kind of network interface, it would work just as well. | false | 13,388,177 | 0 | 0 | 0 | 0 | I want it so it's always listening. Whenever anyone tries to connect, it instantly accepts them and adds them to a list of connections.
So you just have:
An accept() loop that does nothing but accept() new connections and start a new thread to handle each one.
A thread per connection that reads with a long timeout, whatever you want your session idle timeout to be. If the timeout expires, you close the socket and exit the thread.
If the server runs out of FDs, which it will if there are enough simultaneous connections, accept() will start failing with a corresponding errno: in this case you just ignore it and keep looping. Maybe you decrease the idle timeout in this situation, and put it back when accepts start to work again. | 0 | 930 | 0 | 0 | 2012-11-14T22:04:00.000 | python,sockets,networking | Having an infinite amount of sockets connected? | 1 | 2 | 4 | 13,389,747 | 0 |
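A minimal sketch of that accept-loop/thread-per-connection layout; the port, backlog and idle timeout are arbitrary choices:

```python
import socket
import threading

IDLE_TIMEOUT = 300          # seconds; arbitrary session idle limit

def handle(conn, addr):
    conn.settimeout(IDLE_TIMEOUT)
    try:
        while True:
            data = conn.recv(4096)
            if not data:            # peer closed the connection
                break
            # ... handle the chat message in `data` ...
    except socket.timeout:
        pass                        # idle too long: drop the client
    finally:
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 5000))
server.listen(128)                  # backlog, not a connection limit
while True:
    try:
        conn, addr = server.accept()
    except socket.error:            # e.g. out of file descriptors
        continue
    threading.Thread(target=handle, args=(conn, addr)).start()
```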
0 | 0 | I'm trying to make a chat server thing. Basically, I want more than one client to be able to connect at the same time.
I want it so it's always listening. Whenever anyone tries to connect, it instantly accepts them and adds them to a list of connections.
I could have a listen(1) then a timeout, and keep appending them to a list then closing the socket, and making a new one, then listening with a timeout, etc. Although, this seems very slow, and I'm not even sure it would work
Keep in mind, it does not HAVE to be socket. If there is any other kind of network interface, it would work just as well. | false | 13,388,177 | 0.099668 | 0 | 0 | 2 | Consider whether you actually need to maintain separate sockets for every connection. Would something like connectionless UDP be appropriate? That way you could support an arbitrary number of users while only using one OS socket.
Of course, with this approach you'd need to maintain connection semantics internally (if your application cares about such things); figure out which user each datagram has been sent either by looking at their IP/port or by examining some envelope data within your network protocol, send occasional pings to see whether the other side is still alive, etc. But this approach should nicely separate you from any OS concerns RE: number of sockets your process is allowed to keep open at once. | 0 | 930 | 0 | 0 | 2012-11-14T22:04:00.000 | python,sockets,networking | Having an infinite amount of sockets connected? | 1 | 2 | 4 | 13,388,385 | 0 |
0 | 0 | I am writing a simple test script(python) to test a web server's performance(all this server does is HTTP redirect). Socket is set to non-blocking, and registered to an epoll instance.
How can I know the connect() is failed because the server can't accept more connections? I am currently using EPOLLERR as the indicator. Is this correct?
Edit:
Assumptions:
1) IP layer network unreachability will not be considered. | false | 13,390,082 | 0.099668 | 0 | 0 | 1 | You can't know it 'failed because the server can't accept more connections', because there is no specific protocol for that condition. You can only know it failed for the usual reasons: ECONNREFUSED, connection timeout, EUNREACH, etc. | 0 | 988 | 0 | 0 | 2012-11-15T01:05:00.000 | python,linux,sockets,connect,epoll | How is connect() failure notified in epoll? | 1 | 2 | 2 | 13,390,556 | 0 |
0 | 0 | I am writing a simple test script(python) to test a web server's performance(all this server does is HTTP redirect). Socket is set to non-blocking, and registered to an epoll instance.
How can I know the connect() is failed because the server can't accept more connections? I am currently using EPOLLERR as the indicator. Is this correct?
Edit:
Assumptions:
1) IP layer network unreachability will not be considered. | false | 13,390,082 | 0.099668 | 0 | 0 | 1 | That catches the case of Connection Refused and other socket errors. Since I assume you are registering for read/write availability (success) upon the pending socket as well, you should also manually time-out those connections which have failed to notify you of read, write, or error availability on the associated file descriptor within an acceptable time limit.
ECONNREFUSED is generally only returned when the server's accept() queue exceeds its limit or when the server isn't even bound to a socket at the remote port. ENETDOWN, EHOSTDOWN, ENETUNREACH, and EHOSTUNREACH are only returned when a lower layer than TCP (e.g., IP) knows it cannot reach the host for some reason, and so they are not particularly helpful for stress testing a web server's performance.
Thus, you need to also bound the time taken to establish a connection with a timeout to cover the full gamut of stress test failure scenarios. Choice of timeout value is up to you. | 0 | 988 | 0 | 0 | 2012-11-15T01:05:00.000 | python,linux,sockets,connect,epoll | How is connect() failure notified in epoll? | 1 | 2 | 2 | 13,390,310 | 0 |
0 | 0 | I am doing some stress test to a simple HTTP redirect server in Python scripts. The script is set up with epoll (edge trigger mode) with non-blocking sockets. But I observed something that I don't quite understand,
1) epoll can get both the ECONNREFUSED and ETIMEOUT errno while the connect() is in progress. Don't they both indicate the remote server can't accept the connection? How are they different, and how does a client tell the difference?
2) sometimes when EPOLLIN is notified by epoll, socket.recv() returns empty string without any exception thrown (or errno in C), I can keep reading the socket without getting any exception or error, it just always returns empty string. Why is that?
Thanks, | true | 13,404,538 | 1.2 | 0 | 0 | 1 | ECONNREFUSED signals that the connection was refused by the server, while ETIMEOUT signals that the attempt to connect has timed out, i.e. that no indication (positive or negative) about the connection attempt was received from the peer.
socket.recv() returning an empty string without error is simply the EOF condition, corresponding to an empty read in C. This happens when the other side closes the connection, or shuts it down for writing. It is normal that EPOLLIN is notified when EOF occurs, because you want to know about the EOF (and because you can safely recv from the socket without it hanging). | 0 | 1,087 | 0 | 1 | 2012-11-15T19:16:00.000 | python,sockets,tcp,epoll | TCP non-blocking socket.connect() and socket.recv() Error questions. (Python or C) | 1 | 1 | 1 | 13,404,886 | 0 |
0 | 0 | I'm trying to write a class with the same interface as the Python 2 standard library's socket.socket.
I'm having problems trying to reproduce the behavior the object should have when a program calls select.select() on it.
The documentation in the entry for select.select says:
You may also define a wrapper class yourself, as long as it has an appropriate fileno() method (that really returns a file descriptor, not just a random integer).
I would like to try something like this: creating a file-like object that can be controlled by a thread of my program with select, while another thread of my program can set it when my object is ready for reading and writing. How can I do it? | true | 13,420,405 | 1.2 | 0 | 0 | 3 | The fileno() function needs to return a kernel file descriptor, so that it can be passed to the select system call (or poll/epoll/whatever). The multiplexing done by select-like operations is fundamentally an OS operation which must work on OS objects.
If you want to implement this for an object not based on an actual file descriptor you can do the following:
Create a pipe
Return the read end of the pipe from fileno()
Write a byte to the other end when you want to mark your object as "ready". This will wake any select calls.
Remember to read that byte from your real "read" implementation.
This pipe trick should be fairly portable. | 0 | 486 | 0 | 1 | 2012-11-16T16:20:00.000 | python,file,networking | Imitating the behavior of fileno() and select | 1 | 1 | 1 | 13,420,569 | 0 |
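A minimal sketch of the pipe trick described above:

```python
import os
import select

class Selectable(object):
    """Wrapper whose fileno() really is a kernel file descriptor."""
    def __init__(self):
        self._r, self._w = os.pipe()

    def fileno(self):
        return self._r                  # the read end; this is what select() watches

    def mark_ready(self):
        os.write(self._w, b"x")         # called by the controlling thread

    def consume(self):
        os.read(self._r, 1)             # the "real" read clears the readiness flag

obj = Selectable()
obj.mark_ready()                        # pretend another thread flagged it
ready, _, _ = select.select([obj], [], [], 5.0)
print(obj in ready)                     # True
```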
0 | 0 | Suppose I was giving this list of urls:
website.com/thispage
website.com/thatpage
website.com/thispageagain
website.com/thatpageagain
website.com/morepages
... could possibly be over say 1k urls.
What is the best/easiest way to kinda loop through this list and check whether or not the page is up? | false | 13,424,753 | 0.049958 | 0 | 0 | 1 | Open a pool of threads, open a Url for each, wait for a 200 or a 404. Rinse and repeat. | 0 | 2,657 | 0 | 1 | 2012-11-16T21:35:00.000 | python | Given a big list of urls, what is a way to check which are active/inactive? | 1 | 1 | 4 | 13,424,798 | 0 |
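One way to sketch that; requests and concurrent.futures are assumptions here, any HTTP client and thread pool would do:

```python
import requests
from concurrent.futures import ThreadPoolExecutor

urls = ["http://website.com/thispage", "http://website.com/thatpage"]  # ... up to ~1k

def check(url):
    try:
        # HEAD keeps it cheap; some servers mishandle HEAD, so fall back to GET if needed.
        status = requests.head(url, timeout=5, allow_redirects=True).status_code
        return url, status < 400          # True means "up"
    except requests.RequestException:
        return url, False

with ThreadPoolExecutor(max_workers=20) as pool:
    for url, up in pool.map(check, urls):
        print(url, "active" if up else "inactive")
```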
1 | 0 | So I am trying to scrape something that is behind a login system. I tried using CasperJS, but am having issues with the form, so maybe that is not the way to go; I checked the source code of the site and the form name is "theform", but I can never log in, so I must be doing something wrong. Does anyone have any tutorials on how to do this correctly using CasperJS? I've looked at the API and Google and nothing really works.
Or does someone have any recommendations on how to do web scraping easily? I have to be able to check a simple conditional state and click a few buttons, that is all. | false | 13,434,664 | 0 | 0 | 0 | 0 | Because you mentioned CasperJS, I can assume that the web site generates some data using JavaScript. My suggestion would be to check out WebKit. It is a browser "engine" that will let you do whatever you want with the site.
You can use the PyQt4 framework, which is very good and has good documentation. | 0 | 684 | 0 | 2 | 2012-11-17T20:52:00.000 | python,web-scraping,casperjs | Web scraping - web login issue | 1 | 1 | 5 | 13,437,094 | 0
1 | 0 | I have a lot of different sites I want to scrape using scrapy. I was wondering what is the best way of doing this?
Do you use a different "project" for each site you want to scrape, or do you use a different "spider", or neither?
Any input would be appreciated | true | 13,445,585 | 1.2 | 0 | 0 | 0 | A different PROJECT for each site is the WORST idea.
A different SPIDER for each site is a GOOD idea.
Fitting multiple sites into one SPIDER (based on their nature), if you can, is the BEST idea.
But again, it all depends on your requirements. | 0 | 252 | 0 | 1 | 2012-11-18T23:08:00.000 | python,search,scrapy,web-crawler | Scrapy Python Crawler - Different Spider for each? | 1 | 1 | 1 | 13,451,254 | 0
0 | 0 | I'm trying to use Firefox Portable for my tests in Python. With plain WebDriver it works, but I was wondering how to do it with remote WebDriver.
All I could find is how to pass a Firefox profile, but how do I tell WebDriver which binary to use? | false | 13,460,288 | 0 | 0 | 0 | 0 | Very hacky, but you could modify the WebDriver Firefox plugin to point to your binary. | 0 | 620 | 0 | 3 | 2012-11-19T18:57:00.000 | python,selenium,webdriver | How to specify browser binary in selenium remote webdriver? | 1 | 1 | 2 | 13,460,962 | 0
0 | 0 | I created a couchDB on my computer, that is I used the Python line server = couchdb.Server('http://localhost:5984')
I want to share this database with two other colleagues. How can I make it available to them? For the moment, I am comfortable giving them full admin privileges until I get a better handle on this.
I tried to read the relevant parts of CouchDB: The Definitive Guide, but I still don't get it.
How would they access it? They can't just type in my computer's IP address? | true | 13,462,448 | 1.2 | 0 | 0 | 2 | In order to avoid NAT problems, I would use an external service like Cloudant or Iris Couch. You can replicate your local database against the common DB in the cloud and your colleagues can connect to it. | 0 | 2,338 | 0 | 1 | 2012-11-19T21:20:00.000 | database,python-2.7,couchdb,couchdb-python | How to access CouchDB server from another computer? | 1 | 1 | 3 | 13,473,004 | 0 |
1 | 0 | I want to do some web crawling with scrapy and python. I have found few code examples from internet where they use selenium with scrapy.
I don't know much about Selenium, only that it automates some web tasks: a browser actually opens and does things. But I don't want an actual browser to open; I want everything to happen from the command line.
Can I do that with Selenium and Scrapy? | false | 13,468,755 | 1 | 0 | 0 | 8 | Updated: PhantomJS is abandoned, and you can use headless browsers directly now, such as Firefox and Chrome!
Use PhantomJS instead.
You can do browser = webdriver.PhantomJS() in Selenium v2.32.0. | 0 | 2,960 | 0 | 2 | 2012-11-20T07:53:00.000 | python,selenium,scrapy | Can i use selenium with Scrapy without actual browser opening with python | 1 | 1 | 2 | 16,050,387 | 0
1 | 0 | I will be using scrapy to crawl a domain. I plan to store all that information into my db with sqlalchemy. It's pretty simple xpath selectors per page, and I plan to use HttpCacheMiddleware.
In theory, I can just insert data into my db as soon as I have data from the spiders (this requires hxs to be instantiated at least). This will allow me to bypass instantiating any Item subclasses so there won't be any items to go through my pipelines.
I see the advantages of doing so as:
Less CPU intensive since there won't be any CPU processing for the pipelines
Prevents memory leaks.
Disk I/O is a lot faster than Network I/O so I don't think this will impact the spiders a lot.
Is there a reason why I would want to use Scrapy's Item class? | false | 13,469,321 | 1 | 0 | 0 | 7 | If you insert directly inside a spider, then your spider will block until the data is inserted. If you create an Item and pass it to the Pipeline, the spider can continue to crawl while the data is inserted. Also, there might be race conditions if multiple spiders try to insert data at the same time. | 0 | 1,529 | 0 | 1 | 2012-11-20T08:38:00.000 | python,scrapy | Scrapy why bother with Items when you can just directly insert? | 1 | 1 | 2 | 13,469,554 | 0 |
0 | 0 | What I mean is, if I go to "www.yahoo.com/thispage", and yahoo has set up a filter to redirect /thispage to /thatpage. So whenever someone goes to /thispage, s/he will land on /thatpage.
If I use httplib/requests/urllib, will it know that there was a redirection? What error pages?
Some sites redirect user to /errorpage whenever the page cannot be found. | false | 13,482,777 | 0.049958 | 0 | 0 | 1 | It depends on how they are doing the redirection. The "right" way is to return a redirected HTTP status code (301/302/303). The "wrong" way is to place a refresh meta tag in the HTML.
If they do the former, requests will handle it transparently. Note that any sane error page redirect will still have an error status code (e.g. 404) which you can check as response.status_code. | 0 | 17,050 | 0 | 18 | 2012-11-20T21:47:00.000 | python,httplib,python-requests | When I use python requests to check a site, if the site redirects me to another page, will I know? | 1 | 2 | 4 | 13,483,006 | 0 |
0 | 0 | What I mean is, if I go to "www.yahoo.com/thispage", and yahoo has set up a filter to redirect /thispage to /thatpage. So whenever someone goes to /thispage, s/he will land on /thatpage.
If I use httplib/requests/urllib, will it know that there was a redirection? What error pages?
Some sites redirect user to /errorpage whenever the page cannot be found. | false | 13,482,777 | 1 | 0 | 0 | 16 | To prevent requests from following redirects use:
r = requests.get('http://www.yahoo.com/thispage', allow_redirects=False)
If it is indeed a redirect, you can check the redirect target location in r.headers['location']. | 0 | 17,050 | 0 | 18 | 2012-11-20T21:47:00.000 | python,httplib,python-requests | When I use python requests to check a site, if the site redirects me to another page, will I know? | 1 | 2 | 4 | 13,483,018 | 0
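A small sketch of both checks with requests (the URL is only an example):

```python
import requests

r = requests.get("http://www.yahoo.com/thispage", allow_redirects=False)
if r.status_code in (301, 302, 303, 307, 308):
    print("redirected to:", r.headers["location"])
else:
    print("no redirect, status:", r.status_code)

# With redirects enabled (the default), the intermediate hops end up in r.history:
r = requests.get("http://www.yahoo.com/thispage")
print([resp.status_code for resp in r.history], r.url)
```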
0 | 0 | I have a problem and can't solve it. Maybe I'm making it too hard or complex or I'm just going in the wrong direction and thinking of things that don't make sense. Below is a description of what happens. (Multiple tabs opened in a browser or a page that requests some other pages at the same time for example.)
I have a situation where 3 requests are received by the web application simultaneously and new user session has to be created. This session is used to store notification, XSRF token and login information when the user logs in. The application uses threads to handle requests (CherryPy under Bottle.py).
The 3 threads (or processes in case or multiple application instances) start handling the 3 requests. They check the cookie, no session exists, and create a new unique token that is stored in a cookie and in Redis. This will all happen at the same time and they don't know if a session already has been created by another thread, because all 3 tokens are unique.
These unused sessions will expire eventually, but it's not neat. It means that every time a client does N simultaneous requests and a new session needs to be created, N-1 sessions are useless.
If there is a property that can be used to identify a client, like an IP address, it would be a lot easier, but an IP address is not safe to use in this case. This property can be used to atomically store a session in Redis and other requests would just pick up that session. | false | 13,495,606 | 0 | 0 | 0 | 0 | If this is through a browser and is using cookies then this shouldn't be an issue at all. The cookie will, from what I can tell, the last session value that it is set to. If the client you are using does not use cookies then of course it will open a new session for each connection. | 0 | 484 | 0 | 1 | 2012-11-21T14:39:00.000 | python,http,session-state | What are good ways to create a HTTP session with simultaneous requests? | 1 | 1 | 1 | 13,495,800 | 0 |
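The atomic-store idea hinted at in the question can be sketched with redis-py's SET ... NX, assuming some per-client key is available; the key names and TTL here are made up:

```python
import redis

r = redis.StrictRedis()

def get_or_create_session(client_key, new_token):
    # SET ... NX EX: only the first of the N simultaneous requests wins;
    # the others read back the token that request stored.
    if r.set("session:" + client_key, new_token, nx=True, ex=3600):
        return new_token                      # we created the session
    return r.get("session:" + client_key)     # someone else already did
```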
1 | 0 | I am trying to build a web-app that has both a Python part and a Node.js part. The Python part is a RESTful API server, and the Node.js will use sockets.io and act as a push server. Both will need to access the same DB instance (Heroku Postgres in my case). The Python part will need to talk to the Node.js part in order to send push messages to be delivered to clients.
I have the Python and DB parts built and deployed, running under a "web" dyno. I am not sure how to build the Node part -- and especially how the Python part can talk to the Node.js part.
I am assuming that the Node.js will need to be a new Heroku app, so that it too can run on a 'web' dyno, so that it benefits from the HTTP routing stack, and clients can connect to it. In such a case, will my Python dynos will be accessing it using just like regular clients?
What are the alternatives? How is this usually done? | true | 13,498,828 | 1.2 | 0 | 0 | 2 | After having played around a little, and also doing some reading, it seems like Heroku apps that need this have 2 main options:
1) Use some kind of back-end, that both apps can talk to. Examples would be a DB, Redis, 0mq, etc.
2) Use what I suggested above. I actually went ahead and implemented it, and it works.
Just thought I'd share what I've found. | 0 | 2,109 | 0 | 3 | 2012-11-21T17:31:00.000 | python,node.js,heroku,cedar | Heroku Node.js + Python | 1 | 1 | 1 | 13,785,484 | 0 |
1 | 0 | I am experiencing slow crawl speeds with scrapy (around 1 page / sec).
I'm crawling a major website from AWS servers, so I don't think it's a network issue. CPU utilization is nowhere near 100%, and if I start multiple Scrapy processes the crawl speed is much faster.
Scrapy seems to crawl a bunch of pages, then hangs for several seconds, and then repeats.
I've tried playing with:
CONCURRENT_REQUESTS = CONCURRENT_REQUESTS_PER_DOMAIN = 500
but this doesn't really seem to move the needle past about 20. | false | 13,505,194 | 0.379949 | 0 | 0 | 2 | Are you sure you are allowed to crawl the destination site at high speed? Many sites implement download threshold and "after a while" start responding slowly. | 0 | 4,117 | 0 | 8 | 2012-11-22T02:45:00.000 | python,http,scrapy,web-crawler | Scrapy Crawling Speed is Slow (60 pages / min) | 1 | 1 | 1 | 13,585,472 | 0 |
0 | 0 | I developed a REST server and hosted it on my virtual machine's nginx server. Now I want to do benchmarking by sending 10,000 concurrent requests per second. Is there any solution for this? | true | 13,513,103 | 1.2 | 0 | 0 | -1 | 10,000 per second? You'll need lots of machines to do this.
Write a client that can POST requests serially and then replicate it on several machines. | 1 | 2,276 | 0 | 3 | 2012-11-22T12:49:00.000 | python,testing,load-testing,performance-testing | How to send concurrent 10,000 post request using python? | 1 | 2 | 4 | 13,513,162 | 0 |
0 | 0 | I developed a REST server and hosted it on my virtual machine's nginx server. Now I want to do benchmarking by sending 10,000 concurrent requests per second. Is there any solution for this? | false | 13,513,103 | 0 | 0 | 0 | 0 | Programmatically, you can create threads and do a URL fetch in every thread, but I am not sure you can reach 10,000 requests that way. | 1 | 2,276 | 0 | 3 | 2012-11-22T12:49:00.000 | python,testing,load-testing,performance-testing | How to send concurrent 10,000 post request using python? | 1 | 2 | 4 | 13,514,261 | 0
0 | 0 | I'm writing a peer-to-peer program that requires the network to be fully connected. However, when I test this locally and bring up about 20 nodes, some nodes successfully create a socket to other nodes, but a broken pipe error occurs when writing immediately afterwards. This only happens when I start all nodes one right after the other; if I sleep about a second I don't see this problem.
I have logic to deal with two nodes that both open sockets to each other, which may be buggy, though I do see it operating properly with fewer nodes. Is this a limitation of testing locally? | false | 13,514,805 | 0.291313 | 0 | 0 | 3 | 'Broken pipe' means you have written to a connection that has already been closed by the other end. So, you must have done that somehow. | 0 | 2,008 | 0 | 1 | 2012-11-22T14:29:00.000 | python,networking,pipe | Reason for broken pipes | 1 | 1 | 2 | 13,519,364 | 0
0 | 0 | I have Python 2.7.3 installed alongside Python 3.2.3 on an Ubuntu system.
I've installed urllib3 using pip and can import it from the python shell. When I open the python3 shell, I get a can't find module error when trying to import urllib3. help('modules') from within the shell also doesn't list urllib3.
Any ideas on how to get python3 to recognize urllib3? | true | 13,517,991 | 1.2 | 0 | 0 | 3 | You need to install it for each version of Python you have - if pip installs it for Python 2.7, it won't be accessible from 3.2.
There doesn't seem to be a pip-3.2 script, but you can try easy_install3 urllib3. | 1 | 4,054 | 0 | 1 | 2012-11-22T18:02:00.000 | python-3.x,urllib3 | Python3 can't find urllib3 | 1 | 1 | 1 | 13,520,455 | 0 |
1 | 0 | I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams.
my player is the playbin2 element
When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed.
The song is definitely playing. (state is not NULL)
Does anyone have any experience with this?
PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but gives me the same error. | false | 13,519,086 | 0.066568 | 0 | 0 | 1 | What does "you'll need to thread your own gst object" mean? And what does "wait until the query succeeds" mean?
State changes from NULL to PAUSED or PLAYING state are asynchronous. You will usually only be able to do a successful duration query once the pipeline is prerolled (so state >= PAUSED). When you get an ASYNC_DONE message on the pipeline's (playbin2's) GstBus, then you can query. | 0 | 700 | 0 | 0 | 2012-11-22T19:38:00.000 | python,gstreamer,python-gstreamer | element playbin2 query_position always returns query failed | 1 | 3 | 3 | 13,529,688 | 0 |
1 | 0 | I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams.
my player is the playbin2 element
When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed.
The song is definitely playing. (state is not NULL)
Does anyone have any experience with this?
PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but gives me the same error. | false | 13,519,086 | 0 | 0 | 0 | 0 | I found it on my own. Problem was with threading. Apparently, you'll need to thread your gst object and just wait until the query succeeds. | 0 | 700 | 0 | 0 | 2012-11-22T19:38:00.000 | python,gstreamer,python-gstreamer | element playbin2 query_position always returns query failed | 1 | 3 | 3 | 13,529,066 | 0 |
1 | 0 | I'm developing a media player that streams mp3 files. I'm using the python gstreamer module to play the streams.
my player is the playbin2 element
When I want to query the position (with query_position(gst.FORMAT_TIME,None)), it always returns a gst.QueryError: Query failed.
The song is definitely playing. (state is not NULL)
Does anyone have any experience with this?
PS: I also tried replacing gst.FORMAT_TIME with gst.Format(gst.FORMAT_TIME), but gives me the same error. | false | 13,519,086 | 0 | 0 | 0 | 0 | From what source are you streaming? If you query the position from the playbin2 I'd say you do everything right. Can you file a bug for gstreamer, include a minimal python snippet that exposes the problem and tell from which source you stream - ideally its public. | 0 | 700 | 0 | 0 | 2012-11-22T19:38:00.000 | python,gstreamer,python-gstreamer | element playbin2 query_position always returns query failed | 1 | 3 | 3 | 13,525,709 | 0 |
0 | 0 | Can someone give me a brief idea on how to transfer large files over the internet?
I tried with sockets, but it does not work. I am not sure what the size of receiving sockets should be. I tried with 1024 bytes. I send the data from one end and keep receiving it at the other end.
Is there any other way, apart from sockets, that I can use in Python? | true | 13,523,789 | 1.2 | 0 | 0 | 1 | I encountered the same problem, and I solved it by chopping the file up and then sending the parts separately (load the file, send file[0:512], then send file[512:1024], and so on). Before sending the file I sent its length to the receiver, so that it would know when it was done.
I know this probably isn't the best way to do this, but I hope it will help you. | 0 | 1,410 | 0 | 0 | 2012-11-23T05:55:00.000 | python,python-3.x | Transfer a Big File Python | 1 | 1 | 1 | 13,533,315 | 0
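A minimal sketch of that scheme: send the length first, then the file in fixed-size chunks (the socket setup itself is omitted):

```python
import os
import struct

CHUNK = 64 * 1024

def recv_exact(sock, n):
    # recv() may return fewer bytes than asked for, so loop until we have n.
    buf = b""
    while len(buf) < n:
        part = sock.recv(n - len(buf))
        if not part:
            raise IOError("connection closed early")
        buf += part
    return buf

def send_file(sock, path):
    sock.sendall(struct.pack("!Q", os.path.getsize(path)))  # 8-byte length header
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            sock.sendall(chunk)

def recv_file(sock, path):
    remaining = struct.unpack("!Q", recv_exact(sock, 8))[0]
    with open(path, "wb") as f:
        while remaining:
            chunk = sock.recv(min(CHUNK, remaining))
            if not chunk:
                raise IOError("connection closed early")
            f.write(chunk)
            remaining -= len(chunk)
```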
1 | 0 | I have a script where I want to check if a file exists in a bucket and if it doesn't then create one.
I tried using os.path.exists(file_path) where file_path = "/gs/testbucket", but I got a file not found error.
I know that I can use the files.listdir() API function to list all the files located at a path and then check if the file I want is one of them. But I was wondering whether there is another way to check whether the file exists. | true | 13,525,482 | 1.2 | 1 | 0 | 1 | I guess there is no function to check directly if the file exists given its path.
I have created a function that uses the files.listdir() API function to list all the files in the bucket and match it against the file name that we want. It returns true if found and false if not. | 0 | 52,000 | 0 | 40 | 2012-11-23T08:39:00.000 | python,google-cloud-storage,file-exists | How to check if file exists in Google Cloud Storage? | 1 | 1 | 13 | 13,644,827 | 0 |
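The listdir workaround matches the old App Engine Files API; with the current google-cloud-storage client library (a different library than the one used in the answer) the check is a one-liner:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("testbucket")               # bucket name without the /gs/ prefix
exists = bucket.blob("path/to/file.txt").exists()  # True or False
```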
1 | 0 | I have deployed a web2py application on a server that is running on Apache web server.
All seems to be working fine, except for the fact that the web2py modules are not able to connect to an external website.
In the web2py admin page, I get the following errors:
1. Unable to check for upgrades
2. Unable to download because:
I am using web2py 1.9.9, CentOS 5
I am also behind an institute proxy. I am guessing that the issue has something to do with the proxy configuration. | false | 13,544,715 | 0 | 0 | 0 | 0 | Try testing the proxy theory by ssh -D tunneling to a server outside the proxy and seeing if that works for you. | 0 | 412 | 0 | 0 | 2012-11-24T19:23:00.000 | python,linux,apache,web2py | Web2py unable to access internet [connection refused] | 1 | 1 | 1 | 13,545,399 | 0
1 | 0 | I am learning to make spiders and crawlers. This spidering is my passion and I am going to do that for a long time. For parsing I am thinking of using BeautifulSoup. But some people say that if I use lxml, I will have more control.
Now I don't know much. But I am ready to work hard even if using lxml is harder. But if that gives me full control then I am ready for it.
So what is your opinion? | true | 13,577,922 | 1.2 | 0 | 0 | 3 | I don't really think this question makes a whole lot of sense. You need to give more explanation of what exactly your goals are. BeautifulSoup and lxml are two tools that in large part do the same things, but have different features and API philosophies and structure.
It's not a matter of "which gives you more control," but rather "which is the right tool for the job?" I use both. I prefer the BeautifulSoup syntax, as I find it more natural, but I find that lxml is better when I'm trying to parse unknown quantities on the fly based on variables--e.g., generating XPath strings that include variable values, which I will then use to extract specific elements from varying pages.
So really, it depends on what you're trying to do.
TL;DR
I find BeautifulSoup easier and more natural to use but lxml ultimately to be more powerful and versatile. Also, lxml wins the speed contest, no question. | 0 | 120 | 0 | 1 | 2012-11-27T05:28:00.000 | python,parsing,beautifulsoup,lxml | Will I have more control over my spider if I use lxml over BeautifulSoup? | 1 | 1 | 1 | 13,578,055 | 0 |
1 | 0 | I am new to Scrapy and quite confused about crawler and spider. It seems that both of them can crawl the website and parse items.
There are a Crawler class (/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py) and a CrawlerSpider class (/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/crawl.py) in Scrapy. Could anyone tell me the differences between them? And which one should I use under what conditions?
Thanks a lot in advance! | true | 13,578,170 | 1.2 | 0 | 0 | 2 | CrawlerSpider is a sub-class of BaseSpider: this is the class you need to extend if you want your spider to follow links according to the "Rule" list.
"Crawler" is the main crawler sub-classed by CrawlerProcess.
You will have to sub-class CrawlerSpider in your spider, but I don't think you will have to touch Crawler. | 0 | 1,470 | 0 | 3 | 2012-11-27T05:55:00.000 | python,scrapy | differences between scrapy.crawler and scrapy.spider? | 1 | 1 | 1 | 13,584,851 | 0
0 | 0 | I hope this doesn't cross into superuser territory.
So I have an embedded Linux, where the system processes are naturally quite stripped down. I'm not quite sure which system process monitors the physical layer and starts a DHCP client when the network cable is plugged in, but I made one myself.
The problem is that if I have a Python script using HTTP connections running before I have an IP address, it will never get a connection. Even after I have a valid IP, Python still reports
"Temporary error in name resolution"
So how can I get Python to notice that the new connection is available, without restarting the script?
Alternatively, am I missing some standard procedure Linux normally runs when the network cable is connected?
The DHCP client I am using is udhcpc and the Python version is 2.6, using httplib for connections. | false | 13,606,584 | 0 | 0 | 0 | 0 | After a lot more research, the glibc problem jedwards suggested seemed to be the cause. I did not find a solution, but made a workaround for my use case.
Considering I only use one URL, I added my own "resolv.file" .
A small daemon gets the IP address of the URL when PHY reports cable connected. This IP is saved to "my own resolv.conf". From this file the python script retrieves the IP to use for posts.
Not really a good solution, but a solution. | 0 | 772 | 1 | 1 | 2012-11-28T13:47:00.000 | python,networking,httplib | Python not getting IP if cable connected after script has started | 1 | 1 | 3 | 13,643,155 | 0 |
1 | 0 | I have a crawler that automates the login and crawling for a website, but since the login was changed it is not working anymore.
I am wondering, can I feed the browser cookie (aka, I manually log-in) to my HTTP request? Is there anything particularly wrong in principle that wouldn't make this work? How do I find the browser cookies relevant for the website?
If it works, how do I get the "raw" cookie strings I can stick into my HTTP request?
I am quite new to this area, so forgive my ignorant questions. I can use either Python or Java. | true | 13,628,190 | 1.2 | 0 | 0 | 0 | When you send the login information (and usually in response to many other requests) the server will set some cookies on the client; you must keep track of them and send them back to the server with each subsequent request.
A full implementation would also keep track of the time they are supposed to be stored. | 0 | 925 | 0 | 0 | 2012-11-29T14:42:00.000 | java,python,cookies,http-headers,httpwebrequest | How to use browser cookies programmatically | 1 | 1 | 1 | 13,628,291 | 0 |
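In Python, a requests.Session does that cookie bookkeeping automatically; a rough sketch (URLs and form fields are placeholders):

```python
import requests

s = requests.Session()                      # stores and resends cookies automatically
s.post("http://example.com/login",
       data={"user": "me", "password": "secret"})
page = s.get("http://example.com/private")  # sent with the session's cookies
print(s.cookies.get_dict())                 # the cookies the server set

# Cookies copied manually from a browser can also be injected:
s.cookies.set("sessionid", "value-from-browser", domain="example.com")
```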
0 | 0 | I need a way to programmatically create Twitter Applications/API keys. I could make something on my own, but does anyone know of a pre-made solution? | false | 13,652,514 | 0.291313 | 1 | 0 | 3 | Assuming you're referring to the consumer key and consumer secret, you're not supposed to be able to create those programmatically. That's why you have to sign in to a web page with a CAPTCHA in order to create one. | 0 | 1,280 | 0 | 0 | 2012-11-30T20:16:00.000 | php,python,twitter,twitter-oauth | Is there an unofficial API for creating Twitter applications/api keys? | 1 | 2 | 2 | 13,652,765 | 0 |
0 | 0 | I need a way to programmatically create Twitter Applications/API keys. I could make something on my own, but does anyone know of a pre-made solution? | false | 13,652,514 | 0 | 1 | 0 | 0 | Not sure what you mean, but there are plenty of libraries that abstracts twitter API (https://dev.twitter.com/docs/twitter-libraries) | 0 | 1,280 | 0 | 0 | 2012-11-30T20:16:00.000 | php,python,twitter,twitter-oauth | Is there an unofficial API for creating Twitter applications/api keys? | 1 | 2 | 2 | 13,652,757 | 0 |
0 | 0 | Assume I'm using a Chrome extension that gives me a nice summary of content on a webpage. Rather than writing my own program to mimic the services of the extension, I'd like to create a script that then uses the summary information that the extension generates, capturing it in a variable that I can manipulate.
Is this possible to write a script that could achieve this? If so, what would be good a starting point? I'd like to write the script in perhaps unix or python. | false | 13,653,605 | 0 | 0 | 0 | 0 | Well you will have to locate where it stores the summary information you want. Is it in RAM temporarily or is it persistant across reboots and the sort? What kind of information? If the information is stored temporarily, like per session, it could make what you wish to accomplish a little more difficult. If the summary data is stored locally this could be achieved a bit easier. For instance if it was stored locally, you could write some python to open the file that would contain your summary information, read it into a variable and then parse that information into whatever format you need it in.
Pretty open ended question though, can you offer any more details? | 0 | 128 | 0 | 0 | 2012-11-30T21:41:00.000 | python,shell,unix,google-chrome-extension | How can I access the content of a Chrome extension from outside the browser - e.g. I'd the content to be use in a script I'm writing? | 1 | 1 | 1 | 13,653,956 | 0 |
1 | 0 | How can I add preferences to the browser so it launches without javascript? | false | 13,655,486 | 0 | 0 | 0 | 0 | You can disable javascript directly from the browser. Steps:
Type About:config in url
Click I'll be careful, I promise
Search for javascript.enabled
Right click -> Toggle
Value = false | 0 | 5,735 | 0 | 3 | 2012-12-01T01:35:00.000 | python | How can I disable javascript in firefox with selenium? | 1 | 1 | 2 | 29,955,598 | 0 |
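Programmatically, the same preference can be set on the profile Selenium launches; this uses the FirefoxProfile API from the Selenium 2/3 Python bindings:

```python
from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference("javascript.enabled", False)   # same toggle as about:config
driver = webdriver.Firefox(firefox_profile=profile)
driver.get("http://example.com")
```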
0 | 0 | Novice to programming. I have most of my experience in Python. I am comparing this to C#. I have created small web apps using web2py, and have read 'Learn Python the Hard Way'. I have limited to no C# experience besides setting up and playing in VS.
My end goal is to be able to develop web apps (so far I do like web2py), and even some web automation programs using GUIs. For example, an application that will allow me to put/get information in a database from my GUI, and then post it to my sites either via a database connection, or post to other sites that are not mine, through automation.
I really like python so far, but I feel like since I do want to work with GUI applications, that C# may be the best bet...
More specifically, does Python even compare, or have modules/library that will help me do GUI web & browser automation, versus C#? How about with just basic scraping? Pulling data from numerous sites to display in a database. Does Python still have an edge?
Thanks. I hope this question has some objectivity to it considering the different libraries and modules available. If it is too subjective , please accept my apologies. | false | 13,672,346 | 0.197375 | 0 | 0 | 1 | Selenium is a pretty good library for automation if you want to scrape information off of javascript enabled pages. It has bindings for a number of languages. If you only want basic scraping though, I would go with Mechanize; no need to open a browser. | 0 | 155 | 0 | 2 | 2012-12-02T18:28:00.000 | c#,asp.net,python,web2py,browser-automation | Advice on which language to persue for browser automation & scraping | 1 | 1 | 1 | 13,672,402 | 0 |
0 | 0 | There is a webpage that my browser can access, but urllib2.urlopen() (Python) and wget both return HTTP 403 (Forbidden). Is there a way to figure out what happened?
I am using the most primitive form, like urllib2.urlopen("http://test.com/test.php"), using the same url (http://test.com/test.php) for both the browser and wget. I have cleared all my cookies in browser before the test.
Thanks a lot! | false | 13,677,625 | 0.197375 | 0 | 0 | 2 | The Python library urllib has a default user-agent string that includes the word Python in it and wget uses "wget/VERSION". If the site you are cionnectiing checks the user-agent info, it will probably reject these two. Google, for instance, will do so.
It's easy enough to fix: for wget, use the -U parameter, and for urllib, create a URLopener with an appropriate user-agent string. | 0 | 2,333 | 0 | 1 | 2012-12-03T05:24:00.000 | python,http-headers,httprequest,urllib2,http-status-code-403 | urllib2 and wget returns HTTP 403 (forbidden), while browser returns OK | 1 | 1 | 2 | 13,685,106 | 0
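To make the fix above concrete, a hedged Python 2 / urllib2 sketch follows (the user-agent string is an arbitrary browser-like example, and the URL is the question's placeholder); the wget equivalent is wget -U "Mozilla/5.0" <url>.

    import urllib2

    url = "http://test.com/test.php"                              # the question's placeholder URL
    headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"}   # any browser-like string
    req = urllib2.Request(url, headers=headers)
    html = urllib2.urlopen(req).read()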
1 | 0 | I'm using beatbox API to update/insert data in to salesforce opportunity object.
upsert() throws INVALID FIELD error when I pass Id as externalIDFieldName. Currently I'm using another unique external Id and it's working fine but I want to use the salesforce Id.
Please shed some light on what I'm missing. | true | 13,695,322 | 1.2 | 0 | 0 | 3 | If you already know the salesforce Id of the record, just call update instead. | 0 | 565 | 0 | 0 | 2012-12-04T02:48:00.000 | python,api,salesforce | upsert throws error when I pass Id as externalIdFieldName in Beatbox API while contacting Salesforce | 1 | 1 | 2 | 13,695,617 | 0 |
0 | 0 | How can I send one XMPP message to all connected clients/resources using a Python library, for example:
xmpppy, jabber.py, or jabberbot? Any other command-line solution is fine as well.
So far I've only been able to send an echo or a single message to only one client.
The purpose is to send a message to all resources/clients connected, not grouped.
This might be triggered by a command but is not 'really' necessary.
Thank you. | false | 13,734,601 | 0.099668 | 0 | 0 | 1 | I cannot give you a specific Python example, but I can explain how the logic works.
When you send a message to a bare Jid, it depends on the server software or configuration how it's routed. Some servers send the message to the "most available resource", and some servers send it to all resources. E.g. Google Talk sends it to all resources.
If you control the server software and it allows you to route messages to a bare Jid to all connected resources then this would be the easiest way.
When your code must work on any server then you should collect all available resources of your contacts. You get them with the presence, most libraries have a callback for this. Then you can send out the messages to full Jids (with resources) in a loop. | 0 | 1,722 | 0 | 1 | 2012-12-05T23:44:00.000 | python,xmpp,message | Send an xmpp message to all connected clients/resources | 1 | 1 | 2 | 13,739,615 | 0 |
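A rough sketch of the loop described above, using xmpppy; the account, password, contact and the list of resources are placeholders, and in real code the resources would be collected from presence callbacks rather than hard-coded.

    import xmpp

    jid = xmpp.protocol.JID("bot@example.com")            # placeholder account
    client = xmpp.Client(jid.getDomain(), debug=[])
    client.connect()
    client.auth(jid.getNode(), "secret", "notifier")      # placeholder password and resource
    client.sendInitPresence()

    # normally gathered from presence stanzas; hard-coded here for illustration
    for res in ["home", "office", "mobile"]:
        full_jid = "friend@example.com/" + res            # full JID = bare JID + resource
        client.send(xmpp.protocol.Message(full_jid, "server maintenance in 10 minutes"))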
0 | 0 | The situation:
I have a Python script to connect/send signals to serially connected Arduinos. I wanted to know the best way to implement a web server, so that I can query the status of the Arduinos. I want both the "web server" part and the serial connection to run in the same script. Is it possible, or do I have to break it into a daemon and a server part?
Thanks, any comments are most welcome. | false | 13,751,271 | 0 | 1 | 0 | 0 | Use a WAMP server. It is the easiest and quickest way. The web server will support PHP, Python, HTTP, etc.
If you are using Linux, the easiest tool for serial communication is PHP.
But on Windows PHP cannot read data from a serial connection, so use Python, Perl, etc.
Thanks | 0 | 544 | 1 | 1 | 2012-12-06T19:43:00.000 | python,arduino,interprocess,python-multithreading | python daemon + interprocess communication + web server | 1 | 2 | 2 | 16,685,053 | 0 |
0 | 0 | The situation:
I have a Python script to connect/send signals to serially connected Arduinos. I wanted to know the best way to implement a web server, so that I can query the status of the Arduinos. I want both the "web server" part and the serial connection to run in the same script. Is it possible, or do I have to break it into a daemon and a server part?
Thanks, any comments are most welcome. | true | 13,751,271 | 1.2 | 1 | 0 | 0 | For those wondering what I have opted for, I have decoupled the two parts:
The Arduino daemon
I am using Python with a micro web framework called Bottle, which handles the API calls, and I have used PySerial to communicate with the Arduinos.
The web server
The canonical Apache and PHP are used to make API calls to the Arduino daemon. | 0 | 544 | 1 | 1 | 2012-12-06T19:43:00.000 | python,arduino,interprocess,python-multithreading | python daemon + interprocess communication + web server | 1 | 2 | 2 | 16,689,821 | 0
0 | 0 | I use Python 3.2. I have an API which uses SOAP. I need to perform a number of SOAP calls to modify some objects in a database. I'm trying to install a SOAP library which would work with Python 3.2 (or 2.7 if that's what it takes) in order to do my task.
If someone could give me some guidance on which library to install and how to install it, I would be very grateful. I would then be able to continue with the rest of my development.
Note: I heard about SOAPy but it looks like it's discontinued. I've downloaded an executable which asks me to point where I want it installed and I'm given no choices...
I'm a little lost. | false | 13,760,852 | 0 | 0 | 0 | 0 | Open your command prompt (if Python is already installed and you have set the Python path in your environment variables), then:
c:> pip install zeep
c:> pip install lxml==3.7.3 zeep
c:> pip install zeep[xmlsec]
c:> pip install zeep[async]
Now you are ready to make a SOAP call using Python:
1. c:\> python
2. >>> from zeep import Client
3. >>> client = Client('your WSDL URL')
4. >>> result = client.service.method_name(parameters if required)
5. >>> print(result) | 0 | 1,697 | 0 | 1 | 2012-12-07T10:04:00.000 | python,soap,windows-7 | How to make SOAP calls in Python on Windows? | 1 | 1 | 1 | 45,009,606 | 0
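The same zeep steps as a standalone script rather than interpreter commands; the WSDL URL, operation name and parameter below are placeholders, not part of the original answer.

    from zeep import Client

    client = Client("http://example.com/service?wsdl")      # placeholder WSDL URL
    result = client.service.SomeOperation(param="value")    # placeholder operation name and parameter
    print(result)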
0 | 0 | I am currently a student and next semester I am taking a network programming course in python. I need to propose a network project on which I'll work during the semester before it starts. I wanted to ask if anybody knows any good sources of the project for my topic. I can include anything related to networking and security. | true | 13,779,114 | 1.2 | 0 | 0 | 1 | I think this might be off-topic for StackOverflow; however, you can consider implementing a simple online chat room.
The main advantage of this topic is that it's probably as simple as you can get while still demonstrating the application of core concepts in networking and security.
You'll be able to do a bit of architecture as well:
Server Backend: Django, or another framework for Python? Is event-driven architecture appropriate here? Publish-subscribe?
Client UI/Model: You should technically still use the Model-View-Controller pattern, even though the model here would just be a "proxy" for the model on the server.
Serialized Mediums: JSON, YAML?
ORM for user accounts / history
It's also a safe project because, while it's not overly ambitious, you can keep adding features to it as long as you have more time left; I'm sure you can think of many possible features for a chatroom =) | 1 | 814 | 0 | 1 | 2012-12-08T15:57:00.000 | python,networking | Python network programming project sources | 1 | 1 | 1 | 13,779,203 | 0 |
1 | 0 | Is there a way to capture visible webpage content or text as if copying from a browser display, to parse later (maybe using regular expressions etc.)? I don't mean cleaning out the html tags, javascript, etc. and only showing the leftover text. I would like to copy all visible text, since some style elements may hide some of the html text while showing others when displayed in the browser. So far I have looked into nltk, lxml Cleaner, and selenium without luck. Maybe I can capture a screenshot in selenium and then extract text using OCR, but that seems computationally intensive? Thanks for any help! | false | 13,785,630 | 0.379949 | 0 | 0 | 2 | Sure. Use Selenium and just loop through all visible, displayable elements. | 0 | 781 | 0 | 0 | 2012-12-09T07:48:00.000 | python,selenium,screen-scraping,web-scraping,screenshot | Capture Visible Webpage Content (or text) as if Copying from Browser | 1 | 1 | 1 | 13,787,570 | 0
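A hedged sketch of the Selenium approach from the answer above: a displayed element's .text already returns only the rendered, visible text, so grabbing the body element is often enough (the URL is a placeholder; looping over individual elements with is_displayed() gives finer control at the cost of speed).

    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("http://example.com")                              # placeholder URL
    visible_text = driver.find_element_by_tag_name("body").text   # .text returns only rendered, visible text
    driver.quit()
    print(visible_text)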
1 | 0 | I have written an app, with Python acting as a simple web server (I am using the Bottle framework for this) and an HTML + JS client. The whole thing runs locally. The web page acts as the GUI in this case.
In my code I have implemented a file browser interface so I can access local file structure from JavaScript.
The server accepts only local connections, but what bothers me is this: if, for example, somebody knows that I am running my app locally and forges a site with an AJAX request to localhost, and I visit his site in some way, will my local files be visible to the attacker?
My main question is: is there any way to secure this? I mean that my server will know for sure that the request came from my locally served file? | true | 13,788,172 | 1.2 | 0 | 0 | 1 | The most direct way to protect against this attack is to just have a long complex secret key being required for every request. Just make your local code authenticate itself before processing the request. This is essentially how web services on the Internet are protected.
You might also want to consider having inter process communication in some other form like DBUS or unix sockets. I'm not sure which OS you are on but there are many options for inter process communication that wouldn't make you vulnerable in this way. | 0 | 320 | 0 | 2 | 2012-12-09T14:14:00.000 | javascript,python,ajax,localhost | Secure localhost JavaScript ajax requests | 1 | 1 | 1 | 13,788,959 | 0 |
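Since the question's app is served with Bottle, here is one hedged sketch of the long-secret-key idea from the answer above; the route, header name and token scheme are made up for illustration, and the token would be embedded only into the locally served page.

    import os, binascii
    from bottle import route, request, abort, run

    SECRET = binascii.hexlify(os.urandom(16)).decode("ascii")  # generated at startup, embedded in the served page

    @route("/files")
    def list_files():
        # a foreign page can fire a request at localhost, but it cannot know this token
        if request.headers.get("X-Local-Token") != SECRET:
            abort(403, "missing or wrong token")
        return {"files": ["example.txt"]}                      # placeholder payload

    run(host="127.0.0.1", port=8080)                           # keep listening on loopback only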
0 | 0 | I'm writing some scripts for our sales people to query an index with elastic search through python. (Eventually the script will update lead info in our Salesforce DB.)
I have been using the urllib2 module, with simplejson, to pull results. The problem is that this seems to be a not-so-good approach, evidenced by scripts which are taking longer and longer to run.
Questions:
Does anyone have any opinions (opinions, on the internet???) about Elastic Search clients for Python? Specifically, I've found pyes and pyelasticsearch, via elasticsearch.org---how do these two stack up?
How good or bad is my current approach of dynamically building the query and running it via self.raw_results = simplejson.load(urllib2.urlopen(self.query))?
Any advice is greatly appreciated! | false | 13,823,554 | 0 | 0 | 0 | 0 | It sounds like you have an issue unrelated to the client. If you can pare down what's being sent to ES and represent it in a simple curl command it will make what's actually running slowly more apparent. I suspect we just need to tweak your query to make sure it's optimal for your context. | 0 | 748 | 0 | 4 | 2012-12-11T15:52:00.000 | python,elasticsearch | Elastic search client for Python: advice? | 1 | 2 | 2 | 14,884,970 | 0 |
0 | 0 | I'm writing some scripts for our sales people to query an index with elastic search through python. (Eventually the script will update lead info in our Salesforce DB.)
I have been using the urllib2 module, with simplejson, to pull results. The problem is that this seems to be a not-so-good approach, evidenced by scripts which are taking longer and longer to run.
Questions:
Does anyone have any opinions (opinions, on the internet???) about Elastic Search clients for Python? Specifically, I've found pyes and pyelasticsearch, via elasticsearch.org---how do these two stack up?
How good or bad is my current approach of dynamically building the query and running it via self.raw_results = simplejson.load(urllib2.urlopen(self.query))?
Any advice is greatly appreciated! | true | 13,823,554 | 1.2 | 0 | 0 | 2 | We use pyes, and it's pretty neat. There you can go with the Thrift protocol, which is faster than the REST service. | 0 | 748 | 0 | 4 | 2012-12-11T15:52:00.000 | python,elasticsearch | Elastic search client for Python: advice? | 1 | 2 | 2 | 14,870,170 | 0
0 | 0 | I am writing my own function for parsing XML text into objects which I can manipulate and render back into XML text. To handle the nesting, I am allowing XML objects to contain other XML objects as elements.
Since I am automatically generating these XML objects, my plan is to just enter them as elements of a dict as they are created. I was planning on generating an attribute called name which I could use as the key, and having the XML object itself be a value assigned to that key.
All this makes sense to me at this point. But now I realize that I would really like to also save an attribute called line_number, which would be the line from the original XML file where I first encountered the object, and there may be some cases where I would want to locate an XML object by line_number, rather than by name.
So these are my questions:
Is it possible to use a dict in such a way that I could find my XML object either by name or by line number? That is, is it possible to have multiple keys assigned to a single value in a dict?
How do I do that?
If this is a bad idea, what is a better way? | false | 13,845,981 | 0.066568 | 0 | 0 | 1 | Since dictionaries can have keys of multiple types, and you are using names (strings only) as one key and numbers (integers only) as another, you can simply make two separate entries point to the same object - one for the number, and one for the string.
dict[0] = dict['key'] = object1 | 1 | 174 | 0 | 0 | 2012-12-12T18:10:00.000 | python,dictionary | Is it possible to use multiple keys for a single element in a dict? | 1 | 1 | 3 | 13,846,048 | 0 |
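A tiny runnable illustration of the two-keys-one-value idea from the answer above; the XmlObject class and its attributes are hypothetical stand-ins for the question's XML objects.

    class XmlObject(object):
        def __init__(self, name, line_number):
            self.name = name
            self.line_number = line_number
            self.elements = {}

    root = XmlObject("root", 1)
    child = XmlObject("child", 42)

    # two keys, one value: both entries reference the same object, not copies
    root.elements[child.name] = root.elements[child.line_number] = child
    assert root.elements["child"] is root.elements[42]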
0 | 0 | I wrote a Python app which transfers files via sockets to a server. It always works, but my question is: is that a good way to transfer files from a desktop client to a server via sockets? How, for example, do Google Drive or Dropbox desktop clients synchronize files (as far as I know, for already existing files the GD client sends only changes, like rsync), but what about new files? | false | 13,855,390 | 0.099668 | 0 | 0 | 1 | You may consider packaging the file into a torrent, and transferring it that way. Torrents have LOTS of error recovery. Many large companies, for example, Blizzard, use torrents to deliver content to their users.
You'll still need a way to transfer the torrent info of course
See python package libtorrent and the server software called opentracker
I've also done file transfer with sockets, which is fine if the internet connection is uninterrupted, and you don't want to parallel stream. | 0 | 1,435 | 0 | 1 | 2012-12-13T07:56:00.000 | python,sockets,python-2.7,sync,dropbox | Python file transfer | 1 | 1 | 2 | 30,990,211 | 0 |
1 | 0 | I want to parse HTML code in Python and have tried Beautiful Soup and pyquery already. The problem is that those parsers modify the original code, e.g. insert some tags, etc. Is there any parser out there that does not change the code?
I tried HTMLParser but no success! :(
It doesn't modify the code and just tells me where tags are placed. But it fails in parsing web pages like mail.live.com
Any idea how to parse a web page just like a browser? | true | 13,859,124 | 1.2 | 0 | 0 | 0 | No, at the moment there is no such HTML parser, and every parser has its own limitations. | 0 | 273 | 0 | 1 | 2012-12-13T11:44:00.000 | python,html,parsing | python html parser which doesn't modify actual markup? | 1 | 1 | 3 | 18,350,504 | 0
1 | 0 | I'm building a file hosting app that will store all client files within a folder on an S3 bucket. I then want to track the amount of usage on S3 recursively per top folder to charge back the cost of storage and bandwidth to each corresponding client.
Front-end is django but the solution can be python for obvious reasons.
Is it better to create a bucket per client programmatically?
If I do go with the approach of creating a bucket per client, is it then possible to get the cost of cloudfront exposure of the bucket if enabled? | false | 13,873,119 | 0 | 0 | 1 | 0 | No its not possible to create a bucket for each user as Amazon allows only 100 buckets per account. So unless you are sure not to have more than 100 users, it will be a very bad idea.
The ideal solution will be to remember each user's storage in you Django app itself in database. I guess you would be using S3 boto library for storing the files, than it returns the byte size after each upload. You can use that to store that.
There is also another way out, you could create many folders inside a bucket with each folder specific to an user. But still the best way to remember the storage usage in your app | 0 | 818 | 0 | 0 | 2012-12-14T05:20:00.000 | python,django,amazon-s3 | How can I track s3 bucket folder usage with python? | 1 | 1 | 2 | 13,892,252 | 0 |
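A hedged sketch of the "folder per client inside one bucket" variant, using the classic boto S3 API the answer assumes; the credentials, bucket name and prefix are placeholders.

    from boto.s3.connection import S3Connection

    conn = S3Connection("ACCESS_KEY", "SECRET_KEY")   # placeholder credentials
    bucket = conn.get_bucket("my-hosting-bucket")     # placeholder bucket name

    def client_usage_bytes(client_folder):
        # bucket.list() pages through keys lazily; summing key.size walks the whole prefix
        return sum(key.size for key in bucket.list(prefix=client_folder + "/"))

    print(client_usage_bytes("client-42"))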
1 | 0 | I have a use case where I need to fill in a form on a website but don't have access to an API. Currently we are using WebDriver along with a browser, but it gets very heavy and is not foolproof, as the process is asynchronous. Is there any way I can do it without a browser and also make the process synchronous by closely monitoring the pending requests?
CasperJS and HtmlUnitDriver seem to be some of the best options I have. Can someone explain their advantages or disadvantages in terms of maintenance, robustness, and weight?
I would need to navigate complex and many different types of webpages. Some of the webpages I would like to navigate are heavily JS driven.
Can Scrapy be used for this purpose? | false | 13,873,719 | 0.462117 | 0 | 0 | 5 | Use HtmlUnitDriver. For making it fail-proof you would have to make some changes accordingly, but it will work without a browser. | 0 | 811 | 0 | 3 | 2012-12-14T06:23:00.000 | python,selenium,scrapy,selenium-webdriver,phantomjs | Navigation utility without browser, light weight and fail-proof | 1 | 1 | 2 | 13,873,743 | 0
0 | 0 | I am trying to find a way to detect if a given URL has an RSS feed or not. Any suggestions? | false | 13,892,113 | 1 | 0 | 0 | 6 | Each RSS feed has a specific format.
See what Content-Type the server returns for the given URL. However, this may not be specific and a server may not necessarily return the correct header.
Try to parse the content of the URL as RSS and see if it is successful - this is likely the only definitive proof that a given URL is a RSS feed. | 0 | 812 | 0 | 0 | 2012-12-15T12:20:00.000 | python,rss | given a URL in python, how can i check if URL has RSS feed? | 1 | 1 | 1 | 13,892,148 | 0 |
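A hedged sketch of the "try to parse it" check above, using the feedparser package (not mentioned in the answer itself, so treat it as one possible implementation; the URL is a placeholder).

    import feedparser

    def looks_like_feed(url):
        parsed = feedparser.parse(url)
        # parsed.version is e.g. "rss20" or "atom10" when the content parsed as a feed;
        # parsed.bozo is set when the parser hit problems, which you may want to inspect too
        return bool(parsed.get("version"))

    print(looks_like_feed("http://example.com/feed"))   # placeholder URL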
0 | 0 | I want to generate a WSDL file for my REST web service. I also need to parse it in Python. How can I do this? | true | 13,909,225 | 1.2 | 0 | 0 | 3 | Errr?
WSDL usually refers to SOAP, which to my knowledge encapsulates the actual remote call protocol inside its own protocol and just happens to use HTTP as a transport.
REST usually refers to using HTTP methods appropriately, e.g. DELETE /frobnication/1 would delete it, PUT /frobnication/1 would completely replace the thing (resource) under that URL, and POST /frobnication/1 updates it (HTTP does have a few more methods).
REST doesn't usually have a WSDL though; IIRC, there is some talk about "commonly known entry points" (Google for that).
Vote me down but to me that question seems to mix up 2 completely different topics... | 0 | 800 | 0 | 1 | 2012-12-17T06:17:00.000 | python,rest,python-2.7 | How do I generate a WSDL file for my REST client? | 1 | 1 | 2 | 13,923,221 | 0 |
0 | 0 | Just the title: what's the difference between them?
In python, socket.gethostbyname(socket.gethostname()) and socket.gethostbyname(socket.getfqdn()) return different results on my computer. | true | 13,931,924 | 1.2 | 0 | 0 | 7 | From documentation,
socket.gethostname returns a string containing the hostname of the machine where the Python interpreter is currently executing.
socket.getfqdn returns a fully qualified domain name if it's available or gethostname otherwise.
Fully qualified domain name is a domain name that specifies its exact location in the tree hierarchy of the DNS. From wikipedia examples:
For example, given a device with a local hostname myhost and a parent
domain name example.com, the fully qualified domain name is
myhost.example.com. | 0 | 10,295 | 0 | 8 | 2012-12-18T11:26:00.000 | python,sockets | what's the difference between gethostname and getfqdn? | 1 | 3 | 3 | 13,932,042 | 0 |
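A quick way to see the difference on your own machine; the printed values depend entirely on how the hostname and DNS/hosts entries are configured, so the comments below are only examples.

    import socket

    print(socket.gethostname())                          # e.g. "myhost"
    print(socket.getfqdn())                              # e.g. "myhost.example.com" or "localhost.localdomain"
    print(socket.gethostbyname(socket.gethostname()))    # the address that name resolves to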
0 | 0 | Just the title: what's the difference between them?
In python, socket.gethostbyname(socket.gethostname()) and socket.gethostbyname(socket.getfqdn()) return different results on my computer. | false | 13,931,924 | 0 | 0 | 0 | 0 | The hostname is not the fully qualified domain name, hence why they return different results.
getfqdn() will return the fully qualified domain name while gethostname() will return the hostname. | 0 | 10,295 | 0 | 8 | 2012-12-18T11:26:00.000 | python,sockets | what's the difference between gethostname and getfqdn? | 1 | 3 | 3 | 13,931,968 | 0 |
0 | 0 | Just the title: what's the difference between them?
In python, socket.gethostbyname(socket.gethostname()) and socket.gethostbyname(socket.getfqdn()) return different results on my computer. | false | 13,931,924 | 0.26052 | 0 | 0 | 4 | Note that the selected reply above is quite confusing.
YES socket.getfqdn WILL return a full-qualified hostname. But if it's going to be 'localhost.localdomain' you probably actually want socket.gethostname instead so that you get something that is somewhat useable.
The difference is that one reads from /etc/hostname and /etc/domainname while the other reads the kernel nodename. Depending on your distribution, configuration, OS, etc. your mileage WILL vary.
What this means is that you generally want to first check socket.getfqdn, and verify if it returns 'localhost.localdomain'. if it does, use socket.gethostname instead.
Finally, python also has platform.node which is basically the same as socket.gethostname on python, though this might be a better choice for multiplatform code.
That's quite an important detail. | 0 | 10,295 | 0 | 8 | 2012-12-18T11:26:00.000 | python,sockets | what's the difference between gethostname and getfqdn? | 1 | 3 | 3 | 43,330,159 | 0 |
0 | 0 | I am trying to find a way to rename (change email address aka group id) a google group via api. Using the python client libraries and the provisioning api i am able to modify the group name and description, and I have used the group settings api to modify a group's settings. Is there a way to change the email address? | false | 13,937,326 | 0.197375 | 1 | 0 | 1 | There is no group rename function for groups as there is for users. With the Group Settings and Provisioning APIs though, you can capture much of the group specifics and migrate that over to a new group. You would lose:
-Group Archive
-Managers (show only as members)
-Email Delivery (Immediate, Digest, No-Delivery, etc) | 0 | 923 | 0 | 1 | 2012-12-18T16:28:00.000 | gdata-api,google-api-client,google-api-python-client,google-provisioning-api | Is it possible to change email address of a Google group via API? | 1 | 1 | 1 | 13,938,196 | 0 |
0 | 0 | I wonder how to rapidly update numbers on a website.
I have a machine that generates a lot of output, and I need to show it online. However, my problem is that the update frequency is high, and therefore I am not sure how to handle it.
It would be nice to show the last N numbers, say ten. The numbers are updated at 30Hz. That might be too much for the human eye, but the human eye is only for control here.
I wonder how to do this. A page reload would keep the browser continuously loading a page, and for a web page something more than just these numbers would need to be shown.
I might generate a raw web engine that writes the number to a page over a specific IP address and port number, but even then I wonder whether this page reloading would be too slow, giving a strange experience to the users.
How should I deal with such an extreme update rate of data on a website? Usually websites are not like that.
In the tags for this question I named the languages that I understand. In the end I will probably write in C#. | false | 13,938,903 | 0.132549 | 1 | 0 | 2 | a) WebSockets in conjuction with ajax to update only parts of the site would work, disadvantage: the clients infrastructure (proxies) must support those (which is currently not the case 99% of time).
b) With existing infrastructure the approach is Long Polling. You make an XmlHttpRequest using javascript. In case no data is present, the request is blocked on server side for say 5 to 10 seconds. In case data is avaiable, you immediately answer the request. The client then immediately sends a new request. I managed to get >500 updates per second using java client connecting via proxy, http to a webserver (real time stock data displayed).
You need to bundle several updates with each request in order to get enough throughput. | 0 | 114 | 0 | 2 | 2012-12-18T18:07:00.000 | c#,python,asp.net,web-services,perl | Rapid number updates on a website | 1 | 1 | 3 | 13,939,065 | 0 |
1 | 0 | We have a suite of selenium tests that on setup and teardown open and close the browser to start a new test.
This approach takes a long time for tests to run, as the opening and closing is slow. Is there any way to open the browser once in the constructor, then reset on setup and clean up on teardown, and then close the browser in the destructor?
Any example would be really appreciated. | false | 13,957,413 | 0.761594 | 1 | 0 | 5 | You can use class or module level setup and teardown methods instead of test level setup and teardown. Be careful with this though, as if you don't reset your test environment explicitly in each test, you have to handle cleaning everything out (cookies, history, etc) manually, and recovering the browser if it has crashed, before each test. | 0 | 601 | 0 | 1 | 2012-12-19T17:02:00.000 | python,selenium | Selenium test suite only open browser once | 1 | 1 | 1 | 13,957,461 | 0 |
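A minimal sketch of the class-level setup/teardown the answer above describes, using Python 2.7+ unittest with the Selenium bindings; the start page and the cookie-clearing reset are illustrative assumptions, not a complete cleanup strategy.

    import unittest
    from selenium import webdriver

    class ExampleTests(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            cls.driver = webdriver.Firefox()        # opened once for the whole class

        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()                       # closed once, after all tests have run

        def setUp(self):
            self.driver.delete_all_cookies()        # cheap per-test reset instead of a new browser
            self.driver.get("http://example.com")   # placeholder start page

        def test_page_has_title(self):
            self.assertNotEqual(self.driver.title, "")

    if __name__ == "__main__":
        unittest.main()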
1 | 0 | I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something.
Thanks for any input! | false | 13,960,897 | 0.158649 | 0 | 0 | 4 | "If you're running selenium tests against a Java application, then it makes sense to drive your tests with Java." This is untrue. It makes no difference what the web application is written in.
Personally I prefer Python because it's equally as powerful as other languages, such as Java, and far less verbose, making code maintenance less of a headache. However, whichever language you choose, don't write it as if you were programming in another language. For example, if you're writing in Python, don't write it as if you were using Java. | 0 | 12,904 | 0 | 6 | 2012-12-19T20:47:00.000 | java,python,selenium,webdriver | Selenium Webdriver with Java vs. Python | 1 | 5 | 5 | 35,039,909 | 0
1 | 0 | I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something.
Thanks for any input! | false | 13,960,897 | 0 | 0 | 0 | 0 | It really does not matter, not even the documentation. The Selenium library is not big at all.
Moreover, if you are good at development, you'll wrap Selenium in your own code and will never call driver.find(By.whatever(description)) directly. Also, you'd adopt some standards, and By.whatever will become By.xpath only.
Personally, I prefer Python, and the reason is that my other software tests use other Python libs -> this way I can unite my tests. | 0 | 12,904 | 0 | 6 | 2012-12-19T20:47:00.000 | java,python,selenium,webdriver | Selenium Webdriver with Java vs. Python | 1 | 5 | 5 | 13,992,922 | 0
1 | 0 | I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something.
Thanks for any input! | false | 13,960,897 | 0 | 0 | 0 | 0 | For me it's just a language preference. There are bindings for other languages, but I believe they communicate with Webdriver via some sort of socket interface. | 0 | 12,904 | 0 | 6 | 2012-12-19T20:47:00.000 | java,python,selenium,webdriver | Selenium Webdriver with Java vs. Python | 1 | 5 | 5 | 13,961,695 | 0 |
1 | 0 | I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something.
Thanks for any input! | false | 13,960,897 | 0.039979 | 0 | 0 | 1 | You've got it spot on: there is a ton of documentation for Java. All the new feature implementations are mostly explained with Java. Even Stack Overflow has a pretty strong community for Java + Selenium. | 0 | 12,904 | 0 | 6 | 2012-12-19T20:47:00.000 | java,python,selenium,webdriver | Selenium Webdriver with Java vs. Python | 1 | 5 | 5 | 13,961,645 | 0
1 | 0 | I'm wondering what the pros and cons are of using Selenium Webdriver with the python bindings versus Java. So far, it seems like going the java route has much better documentation. Other than that, it seems down to which language you prefer, but perhaps I'm missing something.
Thanks for any input! | true | 13,960,897 | 1.2 | 0 | 0 | 2 | Generally speaking, the Java selenium web driver is better documented. When I'm searching for help with a particular issue, I'm much more likely to find a Java discussion of my problem than a Python discussion.
Another thing to consider is, what language does the rest of your code base use? If you're running selenium tests against a Java application, then it makes sense to drive your tests with Java. | 0 | 12,904 | 0 | 6 | 2012-12-19T20:47:00.000 | java,python,selenium,webdriver | Selenium Webdriver with Java vs. Python | 1 | 5 | 5 | 13,961,711 | 0 |
1 | 0 | So I have been trying to figure out how to use BeautifulSoup and did a quick search and found that lxml can parse the xpath of an HTML page. I would LOVE it if I could do that, but the tutorial isn't that intuitive.
I know how to use Firebug to grab the xpath and was curious if anyone has used lxml and can explain how I can use it to parse specific xpaths and print them, say 5 per line, or if that's even possible?!
Selenium is using Chrome and loads the page properly, just need help moving forward.
Thanks! | false | 13,965,403 | 0 | 0 | 0 | 0 | I prefer to use lxml, because the efficiency of lxml is much higher than Selenium for extracting large numbers of elements. You can use Selenium to get the source of web pages and parse the source with lxml's xpath instead of the native find_elements_by_xpath in Selenium. | 0 | 1,853 | 0 | 1 | 2012-12-20T04:37:00.000 | python,parsing,selenium,lxml,xpath | Can I parse xpath using python, selenium and lxml? | 1 | 1 | 2 | 40,276,743 | 0
1 | 0 | I am trying to crawl a forum website with scrapy.
The crawler works fine if I have
CONCURRENT_REQUESTS = 1
But if I increase that number then I get this error
2012-12-21 05:04:36+0800 [working] DEBUG: Retrying http://www.example.com/profile.php?id=1580> (failed 1 times): 503
Service Unavailable
I want to know if the forum is blocking the request or if there is some settings problem. | true | 13,965,684 | 1.2 | 0 | 0 | 8 | HTTP status code 503, "Service Unavailable", means that (for some reason) the server wasn't able to process your request. It's usually a transient error. If you want to know whether you have been blocked, just try again in a little while and see what happens.
It could also mean that you're fetching pages too quickly. The fix is to not do this: keep concurrent requests at 1 (and possibly add a delay). Be polite.
And you will encounter various errors if you scrape enough. Just make sure that your crawler can handle them. | 0 | 14,588 | 0 | 5 | 2012-12-20T05:09:00.000 | python,scrapy | Getting service unavailable error in scrapy crawling | 1 | 1 | 2 | 13,967,095 | 0
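In Scrapy, the "be polite" advice above usually ends up as project settings like these; the exact values are illustrative, not a recommendation for this particular forum.

    # settings.py: throttle the crawl instead of hammering the forum
    CONCURRENT_REQUESTS = 2                   # keep parallelism low
    DOWNLOAD_DELAY = 1.0                      # pause between requests to the same site
    RETRY_ENABLED = True
    RETRY_TIMES = 3                           # transient 503s get retried a few times
    RETRY_HTTP_CODES = [500, 502, 503, 504]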
1 | 0 | Is there some way of detecting, using JavaScript, that a user has switched to a different tab in the same browser window?
Additionally, is there a way to detect that a user has switched to a different window than the browser?
thank you | true | 13,989,737 | 1.2 | 0 | 0 | 2 | Trap the window.onblur event.
It's raised whenever the current window (or tab) loses focus. | 0 | 3,555 | 0 | 6 | 2012-12-21T11:56:00.000 | javascript,browser,python-idle | Detect change of browser tabs with javascript | 1 | 2 | 3 | 13,989,768 | 0 |
1 | 0 | Is there some way of detecting, using JavaScript, that a user has switched to a different tab in the same browser window?
Additionally, is there a way to detect that a user has switched to a different window than the browser?
thank you | false | 13,989,737 | 0.066568 | 0 | 0 | 1 | Most probably there is no standard JavaScript for this. Some browsers might support it, but normally there is only a window.onblur event to find out that the user has gone away from the current window. | 0 | 3,555 | 0 | 6 | 2012-12-21T11:56:00.000 | javascript,browser,python-idle | Detect change of browser tabs with javascript | 1 | 2 | 3 | 13,989,766 | 0
0 | 0 | I am using the Twitter Streaming API to get tweets from a specific query.
However, some tweets come back with a different encoding (there are boxes instead of words).
Is there any way to fix it? | false | 13,991,387 | 0.197375 | 1 | 0 | 1 | Use a different font, or a better method of displaying those.
All tweets in the streaming API are encoded with the same codec (JSON data is fully unicode aware), but not all characters can be displayed by all fonts. | 0 | 228 | 0 | 0 | 2012-12-21T13:51:00.000 | python,twitter | Tweets from Twitter Streaming API | 1 | 1 | 1 | 13,991,409 | 0 |
0 | 0 | I am trying to build a chat server that handles multiple clients. I am trying to handle each connected client on a new thread. The problem is that I am really confused on how to forward the message received from a client to the intended receiver. I mean client-1 to client-5. I am very new to socket programming. So any kind of help is appreciated. | true | 14,008,339 | 1.2 | 0 | 0 | 1 | Here's a pseudo-design for your server. I'll speak in programming language agnostic terms.
Have a "global hash table" that maps "client id numbers" to the corresponding "socket" (and any other client data). Any access to this hash table is guarded with a mutex.
Every time you accept a new connection, spin up a thread. I'm assuming there's something in your chat protocol where a client identifies himself, gets a client id number assigned, and gets added to the session. The first thing the thread does is adds the socket for this client connection to the hash table.
Whenever a message comes in (e.g. from client 1 to client 5), lookup "client 5" in the hash table to obtain its socket. Forward the message on this socket.
There are a few race conditions to work out, but that should be a decent enough design.
Of course, if you really want to scale, you wouldn't do the "thread per connection" approach. But if you are limited to about 100 or less clients simultaneously connected, you'll be ok. After that, you should consider a single-threaded approach using non-blocking i/o. | 1 | 1,730 | 0 | 0 | 2012-12-23T03:19:00.000 | python,multithreading,sockets,client-server | Python: multiple client threaded chat server | 1 | 1 | 1 | 14,010,119 | 0 |
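A stripped-down sketch of the design above: a lock-guarded dict maps client ids to sockets, and a per-client thread forwards "dest_id:message" lines. Framing, error handling and the id-assignment handshake from the answer are left out, and the port and toy protocol are assumptions.

    import socket
    import threading

    clients = {}                     # client id -> connected socket
    clients_lock = threading.Lock()  # guards every access to the table

    def forward(sender_id, dest_id, text):
        with clients_lock:
            dest = clients.get(dest_id)
        if dest is not None:
            dest.sendall(("%s: %s\n" % (sender_id, text)).encode("utf-8"))

    def handle(client_id, conn):
        with clients_lock:
            clients[client_id] = conn
        try:
            for line in conn.makefile():             # toy protocol: "dest_id:message" per line
                dest_id, _, text = line.partition(":")
                forward(client_id, dest_id.strip(), text.strip())
        finally:
            with clients_lock:
                clients.pop(client_id, None)
            conn.close()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", 9000))
    server.listen(5)
    next_id = 0
    while True:
        conn, _addr = server.accept()
        next_id += 1
        threading.Thread(target=handle, args=(str(next_id), conn)).start()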
0 | 0 | In Python how can I download a bunch of files quickly? urllib.urlretrieve() is very slow, and I'm not very sure how to go about this.
I have a list of 15-20 files to download, and it takes forever just to download one. Each file is about 2-4 mb.
I have never done this before, and I'm not really sure where I should start. Should I use threading and download a few at a time? Or should I use threading to download pieces of each file, but one file at a time, or should I even be using threading? | true | 14,014,498 | 1.2 | 0 | 0 | 1 | urllib.urlretrieve() is very slow
Really? If you've got 15-20 files of 2-4 MB each, then I'd just line 'em up and download 'em. The bottleneck is going to be the bandwidth of your server and yourself. So IMHO, it's hardly worth threading or trying anything clever in this case... | 1 | 6,891 | 0 | 2 | 2012-12-23T20:33:00.000 | python,multithreading,python-2.7,download,urllib | Python: Download multiple files quickly | 1 | 1 | 4 | 14,014,516 | 0
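Following the answer above, the sequential loop below (Python 2, urllib as in the question) is usually enough for 15-20 small files; the parallel variant is easy to try but rarely helps if bandwidth is the bottleneck. URLs are placeholders.

    import os
    import threading
    import urllib

    urls = ["http://example.com/file1.zip",            # placeholder URLs
            "http://example.com/file2.zip"]

    # simple and usually sufficient: one file after another
    for url in urls:
        urllib.urlretrieve(url, os.path.basename(url))

    # optional: a few downloads in parallel (one thread per file)
    threads = [threading.Thread(target=urllib.urlretrieve,
                                args=(u, os.path.basename(u))) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()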
0 | 0 | One of Python's features is the pickle function, that allows you to store any arbitrary anything, and restore it exactly to its original form. One common usage is to take a fully instantiated object and pickle it for later use. In my case I have an AMQP Message object that is not serializable and I want to be able to store it in a session store and retrieve it which I can do with pickle. The primary difference is that I need to call a method on the object, I am not just looking for the data.
But this project is in nodejs and it seems like with all of node's low-level libraries there must be some way to save this object, so that it could persist between web calls.
The use case is that a web page picks up a RabbitMQ message and displays the info derived from it. I don't want to acknowledge the message until the message has been acted on. I would just normally just save the data in session state, but that's not an option unless I can somehow save it in its original form. | true | 14,100,093 | 1.2 | 0 | 0 | 3 | As far as I am aware, there isn't an equivalent to pickle in JavaScript (or in the standard node libraries). | 1 | 6,428 | 0 | 15 | 2012-12-31T09:54:00.000 | python,node.js,amqp | What would be the equivalent of Pythons "pickle" in nodejs | 1 | 1 | 3 | 14,100,129 | 0 |
0 | 0 | I've spent a few days on and off trying to get some hard statistics on what kind of performance you can expect from using the HTTPServer and/or TCPServer built-in libraries in Python.
I was wondering if anyone can give me any ideas as to how either/or would handle serving HTTP requests, whether they would be able to hold up in production environments or in situations with high traffic, and whether anyone had any tips or clues that would improve performance in these situations. (Assuming that there is no access to external libraries like Twisted etc.)
Thanks. | false | 14,111,460 | 0.379949 | 0 | 0 | 4 | Neither of those built-in libraries was meant for serious production use. Get real implementations, for example, from Twisted, or Tornado, or gunicorn, etc, etc, there are lots of them. There's no need to stick with the standard library modules.
The performance, and probably the robustness of the built-in libraries is poor. | 0 | 2,663 | 1 | 4 | 2013-01-01T14:58:00.000 | python,performance,tcpserver,httpserver | Performance of Python HTTPServer and TCPServer | 1 | 1 | 2 | 14,111,484 | 0 |
0 | 0 | I am using hosted Exchange Microsoft Office 365 email and I have a Python script that sends email with smtplib. It is working very well. But there is one issue: how can I get the emails to show up in my Outlook Sent Items? | false | 14,153,954 | 0.379949 | 1 | 0 | 2 | You can send a copy of that email to yourself, with some header that tags the email as sent by yourself, then get another script (maybe using an IMAP library) to move the email to the Outlook Sent folder. | 0 | 1,550 | 0 | 4 | 2013-01-04T08:57:00.000 | python,outlook,smtplib | How can I see emails sent with Python's smtplib in my Outlook Sent Items folder? | 1 | 1 | 1 | 14,154,176 | 0
0 | 0 | Using the Selenium Python bindings, is it possible to start the RemoteWebDriver server separately from creating a webdriver.Remote instance? The point of doing this would be to save time spent repeatedly starting and stopping the server when all I really need is a new instance of the client. (This is possible with ChromeDriver.) | true | 14,160,679 | 1.2 | 0 | 0 | 1 | the server is started independently. creating an instance of webdriver.Remote does not start the server. | 0 | 205 | 0 | 0 | 2013-01-04T16:11:00.000 | python,selenium,webdriver | Start `RemoteWebDriver` server separately from creating a `webdriver.Remote` instance? | 1 | 1 | 1 | 14,161,598 | 0 |
0 | 0 | Are there problems with sharing a single instance of RemoteWebDriver between multiple test cases? If not, what's the best practice place to create the instance? I'm working with Python, so I think my options are module level setup, test case class setup, test case instance setup (any others?) | true | 14,161,479 | 1.2 | 1 | 0 | 1 | Sharing a single RemoteWebDriver can be dangerous, since your tests are no longer independently self-contained. You have to be careful about cleaning up browser state and the like, and recovering from browser crashes in the event a previous test has crashed the browser. You'll also probably have more problems if you ever try to do anything distributed across multiple threads, processes, or machines. That said, the options you have for controlling this are not dependent on Selenium itself, but whatever code or framework you are using to drive it. At least with Nose, and I think basic pyunit, you can have setup routines at the class, module, or package level, and they can be configured to run for each test, each class, each module, or each package, if memory serves. | 0 | 249 | 0 | 0 | 2013-01-04T17:03:00.000 | python,selenium,testcase | Share single instance of selenium RemoteWebDriver between multiple test cases | 1 | 1 | 1 | 14,161,887 | 0 |
0 | 0 | My internet connection has an issue. Around 50% of the time, web pages don't load because the DNS lookup fails. Just reloading the page works, and I am able to browse like that.
However, I am also using a REST API service for my project. When I run the program, it keeps calling this web service repeatedly, hundreds of times. Because of my issue, I can at most connect successfully 3-4 times (when I am lucky), and then ultimately I get a connection error - "Max number of retries exceeded".
I was exploring my options when I came across the Keep-Alive property in the Requests module. It's automatic, and I can't force it to work.
How do I get this working?
P.S. - I know fixing my internet connection issue will solve it, but I am moving in a week, so I dont want to waste time here. Also need to complete my project, so please helppppp!! | true | 14,163,949 | 1.2 | 0 | 0 | 1 | You could try setting up your application or operating system to use a known good DNS server like 8.8.8.8
EDIT: You can also bypass DNS by adding the host name and IP address of the REST service to your hosts file. | 0 | 524 | 0 | 0 | 2013-01-04T19:52:00.000 | http,rest,python-2.7,keep-alive,python-requests | Using Keep Alive in Requests module - Python 2.7 | 1 | 1 | 1 | 14,171,479 | 0
1 | 0 | Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with. | false | 14,177,436 | 0 | 0 | 0 | 0 | Using AWS CLI,
aws s3 ls s3://*bucketname* --region *bucket-region* --no-sign-request | 0 | 1,214 | 0 | 3 | 2013-01-05T23:11:00.000 | python-3.x,amazon-web-services,amazon-s3 | Get publicly accessible contents of S3 bucket without AWS credentials | 1 | 2 | 2 | 70,791,596 | 0 |
1 | 0 | Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with. | false | 14,177,436 | 0.379949 | 0 | 0 | 4 | If the bucket's permissions allow Everyone to list it, you can just do a simple HTTP GET request to http://s3.amazonaws.com/bucketname with no credentials. The response will be XML with everything in it, whether those objects are accessible by Everyone or not. I don't know if boto has an option to make this request without credentials. If not, you'll have to use lower-level HTTP and XML libraries.
If the bucket itself does not allow Everyone to list it, there is no way to get a list of its contents, even if some of the objects in it are publicly accessible. | 0 | 1,214 | 0 | 3 | 2013-01-05T23:11:00.000 | python-3.x,amazon-web-services,amazon-s3 | Get publicly accessible contents of S3 bucket without AWS credentials | 1 | 2 | 2 | 14,199,730 | 0 |
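A hedged Python 3 sketch of the unauthenticated GET described above; the bucket name is a placeholder, the XML namespace is the one S3 has historically used for ListBucket results, and only the first page of results (up to 1000 keys) is shown.

    import urllib.request
    import xml.etree.ElementTree as ET

    bucket = "my-public-bucket"     # placeholder bucket name
    xml_data = urllib.request.urlopen("http://s3.amazonaws.com/" + bucket).read()

    ns = "{http://s3.amazonaws.com/doc/2006-03-01/}"    # assumed ListBucket result namespace
    root = ET.fromstring(xml_data)
    for contents in root.iter(ns + "Contents"):         # first 1000 keys only; real code must paginate
        print(contents.find(ns + "Key").text, contents.find(ns + "Size").text)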
0 | 0 | Can I have any kind of highlighting using Python 2.7? Say, when my script is clicking on the submit button, feeding data into the text field or selecting values from the drop-down field, just highlight that element to show the script runner that his/her script is doing what he/she wants.
EDIT
I am using selenium-webdriver with python to automate some web based work on a third party application.
Thanks | false | 14,241,239 | 0.099668 | 0 | 0 | 1 | [NOTE: I'm leaving this answer for historical purposes but readers should note that the original question has changed from concerning itself with Python to concerning itself with Selenium]
Assuming you're talking about a browser based application being served from a Python back-end server (and it's just a guess since there's no information in your post):
If you are constructing a response in your Python back-end, wrap the stuff that you want to highlight in a <span> tag and set a class on the span tag. Then, in your CSS define that class with whatever highlighting properties you want to use.
However, if you want to accomplish this highlighting in an already-loaded browser page without generating new HTML on the back end and returning that to the browser, then Python (on the server) has no knowledge of or ability to affect the web page in browser. You must accomplish this using Javascript or a Javascript library or framework in the browser. | 0 | 909 | 0 | 1 | 2013-01-09T16:01:00.000 | python,html,selenium-webdriver,highlighting | Can selenium be used to highlight sections of a web page? | 1 | 2 | 2 | 14,241,317 | 0 |
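For the Selenium case in the question's EDIT, the usual trick is to inject a style with execute_script right before interacting with the element; a hedged sketch in which the URL, locator and outline style are placeholders.

    from selenium import webdriver

    def highlight(driver, element, color="red"):
        # draw a temporary outline around the element the script is about to use
        driver.execute_script(
            "arguments[0].style.outline = '3px solid %s';" % color, element)

    driver = webdriver.Firefox()
    driver.get("http://example.com/form")                 # placeholder URL
    field = driver.find_element_by_name("username")       # placeholder locator
    highlight(driver, field)
    field.send_keys("demo user")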
0 | 0 | Can I have any kind of highlighting using Python 2.7? Say, when my script is clicking on the submit button, feeding data into the text field or selecting values from the drop-down field, just highlight that element to show the script runner that his/her script is doing what he/she wants.
EDIT
I am using selenium-webdriver with python to automate some web based work on a third party application.
Thanks | false | 14,241,239 | 0.291313 | 0 | 0 | 3 | This is something you need to do with javascript, not python. | 0 | 909 | 0 | 1 | 2013-01-09T16:01:00.000 | python,html,selenium-webdriver,highlighting | Can selenium be used to highlight sections of a web page? | 1 | 2 | 2 | 14,241,261 | 0 |