Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, 28 to 6.1k chars) | is_accepted (bool, 2 classes) | Q_Id (int64, 337 to 51.9M) | Score (float64, -1 to 1.2) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Users Score (int64, -8 to 412) | Answer (string, 14 to 7k chars) | Python Basics and Environment (int64, 0 to 1) | ViewCount (int64, 13 to 1.34M) | System Administration and DevOps (int64, 0 to 1) | Q_Score (int64, 0 to 1.53k) | CreationDate (string, 23 chars) | Tags (string, 6 to 90 chars) | Title (string, 15 to 149 chars) | Networking and APIs (int64, 1 to 1) | Available Count (int64, 1 to 12) | AnswerCount (int64, 1 to 28) | A_Id (int64, 635 to 72.5M) | GUI and Desktop Applications (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | I'm writing a web app in Python/web.py where you go to the url /unfriend to unfriend someone. This link is spread out across multiple pages. After being unfriended, I would like the user to be redirected to the page they came from. Can I rely on HTTP_REFERER to implement this behavior? I don't want to have to add a parameter to the url. | false | 19,330,905 | 0.099668 | 0 | 0 | 1 | thefourtheye is right that you can't rely on REFERER.
But that doesn't mean you can't use it.
As a security measure (e.g., to prevent deep linking), it's laughably useless.
But for convenience features, there's nothing wrong with it. Assume, say, a third of your users won't supply it. Is your navigation good enough without it? Is the benefit in making things a little nicer for 2 in 3 users worth it? Sometimes the answer is yes.
Keep in mind that some proxies or user agents will intentionally send you garbage. If the REFERER is the same as the current page, or is not part of your app at all, don't use it.
Also, ask yourself whether what you really want here is a redirect to REFERER, or JS window.history.back(). The former is a poor substitute for the latter if that's what you're intending it for (although it can occasionally be useful as a fallback for people who can't run JS). | 0 | 280 | 0 | 0 | 2013-10-12T05:29:00.000 | python,web.py,http-referer | Is it wise to rely on HTTP_REFERER in a web app? | 1 | 1 | 2 | 19,331,922 | 0 |
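A minimal web.py sketch of the fallback approach described in the answer above. The /unfriend handler and the /friends fallback URL are hypothetical, and the referer is only trusted when it points back into the app and is not the /unfriend URL itself:

```python
import web

urls = ('/unfriend', 'Unfriend')
app = web.application(urls, globals())

class Unfriend(object):
    def GET(self):
        # ... perform the unfriend action here ...
        referer = web.ctx.env.get('HTTP_REFERER', '')
        # Trust the header only when it points back into this app
        # and is not the /unfriend URL itself.
        if referer.startswith(web.ctx.home) and '/unfriend' not in referer:
            raise web.seeother(referer)
        # Fallback for the users (roughly 1 in 3) who do not send a referer.
        raise web.seeother('/friends')

if __name__ == '__main__':
    app.run()
```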
1 | 0 | Function seeother() and redirect() in web.py is no use. I try to use
web.header('Location', 'www.google.com')
web.header('status', '301')
or
web.HTTPError('301', {'Location': 'www.google.com'})
but still redirect to:
http://127.0.0.1:80/www.google.com
which is not what we want:
http://google.com
How to redirect correctly? | false | 19,360,323 | 0 | 0 | 0 | 0 | which is not what we want:
http://google.com
So why do you then redirect to www.google.com instead of http://google.com? | 0 | 384 | 0 | 0 | 2013-10-14T12:31:00.000 | python,http,web,web.py | In web.py, How to 301 redirect to another domain? | 1 | 1 | 2 | 19,360,420 | 0 |
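A hedged sketch of the fix: web.py treats a bare "www.google.com" as a relative path and joins it onto the current host, so the scheme has to be included. This assumes web.redirect issues a 301 and web.seeother a 303 in your web.py version:

```python
import web

class ToGoogle(object):
    def GET(self):
        # A bare "www.google.com" is joined onto the current host as a
        # relative path, hence http://127.0.0.1:80/www.google.com.
        # Give the full absolute URL instead:
        raise web.redirect('http://www.google.com/')    # 301 Moved Permanently
        # raise web.seeother('http://www.google.com/')  # 303, if a 301 is not required
```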
1 | 0 | I am parsing an HTML webpage with Python and Beautiful Soup (I am open to other solutions, though). I am wondering if it is possible to parse the file based on a line of HTML, i.e., get the td tag from line 3. Is this possible? | true | 19,393,524 | 1.2 | 0 | 0 | 1 | Consider this example: http://www.pythonforbeginners.com/python-on-the-web/web-scraping-with-beautifulsoup/ It shows line-by-line processing and matching of href attributes (you need td instead).
Additionally, consider: soup.find_all("td", limit=3) | 0 | 378 | 0 | 0 | 2013-10-16T00:50:00.000 | python,html,parsing,beautifulsoup | Parse HTML by line | 1 | 1 | 1 | 19,393,621 | 0 |
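A small sketch of the find_all suggestion above, using bs4 with made-up sample HTML. limit=3 stops the search after three td tags, and the last element of that list is the one from the third row:

```python
from bs4 import BeautifulSoup

html = """<table>
  <tr><td>row 1</td></tr>
  <tr><td>row 2</td></tr>
  <tr><td>row 3</td></tr>
  <tr><td>row 4</td></tr>
</table>"""

soup = BeautifulSoup(html, 'html.parser')

# Stop searching after the first three <td> tags...
first_three = soup.find_all('td', limit=3)
# ...and take the last of them, i.e. the <td> on the third row.
print(first_three[-1].get_text())   # -> "row 3"
```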
1 | 0 | This is for users in Google Apps for Business or Education.
Using the Reports API in the Admin SDK I can see when the admin changed another user's password, but I cannot find an API that will tell me when a user has changed their password. I have changed mine twice in the past couple days and it does not come up in Reports API any where. Thanks for the help. | true | 19,434,458 | 1.2 | 0 | 0 | 0 | You cannot, that's the answer. | 0 | 1,254 | 0 | 1 | 2013-10-17T18:29:00.000 | google-api,google-apps,google-api-python-client,google-admin-sdk,google-apps-for-education | How do I find the time the user changed their password using the Google APIs? | 1 | 1 | 1 | 19,525,414 | 0 |
0 | 0 | In python I can get some rudimentary documentation for any object using help(<object>). But to be able to search the documentation, I have to go online. This isn't really helpful if I'm somewhere where the internet isn't accessible.
In R, there is a handy double question mark feature (??<topic>) that allows me to search through the documentation of all installed libraries for any function that includes <topic> in its name or documentation string. Is there anything similar for python? Perhaps even just for loaded objects? | false | 19,441,031 | 0.044415 | 0 | 0 | 2 | Windows Idle - F1 from shell window or editing window gets you a windows help file of all the docs. I think it's better than the online version - it's easier to find stuff. | 1 | 17,711 | 0 | 40 | 2013-10-18T03:20:00.000 | python,offline | Search python docs offline? | 1 | 3 | 9 | 19,441,388 | 0 |
0 | 0 | In python I can get some rudimentary documentation for any object using help(<object>). But to be able to search the documentation, I have to go online. This isn't really helpful if I'm somewhere where the internet isn't accessible.
In R, there is a handy double question mark feature (??<topic>) that allows me to search through the documentation of all installed libraries for any function that includes <topic> in its name or documentation string. Is there anything similar for python? Perhaps even just for loaded objects? | false | 19,441,031 | 1 | 0 | 0 | 29 | pydoc comes with python and can do searches but only in the synopsis lines of available modules. Quoting pydoc --help:
pydoc -k
Search for a keyword in the synopsis lines of all available modules.
Note that from within pydoc you can perform searches using "/". | 1 | 17,711 | 0 | 40 | 2013-10-18T03:20:00.000 | python,offline | Search python docs offline? | 1 | 3 | 9 | 20,266,169 | 0 |
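For reference, the same keyword search can also be run without leaving the interpreter; pydoc.apropos() is the programmatic counterpart of pydoc -k:

```python
import pydoc

# Prints every importable module whose one-line synopsis mentions "json",
# same as running `python -m pydoc -k json` from a shell.
pydoc.apropos('json')
```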
0 | 0 | In python I can get some rudimentary documentation for any object using help(<object>). But to be able to search the documentation, I have to go online. This isn't really helpful if I'm somewhere where the internet isn't accessible.
In R, there is a handy double question mark feature (??<topic>) that allows me to search through the documentation of all installed libraries for any function that includes <topic> in its name or documentation string. Is there anything similar for python? Perhaps even just for loaded objects? | false | 19,441,031 | 0.044415 | 0 | 0 | 2 | Although there are certainly better documentations built into your computer than help() like windows idle, another option for some of the more common topics would just be to save some of the online documentation to your computer. For the modules you use a lot and want to access offline, you could just download a text file version of the official online python documentation, which is the best place to get documentation. (file > save page as > select .txt file format) | 1 | 17,711 | 0 | 40 | 2013-10-18T03:20:00.000 | python,offline | Search python docs offline? | 1 | 3 | 9 | 20,134,465 | 0 |
0 | 0 | I'm looking to create some kind of a gateway that sits between a server and a client.
The so called gateway is supposed to filter some packets sent by the client and forward 99% of them.
Here are my questions:
Client opens a new socket to the gateway, i open a socket to the server from the gateway and store it in a list for further use. However, in one situation, all the connections will come from the same IP, thus leaving me with limited options on choosing the socket that should forward the packet to the server. How can i differentiate between opened sockets?
From previous situations, i'm expecting about 500 clients sending a packet every second. Performance wise, should i use a multithread model, or stick with a single thread application?
Still a performance question :I have to choose between C# and Python. Which one should give better performance? | false | 19,470,798 | 0.197375 | 0 | 0 | 1 | Socket addresses are a host and port, not just a host. And different connections from the same host will always have different ports. (I'm assuming TCP, not UDP, here.)
Or, even more simply, you can just compare the file descriptors (or, in Python, the socket objects themselves) instead of their peer addresses.
Meanwhile, for performance, on most platforms, 500 is nearing the limits of what you can do with threads, but not yet over the limits, so you could do it that way. But I think you'll be better off with a single-threaded reactor or a proactor with a small thread pool. Especially if you can use a preexisting framework like twisted or gevents to do the hard part for you.
As for the language, for just forwarding or filtering packets, the performance of your socket multiplexing will be all that matters, so there's no problem using Python. Pick a framework you like from either language, and use that language.
Some last side comments: you do realize that TCP packets aren't going to match up with messages in your higher level protocol(s), right? Also, this whole thing would probably be a lot easier to do, and more efficient, with a Linux or BSD box set up as a router so you don't have to write anything but the filters. | 0 | 1,277 | 0 | 1 | 2013-10-19T20:28:00.000 | c#,python,sockets | Sockets packet forwarding | 1 | 1 | 1 | 19,470,856 | 0 |
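A minimal sketch of the accept-and-pair idea from the answer above (the backend host and ports are hypothetical). Each accepted connection comes with its own (host, port) peer address, and the client socket object itself is a perfectly good dictionary key, so several clients from the same IP are not a problem. The forwarding/select loop is left out:

```python
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('0.0.0.0', 9000))
listener.listen(128)

upstreams = {}   # client socket -> dedicated socket to the real server

while True:
    client, peer = listener.accept()   # peer == (host, port); the port differs
                                       # for every connection, even from one IP
    upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    upstream.connect(('backend.example.com', 9001))   # hypothetical backend
    upstreams[client] = upstream
    # ...hand (client, upstream) to a select/epoll loop or a worker here...
```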
0 | 0 | I wrote a program to login to a website and do some automatic stuff (by making HTTP requests).
Most of these automatic stuff requires the program/session is in logged-in state (if the cookie expires, the program/session can not be considered in logged-in state), so I am implementing a isLoggedIn function to test it.
My current approach is to get a page that is only available after login, but this requires an HTTP request and the transfer of a web page, so it is not very fast. What are other possible solutions?
Any lead will be appreciated!
Thank you very much! | false | 19,472,177 | 0 | 1 | 0 | 0 | As you mentioned yourself, the logged-in state is most of the time determined by validity of specific cookie. You can check if the cookie is still valid by comparing it's expiry time with the current time. Alternatively you can make HEAD request which allows you to get cookies (and other headers) without the need to download page body. However you have to check if the site handles HEAD requests for this to work. | 0 | 47 | 0 | 0 | 2013-10-19T23:16:00.000 | python,http,session,login,urllib2 | What is the fastest way to test whether logged in or not when use python to login to a website? | 1 | 1 | 1 | 19,481,620 | 0 |
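A hedged sketch combining both ideas from the answer above: check cookie expiry locally first, and fall back to a HEAD request so no page body is transferred. The probe URL and the "200 means logged in" convention are assumptions about the target site, and the site must actually support HEAD:

```python
import time
import requests

def is_logged_in(session, probe_url):
    # 1. Cheap local check: has the session cookie already expired?
    for cookie in session.cookies:
        if cookie.expires is not None and cookie.expires < time.time():
            return False
    # 2. HEAD request: headers only, no page body is transferred.
    response = session.head(probe_url, allow_redirects=False)
    return response.status_code == 200   # e.g. a 302 to /login would mean "no"

session = requests.Session()
# session.post('http://example.com/login', data={...})  # login happens elsewhere
print(is_logged_in(session, 'http://example.com/account'))
```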
1 | 0 | I'm trying to parse the HTML of a page with infinite scrolling. I want to load all of the content so that I can parse it all. I'm using Python. Any hints? | true | 19,512,250 | 1.2 | 0 | 0 | 1 | Those pages update their html with AJAX. Usually you just need to find the new AJAX requests send by browser, guess the meaning of the AJAX url parameters and fetch the data from the API.
API servers may validate the user agent, referer, cookie, oauth_token ... of the AJAX request, keep an eye on them. | 0 | 1,165 | 0 | 1 | 2013-10-22T08:01:00.000 | python,infinite-scroll | Parse HTML Infinite Scroll | 1 | 1 | 2 | 19,512,436 | 0 |
0 | 0 | Specifically I'm interested in using a DynamoDB table object from multiple threads (puts, gets, updates, etc). If that's not safe, then is there a safe way (i.e., maybe one table object per thread)? Any other gotchas or tips about working with threads in boto appreciated. | true | 19,523,680 | 1.2 | 0 | 0 | 14 | The boto library uses httplib which has never been, and to my knowledge still is not, thread-safe. The workaround is to make sure each thread creates its own connection to DynamoDB and you should be good. | 1 | 3,394 | 0 | 13 | 2013-10-22T16:43:00.000 | python,thread-safety,boto,amazon-dynamodb | Is boto library thread-safe? | 1 | 1 | 1 | 19,542,645 | 0 |
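One way to apply that advice is a thread-local connection cache, sketched below. boto.connect_dynamodb() is the boto-2-era call and picks up credentials from the environment or the boto config; treat the exact connect call as an assumption for your boto version:

```python
import threading
import boto

_local = threading.local()

def dynamo():
    """Return a DynamoDB connection private to the calling thread."""
    if not hasattr(_local, 'conn'):
        # Each thread builds its own connection, so the non-thread-safe
        # httplib machinery underneath is never shared between threads.
        _local.conn = boto.connect_dynamodb()
    return _local.conn

# Worker threads then call, e.g., dynamo().get_table('my_table') and use it freely.
```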
0 | 0 | I'm trying to download files using python 3. I use webbrowser.open_new(url) to open file locations. some files are downloaded automatically by chrome's downloader, and some are just opened in a chorme window. How can I choose between the options? | false | 19,528,277 | 0 | 0 | 0 | 0 | The Web server on which the file is hosted sends a header that suggests to the browser how it might handle the file, and the user's preferences hold some sway as well. You likely won't be able to override it easily.
You can avoid this by not using a Web browser from Python. urllib2 or better yet, the third-party requests module is a much easier way to talk to the Web. | 0 | 86 | 0 | 0 | 2013-10-22T21:01:00.000 | python,google-chrome,python-3.x | using google chrome's downloader - python 3 | 1 | 1 | 2 | 19,528,475 | 0 |
1 | 0 | I recently updated from OS X 10.8 to OS X 10.9. It looks like the upgrade removed the previous version of Python and all the libraries that were installed. I am trying to re-install Scrapy on OS X 10.9 but I keep getting an error using both pip and easy_install.
Here is the error message I am encountering.
/private/tmp/pip_build_root/lxml/src/lxml/includes/etree_defs.h:9:10: fatal error: 'libxml/xmlversion.h' file not found
#include "libxml/xmlversion.h"
^
1 error generated.
error: command 'cc' failed with exit status 1
Does anyone know to resolve this or have a work around? | false | 19,545,847 | 0.379949 | 0 | 0 | 2 | Some questions from osx 10.9. try xcode-select --install | 0 | 1,390 | 0 | 2 | 2013-10-23T15:22:00.000 | python,macos,scrapy | Using Scrapy with OS X 10.9 | 1 | 1 | 1 | 20,172,902 | 0 |
0 | 0 | I have a cherrypy server on a machine, and I want to get the client identifier from the request. Now I can get the client IP by cherrypy.request.remote.ip, but if the client uses a proxy then the IP address will be the proxy address, which I don't want. So is there any way of getting the host name of the client machine, or some other way to distinguish the client identifier? | false | 19,556,223 | 0 | 0 | 0 | 0 | This is an HTTP protocol problem and has nothing to do with Python or CherryPy.
HTTP clients don't send their hostname along with requests. | 0 | 5,019 | 0 | 2 | 2013-10-24T03:11:00.000 | python,cherrypy | Is there a way to get client host name by cherrypy server | 1 | 2 | 2 | 19,556,266 | 0 |
0 | 0 | I have a cherrypy server on a machine, and i want to get the client identifier from the request. Now i can get the client IP by cherrypy.request.remote.ip, but if the client user use a proxy then the IP address will be the proxy address that i don't want, so is there any way for getting the host name of the client machine or some other ways to distiguish the client identifier | true | 19,556,223 | 1.2 | 0 | 0 | 1 | Original client IP is usually passed along by proxy with X-Forwarded-For header. You can either study the header or use tools.proxy setting to automatically rewrite cherrypy.request.remote.ip. See cherrypy.lib.cptools.proxy for details. | 0 | 5,019 | 0 | 2 | 2013-10-24T03:11:00.000 | python,cherrypy | Is there a way to get client host name by cherrypy server | 1 | 2 | 2 | 19,593,632 | 0 |
0 | 0 | My server has 3 ip addresses, 127.0.0.1, 192.168.0.100 and an internet ip address. I'm going to run a service written by python on this server, but I don't want it to expose on internet.
I'm using BaseHTTPRequestHandler class to implement this service, so how to bind only 127.0.0.1 and 192.168.0.100 but not the other one? | false | 19,556,467 | 0 | 0 | 0 | 0 | I think you have two choices.
1) Listen to all interfaces, but override BaseHTTPRequestHandler.init to check the client address and drop the connection if it comes from an undesired interface
2) Create multiple sockets, one per address you want to listen on. SocketServer.serve_forever() is blocking, so you will either need to use one thread per address or switch to a more sophisticated framework like twisted. | 0 | 758 | 0 | 0 | 2013-10-24T03:38:00.000 | python,basehttprequesthandler | how to bind multiple specified ip address on BaseHTTPRequestHandler of python | 1 | 2 | 2 | 19,556,787 | 0 |
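A sketch of option 2: one HTTPServer per address you want to serve, each on its own thread, so the internet-facing address simply never gets a listener. The handler body is a placeholder (Python 2 module names, matching BaseHTTPRequestHandler in the question):

```python
import threading
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler  # Python 2

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write('ok\n')   # placeholder response

def serve(address):
    HTTPServer((address, 8080), Handler).serve_forever()

# Bind explicitly to the two private addresses only; the public IP is never bound.
for addr in ('127.0.0.1', '192.168.0.100'):
    threading.Thread(target=serve, args=(addr,)).start()
```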
0 | 0 | My server has 3 ip addresses, 127.0.0.1, 192.168.0.100 and an internet ip address. I'm going to run a service written by python on this server, but I don't want it to expose on internet.
I'm using BaseHTTPRequestHandler class to implement this service, so how to bind only 127.0.0.1 and 192.168.0.100 but not the other one? | false | 19,556,467 | 0 | 0 | 0 | 0 | Generally, routers have an option where you can allow servers to be visible or not visible. If on the router you set you server to not be visible, then your server will not be accessible through the internet. | 0 | 758 | 0 | 0 | 2013-10-24T03:38:00.000 | python,basehttprequesthandler | how to bind multiple specified ip address on BaseHTTPRequestHandler of python | 1 | 2 | 2 | 19,556,500 | 0 |
0 | 0 | I need to generate a 32 bit random int but depending of some arguments. The idea is generate a unique ID for each message to send through a own P2P network. To generate it, I would like as arguments: my IP and the time stamp. My question is, how can I generate this 32 bit random int from these arguments?
Thanks again! | false | 19,588,634 | 0.197375 | 0 | 0 | 3 | here's a list of options with their associated problems:
use a random number. you will get a collision (non-unique value) in about half the bits (this is the "birthday collision"). so for 32 bits you get a collision after about 2**16 messages. if you are sending less than 65,000 messages this is not a problem, but 65,000 is not such a big number.
use a sequential counter from some service. this is what twitter's snowflake does (see another answer here). the trouble is supplying these across the net. typically with distributed systems you give each agent a set of numbers (so A might get 0-9, B get's 10-19, etc) and they use those numbers then request a new block. that reduces network traffic and load on the service providing numbers. but this is complex.
generate a hash from some values that will be unique. this sounds useful but is really no better than (1), because your hashes are going to collide (i explain why below). so you can hash IP address and timestamp, but all you're doing is generating 32 bit random numbers, in effect (the difference is that you can reproduce these values, but it doesn't seem like you need that functionality anyway), and so again you'll have a collisions after 65,000 messages or so, which is not much.
be smarter about generating ids to guarantee uniqueness. the problem in (3) is that you are hashing more than 32 bits, so you are compressing information and getting overlaps. instead, you could explicitly manage the bits to avoid collisions. for example, number each client for 16 bits (allows up to 65,000 clients) and then have each client user a 16 bit counter (allows up to 65,000 messages per client which is a big improvement on (3)). those won't collide because each is guaranteed unique, but you have a lot of limits in your system and things are starting to get complex (need to number clients and store counter state per client).
use a bigger field. if you used 64 bit ids then you could just use random numbers because collisions would be once every 2**32 messages, which is practically never (1 in 4,000,000,000). or you could join ip address (32 bits) with a 32 bit timestamp (but be careful - that probably means no more than 1 message per second from a client). the only drawback is slightly larger bandwidth, but in most cases ids are much smaller than payloads.
personally, i would use a larger field and random numbers - it's simple and works (although good random numbers are an issue in, say, embedded systems).
finally, if you need the value to be "really" random (because, for example, ids are used to decide priority and you want things to be fair) then you can take one of the solutions above with deterministic values and re-arrange the bits to be pseudo-random. for example, reversing the bits in a counter may well be good enough (compare lsb first). | 1 | 3,076 | 0 | 0 | 2013-10-25T11:30:00.000 | python,random,header | Python - Generate a 32 bit random int with arguments | 1 | 1 | 3 | 19,589,554 | 0 |
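A small sketch of the two simplest options discussed above: a 64-bit random id, and an explicit 32-bit IP plus 32-bit timestamp packed into 64 bits. The field layout and the example IP are illustrative only:

```python
import os
import socket
import struct
import time

# Option A: a 64-bit random id -- collisions become vanishingly rare.
def random_id64():
    return struct.unpack('>Q', os.urandom(8))[0]

# Option B: 32-bit IPv4 address + 32-bit unix timestamp = 64-bit id
# (at most one message per second per client with this exact layout).
def ip_time_id(ip='192.168.0.100'):
    packed_ip = socket.inet_aton(ip)                 # 4 bytes
    packed_ts = struct.pack('>I', int(time.time()))  # 4 bytes
    return struct.unpack('>Q', packed_ip + packed_ts)[0]
```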
0 | 0 | I run python scripts on my macbook air that process data from external API's and often take several hours or occasionally even days.
However, sometimes I need to suspend my laptop in the middle of running a script so I can go to work or go home or similar.
How can I simply pause/resume these scripts in the middle of their for loops?
Is there something very simple that I can add at the script level that just listens for a particular key stroke to stop/start? Or something I can do at the *nix process management level?
I'm well aware of Pickle but I'd rather not deal with the hassle of serializing/unserializing my data--since all I'm doing is hibernating the mac, I'm hoping if the script gets paused and then I hibernate, that OS X will handle saving the RAM to disk and then restoring back to RAM when I reopen the computer. At that point, I can hit a simple keystroke to continue the python script.
Since I'm switching between different wifi networks, not sure if the different IPs will cause problems when my script tries to access the internet to reach the 3rd party APIs. | true | 19,601,598 | 1.2 | 0 | 0 | 11 | This was originally a comment, but it seems to be what OP wants, so I'm reposting it as an answer
I would use ctrl+z to suspend your live, running process. This will leave you with a PID, which you can later resume with a call to fg: fg <job-number>.
This shouldn't have any implications with changed network settings (like IP, etc), at least as far as python is concerned. I can't speak to whether the API will freak out, though | 0 | 7,855 | 1 | 3 | 2013-10-26T00:50:00.000 | macos,python-2.7,resume | How do I pause/resume a Python script? | 1 | 1 | 1 | 19,601,896 | 0 |
1 | 0 | I need help defining an architecture for a tool that will scrape more than 1000 big websites daily for new updates.
I'm planning to use Scrapy for this project:
Giving that Scrapy needs a project for each website, how can I handle scraping 1000+ websites and storing it's data with Scrapy in just one project? I tried adding a project generator, but is this a good idea?
How can I tell if a website was updated with new content so I can scrape it again?
Thanks! | false | 19,618,735 | 0 | 0 | 0 | 0 | I will be interested to see what other answers come up for this. I have done some web crawling / scrapping with code that I have written myself using urllib to get the html then just searching the html for what I need, but not tried scrapy yet.
I guess to see if there are differences you may just need to compare the previous and new html pages, but you would need to either work out what changes to ignore e.g. dates etc, or what specific changes you are looking for, unless there is an easier way to do this using scrapy.
On the storage front you could either store the html data just in a file system or look into just writting it to a database as strings. Just a local database like SQLite should work fine for this, but there are many other options.
Finally, I would also advise you to check out the terms on the sites you are planning to scrape and also check for guidance in the robots.txt if included within the html as some sites give guidance on how frequently they are happy for web crawlers to use them etc. | 0 | 3,232 | 0 | 2 | 2013-10-27T13:44:00.000 | python,scrapy | Crawl and monitor +1000 websites | 1 | 1 | 2 | 19,618,864 | 0 |
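A hedged sketch of the compare-previous-and-new idea with local SQLite storage: hash the fetched HTML, compare it with the last stored hash, and only treat the page as updated when the hash changes. Stripping volatile parts (dates, counters) before hashing is left out, and the database file name is a placeholder:

```python
import hashlib
import sqlite3
import urllib2   # Python 2, matching the answer above

db = sqlite3.connect('pages.db')
db.execute('CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, hash TEXT, html TEXT)')

def check(url):
    html = urllib2.urlopen(url).read()
    digest = hashlib.sha1(html).hexdigest()
    row = db.execute('SELECT hash FROM pages WHERE url = ?', (url,)).fetchone()
    if row is None or row[0] != digest:          # new page, or content changed
        db.execute('INSERT OR REPLACE INTO pages VALUES (?, ?, ?)',
                   (url, digest, html))
        db.commit()
        return True                              # worth re-processing
    return False                                 # unchanged since last crawl
```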
1 | 0 | I want to make a website that shows the comparison between amazon and e-bay product price.
Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler. | false | 19,687,421 | 0.022219 | 0 | 0 | 1 | BeautifulSoup is a small web-scraping library. It does the job, but sometimes it does not satisfy your needs; if you scrape websites for large amounts of data, BeautifulSoup falls short.
In that case you should use Scrapy, which is a complete scraping framework that will do the job.
Scrapy also has support for databases (all kinds of databases), which is a huge advantage of Scrapy over other web-scraping libraries. | 0 | 84,428 | 0 | 150 | 2013-10-30T15:43:00.000 | python,beautifulsoup,scrapy,web-crawler | Difference between BeautifulSoup and Scrapy crawler? | 1 | 6 | 9 | 66,479,925 | 0 |
1 | 0 | I want to make a website that shows the comparison between amazon and e-bay product price.
Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler. | false | 19,687,421 | 0 | 0 | 0 | 0 | The differences are many and selection of any tool/technology depends on individual needs.
Few major differences are:
BeautifulSoup is comparatively easier to learn than Scrapy.
The extensions, support, community is larger for Scrapy than for BeautifulSoup.
Scrapy should be considered as a Spider while BeautifulSoup is a Parser. | 0 | 84,428 | 0 | 150 | 2013-10-30T15:43:00.000 | python,beautifulsoup,scrapy,web-crawler | Difference between BeautifulSoup and Scrapy crawler? | 1 | 6 | 9 | 54,838,886 | 0 |
1 | 0 | I want to make a website that shows the comparison between amazon and e-bay product price.
Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler. | false | 19,687,421 | 0.022219 | 0 | 0 | 1 | Using Scrapy you can save tons of code and start with structured programming. If you don't like any of Scrapy's pre-written methods, then BeautifulSoup can be used in place of a Scrapy method.
A big project takes advantage of both. | 0 | 84,428 | 0 | 150 | 2013-10-30T15:43:00.000 | python,beautifulsoup,scrapy,web-crawler | Difference between BeautifulSoup and Scrapy crawler? | 1 | 6 | 9 | 49,187,707 | 0 |
1 | 0 | I want to make a website that shows the comparison between amazon and e-bay product price.
Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler. | false | 19,687,421 | 0.066568 | 0 | 0 | 3 | Both are using to parse data.
Scrapy:
Scrapy is a fast high-level web crawling and web scraping framework,
used to crawl websites and extract structured data from their pages.
But it has some limitations when data comes from java script or
loading dynamicaly, we can over come it by using packages like splash,
selenium etc.
BeautifulSoup:
Beautiful Soup is a Python library for pulling data out of HTML and
XML files.
we can use this package for getting data from JavaScript or
dynamically loaded pages.
Scrapy with BeautifulSoup is one of the best combinations we can work with for scraping static and dynamic content | 0 | 84,428 | 0 | 150 | 2013-10-30T15:43:00.000 | python,beautifulsoup,scrapy,web-crawler | Difference between BeautifulSoup and Scrapy crawler? | 1 | 6 | 9 | 46,601,960 | 0 |
1 | 0 | I want to make a website that shows the comparison between amazon and e-bay product price.
Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler. | false | 19,687,421 | 1 | 0 | 0 | 21 | I think both are good... I'm doing a project right now that uses both. First I scrape all the pages using Scrapy and save them in a MongoDB collection using its pipelines, also downloading the images that exist on the page.
After that I use BeautifulSoup4 to do post-processing where I must change attribute values and get some special tags.
If you don't know which product pages you want, a good tool will be Scrapy, since you can use its crawlers to run over the whole Amazon/eBay website looking for the products without writing an explicit for loop.
Take a look at the scrapy documentation, it's very simple to use. | 0 | 84,428 | 0 | 150 | 2013-10-30T15:43:00.000 | python,beautifulsoup,scrapy,web-crawler | Difference between BeautifulSoup and Scrapy crawler? | 1 | 6 | 9 | 19,687,572 | 0 |
1 | 0 | I want to make a website that shows the comparison between amazon and e-bay product price.
Which of these will work better and why? I am somewhat familiar with BeautifulSoup but not so much with Scrapy crawler. | false | 19,687,421 | 0.044415 | 0 | 0 | 2 | The way I do it is to use the eBay/Amazon API's rather than scrapy, and then parse the results using BeautifulSoup.
The APIs gives you an official way of getting the same data that you would have got from scrapy crawler, with no need to worry about hiding your identity, mess about with proxies,etc. | 0 | 84,428 | 0 | 150 | 2013-10-30T15:43:00.000 | python,beautifulsoup,scrapy,web-crawler | Difference between BeautifulSoup and Scrapy crawler? | 1 | 6 | 9 | 24,040,613 | 0 |
0 | 0 | I'm using python and httplib to implement a really simple file uploader for my file sharing server. Files are chunked and uploaded one chunk at a time if they are larger than 1MB. The network connection between my client and server is quite good (100mbps, <3ms latency).
When chunk size is small (below 128kB or so), everything works fine (>200kB/s). But when I increase the chunk size to 256kB or above, it takes about 10 times more time to complete a chunk comparing to 128kB chunking (<20kB/s). To make the thing even stranger, this only happens in my win32 machine (win8 x86, running 32b python) but not in my amd64 one (win8 amd64, running 64b python).
After some profilings, I've narrowed down my search to request() and getresponse() functions of httplib.HttpConnection, as these are the cause of blocking.
My first guess is something about socket buffering. But changing SO_SNDBUF and TCP_NODELAY options does not help much. I've also checked my server side, but everything's normal.
I really hope someone can help me out here. Changing the http library (to pycurl) is the last thing I want to do. Thanks in advance! | true | 19,724,784 | 1.2 | 0 | 0 | 1 | Turns out it's a VM related problem. I was running my Python code on a VM, but when I copy the same code to a physical machinse running the same Windows edition, the problem disappears.
As I'm totally unfamiliar with VM mechanisms, it would be great if someone can explain why such a problem exists in VM. | 0 | 210 | 0 | 1 | 2013-11-01T10:18:00.000 | python,performance,winapi,httplib | Slow http upload using httplib of python2.7 at win32 | 1 | 1 | 1 | 21,978,052 | 0 |
1 | 0 | how can I simulate the action of dragging and dropping a file from the filesystem to an element that has a ondrag event trigger?
As for the normal "file" input, I was able to set the value of the input with jQuery. Can't I create a JavaScript File object or use any similar hack?
Thanks
Thanks | false | 19,771,234 | 0.049958 | 0 | 0 | 1 | Selenium only works with your web browser. If you are opening something other than a web browser such as file browser you cannot interact with it. Drag and drops work with items within a web browser but not from program such as Windows Explorer or a Linux file explorer to a web browser. Create and element in your browser with jQuery and drag and drop it. | 0 | 1,516 | 0 | 0 | 2013-11-04T15:35:00.000 | jquery,python,selenium,selenium-webdriver | Selenium python: simulate file drag | 1 | 1 | 4 | 19,771,618 | 0 |
0 | 0 | I'm developing a python script and I need to find the fastest way for getting a JSON from remote server. Currently I'm using requests module, but still requesting JSON is the slowest part of the script. So, what is the fastest way for python HTTP GET request?
Thanks for any answer. | true | 19,793,448 | 1.2 | 1 | 0 | 2 | Write a C module that does everything. Or fire up a profiler to find out in which part of the code the time is spent exactly and then fix that.
Just as guideline: Python should be faster than the network, so the HTTP request code probably isn't your problem. My guess is that you do something wrong but since you don't provide us with any information (like the code you wrote), we can't help you. | 0 | 2,147 | 0 | 0 | 2013-11-05T16:08:00.000 | python,http,get,httprequest | Fastest way for python HTTP GET request | 1 | 1 | 2 | 19,793,524 | 0 |
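One concrete, low-effort optimisation (not a substitute for profiling, as the answer says): reuse a single requests.Session so the TCP/TLS connection is kept alive between calls instead of being re-established for every GET. The URL is a placeholder:

```python
import requests

session = requests.Session()          # keeps the underlying connection alive

def fetch_json(url='https://api.example.com/data'):
    response = session.get(url, timeout=5)
    response.raise_for_status()
    return response.json()

# Repeated calls now skip the TCP/TLS handshake after the first request.
for _ in range(10):
    data = fetch_json()
```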
0 | 0 | Intro
I have a cluster to monitor using Zabbix 2.0, everything works fine and I have all the data I need on Zabbix, but the way zabbix displays the data is not optimal for our use case. At the same time I have a python app running with a web front end I can use to create a more refined way of displaying Zabbix's data. What I want to do is to turn Zabbix's latest data tab into a grid view with a host in every row and the items as columns (like a spreadsheet).
The problem
Apparently Zabbix's API is still a work in progress and the interface sometimes changes, which should not be a problem if some basic functionality is working. What I need to do is to be able to fetch the list of hosts not only IDs but the host's info as well. And for each host I need to be able to fetch some items, again not only the items ID but the entire data too. So far I've tried using two Python libraries to do it: zabbix_api and PyZabbix, no luck so far since both libraries fetch only IDs and not the data I need for hosts and items.
The question
Is there a library/way of doing this that actually works or is this API in a too early stage yet?
Thanks in advance! | false | 19,795,589 | 0.099668 | 0 | 0 | 1 | I use zabbix_api to navigate through Zabbix catalogs, get hosts, get a host, get a host's items, etc. Though I didn't try to get the data with Python, I don't see why it shouldn't work. I do get data from PHP using PhpZabbixApi. Any specific problems you've run into? | 0 | 1,002 | 0 | 4 | 2013-11-05T17:57:00.000 | python,zabbix | using zabbix API to create a grid view from python | 1 | 1 | 2 | 19,806,257 | 0 |
1 | 0 | I've been using selenium in python to drive phantomjs. The problem is that it is quite slow.
I'm beginning to think that it is selenium that is slow, not the core phantomjs functionality of emulating a browser, Javascript and all.
Is there a more direct way to drive phantom that is faster? | false | 19,820,306 | 0 | 0 | 0 | 0 | Set load-images to false, which will definitely help you increase the speed of PhantomJS. | 0 | 1,072 | 0 | 1 | 2013-11-06T18:50:00.000 | python,selenium,selenium-webdriver,phantomjs | phantomjs vs selenium performance | 1 | 1 | 1 | 45,224,165 | 0 |
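A hedged sketch of that suggestion with Selenium's (since-deprecated) PhantomJS driver; --load-images=false is the PhantomJS command-line switch being referred to, and the URL is a placeholder:

```python
from selenium import webdriver

# Pass PhantomJS command-line switches through service_args;
# skipping image downloads usually speeds page loads up noticeably.
driver = webdriver.PhantomJS(service_args=['--load-images=false'])
driver.get('http://example.com')
print(driver.title)
driver.quit()
```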
0 | 0 | Im using SauceLabs and I need the sessionId to get the Job Id there and use it to set pass/fail status during execution of the test. How can I get the session Id using python? | false | 19,944,745 | 0 | 0 | 0 | 0 | "driver.session_id" will get you the session-id of current session. | 0 | 14,421 | 0 | 11 | 2013-11-13T03:31:00.000 | python,selenium,webdriver,selenium-webdriver | How to get session id on Selenium webdriver using Python? | 1 | 2 | 3 | 58,951,194 | 0 |
0 | 0 | Im using SauceLabs and I need the sessionId to get the Job Id there and use it to set pass/fail status during execution of the test. How can I get the session Id using python? | false | 19,944,745 | 0 | 0 | 0 | 0 | This worked for me:
BuiltIn().get_library_instance('SeleniumLibrary').driver.session_id | 0 | 14,421 | 0 | 11 | 2013-11-13T03:31:00.000 | python,selenium,webdriver,selenium-webdriver | How to get session id on Selenium webdriver using Python? | 1 | 2 | 3 | 60,012,259 | 0 |
1 | 0 | I am using twiiter streaming api and twython module with python 2.7 windows 7 os. I want to click a button and streaming of tweets should start. and on clicking the streaming should stop.I am using python for backend and HTML on front end.I am communicating to python via php using passthru function.when i am giving an ajax call to php on clicking of button then all the tweets is displayed at a time. I want streaming.Can anyone help?Thanks | true | 19,946,192 | 1.2 | 1 | 0 | 0 | I am using twython and using long polling technique for displaying the streams. | 0 | 225 | 0 | 0 | 2013-11-13T05:41:00.000 | php,python,ajax,twitter,streaming | Streaming Tweets Via Python | 1 | 1 | 2 | 22,292,555 | 0 |
0 | 0 | I have a python script which reads all tweets about a specific sporting event and enters them into a database. While I was running it this weekend every time a big event occurred in the game the script would stop and I would get an error. It said it was with the code but I don't believe that is the case. I found this on Twitter's api site:
"Falling behind
Clients which are unable to process messages fast enough will be disconnected. A way to track whether your client is falling behind is to compare the timestamp of the Tweets you receive with the current time. If the difference between the timestamps increases over time, then the client is not processing Tweets as fast as they are being delivered. Another way to receive notifications that a client is falling behind is to pass the stall_warnings parameter when establishing the streaming connection."
and I was wondering if this is what's happening to me and what would be the best way to implement a solution. | false | 19,960,086 | 0.53705 | 1 | 0 | 3 | As the streaming API creates a permanent connection, falling behind technically means that tweets are put into this connection faster than they are consumed by your client.
The solution is straightforward: you have to process tweets faster, that is, optimize your landscape. There must be a bottleneck or bottlenecks; identify them and handle them properly. For example, it might be database latency (when your DB cannot process sufficient inserts per second), IO latency (when data can't be stored to disk as fast as needed), code inefficiency, high CPU load, a network bandwidth bound, etc.
No silver bullet for all cases, but some obvious steps include:
storing the data received from Twitter as-is, and doing post-processing in windows of lower load (see the sketch after this answer);
deploying a cluster with several tweet consumers (processors) and data sharding;
using faster disks/some RAID configuration to speed up IO;
tweet query optimisation, making sure to request and process the least amount of tweets possible;
code optimisation;
migration to a datacenter with higher bandwidth; | 0 | 599 | 0 | 1 | 2013-11-13T17:11:00.000 | python,twitter,tweepy | Tweepy streaming API falling behind | 1 | 1 | 1 | 19,960,645 | 0 |
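A minimal sketch of the first suggestion (store raw tweets as-is and post-process later), assuming a 2013-era tweepy with StreamListener. The credentials, the track keyword, and the process() body are placeholders:

```python
import Queue            # Python 2 (the question uses 2.7); "queue" in Python 3
import threading
import tweepy

raw_tweets = Queue.Queue()

def process(raw_json):
    pass                        # placeholder: parse, write to the database, etc.

class FastListener(tweepy.StreamListener):
    def on_data(self, data):
        raw_tweets.put(data)    # only enqueue the raw JSON and return at once,
        return True             # so the stream never stalls behind slow processing

def worker():
    while True:
        process(raw_tweets.get())

threading.Thread(target=worker).start()

auth = tweepy.OAuthHandler('KEY', 'SECRET')        # placeholder credentials
auth.set_access_token('TOKEN', 'TOKEN_SECRET')
tweepy.Stream(auth, FastListener()).filter(track=['some sporting event'])
```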
1 | 0 | I've developed a small write-only REST api with Flask Restful that accepts PUT request from a handful of clients that can potentially have changing IP addresses. My clients are embedded Chromium clients running an AngularJS front-end; they authenticate with my API with a simple magic key -- it's sufficient for my very limited scale.
I'm testing deploying my API now and I notice that the Angular clients are attempting to send an OPTIONS http methods to my Flask service. My API meanwhile is replying with a 404 (since I didn't write an OPTIONS handler yet, only a PUT handler). It seems that when sending cross-domain requests that are not POST or GET, Angular will send a pre-flight OPTIONS method at the server to make sure the cross-domain request is accepted before it sends the actual request. Is that right?
Anyway, how do I allow all cross-domain PUT requests to Flask Restful API? I've used cross-domaion decorators with a (non-restful) Flask instance before, but do I need to write an OPTIONS handler as well into my API? | false | 19,962,699 | 0.110656 | 0 | 0 | 5 | Just an update to this comment. Flask CORS is the way to go, but the flask.ext.cors is deprecated.
use:
from flask_cors import CORS | 0 | 58,328 | 0 | 47 | 2013-11-13T19:31:00.000 | python,angularjs,flask,cors,flask-restful | Flask RESTful cross-domain issue with Angular: PUT, OPTIONS methods | 1 | 2 | 9 | 47,288,580 | 0 |
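A minimal sketch of wiring it up; the /api/* resource pattern and the origin are assumptions to adapt to your own routes:

```python
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
# Allow cross-origin PUT (and the preflight OPTIONS) on the API routes only,
# restricted to the embedded clients' origin.
CORS(app, resources={r"/api/*": {"origins": "http://clients.example.com"}})
```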
1 | 0 | I've developed a small write-only REST api with Flask Restful that accepts PUT request from a handful of clients that can potentially have changing IP addresses. My clients are embedded Chromium clients running an AngularJS front-end; they authenticate with my API with a simple magic key -- it's sufficient for my very limited scale.
I'm testing deploying my API now and I notice that the Angular clients are attempting to send an OPTIONS http methods to my Flask service. My API meanwhile is replying with a 404 (since I didn't write an OPTIONS handler yet, only a PUT handler). It seems that when sending cross-domain requests that are not POST or GET, Angular will send a pre-flight OPTIONS method at the server to make sure the cross-domain request is accepted before it sends the actual request. Is that right?
Anyway, how do I allow all cross-domain PUT requests to Flask Restful API? I've used cross-domaion decorators with a (non-restful) Flask instance before, but do I need to write an OPTIONS handler as well into my API? | false | 19,962,699 | 1 | 0 | 0 | 36 | You can use the after_request hook:
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin', '*')
response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization')
response.headers.add('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE')
return response | 0 | 58,328 | 0 | 47 | 2013-11-13T19:31:00.000 | python,angularjs,flask,cors,flask-restful | Flask RESTful cross-domain issue with Angular: PUT, OPTIONS methods | 1 | 2 | 9 | 28,923,164 | 0 |
0 | 0 | I have a python.py file that I would like to by able to run on any computer that has python installed.
However, the program uses several packages that I installed through pip (BeautifulSoup and Selenium).
Is there a way to make a single python file that will automatically incorporate everything it needs from these packages into the .py file?
Thanks! | true | 19,962,995 | 1.2 | 0 | 0 | 0 | There is no nice solution to do this. It would be the best to use existing mechanisms -- as dependencies defined in setup.py.
If you really need to do this, you could execute the corresponding scripts using subprocess.call():
Check whether pip is available. If not, terminate the program.
Check whether virtualenv is available. If yes, create and activate a new environment.
Call pip to install the required packages.
If virtualenv is not installed, this will require root privileges. | 1 | 45 | 0 | 1 | 2013-11-13T19:48:00.000 | python,selenium,beautifulsoup | Make python file that isn't reliant on installed packages | 1 | 1 | 1 | 19,963,148 | 0 |
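A rough sketch of those steps (check for the package, fall back to pip via subprocess); the package names are examples, and this remains a last resort compared with declaring dependencies in setup.py:

```python
import subprocess
import sys

def ensure(package, import_name=None):
    """Try to import `package`; if that fails, ask pip to install it."""
    try:
        __import__(import_name or package)
    except ImportError:
        rc = subprocess.call([sys.executable, '-m', 'pip', 'install', package])
        if rc != 0:
            sys.exit('could not install %s -- is pip available?' % package)

ensure('beautifulsoup4', import_name='bs4')
ensure('selenium')
```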
0 | 0 | I am currently designing an application that will require data transmission. I am currently working on the client software that will build the data packages that will be sent via the network level service.
What data type should I use for network transmission? I am currently pondering whether I should use a physical data file (.dat) which can be easily manipulated (created/read/etc.) via Python or use only internal data. From a management and organizational standpoint, I think file based data may be the easiest to manipulate and handle on a networking level.
If I were head more towards a internal (Python) data handling method, what should my starting point be? Should I look at dictionaries? The over-arching goal is to keep data size minimal. Using file-based data, I believe I would only be looking at just a few bytes for actual transmission. The native platform is going to be Windows, but I would also like to look at my options for a mobile standpoint (Android/iOS).
The purpose of the program is data entry. User entry will be recorded, packaged, encrypted and sent (via a WAN) to a server where it will be stored in a database for query at a later time. | false | 20,004,082 | 0 | 0 | 0 | 0 | There are best practices, definitely.
As a first advice, you should definitely decouple the implementation from the representation used when you send/receive data via the network. Don't use Python dicts. Use a widely accepted serialization format like JSON, ASN.1 or protocol buffers. Make sure you have a clear idea what you need to send over the network and what the requirements are (latency, throughput, CPU time for encoding/decoding, etc) and choose something that fits them.
Second, use a de facto or de jure standard for communicating over the network. Be it REST, AMQP or anything else - it's impossible to tell which one would be the best fit since your question is too broad. But make sure you're not implementing your own in-house adhoc application layer protocol - you would just make your life and your colleagues life so much harder down the road.
I'd suggest you think a bit more what you want to do, and post more specific questions later on. | 0 | 218 | 0 | 0 | 2013-11-15T15:05:00.000 | python,networking,application-design | Data Handling For Network Transmission | 1 | 1 | 1 | 20,030,106 | 0 |
0 | 0 | What is the cause of a Stack usage error from libxml2/libxslt/lxml? | false | 20,005,436 | 0.099668 | 0 | 0 | 1 | It seems that you're using lxml extension functions. In this case, the "Stack usage error" (XPATH_STACK_ERROR internally) happens when a value is popped off the XPath stack and the stack is empty. The typical scenario is an extension function called with fewer parameters than expected. | 0 | 183 | 0 | 0 | 2013-11-15T16:10:00.000 | python,lxml,libxml2,libxslt | libxml: "Stack usage error" - further information? | 1 | 1 | 2 | 20,007,236 | 0 |
0 | 0 | I'm building up a nice application in Python from the bottom up that allows encrypted communication between peers. This application enables users to dynamically install new plugins and therefore new protocols.
How is '\0' used in common socket operations? Is it a good separator, or should I use something else?
I would like to be able to manage my own socket code which prevents me from using libs that abstract those bytes constructions.
By the way I'm using Python 3 so all data sent or received is encoded. I'm using utf-8 by default. | true | 20,020,409 | 1.2 | 0 | 0 | 1 | The NUL-Byte (b'\0') has been and still is commonly used in binary protocols as separator or when transferring numbers (e.g.: 1 as a 32 bit integer is b'\x01\x00\x00\x00'). Its usage can therefor be considered completely safe with socket implementations on all platforms.
When encoding and decoding strings in Python 3 however, I'd recommend you insert those NUL-Bytes after encoding your string to bytes and stripping them (on the receiver side) before decoding your strings to Unicode. | 1 | 56 | 0 | 0 | 2013-11-16T15:49:00.000 | python,sockets | Usage of '\0' in various protocols or How can i find a good separator? | 1 | 1 | 1 | 20,026,013 | 0 |
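A small sketch of that ordering: encode first, then frame with NUL bytes; strip and split before decoding on the receiving side. It assumes the messages themselves never contain a NUL byte, which holds for ordinary UTF-8 text:

```python
# Sender side: encode each message, then terminate it with b'\0'.
def frame(messages):
    return b''.join(m.encode('utf-8') + b'\0' for m in messages)

# Receiver side: split on b'\0' first, decode afterwards.
def unframe(data):
    return [chunk.decode('utf-8') for chunk in data.split(b'\0') if chunk]

payload = frame(['hello', 'wörld'])    # b'hello\x00w\xc3\xb6rld\x00'
print(unframe(payload))                # ['hello', 'wörld']
```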
0 | 0 | I am now using the Django framework to build a website which has the ability to control a remote embedded system (simply with functions like "Turn ON/OFF"). What I can imagine now is to use socket programming with Python (because Django is pure Python). As far as I have learnt, I only know how to send and receive messages with sockets between the client machine and server.
Can any one tell me
1. What else is needed to learn for this remote control function?
2. Or is there any better ways (better frameworks) to implement this?
3. Does Django have built-in methods for socket programming? (If not, is it possible to implement it with a self-defined app?) | true | 20,039,657 | 1.2 | 0 | 0 | 0 | You basically have one major thing to decide:
Is your embedded machine going to open up a port that allows anything
that knows its IP and port details to control it, with your web
page writing to that IP/port, OR
Is your embedded device going to poll the web app to find out which
state it should be in?
The first option will normally respond faster but your embedded machine will be more vulnerable to outside interference, you will need to be able to run active code on your web server, which a lot of servers do not allow, etc.
The second will only respond to state changes at an average of half the polling rate but will probably be simpler to program and manage. Plus your server is not also acting as a client. | 0 | 469 | 0 | 0 | 2013-11-18T03:36:00.000 | python,django,sockets,web,remote-server | Remote control of an embedded system from Website | 1 | 1 | 1 | 20,039,992 | 0 |
0 | 0 | I'd like to check the status, e.g. How many files left to upload. of a Dropbox account using the API in python. Is this possible? | false | 20,060,354 | 0.197375 | 0 | 0 | 2 | You can't do it directly, but you can do it indirectly (sort of). What I do is place a dummy file in my Dropbox and have it sync to the server. At the very beginning of my script I delete the file locally and then wait for it to reappear. Once it has reappeared I have a good idea that Dropbox is connected and synced. It's not perfect because more files might be waiting to be synced, but that's the best I could come up with. | 0 | 845 | 0 | 1 | 2013-11-18T23:44:00.000 | python,dropbox-api | How do I check the sync status of a Dropbox account using the API? | 1 | 2 | 2 | 43,394,177 | 0 |
0 | 0 | I'd like to check the status, e.g. How many files left to upload. of a Dropbox account using the API in python. Is this possible? | true | 20,060,354 | 1.2 | 0 | 0 | 1 | The Dropbox API is a way to add Dropbox integration to your application (that is, to allow your application to sync its files via Dropbox, or to explicitly upload/download files outside of the sync process), not a way to control or monitor the Dropbox desktop application (or the way other applications sync, or anything else).
So no, it's not possible.
The only way to do this is to write code for each platform to control the Dropbox app, e.g., via UI scripting on the Mac or intercepting WM messages on Windows. (Or, alternatively, it might be possible to write your own replacement sync tool and use that instead of the standard desktop app, in which case you can obviously monitor what you're doing.) | 0 | 845 | 0 | 1 | 2013-11-18T23:44:00.000 | python,dropbox-api | How do I check the sync status of a Dropbox account using the API? | 1 | 2 | 2 | 20,060,938 | 0 |
0 | 0 | I have generated a custom library for Python using SWIG and I want to use that library somewhere else (without the source files). Should I copy the .so file to that place, or is there any other way?
SWIG has generated one .so file (say _example.so); if I want to use that library in that particular folder I need to do import example, but if I try the same in any other folder it throws 'ImportError: no module named example'. | true | 20,068,092 | 1.2 | 0 | 0 | 1 | Normally you should have generated _example.so and example.py. You need to distribute both. If you are concerned about exposing the sources - do not worry, example.py contains only stubs translating Python code into calls to the shared library. | 0 | 68 | 0 | 0 | 2013-11-19T09:45:00.000 | python,swig | How to use cutom python library? | 1 | 1 | 2 | 20,068,971 | 0 |
0 | 0 | I want to send data over to my server via HTTP GET requests, every 1-2 seconds.
Should I create a new connection every time, or should i keep the connection open and keep sending requests over the same connection? If I employ the latter method would httplib keep the connection alive, what happens if the connection is broken?
I am not very familiar with http and networking protocols.
EDIT: I am working on gps tracking system for my university project, I need to regularly upload the coordinates to a database via php script. | true | 20,160,026 | 1.2 | 0 | 0 | 1 | The thing to remember about an HTTP connection is it's still, at a lower level, a socket connection over TCP. They can be prone to issues with leaving a connection open, even if you're constantly streaming data from the source.
While some pretty serious efforts have been made in this area (socket.io, websockets, HTTP long polling, etc), your best and simplest bet is to just make new requests every couple of seconds.
However, there are specific use cases for using things like websockets, so perhaps you can explain what you're doing a little better, then maybe we can say for sure. | 0 | 551 | 0 | 1 | 2013-11-23T07:57:00.000 | python,http,connection,httplib | Python httplib [Multiple requests] - How long can i keep the connection open? | 1 | 1 | 1 | 20,160,481 | 0 |
0 | 0 | I am developing a server using Python, but the server can communicate with only one client at a time. Even if the server establishes connections with more than one client, it can't hold conversations with all clients at the same time.
One client has to wait until the conversation already in progress ends, which may last for several minutes. This creates a tremendous delay for the client that hasn't started a conversation yet.
So, how can I let my Python server communicate with more than one client at the same time?
Thank you in advance | false | 20,172,765 | 0 | 0 | 0 | 0 | You may use Tornado. It is an asynchronous multithreading web-server framework. | 0 | 180 | 0 | 2 | 2013-11-24T08:55:00.000 | python,network-programming,client-server | how can I make a server communicate with more than 1 client at the same time? | 1 | 1 | 2 | 20,173,028 | 0 |
0 | 0 | As the question explain, I would like to saturate my bandwidth. For download I had an idea:
Download a random file (5MB for example) in loop for n time using wget or urllib2.
Delete the file each completed download, with the same loop.
(For wget using a Bash script / For urllib2 using a Python script)
But, I have two questions:
How do I saturate the download bandwidth without files downloading?
How do I saturate the upload bandwidth? (I have no idea in this)
I mean a total saturation, but if I want a partial saturating? | true | 20,289,809 | 1.2 | 0 | 0 | 1 | Just running a few wget's should easily saturate your download bandwidth.
For upload, you might set up a web server on your computer (carefully poking a hole through your firewall for it), and then connect to a web proxy (there are a few of these that'll anonymise your data) and back to your web server. Then connect to your web server through the proxy and download (or upload!) a bunch of stuff.
It may be more effective to do these things one at a time, rather than both at the same time, as they may interfere with each other a bit. | 0 | 1,937 | 0 | 0 | 2013-11-29T16:18:00.000 | python,bash,upload,download,bandwidth | How do I saturate my upload and download available bandwidth? | 1 | 1 | 1 | 20,290,664 | 0 |
0 | 0 | I'm making a multilayer, text-based game in python sockets. The game is two player with a server which the clients connect to. I'm not sure whether to store player information (name, health, etc) on the server or on the client? What are the advantages of both? I was thinking about storing the information on the client and sending the player object to the server when ever it changes, though this probably isn't very efficient. Any help would be appreciated! | true | 20,303,411 | 1.2 | 0 | 0 | 2 | I think you shouldn't even consider storing data like health client-side. Doing that will allow super-easy hacks to be made and the fact that the game is written in Python makes this a lot easier.
So I think you should keep these data in the server-side and use it from there. | 0 | 162 | 0 | 2 | 2013-11-30T17:34:00.000 | python,sockets,multiplayer | Should I store player information on server or client? | 1 | 1 | 1 | 20,303,470 | 0 |
0 | 0 | I'm using twitter python library to fetch some tweets from a public stream. The library fetches tweets in json format and converts them to python structures. What I'm trying to do is to directly get the json string and write it to a file. Inside the twitter library it first reads a network socket and applies .decode('utf8') to the buffer. Then, it wraps the info in a python structure and returns it. I can use jsonEncoder to encode it back to the json string and save it to a file. But there is a problem with character encoding I guess. When I try to print the json string it prints fine in the console. But when I try to write it into a file, some characters appear such as \u0627\u0644\u0644\u06be\u064f
I tried to open the saved file using different encodings and nothing has changed. It suppose to be in utf8 encoding and when I try to display it, those special characters should be replaced with actual characters they represent. Am I missing something here? How can I achieve this?
more info:
I'm using python 2.7
I open the file like this:
json_file = open('test.json', 'w')
I also tried this:
json_file = codecs.open( 'test.json', 'w', 'utf-8' )
nothing has changed. I blindly tried, .encode('utf8'), .decode('utf8') on the json string and the result is the same. I tried different text editors to view the written text, I used cat command to see the text in the console and those characters which start with \u still appear.
Update:
I solved the problem. jsonEncoder has an option ensure_ascii
If ensure_ascii is True (the default), all non-ASCII characters in the
output are escaped with \uXXXX sequences, and the results are str
instances consisting of ASCII characters only.
I made it False and the problem has gone away. | false | 20,306,249 | 0 | 1 | 0 | 0 | Well, since you won't post your solution as an answer, I will. This question should not be left showing no answer.
jsonEncoder has an option ensure_ascii.
If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences, and the results are str instances consisting of ASCII characters only.
Make it False and the problem will go away. | 1 | 984 | 0 | 2 | 2013-11-30T22:13:00.000 | python,json,unicode,utf-8 | Python unicode file writing | 1 | 2 | 2 | 20,464,991 | 0 |
0 | 0 | I'm using twitter python library to fetch some tweets from a public stream. The library fetches tweets in json format and converts them to python structures. What I'm trying to do is to directly get the json string and write it to a file. Inside the twitter library it first reads a network socket and applies .decode('utf8') to the buffer. Then, it wraps the info in a python structure and returns it. I can use jsonEncoder to encode it back to the json string and save it to a file. But there is a problem with character encoding I guess. When I try to print the json string it prints fine in the console. But when I try to write it into a file, some characters appear such as \u0627\u0644\u0644\u06be\u064f
I tried to open the saved file using different encodings and nothing has changed. It's supposed to be in utf8 encoding, and when I try to display it, those special characters should be replaced with the actual characters they represent. Am I missing something here? How can I achieve this?
more info:
I'm using python 2.7
I open the file like this:
json_file = open('test.json', 'w')
I also tried this:
json_file = codecs.open( 'test.json', 'w', 'utf-8' )
nothing has changed. I blindly tried, .encode('utf8'), .decode('utf8') on the json string and the result is the same. I tried different text editors to view the written text, I used cat command to see the text in the console and those characters which start with \u still appear.
Update:
I solved the problem. jsonEncoder has an option ensure_ascii
If ensure_ascii is True (the default), all non-ASCII characters in the
output are escaped with \uXXXX sequences, and the results are str
instances consisting of ASCII characters only.
I made it False and the problem has gone away. | true | 20,306,249 | 1.2 | 1 | 0 | 2 | jsonEncoder has an option ensure_ascii
If ensure_ascii is True (the default), all non-ASCII characters in the
output are escaped with \uXXXX sequences, and the results are str
instances consisting of ASCII characters only.
Make it False and the problem will go away. | 1 | 984 | 0 | 2 | 2013-11-30T22:13:00.000 | python,json,unicode,utf-8 | Python unicode file writing | 1 | 2 | 2 | 20,528,987 | 0 |
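To make the fix concrete, a minimal Python 2.7 sketch (the data dict is a made-up stand-in for a tweet):

import io
import json

data = {u'text': u'\u0627\u0644\u0644\u06be'}  # example unicode payload, stands in for one tweet
with io.open('test.json', 'w', encoding='utf-8') as json_file:
    # ensure_ascii=False keeps the raw unicode characters instead of \uXXXX escapes
    json_file.write(json.dumps(data, ensure_ascii=False))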
0 | 0 | I have two web hosts. One with a standard shared hosting provider, and the other: AWS Free Tier. I would like for these two servers to be able to communicate with one another. Basically, I would like the shared server to send some information to the AWS server, causing the AWS server to run certain scripts.
I am familiar with python, so am wondering if there is some python library I can use to quickly cook up a script that would listen to a certain port (on AWS). Also, security is in issue: I want AWS to only listen to requests from a certain IP. Is this possible? What python libraries could I use for something like this?
I am fairly new to web programming and wasn't able to google the solution to this. Thanks! | false | 20,314,048 | 0.197375 | 1 | 0 | 2 | Since you are already using AWS, for something like this you could consider using AWS SQS to add a queue between the two hosts and communicate through it instead of directly.
Using SQS, it would be easy to write a script to add messages to the SQS queue when something needs to be run on the other host, and equally easy for the second host to poll the queue looking for messages.
Adding a queue between the two hosts decouples them, adds a bit of fault tolerance (i.e. one of the hosts could go offline for a bit without the messages being lost), and makes it a bit easier to scale up if you need to (for example, if you ever needed multiple instances at AWS processing jobs from the other host, you could just add them and tell them to also poll the same queue, as opposed to building in a 1-1 'always on' dependency between the two).
Lots of different ways to skin this cat, so maybe the approach above is overkill in your case, but thought I'd throw it out there as something to consider. | 0 | 43 | 0 | 2 | 2013-12-01T15:55:00.000 | python,web,amazon-web-services,server-communication | What options are there to allow one server to send information to another server? | 1 | 1 | 2 | 20,316,557 | 0 |
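Purely to illustrate the queue idea described above (not part of the original answer), a sketch using boto3; the region, queue name, and message body are made up:

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
queue_url = sqs.create_queue(QueueName='host-to-host-jobs')['QueueUrl']

# Host A: enqueue a job description
sqs.send_message(QueueUrl=queue_url, MessageBody='run_report --date 2013-12-01')

# Host B: poll for work, then delete handled messages
resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
for msg in resp.get('Messages', []):
    print(msg['Body'])  # run the requested script here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])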
0 | 0 | I am trying to find a way to extract the depth of a website using python.
Depth of a subwebsite is equal to the number of clicks required from the main website (e.g. www.ualberta.ca) in order for a user to get to the subwebsite (e.g. www.ualberta.ca/beartracks). so for instance if it takes one additional click to get to a subwebsite from the main domain, the depth of the subwebsite would be 1.
is there anyway for me to measure this using python? thank you! | false | 20,317,477 | 0 | 0 | 0 | 0 | It sounds like you want to write a spider to do breadth-first search from the first url until you find a link to the second url.
I suggest you look at the Scrapy package; it makes it very easy to do. | 0 | 470 | 0 | 2 | 2013-12-01T21:29:00.000 | python,html,css,web-scraping,web-analytics | measuring a depth of a website using python | 1 | 1 | 2 | 20,317,674 | 0 |
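If you would rather not pull in Scrapy, a rough breadth-first sketch with requests and BeautifulSoup (both assumed installed; error handling and politeness delays omitted) shows the idea:

from collections import deque
from urlparse import urljoin  # Python 2; use urllib.parse on Python 3
import requests
from bs4 import BeautifulSoup

def depth_of(start_url, target_url, max_depth=3):
    seen = {start_url}
    queue = deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        if url.rstrip('/') == target_url.rstrip('/'):
            return depth  # number of clicks from the main site
        if depth >= max_depth:
            continue
        soup = BeautifulSoup(requests.get(url).text, 'html.parser')
        for a in soup.find_all('a', href=True):
            link = urljoin(url, a['href'])
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return None  # not reachable within max_depth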
1 | 0 | I am trying to develop a small automated tool in python that can check Forms inputs of a web application for XSS vulnerability. I hope to do this using python mechanize library so that I can automate form filling and submit and get the response from the python code. Though mechanize is also works as a browser, is there a way to detect a browser alert message for an input containing a script. Or else is there any other library for python such that I can perform this functionality. Any sample code will be a great favor.
PS : I am trying to develop this so that I can find them in an application we are developing and include them in a report and NOT for Hacking purpose.
Thank you. | true | 20,347,235 | 1.2 | 1 | 0 | 0 | Answering my own question. The browser giving an alert message simply means that the node is injected into the DOM. By simply looking for the string that I injected in the response body, I could determine whether the given input is reflected through the browser without proper sanitization. | 0 | 244 | 0 | 0 | 2013-12-03T09:24:00.000 | python,mechanize,mechanize-python | Identify Browser alert messges in Mechanize - Python | 1 | 1 | 1 | 22,406,798 | 0
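A rough sketch of that reflection check with mechanize (the URL, form index, and field name are placeholders, not from the original):

import mechanize

marker = "<script>alert('xss-probe-1337')</script>"  # unique string we try to get reflected
br = mechanize.Browser()
br.set_handle_robots(False)
br.open("http://example.com/form-page")   # placeholder URL
br.select_form(nr=0)                      # first form on the page (placeholder)
br["comment"] = marker                    # placeholder field name
response = br.submit()
if marker in response.read():
    print("Input appears to be reflected without sanitization")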
0 | 0 | I am trying to run a python code on a Linux server, and my code involves running Selenium.
soon after I started running the code, the following error popped up:
The browser appears to have exited before we could connect. The output was: Error: cannot open display:
I installed firefox and selenium, but for some reason the error is keep popping up
how can I solve this issue?
thank you | false | 20,363,655 | 0 | 0 | 0 | 0 | I'm guessing you need a $DISPLAY variable, and xauth or xhost.
Selenium depends on a browser, and your browser on Linux depends on X11. $DISPLAY tells X11 where to find the X server (the thing that renders the graphics - usually this is on the computer you're sitting in front of), and xauth or xhost tells the remote host how to authenticate to the X server.
If you're using putty to connect to the Linux host (or other X11-less ssh client), you'll probably need to install an X server on the machine you're sitting in front of, and then use Cygwin ssh -Y to forward xauth creds to the remote host.
Another option that works pretty well for many people is to use VNC. This allows you to reboot the machine you're sitting in front of, without interrupting your Selenium tests. There are many interoperable VNC client/servers.
You can easily test your X11 communication by just running "xterm &" or "xdpyinfo". If this displays a command window on the machine you're sitting in front of, X11's set up. | 0 | 64 | 0 | 0 | 2013-12-03T23:14:00.000 | python,selenium | running a python code that includes selenium module gives an error | 1 | 1 | 1 | 20,363,967 | 0 |
1 | 0 | I am running a code that has selenium component, which requires phantomJS.
I am getting the following error message:
Unable to start phantomjs with ghostdriver.' ; Screenshot: available
In my code, I specified my phantomJS path(the bin path), but such measure didn't work.
I have placed the phantomJS-osx folder at the same location as my folder for selenium - would it be the cause of my problem?
thanks | false | 20,366,522 | 0 | 0 | 0 | 0 | Not sure what version of Ghostdriver you are on, but I got that error on 1.9.7 until I upgraded to selenium 2.40 | 0 | 2,389 | 0 | 2 | 2013-12-04T03:56:00.000 | python,selenium,phantomjs | Unable to start phantomjs with ghostdriver | 1 | 2 | 2 | 26,636,714 | 0 |
1 | 0 | I am running a code that has selenium component, which requires phantomJS.
I am getting the following error message:
Unable to start phantomjs with ghostdriver.' ; Screenshot: available
In my code, I specified my phantomJS path(the bin path), but such measure didn't work.
I have placed the phantomJS-osx folder at the same location as my folder for selenium - would it be the cause of my problem?
thanks | false | 20,366,522 | -0.197375 | 0 | 0 | -2 | This is a bug. Please use selenium 2.37.2 | 0 | 2,389 | 0 | 2 | 2013-12-04T03:56:00.000 | python,selenium,phantomjs | Unable to start phantomjs with ghostdriver | 1 | 2 | 2 | 20,742,773 | 0 |
1 | 0 | I do a simple web application written in Python using cherrypy and Mako. So, my question is also simple.
I have one page with URL http://1.2.3.4/a/page_first. There is also an image that is available at the URL http://1.2.3.4/a/page_first/my_image.png, and I want to display my_image.png on page_first.
I added a tag <img src="my_image.png"/>, but it is not shown. I looked at web developer tools->Network and saw that request URL for image was http://1.2.3.4/a/my_image.png, instead of http://1.2.3.4/a/page_first/my_image.png.
Why does it happen?
Thanks. | false | 20,421,001 | 0.099668 | 0 | 0 | 1 | Try <img src="/page_first/my_image.png"/> | 0 | 195 | 0 | 2 | 2013-12-06T09:59:00.000 | python,cherrypy | Using relative URL's | 1 | 2 | 2 | 20,422,180 | 0 |
1 | 0 | I do a simple web application written in Python using cherrypy and Mako. So, my question is also simple.
I have one page with URL http://1.2.3.4/a/page_first. There is also an image that is available at the URL http://1.2.3.4/a/page_first/my_image.png, and I want to display my_image.png on page_first.
I added a tag <img src="my_image.png"/>, but it is not shown. I looked at web developer tools->Network and saw that request URL for image was http://1.2.3.4/a/my_image.png, instead of http://1.2.3.4/a/page_first/my_image.png.
Why does it happen?
Thanks. | true | 20,421,001 | 1.2 | 0 | 0 | 2 | The page address needs to be http://1.2.3.4/a/page_first/ (with trailing slash).
ADDED:
You don't seem to understand relative URLs, so let me explain. When you reference an image like this <img src="my_image.png"/>, the image URL in the tag doesn't have any host/path info, so path is taken from the address of the HTML page that refers to the image. Since path is everything up to the last slash, in your case it is http://1.2.3.4/a/. So the full image URL that the browser will request becomes http://1.2.3.4/a/my_image.png.
You want it to be http://1.2.3.4/a/page_first/my_image.png, so the path part of the HTML page must be /a/page_first/.
Note that the browser will not assume page_first is "a directory" just because it doesn't have an "extension", and will not add the trailing slash automatically. When you access a server publishing static dirs and files and specify a directory name for the path and omit the trailing slash (e. g. http://www.example.com/some/path/here), the server is able to determine that you actually request a directory, and it adds the slash (and usually also a default/index file name) for you. It's not generally the case with dynamic web sites where URLs are programmed.
So basically you need to explicitly include the trailing slash in your page path: dispatcher.connect('page','/a/:number_of_page/', controller=self, action='page_method') and always refer to it with the trailing slash (http://1.2.3.4/a/page_first/), otherwise the route will not be matched.
As a side note, usually you put the images and other static files into a dedicated dir and serve them either with CherryPy's static dir tool, or, if it's a high load site, with a dedicated server. | 0 | 195 | 0 | 2 | 2013-12-06T09:59:00.000 | python,cherrypy | Using relative URL's | 1 | 2 | 2 | 20,422,773 | 0 |
0 | 0 | The title says it all, I want to programmatically get the version of Selenium I have installed on my Python environment. | false | 20,428,732 | 0.244919 | 0 | 0 | 5 | You can try:
pip list
conda list
or for example on MAC:
brew list
And then check if and what version is in your installed package list.
For conda you might have different environments. Change it by conda activate myenv where myenv is the name of your second or more test environments. | 0 | 48,042 | 0 | 16 | 2013-12-06T16:22:00.000 | python,selenium | How do I retrieve the version of Selenium currently installed, from Python | 1 | 2 | 4 | 59,528,985 | 0 |
0 | 0 | The title says it all, I want to programmatically get the version of Selenium I have installed on my Python environment. | false | 20,428,732 | 0.148885 | 0 | 0 | 3 | Try using pip show selenium, that worked for me | 0 | 48,042 | 0 | 16 | 2013-12-06T16:22:00.000 | python,selenium | How do I retrieve the version of Selenium currently installed, from Python | 1 | 2 | 4 | 66,284,498 | 0 |
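If you want it from inside Python rather than from the command line, the installed package also exposes its version string:

import selenium
print(selenium.__version__)  # e.g. '2.39.0' (the exact value depends on what you have installed)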
0 | 0 | I need to download a thousand csv files size: 20KB - 350KB. Here is my code so far:
Im using urllib.request.urlretrieve. And with it i download thousand files with size of all of them together: 250MB, for over an hour.
So my question is:
How can I download thousand csv files faster then one hour?
Thank you! | false | 20,441,270 | 0 | 0 | 0 | 0 | You are probably not going to be able to top that speed without either a) a faster internet connection both for you and the provider or b) getting the provider to provide a zip or tar.gz format of the files that you need.
The other possibility would be to use a cloud service such as Amazon to get the files to your cloud location, zip or compress them there and then download the zip file to your local machine. As the cloud service is on the internet backbone it should have faster service than you. The downside is you may end up having to pay for this depending on the service you use. | 0 | 5,698 | 0 | 8 | 2013-12-07T12:17:00.000 | python,csv,python-3.x,urllib | Fastest way to download thousand files using python? | 1 | 2 | 4 | 20,441,410 | 0 |
0 | 0 | I need to download a thousand csv files size: 20KB - 350KB. Here is my code so far:
Im using urllib.request.urlretrieve. And with it i download thousand files with size of all of them together: 250MB, for over an hour.
So my question is:
How can I download thousand csv files faster then one hour?
Thank you! | false | 20,441,270 | 0.049958 | 0 | 0 | 1 | The issue is very unlikely to be bandwidth (connection speed) because any network connection can maintain that bandwidth. The issue is latency - the time it takes to establish a connection and set up your transfers. I know nothing about Python, but would suggest you split your list and run the queries in parallel if possible, on multiple threads or processes - since the issue is almost certainly neither CPU, nor bandwidth-bound. So, I am saying fire off multiple requests in parallel so a bunch of setups can all be proceeding at the same time and the time each takes is masked behind another.
By the way, if your thousand files amount to 5MB, then they are around 5kB each, rather than the 20kB to 350kB you say. | 0 | 5,698 | 0 | 8 | 2013-12-07T12:17:00.000 | python,csv,python-3.x,urllib | Fastest way to download thousand files using python? | 1 | 2 | 4 | 20,441,497 | 0 |
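A minimal Python 3 sketch of that parallel approach (the URL pattern and worker count are placeholders to tune):

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlretrieve

urls = ["http://example.com/data/file{}.csv".format(i) for i in range(1000)]  # placeholders

def fetch(url):
    # save each file under its own name in the current directory
    urlretrieve(url, url.rsplit("/", 1)[-1])

# a pool of workers hides most of the per-request latency; tune max_workers to taste
with ThreadPoolExecutor(max_workers=20) as pool:
    pool.map(fetch, urls)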
0 | 0 | For example, if I have my minecraft server running on port 25565, I want to have a python script close the port so that all connections will be dropped and no further connections will be made without having to shutdown the service.
I have tried binding a new socket to the same port number and then closing the socket, but it does not have any effect on the server.
I am working in Python 3.3. | false | 20,447,826 | 0 | 0 | 0 | 0 | Use a firewall for example?
On linux there is the iptables. It's easy to use and powerful. | 0 | 148 | 0 | 0 | 2013-12-07T23:08:00.000 | python,sockets,python-3.x,port | How do I force close a port being used by an independent service? | 1 | 1 | 1 | 20,447,877 | 0 |
0 | 0 | Is there an easy way to get the number of followers an account has without having to loop through cursors? I'm looking for a simple function call which will return to me just the number of followers the use with a given Twitter ID has.
Just looking for the physical number not access to anything else | true | 20,465,269 | 1.2 | 1 | 0 | 4 | What I ended up doing was .show_user(user_id=twitter_id) which returns (among other things) the followers count via ['followers_count'] | 0 | 907 | 0 | 0 | 2013-12-09T07:23:00.000 | python,twitter,twython | Twython get followers count | 1 | 1 | 2 | 20,511,726 | 0 |
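Put together, a minimal sketch of that call (the credential variables are placeholders you would fill in):

from twython import Twython

twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)  # your credentials
user = twitter.show_user(user_id=twitter_id)
print(user['followers_count'])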
0 | 0 | I am writing an application in Python that involves socket programming. I have understood that it's better to use the non-blocking sockets, and thus write an event-driven server. I am not sure as to why and how I should prefer one of these two methods that I want to use: select() and poll() for checking activity in any of the sockets. Could anyone help me out if there's anything in either of these methods that makes it a better choice that the other?
I mean, why would I choose one over the other? | false | 20,466,573 | 0.099668 | 0 | 0 | 1 | Note that select with a timeout of 0 is basically the same as a poll but the biggest problem for cross system programming is that the support for both select and poll is mixed and inconsistent - personally I tend to opt for a blocking listener in a separate thread that once a complete frame, message, etc., has been received raises an event with the data attached - this seems to work well cross systems. | 0 | 805 | 0 | 0 | 2013-12-09T08:51:00.000 | python,sockets,networking,select,network-programming | How can I choose between select.select() and select.poll() methods in the select module in Python? | 1 | 1 | 2 | 20,466,700 | 0 |
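For reference, the two calls look like this on an already-created listening socket sock (just a sketch; handle() stands in for your own code):

import select

# select.select: portable, but you rebuild the fd lists on every call
readable, _, _ = select.select([sock], [], [], 1.0)  # 1 second timeout
for s in readable:
    handle(s)

# select.poll: register once, then poll repeatedly (not available on Windows)
poller = select.poll()
poller.register(sock, select.POLLIN)
for fd, event in poller.poll(1000):  # timeout in milliseconds
    handle(sock)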
0 | 0 | as we know, python has two built-in url lib:
urllib
urllib2
and a third-party lib:
urllib3
if my requirement is only to request an API with the GET method (assume it returns a JSON string),
which lib should I use? Do they have some duplicated functions?
If urllib can cover my needs now, but later my requirements get more and more complicated and urllib no longer fits, I would have to import another lib at that point. But I really want to import only one lib, because I think importing all of them would confuse me; their methods seem totally different.
so now I am confused which lib I should use, I prefer urllib3, I think it can fit my requirement all time, how do you think? | false | 20,467,107 | 0.066568 | 0 | 0 | 1 | Personally I avoid using third-party libraries when possible, so I can reduce the dependency list and improve portability.
urllib and urllib2 are not mutually exclusive and are often mixed in the same project. | 1 | 22,979 | 0 | 9 | 2013-12-09T09:23:00.000 | python,urllib2,urllib,urllib3 | Which urllib I should choose? | 1 | 2 | 3 | 20,467,364 | 0 |
0 | 0 | as we know, python has two built-in url lib:
urllib
urllib2
and a third-party lib:
urllib3
if my requirement is only to request an API with the GET method (assume it returns a JSON string),
which lib should I use? Do they have some duplicated functions?
If urllib can cover my needs now, but later my requirements get more and more complicated and urllib no longer fits, I would have to import another lib at that point. But I really want to import only one lib, because I think importing all of them would confuse me; their methods seem totally different.
so now I am confused which lib I should use, I prefer urllib3, I think it can fit my requirement all time, how do you think? | true | 20,467,107 | 1.2 | 0 | 0 | 10 | As Alexander says in the comments, use requests. That's all you need. | 1 | 22,979 | 0 | 9 | 2013-12-09T09:23:00.000 | python,urllib2,urllib,urllib3 | Which urllib I should choose? | 1 | 2 | 3 | 20,467,287 | 0 |
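For the simple "GET an API and parse the JSON" case in the question above, requests makes it very short (the endpoint is a placeholder):

import requests

resp = requests.get("https://api.example.com/items")  # placeholder endpoint
resp.raise_for_status()
data = resp.json()  # parsed JSON as Python objects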
0 | 0 | I'm having problem with data type in python.
I run system.methodSignature('SearchInfo'), it returns [['array', 'struct']].
What should I put as the argument in SearchInfo()?
And what is struct data type in xml-rpc?
Please help. | true | 20,490,415 | 1.2 | 0 | 0 | 2 | Found out the way for <struct> data type for xmlrpc.
The <struct> data type is like a dict type.
For my case, it's actually SearchInfo({'id' : 12345}).
Good luck to whoever needing this information (: | 0 | 617 | 0 | 1 | 2013-12-10T09:20:00.000 | python,xml,xml-rpc | Python XML-RPC data type | 1 | 1 | 1 | 20,534,267 | 0 |
0 | 0 | As the title says, I'm looking for information about the best library to send HTTP requests in Python really fast.
Do you know which one is the fastest and/or consume less CPU time/memory ?
urllib2
httplib2
requests
Thanks | false | 20,595,525 | 0 | 1 | 0 | 0 | Sending HTTP request is pretty simple, I don't think it could be a block issue for most real world applications.
If you really want to send request very fast, you can consider to use multi processes, not waste your time on choosing a faster library(Which likely to be helpless). | 0 | 1,475 | 0 | 0 | 2013-12-15T14:20:00.000 | python,http | Best performance HTTP library in Python | 1 | 2 | 3 | 20,595,734 | 0 |
0 | 0 | As the title says, I'm looking for information about the best library to send HTTP requests in Python really fast.
Do you know which one is the fastest and/or consume less CPU time/memory ?
urllib2
httplib2
requests
Thanks | false | 20,595,525 | 0.132549 | 1 | 0 | 2 | urllib2 might be better for performance, but requests is much simpler to use. | 0 | 1,475 | 0 | 0 | 2013-12-15T14:20:00.000 | python,http | Best performance HTTP library in Python | 1 | 2 | 3 | 20,595,601 | 0 |
1 | 0 | I'm building a web app with google app engine and python. I've read that html5 geolocation is much more precise than IP geolocation, but is that precise enough to pinpoint buildings? Or is building my own map with customized coordinates a better option? | false | 20,619,533 | 0.197375 | 0 | 0 | 1 | How precise HTML5 geolocation is depends entirely on what the user's browser supports.
On a phone, it may have access to the phone's idea of the user's location (based on GPS plus cell and WiFi triangulation); on a desktop machine, there's little to go on besides IPs, so it can't do any better than you could do yourself.
But either way, the user may have disabled or limited it (or it may be disabled or limited by default for him). Or may be using a browser that doesn't support HTML5 locations. Or may be using an add-on that fuzzes or flat-out lies about location.
So:
is that precise enough to pinpoint buildings?
It can be.
Or is building my own map with customized coordinates a better option?
How would that help? If you don't know the coordinates the user is at, you have nothing to look up on the map. | 0 | 213 | 0 | 0 | 2013-12-16T19:44:00.000 | python,html,google-app-engine,geolocation,gps | I want to locate the specific building the user is in. Should I use html5 geolocation or build my own custom map? | 1 | 1 | 1 | 20,619,667 | 0 |
1 | 0 | Some characters, such as ordinal 22 or 8, don't display in html (using chrome, for example when copy-and-pasting them into this 'Ask question' editor; I am assuming utf-8). How do I determine which characters are valid html, and of the valid, which are rendered?
A table/reference would be helpful (I couldn't find one by google-ing), but preferably I need a set of rules or a solution that can be implemented in python. | false | 20,628,825 | 0 | 0 | 0 | 0 | What is a valid character in HTML depends on your definition for “HTML” and “valid”. Different HTML versions have different rules for formally valid characters, and they may have characters that are valid but not recommended. Moreover, there are general policies such as favoring Normalization Form C; though not part of HTML specifications, such policies are often regarded as relevant to HTML, too.
What is rendered (and how) depends on the browser, the style sheets of the HTML document, and available fonts in the user’s computer. Moreover, not all characters are rendered as such. For example, in normal HTML content, any contiguous sequence of whitespace characters is treated as equivalent to a single space character.
So the answer is really “It depends.” Consider asking a much more targeted practical question to get a more targeted answer. | 0 | 120 | 0 | 1 | 2013-12-17T07:58:00.000 | python,html | How to I determine whether a character is valid-for/rendered-in html? | 1 | 1 | 2 | 20,629,430 | 0 |
1 | 0 | I have an object, Object_X, which loads all the data from the DB.
This object has a few methods defined. I pass some parameter, and based on that parameter I call one of the functions in Object_X; it uses the pre-populated data in the object and the parameter to get some result.
I have created a web service which invokes any method defined in Object_X and returns a result.
My problem is that for every request I am loading all the data from db again and again which is time consuming. Is there a way that I can load the data one time when I start a server and uses the same object for each subsequent request? | false | 20,673,260 | 0 | 0 | 0 | 0 | Assuming your application is served by a long running process (as opposed to something like plain CGI), AND your database is never updated, you always have the option to instantiate your Object_X at the module's top-level (making it a global), so it will only be created once per process. Now it means that you'll have some overhead on process start, and that you'll have to restart all your processes everytime your db is updated.
The real question IMHO is why do you want to load all your db right from the start ? If that's for "optimization" then there might be better tools and strategies (faster db, caches etc). | 0 | 56 | 0 | 3 | 2013-12-19T04:07:00.000 | python,python-2.7,web.py | Prevent creating of expensive object on each request | 1 | 1 | 2 | 20,676,165 | 0 |
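A tiny sketch of the module-level option the answer above describes (ObjectX, its loader, and the handler shape are stand-ins for your own code, not a specific web.py API):

# loaded once per process, at import time
OBJECT_X = ObjectX.load_from_db()   # stand-in for however you build the expensive object

class Handler(object):
    def GET(self, method_name):
        # every request reuses the same pre-loaded object
        return getattr(OBJECT_X, method_name)()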
0 | 0 | I am just curious as I notice when the sphero is blinking while it is idling waiting for a connection, it is much brighter than when I set the colour using the setRGB functionality. Am I missing something to adjust the brightness as well. I can't seem to find anything in the documentation. | false | 20,692,146 | 0 | 0 | 0 | 0 | User3120784, in certain cases when you are using Sphero with the leveling system enabled, this can be seen. I personally just had an issue where the ball would do the brightness discrepancy that you are describing here, and the solution was to just check and then uncheck "Level up" in the advanced settings in the Sphero app.
To address your question in your comment, the difference is that when the Sphero is idling, it is being controlled solely by firmware, whereas the setRGB of the API is using application level code, and telling the firmware to set a brightness, which it may or may not do depending on the "Level up" setting that I mentioned earlier. | 0 | 354 | 0 | 1 | 2013-12-19T21:35:00.000 | python,sphero-api | How to control LED max brightness using Sphero API | 1 | 1 | 1 | 21,339,422 | 0 |
1 | 0 | Instead of "rss", I want to add a global variable to it. So that I don't have to change it again and again.
sel.select('//a[contains(@href, "rss")]/@href').extract() to something like this:
sel.select('//a[contains(@href, url_type)]/@href').extract() | true | 20,721,145 | 1.2 | 0 | 0 | 1 | Use str.format to insert variable value into xpath expression:
sel.select('//a[contains(@href, "{0}")]/@href'.format(url_type)).extract() | 0 | 145 | 0 | 0 | 2013-12-21T16:35:00.000 | python,scrapy | Add variable in HtmlXPathSelector select function | 1 | 1 | 1 | 20,721,196 | 0 |
1 | 0 | I play on chess.com and I'd like to download a history of my games. Unfortunately, they don't make it easy: I can access 100 pages of 50 games one at a time, click "Select All" and "Download" and then they e-mail it to me.
Is there a way to write a script, in python or another language, that helps me automate any part of the process? Something that simulates clicking a link? Is Capybara useful for things like this outside of unit testing? Selenium?
I don't have much experience with web development yet. Thanks for your help! | true | 20,749,102 | 1.2 | 0 | 0 | 1 | You may want to check out CasperJS. I use Python to fire CasperJS scripts to do web scraping and return data to Python to parse further or store to a database etc...
Python itself has BeautifulSoup and Mechanize but the combination is not great with Ajax based sites.
Python and CasperJS is perfect. | 0 | 406 | 0 | 2 | 2013-12-23T18:25:00.000 | python,selenium,automation,web-scraping,capybara | Automating web tasks? | 1 | 1 | 3 | 20,749,267 | 0 |
0 | 0 | As far as I understand the basics of the client-server model, generally only client may initiate requests; server responds to them. Now I've run into a system where the server sends asynchronous messages back to the client via the same persistent TCP connection whenever it wants. So, a couple of questions:
Is it a right thing to do at all? It seems to really overcomplicate implementation of a client.
Are there any nice patterns/methodologies I could use to implement a client for such a system in Python? Changing the server is not an option.
Obviously, the client has to watch both the local request queue (i.e. requests to be sent to the server), and the incoming messages from the server. Launching two threads (Rx and Tx) per connection does not feel right to me. Using select() is a major PITA here. Do I miss something? | false | 20,774,949 | 0 | 0 | 0 | 0 | I believe what you are trying to achieve is a bit similar to jsonp. While sending to the client, send through a callback method which you know of, that is existing in client.
For example, if you are sending "some data xyz", send it as server.send("callback('some data xyz')");. This suggestion comes from javascript, because there the returned code is executed as if it were called through that method, and I believe you can port this idea to python with some difficulty. I am not completely sure, though.
0 | 0 | As far as I understand the basics of the client-server model, generally only client may initiate requests; server responds to them. Now I've run into a system where the server sends asynchronous messages back to the client via the same persistent TCP connection whenever it wants. So, a couple of questions:
Is it a right thing to do at all? It seems to really overcomplicate implementation of a client.
Are there any nice patterns/methodologies I could use to implement a client for such a system in Python? Changing the server is not an option.
Obviously, the client has to watch both the local request queue (i.e. requests to be sent to the server), and the incoming messages from the server. Launching two threads (Rx and Tx) per connection does not feel right to me. Using select() is a major PITA here. Do I miss something? | false | 20,774,949 | 0 | 0 | 0 | 0 | Yes this is very normal and Server can also send the messages to client after connection is made like in case of telnet server when you initiate a connection it sends you a message for the capability exchange and after that it asks you about your username & password.
You could very well use select() or if I were in your shoes I would have spawned a separate thread to receive the asynchronous messages from the server & would have left the main thread free to do further processing. | 0 | 530 | 0 | 0 | 2013-12-25T16:53:00.000 | python,sockets,networking,tcp,client-server | Is it OK to send asynchronous notifications from server to client via the same TCP connection? | 1 | 2 | 3 | 20,775,573 | 0 |
0 | 0 | I wrote an auth module for FreeRADIUS with Python.
I want to manage a NAS with it.
Is there way except generating a client.conf file and restarting? | false | 20,779,360 | 0.197375 | 1 | 0 | 1 | Not directly no. You can use sqlite if you want an easily modifiable local data store for clients definitions. | 0 | 961 | 0 | 0 | 2013-12-26T04:58:00.000 | python,radius,freeradius | Can I define RADIUS clients with Python? | 1 | 1 | 1 | 20,792,780 | 0 |
1 | 0 | So I have run into the issue of getting data from Google Finance. They have an html access system that you can use to access webpages that give stock data in simple text format (ideal for minimizing parsing). However, if you access this service too frequently, Google locks you out and you need to enter a captcha. I currently have a list of about 50 stocks and I want to update my price data every 15 seconds, but I soon get locked out (after about 3-4 minutes).
Does anyone have any solutions to this/understand the nature of how often is the max I could ping Google for this information?
Not sure why a feature like this would be on a service designed to give data like this... but similar alternative services with realtime data would also be accepted. | true | 20,831,821 | 1.2 | 0 | 0 | 0 | Yahoo YQL works fairly well, but throws numerous HTTP 500 errors that need to be handled, they are all benign. TradeKing is an option, however, the oauth2 package is required and that is very difficult to install properly | 0 | 447 | 0 | 0 | 2013-12-30T00:43:00.000 | python,captcha,bots,google-finance | Google Finance Lock Out - Robot | 1 | 1 | 2 | 20,929,338 | 0 |
1 | 0 | There is a Python interpreter in naclports (to run as Google Chrome Native Client App).
Are there any examples for bundling the interpreter with a custom Python application and how to integrate this application with a HTML page? | true | 20,854,222 | 1.2 | 0 | 0 | 2 | The interpreter is currently the only python example in naclports. However, it should be possible to link libpython into any nacl binary, and use it just as you would embed python in any other C/C++ application. A couple of caveats: you must initialize nacl_io before making any python calls, and as you should not make python calls on the main (PPAPI) thread.
In terms of interacting with the HTML page, as with all NaCl applications this must be done by sending messages back and forth between native and javascript code using PostMessage(). There is no way to directly access the HTML or JavaScript from native code. | 0 | 779 | 0 | 1 | 2013-12-31T08:24:00.000 | python,google-nativeclient | Practical use of Python as Chrome Native Client | 1 | 1 | 1 | 20,912,809 | 0 |
1 | 0 | I have a need for my client(s) to send data to my app engine application that should go something like this:
Client --> Server (This is the data that I have)
Server --> Client (Based on what you've just given me, this is what I'm going to need)
Client --> Server (Here's the data that you need)
I don't have much experience working with REST interfaces, but it seems that GET and POST are not entirely appropriate here. I'm assuming that the client needs to establish some kind of persistent connection with the server so they can both have a proper "conversation". My understanding is that sockets are reserved for paid apps, and I'd like to keep this on the free tier. However, I'm not sure of how to go about this. Is it the Channel API I should be using? I'm a bit confused by the documentation.
The app engine app is Python, as is the client. The solution that I'm leaning towards right now is that the client does a POST to the server (here's what I have), and subsequently does a GET (tell me what you need) and lastly does a POST (here's the data you wanted). But it seems messy.
Can anyone point me in the right direction please?
EDIT:
I didn't realize that you could get the POST response with Pythons urllib using the 'read' function of the object returned by urlopen. That makes things a lot nicer, but if anyone has any other suggestions I'd be glad to hear them. | true | 20,872,804 | 1.2 | 0 | 0 | 3 | What you suggest is the right way. 1&2 is a single post. Then you post again to the server. | 0 | 98 | 1 | 0 | 2014-01-01T20:16:00.000 | python,google-app-engine,http,rest,client-server | Establishing connection between client and Google App Engine server | 1 | 1 | 1 | 20,874,529 | 0 |
0 | 0 | There are 10 processes sharing a socket in my application.
They all wait for it to become readable using select.
But I notice in the application log that only 2 of these 10 processes are notified any time the socket becomes readable.
What could be the reason? | true | 20,878,718 | 1.2 | 0 | 0 | 3 | I suspect what's happening is that the first process is waking up, returning from select(), and calling accept() before the subsequent context switch to the other processes can occur.
I'm not sure what select() actually blocks on or how it wakes up. I suspect when it does wake up from it's waiting on, it re-checks the queue to see if data is still available. If not, it goes back to waiting.
I'll double-down on my hypothesis as well. The fact that 2 processes are waking up is indicative of the fact that you have a dual-core processor. If you had a quad-core, you might see up to 4 processes wake up simultaneously.
One simple way to prove this theory: Put a 2 second sleep() call just prior to calling accept(). I suspect you'll see all 10 processes waking up and logging an attempt to call accept.
If your goal is to have N processes (or threads) servicing incoming connections, your approach is probably still good. You could probably switch from doing a select() call on a non-blocking socket, to just using a blocking socket that calls accept() directly. When an incoming connection comes in, one of the processes will return from accept() with a valid client socket handle. The others will still remain blocked. | 0 | 92 | 0 | 1 | 2014-01-02T08:02:00.000 | python,linux,sockets,select | Why is only 2 of 10 selecting process notified when a socket become readable? | 1 | 1 | 1 | 20,883,107 | 0 |
1 | 0 | I want to execute my python code on the client side, even though there might be security problems.
How can I write it, including importing modules and all?
I have tried using pyjs to convert the code below to JS,
import socket
print socket.gethostbyname_ex(socket.gethostname())[2][0]
but I could not find out how to do it.
Please help me understand how I can convert this to JS, how to write other python scripts like it, and how to import modules in HTML. | false | 20,900,530 | 0.197375 | 0 | 0 | 1 | There are more than just security problems. It's just not possible. You can't use the Python socket library inside the client browser. You can convert Python code to JS (probably badly), but you can't use a C based library that is probably not present on the client. You can access the browser only. You cannot reliably get the hostname of the client PC. Maybe ask another question talking about what you are trying to achieve and someone might be able to help | 0 | 152 | 0 | 0 | 2014-01-03T09:36:00.000 | javascript,html,python-2.7,pyjamas | How to Write Python Script in html | 1 | 1 | 1 | 20,901,453 | 0
1 | 0 | I'm working on a webscraping project, and I am running into problems with cloudflare scrapeshield. Does anyone know how to get around it? I'm using selenium webdriver, which is getting redirected to some lightspeed page by scrapeshield. Built with python on top of firefox. Browsing normally does not cause it to redirect. Is there something that webdriver does differently from a regular browser? | false | 20,931,426 | 0.197375 | 0 | 0 | 1 | See, what scrapeshield does is checking if you are using a real browser, it's essentially checking your browser for certain bugs in them. Let's say that Chrome can't process an IFrame if there is a 303 error in the line at the same time, certain web browser react differently to different tests, so webdriver must not react to these causing the system to say "We got an intruder, change the page!". I might be correct, not 100% sure though...
More Info on source:
I found most of this information on a Defcon talk about web sniffers and preventing them from getting the proper vulnerability information on the server, he made a web browser identifier in PHP too. | 0 | 4,617 | 0 | 7 | 2014-01-05T08:04:00.000 | python,selenium,web-scraping,cloudflare | Bypassing Cloudflare Scrapeshield | 1 | 1 | 1 | 23,142,928 | 0 |
1 | 0 | I am currently using selenium webdriver to parse through facebook user friends page and extract all ids from the AJAX script. But I need to scroll down to get all the friends. How can I scroll down in Selenium. I am using python. | false | 20,986,631 | 0.019045 | 0 | 0 | 2 | insert this line driver.execute_script("window.scrollBy(0,925)", "") | 0 | 415,578 | 0 | 209 | 2014-01-08T03:44:00.000 | python,selenium,selenium-webdriver,automated-tests | How can I scroll a web page using selenium webdriver in python? | 1 | 1 | 21 | 65,731,313 | 0 |
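To keep loading content until the page stops growing (the usual pattern for AJAX pages like this), a sketch along these lines, assuming driver is your existing WebDriver instance; the 2-second pause is an arbitrary choice:

import time

last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give the AJAX content time to load
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # no more content was loaded
    last_height = new_height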
1 | 0 | I am quite surprised I couldn't find anything on this in my Google searching.
I'm using TwistedWeb to make a simple JSON HTTP API. I'd like to customize the 404 page so it returns something in JSON rather than the default HTML. How might I do this? | true | 20,987,496 | 1.2 | 0 | 0 | 1 | There is no API in Twisted Web like something.set404(someResource). A NOT FOUND response is generated as the default when resource traversal reaches a point where the next child does not exist - as indicated by the next IResource.getChildWithDefault call. Depending on how your application is structured, this means you may want to have your own base class implementing IResource which creates your custom NOT FOUND resource for all of its subclasses (or, better, make a wrapper since composition is better than inheritance).
If you read the implementation of twisted.web.resource.Resource.getChild you'll see where the default NOT FOUND behavior comes from and maybe get an idea of how to create your own similar behavior with different content. | 0 | 396 | 0 | 4 | 2014-01-08T05:12:00.000 | python,twisted,twisted.web | TwistedWeb: Custom 404 Not Found pages | 1 | 1 | 1 | 20,999,393 | 0 |
1 | 0 | I'm new to working with html pages on python.
I'm trying to run the BBC site offline from my PC, and I wrote a python code for that.
I've already made functions that download all html pages on the site, by going through the links found on homepage (with regex).
I have all links on a local directory, but they are all called sub0,sub1,sub2.
How can I edit the homepage so it would direct all links to the html pages on my directory instead of the pages online?
again, the pages aren't called in their original name-
so replacing the domain with a local directory won't work.
I need a way to go through all links on main page and change their whole path. | false | 21,000,038 | 0.197375 | 0 | 0 | 1 | I think the best way to do this would be to create some sort of mapping file. The file would map the original URL on the BBC site => the path to the file on your machine. You could generate this file very easily during the process when you are scraping the links from the homepage. Then, when you want to crawl this site offline you can simply iterate over this document and visit the local file paths. Alternatively you could crawl over the original homepage and do a search for the links in the mapping file and find out what file they lead to.
There are some clear downsides to this approach, the most obvious being that changing the directory structure/filenames of the downloaded pages will break your crawl... | 0 | 162 | 0 | 1 | 2014-01-08T15:39:00.000 | python,html,regex,web-crawler | Trying to download html pages to create a very simple web crawler | 1 | 1 | 1 | 21,000,334 | 0 |
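A rough sketch of that rewrite step, assuming the mapping is kept as a plain dict of original URL to local filename (the names are made up):

# url_to_file was filled in while downloading, e.g. {'http://www.bbc.co.uk/news': 'sub0.html'}
def rewrite_links(html, url_to_file):
    for url, local_name in url_to_file.items():
        html = html.replace('href="%s"' % url, 'href="%s"' % local_name)
    return html

with open('homepage.html') as f:
    page = f.read()
with open('homepage_offline.html', 'w') as f:
    f.write(rewrite_links(page, url_to_file))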
1 | 0 | As the title suggests, I am using the s3cmd tool to upload/download files on Amazon.
However I have to use Windows Server and bring in some sort of progress reporting.
The problem is that on windows, s3cmd gives me the following error:
ERROR: Option --progress is not yet supported on MS Windows platform. Assuming --no-progress.
Now, I need this --progress option.
Are there any workarounds for that? Or maybe some other tool?
Thanks. | true | 21,017,853 | 1.2 | 0 | 1 | 2 | OK, I have found a decent workaround to that:
Just navigate to C:\Python27\Scripts\s3cmd and comment out lines 1837-1845.
This way we can essentially skip a windows check and print progress on the cmd.
However, since it works normally, I have no clue why the authors put it there in the first place.
Cheers. | 0 | 701 | 0 | 1 | 2014-01-09T10:38:00.000 | python,windows,progress-bar,progress,s3cmd | s3cmd tool on Windows server with progress support | 1 | 1 | 2 | 21,165,278 | 0 |
0 | 0 | Is there a way to urlencode/urldecode a string in Python? I want to emphasize that I want a single string, not a dict. I know of the function that does that to a dict. I could easily construct out of it a function that does that for a string, but I'd rather not have to construct my own function. Does Python have this functionality built-in?
(Answers for both Python 2.7 and 3.3 would be appreciated.) | false | 21,041,787 | 0.197375 | 0 | 0 | 2 | urllib.unquote will do the trick | 1 | 4,761 | 0 | 8 | 2014-01-10T10:24:00.000 | python,urlencode | URL-encoding and -decoding a string in Python | 1 | 1 | 2 | 21,041,834 | 0 |
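Concretely, for a single string (both variants are in the standard library):

# Python 2.7
import urllib
urllib.quote('a string with spaces & symbols')    # -> 'a%20string%20with%20spaces%20%26%20symbols'
urllib.unquote('a%20string%20with%20spaces')      # -> 'a string with spaces'

# Python 3.3+
from urllib.parse import quote, unquote
quote('a string with spaces & symbols')
unquote('a%20string%20with%20spaces')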
1 | 0 | I've written a Spider which has one start_url. The parse method of my spider scraps some data and returns a list of FormRequests.
The problem comes with the response of that post request. It redirects me to another site with some irrelevant GET Parameters. The only parameter which seems to matter is a SESSION_ID posted along in the header. Unfortunately Scrapys behavior is to execute my requests, one after another and queues the redirect response at the end of the queue. If all returned FormRequests are executed, scrapy starts to execute all redirects, which all return the same site.
How can I circumvent this behavior, so that a FormRequest is executed, and the redirect returned in the request's response is executed before any new FormRequest? Maybe there is another way, like somehow forcing the site to issue a new SESSION_ID cookie for each FormRequest. I'm open to any idea that could solve the problem. | true | 21,064,467 | 1.2 | 0 | 0 | 0 | I found my own solution to this problem. Instead of building a list of requests and returning them all at once, I build a chain of them and pass the next one inside each request's meta data.
Inside the callback I pass either the next request, storing the parsed item in a spider member, or the parsed list of items if there is no next request to execute. | 0 | 302 | 0 | 0 | 2014-01-11T16:01:00.000 | redirect,python-2.7,scrapy,http-post | Handle Redirects one by one with scrapy | 1 | 1 | 1 | 21,065,555 | 0 |
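In Scrapy terms, the chain described above looks roughly like this (build_form_requests and extract_item are placeholders for the spider's own code, and each FormRequest is assumed to be built with callback=self.parse_form_response):

def parse(self, response):
    requests = self.build_form_requests(response)   # your list of FormRequests
    first, rest = requests[0], requests[1:]
    first.meta['pending'] = rest
    first.meta['items'] = []
    yield first

def parse_form_response(self, response):
    response.meta['items'].append(self.extract_item(response))
    pending = response.meta['pending']
    if pending:
        nxt = pending.pop(0)
        nxt.meta['pending'] = pending
        nxt.meta['items'] = response.meta['items']
        yield nxt                     # only now does the next FormRequest go out
    else:
        for item in response.meta['items']:
            yield item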
0 | 0 | Software like CCProxy in windows allows you to setup a cascading proxy.
In squid we can do the same by mentioning a cache_peer directive?
How does this work at application and TCP/IP layer ? Does it form a socket connection to the upstream proxy server ? Any details or RFCs in relation to this?
PS - I want to implement it in Python for some testing purposes. | true | 21,106,004 | 1.2 | 0 | 0 | 1 | Cascading proxy is just the proxy connecting to an upstream proxy. It speaks the same HTTP proxy requests to the upstream proxy as a browser does, e.g. using full urls (method://host[:port]/path..) in the requests instead of just /path and using CONNECT for https tunneling instead of directly connecting with SSL. | 0 | 1,271 | 1 | 0 | 2014-01-14T04:17:00.000 | python,networking,proxy,network-programming,squid | How does a cascading http proxy server works? | 1 | 1 | 1 | 21,121,694 | 0 |
0 | 0 | I've been trying to figure out a way to send commands to a chrome extension via python. My goal is to automate browser functions such as opening a new tab or reloading a page remotely, but on the same computer. What would be the best/simplest way to do this? | false | 21,153,521 | 0 | 0 | 0 | 0 | Try with a tool selenium, you only need add a driver for your web browser. | 0 | 675 | 0 | 1 | 2014-01-16T04:48:00.000 | python,google-chrome | How can I send commands to Chrome extension via a local python application? | 1 | 1 | 2 | 51,508,462 | 0 |
0 | 0 | I have a problem when I try to get (https://XXXXX.jpg)
I'm using this format: (https://.*.jpg) However it doesn't find what I want.
It returns, for example, (https://XXXXX.jpg <## Heading ##div> bla bla bla </div> bla bla https://XYZ .jpg)
Startswith https, endswith jpg.
What should I do? | false | 21,160,593 | 0 | 0 | 0 | 0 | Split the whole string, then use a loop to iterate over the split items, each time checking with startswith("http") and endswith(".jpg").
0 | 0 | We're using Selenium's Python bindings at work. Occasionally I forget to put the call to WebDriver.quit() in a finally clause, or the tear down for a test. Something bad happens, an exception is thrown, and the session is abandoned and stuck as "in use" on the grid.
How can I quit those sessions and return them to being available for use without restarting the grid server? | false | 21,168,690 | 0 | 0 | 0 | 0 | You can restart the node instead of the server. | 0 | 288 | 0 | 0 | 2014-01-16T17:26:00.000 | python,selenium,selenium-grid2 | how do I quit a web driver session after the code has finished executing? | 1 | 1 | 2 | 21,194,979 | 0 |
0 | 0 | How about using CSS selector in Selenium Python if I am not getting id or name or class of that HTML element ? How about preferring CSS in comparison to XPath? | false | 21,172,756 | 0.761594 | 0 | 0 | 5 | No idea what you are trying to ask here. I can only take a guess.
How about using css selector in Selenium Python if I am not getting id
or name or class of that html element ?
If you are testing a complex web application, you have to learn CSS Selector and/or XPath. Yes, other locating methods are somewhat limited.
How about preferring CSS in comparison to xpath?
Generally speaking, CSS Selectors are always in favor of XPath, because
CSS Selectors are more elegant, more readable
CSS Selectors are faster
XPath engines are different in each browser
IE does not have a native xpath engine
However, there are situations XPath is the only way to go. For example
Find element by its text
Find element from its descendants (if there are no other better methods)
Few other rare situations | 0 | 712 | 0 | 0 | 2014-01-16T20:59:00.000 | python-2.7,selenium,xpath,selenium-webdriver,css-selectors | CSS selector and XPath in Selenium Python | 1 | 1 | 1 | 21,173,350 | 0 |
0 | 0 | Does selenium automatically follow redirects? Because it seems that the webdriver isn't loading the page I requested.
And when it does automatically follow the redirects, is there any possibility to prevent this?
ceddy | false | 21,173,816 | 0.197375 | 0 | 0 | 1 | No, Selenium drives the browser like a regular user would, which means redirects are followed when requested by the web application via either a 30X HTTP status or when triggered by javascript.
I suggest you consider a legitimate bug in the application if you consider it problematic when it happens to users. | 0 | 3,114 | 0 | 2 | 2014-01-16T22:01:00.000 | python,selenium,selenium-webdriver | Selenium prevent redirect | 1 | 1 | 1 | 21,173,887 | 0 |
0 | 0 | I have a python program that takes pictures and I am wondering how I would write a program that sends those pictures to a particular URL.
If it matters, I am running this on a Raspberry Pi.
(Please excuse my simplicity, I am very new to all this) | false | 21,200,565 | 0 | 1 | 0 | 0 | The requests library is the most supported and advanced way to do this. | 0 | 2,487 | 0 | 0 | 2014-01-18T05:46:00.000 | python,linux,curl | Curl Equivalent in Python | 1 | 1 | 4 | 21,940,288 | 0
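For the picture-upload case in the question, the rough requests equivalent of curl -F (the URL and form field name are placeholders):

import requests

with open('picture.jpg', 'rb') as f:
    resp = requests.post('http://example.com/upload',   # placeholder URL
                         files={'file': f})             # multipart/form-data upload
print(resp.status_code)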
1 | 0 | I am building a small program with Python, and I would like to have a GUI for some configuration stuff. Now I have started with a BaseHTTPServer, and I am implementing a BaseHTTPRequestHandler to handle GET and POST requests. But I am wondering what would be best practice for the following problem.
I have two separate requests that result in very similar responses. That is, the two pages that I return have a lot of html in common. I could create a template html page that I retrieve when either of these requests is done and fill in the missing pieces according to the specific request. But I feel like there should be a way where I could directly retrieve two separate html pages, for the two requests, but still have one template page so that I don't have to copy this.
I would like to know how I could best handle this, e.g. something scalable. Thanks! | true | 21,206,568 | 1.2 | 0 | 0 | 1 | This has nothing to do with BaseHTTPRequestHandler as its purpose is to serve HTML, how you generate the HTML is another topic.
You should use a templating tool; there are a lot available for Python, and I would suggest Mako or Jinja2. Then, in your code, just build the real HTML from the template and use it in your handler response. | 0 | 205 | 0 | 0 | 2014-01-18T16:14:00.000 | python,html,webserver | Use template html page with BaseHttpRequestHandler | 1 | 1 | 1 | 21,260,040 | 0
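A compressed sketch of that idea with Jinja2 on Python 2 (the page structure, paths, and port are invented):

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler  # Python 2 module names
from jinja2 import Template

# one shared template; each page only fills in the pieces that differ
PAGE = Template(u"<html><body><h1>{{ title }}</h1>{{ body }}</body></html>")

class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/general':
            html = PAGE.render(title=u"General settings", body=u"...")
        else:
            html = PAGE.render(title=u"Advanced settings", body=u"...")
        self.send_response(200)
        self.send_header('Content-Type', 'text/html; charset=utf-8')
        self.end_headers()
        self.wfile.write(html.encode('utf-8'))

HTTPServer(('', 8080), ConfigHandler).serve_forever()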