Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, 28 to 6.1k chars) | is_accepted (bool, 2 classes) | Q_Id (int64, 337 to 51.9M) | Score (float64, -1 to 1.2) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Users Score (int64, -8 to 412) | Answer (string, 14 to 7k chars) | Python Basics and Environment (int64, 0 to 1) | ViewCount (int64, 13 to 1.34M) | System Administration and DevOps (int64, 0 to 1) | Q_Score (int64, 0 to 1.53k) | CreationDate (string, 23 chars) | Tags (string, 6 to 90 chars) | Title (string, 15 to 149 chars) | Networking and APIs (int64, 1 to 1) | Available Count (int64, 1 to 12) | AnswerCount (int64, 1 to 28) | A_Id (int64, 635 to 72.5M) | GUI and Desktop Applications (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | So I want to implement a simple comms protocol where reads & writes are completely asynchronous. That means that client sends some data and then server may or may not respond with an answer. So I can't just call reader.read() because that blocks until at least something is returned. And I may have something more to send in the mean time.
So is there a way to check if reader has something to read?
(please note that I'm talking specifically about the streams version: I'm fully aware that the protocols version has separate handlers for reading and writing and does not suffer from this issue) | true | 26,755,660 | 1.2 | 0 | 0 | 2 | There is no way to ask the reader whether it has incoming data or not.
My suggestion is to create an asyncio.Task that reads data from the asyncio stream reader in a loop.
If you need to write data asynchronously, feel free to call StreamWriter.write() from any task that has some outgoing data.
I strongly don't recommend using protocols directly -- they are a low-level abstraction useful for flow control, but for application code it is better to use the high-level streams. | 1 | 719 | 0 | 2 | 2014-11-05T11:06:00.000 | python-3.x,python-asyncio | asyncio streams check if reader has data | 1 | 1 | 1 | 26,763,233 | 0 |
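A minimal sketch of the reader-task approach the accepted answer describes, using modern async/await syntax rather than the yield from style of 2014; the host, port, and payload are placeholders:

```python
import asyncio

async def read_loop(reader):
    # A dedicated task may block on read() without stalling anything else.
    while True:
        data = await reader.read(4096)
        if not data:          # EOF: the peer closed the connection
            break
        print('received', data)

async def client():
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888)
    # Reading runs as its own task; any other coroutine can keep writing.
    reader_task = asyncio.ensure_future(read_loop(reader))
    writer.write(b'hello')
    await writer.drain()
    await reader_task

asyncio.get_event_loop().run_until_complete(client())
```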
1 | 0 | I am trying to setup token based authentication in python. using Django/DRF. but this is more about http in general i think.
When users put in username/password I return to them their token via JSON.
The client then can post the token in HTTP Header for me to check.
My problem is I want the token to persist in the header automatically, just like cookies.
When the server says "set-cookie" to browser (Chrome/FF), the browser will automatically send up the cookie without me actually doing anything. Is there something I can do with this token?
I have tried storing it in header: "Authorization", but the browser didn't return it. Is there something like "Set-Authorization"?
thanks | false | 26,792,751 | 0.664037 | 0 | 0 | 4 | No, only cookies stored in browser persistently. All other headers are transient by HTTP protocol definition. | 0 | 364 | 0 | 4 | 2014-11-07T01:50:00.000 | python,django,http,authentication,cookies | How to get Authentication header to persist | 1 | 1 | 1 | 26,803,136 | 0 |
0 | 0 | I have created a simple chat program in python that allows many clients to connect to a single server. I now want to create a two server model, still with many clients, so that the clients will be able to connect to either server. Then when a client sends a message to server1 it will broadcast to all its connected clients and also send to server2, which will then broadcast to all its (server2's) connected clients. The part I am stuck on is the server to server communication. I would like to do this on my local LAN on the same subnet; I am not trying to do this across the internet so no need to worry about dns or other protocols.
Not sure if I need to do multi-threading here or just add to the while loop that maintains connects and sends/receives the data.
Any help here would be appreciated. | false | 26,813,652 | 0.197375 | 0 | 0 | 1 | I don't think that threading is an issue here. You can design a solution with or without it.
In short, your servers are not really different from your clients. They connect to other servers and send text/data to them. The only thing you have to handle specifically is the re-broadcasting of the clients chats.
This is particularly tricky and is subject to a lot of problems. You can check out how IRC handles it. You will face multiple problems which might be mitigated if you stay within a LAN. In IRC, all servers are rather equal so bringing a bunch of them down shouldn't affect the whole network (in reality this is different :p).
What if server1 broadcasts message1 with timestamp [5 secs] and then receives message2 from server2 with timestamp [2 secs]? For the clients, messages will appear out of order.
One thing you could do is to elect a master server for all other servers. This one will contain the master list and will manage the timestamps. All other servers will send their clients messages to it and will wait for a synchronization of the "master chat log", then broadcast the new data to all connected clients. Some messages might appear out of order for the clients if there is a lot of lag, but at least the timestamps will match and will be incremental | 0 | 379 | 0 | 1 | 2014-11-08T04:12:00.000 | python,sockets,chat,distributed | Distributed Local Chat Servers | 1 | 1 | 1 | 26,814,280 | 0 |
0 | 0 | let me explain the problem:
I have pyCharm, SublimeText, CodeRunner, and IPython Notebook. I've been exploring each one to see which i like best. Turns out to be PyCharm.
Here's the problem - CodeRunner recognizes the package for "Selenium", and gladly imports the module.
However, when I use pyCharm and iPython notebook, an import error occurs - which befuddles me. Why would it work for one IDE and not the another?
Also - i used "sudo pip install selenium" in the terminal. What exactly is the root of the problem? I feel like it has something to do with permissions, but am not knowledgeable of how to modify permissions for python packages.
Thanks. | false | 26,832,106 | 0 | 0 | 0 | 0 | In Pycharm you need to source the directory that contains the package Selenium.
In your project browser right-click a directory and select Mark directory as > source
eg:
/path/to/py/packages/Selenium/
You need to source the "packages" directory. | 1 | 220 | 0 | 0 | 2014-11-09T19:06:00.000 | python,selenium,ide,installation,package-managers | Import Error Python Varies by IDE | 1 | 1 | 1 | 26,832,348 | 0 |
0 | 0 | I've come to the realization where I need to change my design for a file synchronization program I am writing.
Currently, my program goes as follows:
1) client connects to server (and is verified)
2) if the client is verified, create a thread and begin a loop using the socket the client connected with
3) if a file on the client or server changes, send the change through that socket (using select for asynchronous communication)
My code sucks because I am torn between using one socket for file transfer or using a socket for each file transfer. Either case (in my opinion) will work, but for the first case I would have to create some sort of protocol to determine what bytes go where (some sort of header), and for the second case, I would have to create new sockets on a new thread (that do not need to be verified again), so that files can be sent on each thread without worrying about asynchronous transfer.
I would prefer to do the second option, so I'm investigating using SocketServer. Would this kind of problem be solved with SocketServer.ThreadingTCPServer and SocketServer.ThreadingMixIn? I'm having trouble thinking about it because I would assume SocketServer.ThreadingMixIn works for newly connected clients, unless I somehow have an "outer" socket server which servers "inner" socket servers? | true | 26,848,683 | 1.2 | 0 | 0 | 0 | SocketServer will work for you. You create one SocketServer per port you want to listen on. Your choice is whether you have one listener that handles the client/server connection plus per file connections (you'd need some sort of header to tell the difference) or two listeners that separate client/server connection and per file connections (you'd still need a header so that you knew which file was coming in).
Alternately, you could choose something like zeromq that provides a message transport for you. | 1 | 158 | 0 | 0 | 2014-11-10T16:44:00.000 | python,sockets,socketserver | Can I use the Python SocketServer class for this implementation? | 1 | 1 | 1 | 26,849,719 | 0 |
1 | 0 | We are thinking to move away from Django and separate the backend and frontend.
The backend is straight forward as I have done it plenty of times by exposing it as a Python RESTful API.
Whats new to me is the thin client part.
Theoretically I could just write HTML and plain javascript to talk to the API.
Is there a macro-framework that would help me to achieve that? Beside AngularJS, what other thin client frameworks could I utilise without reinventing the wheel? | false | 26,849,884 | 0.099668 | 0 | 0 | 1 | If I understand what you're trying to do, you might be looking for something like jQuery. It's a subtle JS framework that will make it easier to talk to your Django API, especially using Ajax and JSON. | 0 | 146 | 0 | 0 | 2014-11-10T17:54:00.000 | javascript,python,rest | How to move from Django to REST-API/Thin Client? | 1 | 1 | 2 | 26,850,033 | 0 |
1 | 0 | How to protect request information from 3rd-party Android apps on the server side.
Hello,
can anybody help with how to identify 3rd-party Android apps so we can protect access from them? | true | 26,859,236 | 1.2 | 0 | 0 | 1 | Assuming you will be using a REST-based setup, you might want to look into using SSL certificates and HTTPS for verification and signal protection.
For a simpler solution, use a pre-shared-key and put it in the header of the request.
With that said your setup will only be as safe as your key management. Encryption and information security is hard.
Good luck! | 0 | 39 | 0 | 0 | 2014-11-11T06:53:00.000 | android,python | how to protect request information from 3rd party android apps using python | 1 | 1 | 1 | 26,860,181 | 0 |
1 | 0 | Here's just about the simplest open and close you can do with webdriver and phantom:
from selenium import webdriver
crawler = webdriver.PhantomJS()
crawler.set_window_size(1024,768)
crawler.get('https://www.google.com/')
crawler.quit()
On windows (7), every time I run my code to test something out, new instances of the conhost.exe and phantomjs.exe processes begin and never quit. Am I doing something stupid here? I figured the processes would quit when the crawler.quit() did... | true | 26,868,492 | 1.2 | 0 | 0 | 0 | Go figure. Problem resolved with a reboot. | 0 | 809 | 0 | 1 | 2014-11-11T15:34:00.000 | python,selenium-webdriver,phantomjs | Selenium webdriver + PhantomJS processes not closing | 1 | 1 | 2 | 29,312,102 | 0 |
0 | 0 | I'm a newbie on selenium and I wrote a little python script using selenium to test some functionalities on our website.
But I notice something strange in my code. I have several functions for example :
Login to website
List item
Click on a link etc.
But, each time selenium hits the bottom of a function it closes the browser and I lose my session. This forces me to put all the test in just one function.
Is there a way to prevent this behavior? I'm using Selenium RC, not a WebDriver. | true | 26,889,799 | 1.2 | 0 | 0 | 1 | I figured it out :) It seems that when you remove the test_ prefix from the beginning of the function name, you can set up functions without the browser being closed.
0 | 0 | I'm writing a neat little async HTTP library for Python that is a ported (yet slightly more robust) version of the standard library's included HTTP utilities. I keep coming back to trying to figure out whether it is necessary to even write a Protocol subclass or just implement a StreamReader/StreamWriter.
I have done a good amount of reading through the PEPs and what not. I'm the kind of person who comes here as an absolute last resort because I hate asking unnecessary questions or preventing those who truly need help from receiving it. Thank you in advance for your exceptional advice and wisdom, friends :) | true | 26,897,876 | 1.2 | 0 | 0 | 0 | reader/writer pair should be enough.
While we use own implementation in aiohttp for some, (partially historic) reasons. | 1 | 137 | 0 | 2 | 2014-11-12T22:27:00.000 | python,http,asynchronous,stream,protocols | Are Python's AsyncIO protocols necessary for writing an async HTTP library? | 1 | 1 | 1 | 26,914,159 | 0 |
0 | 0 | I have a web crawling python script that takes hours to complete, and is infeasible to run in its entirety on my local machine. Is there a convenient way to deploy this to a simple web server? The script basically downloads webpages into text files. How would this be best accomplished?
Thanks! | false | 26,901,882 | -0.039979 | 1 | 0 | -1 | If you have a google e-mail account you have an access to google drive and utilities. Choose for colaboratory (or find it in more... options first). This "CoLab" is essentially your python notebook on google drive with full access to your files on your drive, also with access to your GitHub. So, in addition to your local stuff you can edit your GitHub scripts as well. | 0 | 26,999 | 0 | 23 | 2014-11-13T05:23:00.000 | python,cloud,web-crawler,virtual,server | What is the easiest way to run python scripts in a cloud server? | 1 | 2 | 5 | 65,263,186 | 0 |
0 | 0 | I have a web crawling python script that takes hours to complete, and is infeasible to run in its entirety on my local machine. Is there a convenient way to deploy this to a simple web server? The script basically downloads webpages into text files. How would this be best accomplished?
Thanks! | false | 26,901,882 | 0.039979 | 1 | 0 | 1 | In 2021, Replit.com makes it very easy to write and run Python in the cloud. | 0 | 26,999 | 0 | 23 | 2014-11-13T05:23:00.000 | python,cloud,web-crawler,virtual,server | What is the easiest way to run python scripts in a cloud server? | 1 | 2 | 5 | 67,290,539 | 0 |
0 | 0 | I'm developing a project in my university where I need to receive some data from a set of an Arduino, a sensor and a CuHead v1.0 Wifi-shield on the other side of the campus. I need to communicate thru sockets to transfer the data from sensor to the server but I need to ensure that no one will also send data thru this open socket. On the server side, I need to open the socket using Python 3 or 2.7. On the sensor side, the library of CuHead WiFi Shield lacks important functions to create secure connections. It is possible to ensure this level of security only on the server side? What suggestions do you have for me? | false | 26,909,777 | 0 | 1 | 0 | 0 | use stunnel at both ends, so all traffic at both ends goes to localhost that stunnel encrypts and sends to the other end | 0 | 34 | 0 | 0 | 2014-11-13T13:15:00.000 | python,security,sockets,arduino | How to ensure that no one will access my socket? | 1 | 1 | 1 | 26,910,064 | 0 |
1 | 0 | I'm able to verify whether or not an element exists and whether or not it is displayed, but can't seem to find a way to see whether it is 'clickable' (Not talking about disabled).
Problem is that often when filling a webform, the element i want may be overlayed with a div while it loads. The div itself is rather hard to detect as it's id, name and even the css is flexible. Hence i'm trying to detect whether or not a input field can be 'clicked' or 'filled'. When the overlaying div is present, the field cannot be filled by a regular user (As the div would overlay the input field and not allow the user to fill it) but it can be filled by selenium. I want to prevent that and only allow selenium to fill it once the user can also fill it. | false | 26,943,847 | 1 | 0 | 0 | 9 | You can use webElement.isEnabled() and webElement.isDisplayed() methods before performing any operation on the input field...
I hope this will solve your problem...
Else, you can also put a loop to check the above 2 conditions; if they are true, you can enter the input text, then find the same element again, get the text of the webelement and compare it with the text you entered. If they match, you can come out of the loop. | 0 | 53,151 | 0 | 20 | 2014-11-15T08:01:00.000 | python,html,selenium,selenium-webdriver | Check whether element is clickable in selenium | 1 | 1 | 3 | 26,947,299 | 0 |
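In the Python bindings, the two checks from the answer are is_displayed() and is_enabled(); an explicit wait for "clickable" (visible and enabled) is also available. A sketch, with the URL and locator made up for illustration:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get('http://example.com/form')          # placeholder URL

field = driver.find_element_by_id('email')     # placeholder locator
if field.is_displayed() and field.is_enabled():
    field.send_keys('user@example.com')

# Or block until the element is considered visible and enabled:
field = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, 'email')))
field.send_keys('user@example.com')
```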
0 | 0 | I am running a python script on my raspberry pi, via ssh. I use the google oauth library to fetch events from google calendar, but I have problems with the authentication.
When I run the program on my main computer (which has a GUI and a web browser), it works as expected, but not on my Pi. I am running the program with the flag --noauth_local_webserver because of that there exists no web browser on the Pi. Instead I get a link to click on but when I do that, google answers with the redirect_uri_mismatch error. I am running this locally at home, but it works on my main computer so I cannot figure out what is wrong. Any suggestions? | false | 26,952,168 | 0.197375 | 1 | 0 | 1 | Ok, so I found the answer!
The problem is that if the registred application is set to web application in the google developer console settings, this is the error message you will get. To solve this I just changed the type to desktop application instead. | 0 | 1,538 | 0 | 1 | 2014-11-15T23:52:00.000 | python | Redirect error when using google oauth and the flag --noauth_local_webserver | 1 | 1 | 1 | 26,955,746 | 0 |
0 | 0 | I am trying to get different JSON from bitcoin market RESTful API.
Here is the problem:
I can only send my single GET request to API one by one, so that I cannot get all the data from all the bitcoin market at the same time.
Is there a way to use Python thread(each thread send GET request using different client ports) to get multiple data simultaneously? | false | 26,953,276 | 0 | 0 | 0 | 0 | Yes.
Use threading for concurrent network requests.
If you do computations on the response you will be bound by the GIL (Global Interpreter Lock) in which case you can start other processes to do the computations on the data using the multiprocessing library.
Requests lib supports threading and Doug Hellemans "Python Module of the Week" blog posts and book are a great place to read and understand the Apis for threading and multiprocessing in python. | 0 | 704 | 0 | 2 | 2014-11-16T03:01:00.000 | python,multithreading,api,httprequest,bitcoin | Send Multiple GET Requests to Server:80 port Simultaneously | 1 | 1 | 1 | 27,347,087 | 0 |
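A sketch of the threading approach for the network-bound part of this answer, assuming the requests library; the market endpoints are placeholders:

```python
import threading
import requests

URLS = [                                   # placeholder market endpoints
    'https://api.exchange-a.example/ticker',
    'https://api.exchange-b.example/ticker',
]
results = {}

def fetch(url):
    # Network I/O releases the GIL, so these GETs overlap in time.
    results[url] = requests.get(url, timeout=10).json()

threads = [threading.Thread(target=fetch, args=(u,)) for u in URLS]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```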
0 | 0 | I created a label, "handler_gmail", in Gmail's web interface via the typical approach.
When I try to set that label for a message via the Gmail API (Python client library), I get the response HttpError 400, returned "Invalid label: handler_gmail".
I am sure the label exists (I've checked multiple times – no typos).
I am trying to set it like so:
gs.users().messages().modify(userId=username, id=m['id'], body={"addLabelIds": ['handler_gmail']}).execute()
I tried adding a "removeLabelIds": [] key/value pair to the dictionary (thinking maybe it was required), but got the same "Invalid label" error.
Any help with this is greatly appreciated! So close to being able to do what I need to with this project! | false | 26,963,868 | 0.321513 | 0 | 0 | 5 | I was able to list the system and user labels via print(gs.users().labels().list(userId=username).execute())
This revealed that the label ID for my label was something else ("Label_1" -- the name was "handler_gmail"). So I will make a support method that gets me a label ID by name, and add the label to the message (via modify) using the ID.
Thanks! | 0 | 6,142 | 0 | 9 | 2014-11-17T00:46:00.000 | python,gmail-api | Gmail API: "Invalid Label" When Trying to Set Message Label to One That I Created | 1 | 1 | 3 | 26,964,386 | 0 |
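A sketch of the helper the accepted answer proposes: resolving a label name such as "handler_gmail" to its ID via users().labels().list() before calling modify(). Here gs, username, and m are the service object, user ID, and message dict from the question:

```python
def label_id_by_name(gs, username, wanted_name):
    # users().labels().list() returns system and user labels; user-created
    # labels have opaque IDs such as "Label_1" rather than their names.
    response = gs.users().labels().list(userId=username).execute()
    for label in response.get('labels', []):
        if label['name'] == wanted_name:
            return label['id']
    return None

label_id = label_id_by_name(gs, username, 'handler_gmail')
gs.users().messages().modify(
    userId=username, id=m['id'],
    body={'addLabelIds': [label_id]}).execute()
```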
1 | 0 | I'm trying to figure out how to have a real time data displayed on a webpage through the use of an android app.
For example, users are using android app and are scored (answering questions). My webpage will display the scores in order in real-time.
I've come to the conclusion to use Redis, but what exactly do I need for this to function? Do I need a web socket that my web page will communicate with? This socket could be Python, where it accesses the database and then responds with the scores in order?
I'm struggling to find out how exactly this would work, as this is new to me. What exactly should I be looking for (terminology etc.)?
Thanks | true | 26,985,577 | 1.2 | 0 | 0 | 2 | One approach might be:
set up a database on a server on the internet
program your app to save data to that remote database
set up a webservice
program the webservice to read data from the database and serve it as an html page
For extra credit
implement a REST style API which retrieves and serves data from the database
create an ajax webpage which uses the API to fetch and display data (so you don't have to refresh the page constantly) | 0 | 651 | 0 | 1 | 2014-11-18T02:28:00.000 | android,python,sockets,redis,real-time | Real-time communication between app and webpage | 1 | 1 | 1 | 26,985,616 | 0 |
1 | 0 | Using mechanize, how can I wait for some time after page load (some websites have a timer before links appear, like in download pages), and after the links have been loaded, click on a specific link?
Since it's an anchor tag and not a submit button, will browser.submit() work(I got errors while doing that)? | false | 26,986,623 | 0 | 0 | 0 | 0 | If it's an anchor tag, then just GET/POST whatever it is.
The timer between links appearing is generally done in javascript - some sites you are attempting to scrape may not be usable without javascript, or requires a token generated in javascript with clientside math.
Depending on the site, you can either extract the wait time in msec/sec and time.sleep() for that long, or you'll have to use something that can execute javascript | 0 | 1,315 | 0 | 1 | 2014-11-18T04:31:00.000 | python,mechanize,mechanize-python | Python mechanize wait and click | 1 | 1 | 2 | 26,986,649 | 0 |
0 | 0 | Is is possible to actually fake your IP details via mechanize? But if it's not, for what is br.set_proxies() used? | true | 26,987,797 | 1.2 | 1 | 0 | 0 | You cant' fake you IP address because IP is third layer and HTTP works on 7th layer.
So, it's impossible to send packets with an IP address that is not your own.
(you can set up second interface and IP address using iproute2 and set routing throught that interface, but it's not python/mechanize level. it's system level) | 0 | 711 | 0 | 0 | 2014-11-18T06:13:00.000 | python,mechanize-python | Change IP with Python mechanize | 1 | 2 | 2 | 26,988,092 | 0 |
0 | 0 | Is is possible to actually fake your IP details via mechanize? But if it's not, for what is br.set_proxies() used? | false | 26,987,797 | 0 | 1 | 0 | 0 | You don't fake your IP details, set_proxy is to configure a HTTP proxy. You still need legitimate access to the IP. | 0 | 711 | 0 | 0 | 2014-11-18T06:13:00.000 | python,mechanize-python | Change IP with Python mechanize | 1 | 2 | 2 | 26,987,818 | 0 |
0 | 0 | I use eval('string') in a python script to execute a piece of java scrit, and I would like to store the string in a xml and then parse use element tree as text string, the problem is in this way, the eval() will return nothing since the parsed string is a string object not a original string which can be reconigized by eval(), anybody knows how to solve this problem? I am a freshman on programming, any suggestions will be highly appriciated. | false | 26,989,736 | 0 | 0 | 0 | 0 | You probably shouldn't use eval... But to answer your question on how to store an object to be later executed but eval, you do repr(my_object), this will often return a string, suitable for eval, but this is not always true. | 0 | 207 | 0 | 0 | 2014-11-18T08:28:00.000 | python,xml,string,eval | element tree parse xml text string to be called by eval | 1 | 1 | 1 | 26,990,360 | 0 |
1 | 0 | I want to use javascript to retrieve a json object from a python script
I've tried using various methods of AJAX and POST but can't get anything working.
For now I have tried to set it up like this
My Javascript portion:
I have tried
$.post('./cgi-bin/serverscript.py', { type: 'add'}, function(data) {
console.log('POSTed: ' + data);
});
and
$.ajax({
type:"post",
url:"./cgi-bin/serverscript.py",
data: {type: "add"},
success: function(o){ console.log(o); alert(o);}
});
My Python
import json
import cgi
import cgitb
cgitb.enable()
data = cgi.FieldStorage()
req = data.getfirst("type")
print "Content-type: application/json"
print
print(json.JSONEncoder().encode({"status":"ok"}))
I am getting a 500 (internal server error) | false | 27,019,558 | 0 | 1 | 0 | 0 | Have you checked your host's server logs to see if it's giving you any output?
Before asking here, a good idea would be to SSH to your host, if you can, and run the program directly, which will most likely print the error in the terminal.
This is far too general at the moment; there are so many reasons why a CGI request can fail (misconfigured environment, libraries not installed, permissions errors).
Go back and read your servers logs and see if that shines any more light on the issue. | 0 | 847 | 0 | 0 | 2014-11-19T14:38:00.000 | javascript,python,ajax,json,cgi | Running CGI Python Javascript to retrieve JSON object | 1 | 1 | 2 | 29,558,961 | 0 |
1 | 0 | Coding in Python 2.7.
I have a crawler I setup and it works perfectly fine when driven by FireFox, but breaks when driven by PhantomJS.
I'm trying to click on an element with href="#"
The crux of the issue is that when the FF driver clicks on this element with the # href, it performs the javascript action (in this case revealing a hidden part of a layer), but when PhantomJS does it, it either doesn't perform the click or it does click it but # just reloads the same page (I can't tell which one).
I've tried everything I can think of, including multiple ActionChains and clicking element by element all the way down to this one with the link. Nothing seems to work.
Any ideas? | false | 27,028,599 | 0 | 0 | 0 | 0 | href="#" just refreshes the page when you select the link, for example if it was href=#top" when you select the link you would be brought to the top of that same page.
You are probably doing it correctly, you can use driver.find_element_by_link_text('some text') or driver.find_element_by_partial_link_text('some text'), but clicking on that element is just routing you to that same page. | 0 | 800 | 0 | 2 | 2014-11-19T22:54:00.000 | python,selenium,selenium-webdriver,phantomjs | Selenium PhantomJS click element with href="#" | 1 | 1 | 1 | 51,226,805 | 0 |
0 | 0 | i am quite new to curl development. I am working on centOS and i want to install pycurl 7.19.5, but i am unable to as i need libcurl 7.21.2 or above.I tried installing an updated curl but it is still pointing to the old libcurl.
curl-config --version
libcurl 7.24.0
curl --version
curl 7.24.0 (x86_64-unknown-linux-gnu) libcurl/7.21.1 OpenSSL/1.0.1e zlib/1.2.3 c-ares/1.7.3 libidn/1.29 Protocols: dict file ftp ftps http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp Features: AsynchDNS Debug TrackMemory IDN IPv6 Largefile NTLM SSL libz.
Can anyone please help me how i can update the libcurl version in curl | true | 27,041,862 | 1.2 | 1 | 0 | 2 | You seem to have two versions installed, curl-config --version shows the newer version (7.24.0) and curl (the tool) is the newer version but when it runs the run-time linker ld.so finds and uses the older version (7.21.1).
Check /etc/ld.so.conf to see which dirs are checked and in which order, and see if you can remove one of the versions or change the order of the search. | 0 | 4,612 | 0 | 2 | 2014-11-20T14:21:00.000 | python,linux,curl,libcurl,centos6 | libcurl mismatch in curl and curl-config | 1 | 1 | 2 | 27,042,314 | 0 |
0 | 0 | i am planning to make web crawler which can crawl 200+ domain, which of the language will be suitable for it. I am quite familiar with PHP but an amateur at Python. | true | 27,054,206 | 1.2 | 1 | 0 | 1 | I have built crawlers in both languages. While I personally find it easy to make a crawler in python because of huge number of freely available libraries for html parsing, I would recommend that you go with the language you are most comfortable with. Build a well designed and efficient crawler in a language you know well and you will get even better at that language. There is no feature which can not be implemented in either of the two languages so just make a decision and start working.
Good luck. | 0 | 1,196 | 0 | 0 | 2014-11-21T04:37:00.000 | php,python,web-crawler | PHP vs Python For Web Crawler | 1 | 2 | 2 | 27,055,726 | 0 |
0 | 0 | i am planning to make web crawler which can crawl 200+ domain, which of the language will be suitable for it. I am quite familiar with PHP but an amateur at Python. | false | 27,054,206 | 0.099668 | 1 | 0 | 1 | You could just try both. Make one in php and one in python. It'll help you learn the language even if you're experienced. Never say no to opportunities to practice. | 0 | 1,196 | 0 | 0 | 2014-11-21T04:37:00.000 | php,python,web-crawler | PHP vs Python For Web Crawler | 1 | 2 | 2 | 27,055,630 | 0 |
0 | 0 | I'm using Python server-side and I need to put a Google Maps Marker for every database document.
Is it possible and how? | false | 27,132,481 | 0 | 0 | 0 | 0 | There are a number of ways to handle this. I would create an API with the locations (lat, long). Then on the client side, use AJAX to consume the API and then use a loop to create new markers, appending each to the DOM. Make sense? | 0 | 448 | 0 | 0 | 2014-11-25T16:59:00.000 | python,google-maps,google-maps-api-3 | How to send Google Maps API Marker from Python server? | 1 | 1 | 2 | 27,132,863 | 0 |
0 | 0 | Is there a way to get socket id of current client? I have seen it is possible to get socket id in node.js. But it seems flask's socket.io extension is a little bit different than node's socketio. | false | 27,177,999 | 0.197375 | 0 | 0 | 3 | Consider using request.sid which gets its value populated from the FlaskSocketIO library. | 0 | 10,949 | 0 | 8 | 2014-11-27T20:00:00.000 | python,flask,socket.io | getting socket id of a client in flask socket.io | 1 | 1 | 3 | 50,516,756 | 0 |
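A minimal sketch of reading request.sid inside Flask-SocketIO event handlers, as the answer suggests; the event names and port are illustrative:

```python
from flask import Flask, request
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('connect')
def on_connect():
    # request.sid is populated by Flask-SocketIO for the current client.
    print('client connected with sid', request.sid)

@socketio.on('message')
def on_message(data):
    emit('reply', {'you_are': request.sid})  # echo the sid back

if __name__ == '__main__':
    socketio.run(app, port=5000)
```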
0 | 0 | I am doing a project on media (more like a video conferencing).
The problem is , although I am able to send text/string data from one peer to another, I am still not sure about video files.
Using gstreamer, I am able to capture the video stream from my webcam and do the encoding (H.264); I am able to write the video stream into an actual mp4 container directly using a file sink.
Now my problem is that I am not sure how to read the video file, since it contains both audio and video streams, and convert it into a transport stream so I can transmit it using packets.
(I am able to send a very small jpeg file although).
I am using the socket module and implementing UDP | true | 27,198,697 | 1.2 | 0 | 0 | 1 | If you are going to send a video (with audio) to a peer in the network, you would be better off using RTP (Real-time Transport Protocol), which works on top of UDP. RTP provides timestamps and profiles which help you synchronize the audio and video sent through two ports. | 1 | 73 | 0 | 0 | 2014-11-29T03:58:00.000 | python,linux,sockets,video,udp | python - how to have video file(data) into list? | 1 | 1 | 1 | 27,199,731 | 0 |
0 | 0 | So I'm writting an IRC bot in Python. Right now, the first thing I'm trying to get it to do is Log all the important things in every channel it's on (as well as private messages to the bot itself).
So far I've gotten it to log the JOIN, PRIVMSG (including CTCP commands) and PART. However I'm having some trouble with the QUIT command. Now I know the QUIT command doesn't include the <channel> parameter because it doesn't need it. However my bot is connected to multiple channels and I need to be able to differentiate which channels the user was connected to when he/she issued the QUIT command to appropriate log it. Many users won't be connected to every channel the bot is.
What would be the ideal way to go about this? Thanks for your help. | true | 27,279,922 | 1.2 | 1 | 0 | 0 | It sounds like you want to write the same QUIT log message to multiple per-channel log files, but only specific ones the bot is in?
To accomplish something similar, I ended up getting the list of names in a channel when the bot joins, then keeping track of every nick change, join, part, kick, and quit, and adjusting the bot's internal list. That way, on a quit, I could just check the internal list and see what channels they were on. | 0 | 73 | 0 | 0 | 2014-12-03T19:21:00.000 | python,python-3.x,bots,irc | Logging where the QUIT command comes from in IRC | 1 | 1 | 1 | 27,316,992 | 0 |
0 | 0 | I haven't been able to find an answer to this, and I was hoping y'all could help me out. When accessing the API for one of the products we use at work, a date field comes back in two different forms depending on if you request JSON or XML.
The first, which I understand, is what XML sends:
2014-12-03T23:59:00
The second, is what I can't figure out. When requesting JSON, it sends the same date in the following format:
\/Date(1417672740000-0600)\/
Forgive me if this has already been answered, I just haven't been able to find it here or on Google. Hopefully it's something I should already know.
FYI, I'm using the requests module for Python. | true | 27,283,326 | 1.2 | 0 | 0 | 0 | Jon Skeet:
Looks like it's the format from DataContractJsonSerializer apart from anything else - millis since the Unix epoch, and offset from UTC | 0 | 43 | 0 | 0 | 2014-12-03T23:02:00.000 | python,json,python-requests | API Date in a Strange Format | 1 | 1 | 1 | 27,283,764 | 0 |
0 | 0 | I am working on an attendance system on Local server which contains the database file in json format and when a request is made to the system via RFID the script looks into the local json file for the RFID and respond accordingly. Now the problem is that for single request it is performing well but as our script(made in Python 3.3) works on multiple threading so when two or more request hits at the same time i get an exception of "expecting object:line 1 column" ... like this. So what is the problem in the script or what can be the possible solution for this. Remember there is no online request made during the script working. | true | 27,309,902 | 1.2 | 0 | 0 | 1 | close the reading instances with instance.close() after loading the values with json_object.loads(). | 1 | 135 | 0 | 0 | 2014-12-05T06:13:00.000 | python,json | Error on reading json file by multiple instances at the same time | 1 | 1 | 1 | 27,310,481 | 0 |
1 | 0 | I'm trying to select all the tables inside a division which has xpath similar to //*[@id="mw-content-text"]/table[@class="wikitable sortable jquery-tablesorter"]. But the selector doesn't returns any value. How can I get through those tags which have spaces in their id/class ? | false | 27,331,444 | 0 | 0 | 0 | 0 | I had the same issue because I was trying to scrape a wikipedia page. The class name for the table shows up as "wikitable sortable jquery-tablesorter" because of the plugin mentioned in the other answer which adds to the class name after it is used.
In order to pick up the table you can just look for the following class instead "wikitable sortable". This picks up the code for me. | 0 | 2,468 | 0 | 0 | 2014-12-06T11:59:00.000 | python,xpath,web-scraping,scrapy | How to select tables in scrapy using selectors whose class id have spaces in it? | 1 | 1 | 2 | 41,831,890 | 0 |
0 | 0 | How can I create a UDP server in Python that is possible to know when a client has disconnected? The server needs to be fast because I will use in an MMORPG. Never did a UDP server then I have a little trouble. | false | 27,342,216 | 0 | 0 | 0 | 0 | UDP is not connection-based. Since no connection exists when using UDP, there is nothing to disconnect. Since there is nothing to disconnect, you can't ever know when something disconnects. It never will because it was never connected in the first place. | 0 | 291 | 0 | 0 | 2014-12-07T11:34:00.000 | python,windows,sockets,networking,python-3.x | UDP Server in Python | 1 | 1 | 2 | 27,344,058 | 0 |
0 | 0 | I've coded a small raw packet syn port scanner to scan a list of ips and find out if they're online. (btw. for Debian in python2.7)
The basic intention was to simply check if some websites are reachable and speed up that process by preceding a raw syn request (port 80) but I stumbled upon something.
Just for fun I started trying to find out how fast I could get with this (fastest, as far as I know) check technique, and it turns out that even though I'm only sending raw syn packets on one port and listening for responses on that same port (with tcpdump), the connection reliability drops considerably starting at about 1500-2000 packets/sec, and shortly thereafter almost all networking starts blocking on the box.
I thought about it, and if I compare this value with e.g. torrent seeding/leeching packets/sec, the scan speed is quite slow.
I have a few ideas why this happens but I'm not a professional and I have no clue how to check if I'm right with my assumptions.
Firstly it could be that the Linux networking has some fancy internal port forwarding stuff running to keep the sending port opened (maybe some sort of feature of iptables?) because the script seems to be able to receive syn-ack even with closed sourceport.
If so, is it possible to prevent or bypass that in some fashion?
Another guess is that the python library is simply too dumb to do real proper raw packet management but that's unlikely because its using internal Linux functions to do that as far as I know.
Does anyone have a clue why that network blocking is happening?
Where's the difference to torrent connections or anything else like that?
Do I have to send the packets in another way or anything? | true | 27,351,360 | 1.2 | 1 | 0 | 0 | Months ago I found out that this problem is well known as c10k problem.
It has to do amongst other things with how the kernel allocates and processes tcp connections internally.
The only efficient way to address the issue is to bypass the kernel tcp stack and implement various other low-level things by your own.
All good approaches I know are working with low-level async implementations
There are some good ways to deal with the problem depending on the scale.
For further information i would recommend to search for the c10k problem. | 0 | 126 | 1 | 0 | 2014-12-08T04:18:00.000 | python,linux,performance | speed limit of syn scanning ports of multiple targets? | 1 | 1 | 1 | 29,195,455 | 0 |
0 | 0 | I have a proxy setup on my machine (Win -7).
I have written a python program which tries to open new tab of a browser with given URL, with the help of webbrowser module in python.
But webbrowser.open_new_tab(URL) fails when I check the "Use proxy server for your LAN" checkbox in Internet Explorer settings (under LAN Settings), but it works perfectly fine when I uncheck this box .
I don't understand why this is happening. Is there any way by which the webbroser module works with a proxy?
Am I doing anything wrong here? | false | 27,359,258 | 0 | 0 | 0 | 0 | Sorry for the noise.
I just realized that urllib2.urlopen(url) was breaking the things, not webbrowser.open_new_tab(url).
We need to use proxyhandlers of urllib2 to get away with this.
@Lawrence, Thanks for the help | 0 | 1,204 | 0 | 0 | 2014-12-08T13:36:00.000 | python,proxy,python-webbrowser | Pythons webbrowser.open_new_tab(url) with proxy | 1 | 1 | 1 | 27,373,111 | 0 |
0 | 0 | I am using Python Beautiful Soup for website Scrapping. My program hits different urls of a website more than thousand times. I don not wish to get banned. As a first step, I would like to introduce IPmasking in my project.
Is there any possible way to hit different urls of a website from a pool of rotating IPs with the help of Python modules like ipaddress, socket etc? | false | 27,394,554 | 0.197375 | 0 | 0 | 1 | The problem is your public IP address. What you can do is use a list of proxy's and rotate through them. | 0 | 2,628 | 0 | 0 | 2014-12-10T06:28:00.000 | python,web-scraping,beautifulsoup,masking | IP masking with Python | 1 | 1 | 1 | 27,397,328 | 0 |
1 | 0 | I'm building a small RESTful API with bottle in python and am currently experiencing an issue with character encodings when working with the request object.
Hitting up http://server.com/api?q=äöü and looking at request.query['q'] on the server gets me "äöü", which is obviously not what I'm looking for.
Same goes for a POST request containing the form-urlencoded key q with the value äöü. request.forms.get('q') contains "äöü".
What's going on here? I don't really have the option of decoding these elements with a different encoding or do I? Is there a general option for bottle to store these in unicode?
Thanks. | false | 27,432,211 | 0 | 0 | 0 | 0 | in this case, to convert it ,I did like this search_field.encode("ISO-8859-1").decode("utf-8") | 0 | 3,378 | 0 | 8 | 2014-12-11T20:56:00.000 | python,unicode,bottle | Python bottle requests and unicode | 1 | 1 | 2 | 61,842,769 | 0 |
0 | 0 | Lets say I have an application written in python to send a ping or e-mail. How can I change the source IP address of the sent packet to a fake one, using, e.g., Scapy?
Consider that that the IP address assigned to my eth0 is 192.168.0.100. My e-mail application will send messages using this IP. However, I want to manipulate this packet, as soon as it is ready to be sent, so its source IP is not 192.168.0.100 but 192.168.0.101 instead.
I'd like to do this without having to implement a MITM. | false | 27,448,905 | 0.099668 | 0 | 0 | 1 | You basically want to spoof your ip address.Well I suggest you to read Networking and ip header packets.This can be possible through python but you won't be able to see result as you have spoofed your ip.To be able to do this you will need to predict the sequence numbers. | 0 | 31,903 | 1 | 7 | 2014-12-12T17:26:00.000 | python,ip,packet,scapy | Send packet and change its source IP | 1 | 1 | 2 | 28,396,576 | 0 |
1 | 0 | I want to parse some data from a website. However, there's a certain peculiarity:
There is a dropdown list (layed out using div and child a tags, made functional with a jQuery script). Upon selecting one of the values, a subsequent text field would change its value.
I want to retrieve the first dropdown value, and the respective text field, then the next dropdown value and the updated text field, and so forth.
How would I go about this? | true | 27,468,688 | 1.2 | 0 | 0 | 0 | What happens here is, upon selecting a value form the dropdown, an AJAX request is generated and gets the data.
You can analyze the request url in your browser. If you use firefox, use firebug and take a look at the Net tab, what requests are generating and what is the url. In google chorme, look in Network tab. If you want to parse the data you have to make a request in that url. | 0 | 57 | 0 | 0 | 2014-12-14T11:07:00.000 | jquery,python,xpath,lxml | Python/lxml: Retrieving variable data | 1 | 1 | 1 | 27,468,960 | 0 |
0 | 0 | I am trying to create a simple form of client-server application, using python. So I got started with sockets, and facing some errors I searched a bit and saw that no two sockets can be listening to the same port at the same time. Is that true? And if so, the only way to handle multiple requests towards the server, as regards the sockets, is to have a single socket do the listening and take turns at the incoming requests? | false | 27,469,113 | 0 | 0 | 0 | 0 | When you call accept(), it returns a new socket for the connection, the original socket is still listening for new connections.
– Barmar | 0 | 28 | 0 | 0 | 2014-12-14T12:07:00.000 | python,sockets | Handling mutliple client connections as a server | 1 | 1 | 1 | 42,805,997 | 0 |
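A sketch of what the comment describes: the listening socket stays bound to its port while accept() hands back a fresh socket per client (port number and echo behaviour are arbitrary):

```python
import socket
import threading

def serve_client(conn, addr):
    # Each client gets its own connected socket, independent of the listener.
    try:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)   # simple echo back to this client
    finally:
        conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('0.0.0.0', 5000))
listener.listen(5)
while True:
    conn, addr = listener.accept()   # returns a NEW socket for this client
    threading.Thread(target=serve_client, args=(conn, addr)).start()
```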
1 | 0 | I have a python script on my raspberry-pi continuously (every 5 seconds) running a loop to control the temperature of a pot with some electronics through GPIO.
I monitor the temperature on a web page by having the python script write the temperature to a text file, which I request from JavaScript over HTTP on a web page.
I would like to pass a parameter to the python script to make changes to the controlling, like change the target temperature.
What would be the better way to do this?
I'm working on a solution, where the python script is looking for parameters in a text file and then have a second python script write changes to this file. This second python script would be run by a http request from the web page.
Is this a way to go? Or am I missing a more direct way to do this.
This must be done many time before and described on the web, but I find nothing. Maybe I don't have the right terms to describe the problem.
Any hints is appreciated.
Best regards Kresten | true | 27,474,557 | 1.2 | 1 | 0 | 1 | You have to write somewhere your configuration for looping script. So file or database are possible choices but I would say that a formatted file (ini, yaml, …) is the way to go if you have a little number of parameters. | 0 | 229 | 0 | 0 | 2014-12-14T21:52:00.000 | python,http,web,raspberry-pi | Interact with python script running infinitive loop from web | 1 | 1 | 2 | 27,474,586 | 0 |
1 | 0 | How to debug a scrapy Request object?
requestobj= FormRequest.from_response(response, formxpath =form_xpath,callback=self.parse1)
I need to check formdata of requestobj .But I didn't find any documentation for debugging Request object | false | 27,499,546 | 0.291313 | 0 | 0 | 3 | Use some traffic monitoring software , i personally use fiddler. it will help you to check the requests sent from python as well as from browsers | 0 | 764 | 0 | 4 | 2014-12-16T07:24:00.000 | python,post,scrapy | scrapy debug Request object | 1 | 2 | 2 | 27,499,859 | 0 |
1 | 0 | How to debug a scrapy Request object?
requestobj= FormRequest.from_response(response, formxpath =form_xpath,callback=self.parse1)
I need to check formdata of requestobj .But I didn't find any documentation for debugging Request object | false | 27,499,546 | 0.099668 | 0 | 0 | 1 | try sending request to:
http://httpbin.org/
or
http://echo.opera.com/
you will get a response with information your request | 0 | 764 | 0 | 4 | 2014-12-16T07:24:00.000 | python,post,scrapy | scrapy debug Request object | 1 | 2 | 2 | 27,615,272 | 0 |
1 | 0 | In cherrypy is there a configuration option so that sessions do not have a "timeout", or if they do, expire immediately when the browser is closed? Right now tools.sessions.on is true and tools.sessions.timeout is 60 minutes. | false | 27,517,192 | 0 | 0 | 0 | 0 | Set tools.sessions.persistent to False | 0 | 474 | 0 | 0 | 2014-12-17T01:59:00.000 | python,cherrypy | Cherrypy session timeout on browser closed? | 1 | 1 | 2 | 27,565,293 | 0 |
0 | 0 | I am new to Python programming. While making an application, I ran into this problem.
I am parsing URL using urllib library of python. I want to convert any relative url into its corresponding absolute url. I get relative and absolute URLs in random fashiion and they may not be from the same domain. Now how do I store the last known absolute url to extract the netloc from it and append it to relative url? Should I save the last known absolute URL in a text file? Or is there any better option to this problem? | false | 27,553,319 | 0 | 0 | 0 | 0 | Now how do I store the last known absolute url to extract the netloc
from it and append it to relative url? Should I save the last known
absolute URL in a text file?
What do you think is wrong with this? Seems to make sense to me... (depending on context, obviously) | 0 | 117 | 0 | 0 | 2014-12-18T18:30:00.000 | python,python-3.x,urllib | URL parsing issue in python | 1 | 1 | 1 | 27,556,712 | 0 |
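The answer above is about where to store the last absolute URL; for the resolution step itself, the standard library's urljoin (not mentioned in the answer) does the base-plus-relative arithmetic. A minimal sketch:

```python
from urllib.parse import urljoin, urlparse

last_absolute = None

def resolve(url):
    # Track the most recent absolute URL and resolve relative ones against it.
    global last_absolute
    if urlparse(url).netloc:          # already absolute
        last_absolute = url
        return url
    if last_absolute is None:
        raise ValueError('no absolute URL seen yet to use as a base')
    return urljoin(last_absolute, url)

print(resolve('https://example.com/docs/index.html'))
print(resolve('../images/logo.png'))  # -> https://example.com/images/logo.png
```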
0 | 0 | I am writing a program in python that will automatically download pdf files from a website once a day.
When trying to test I noticed that the files downloaded had the correct extension but they are very small (<1kB) compared to the normal size of about 100kB when downloaded manually.
Can a website block a program from automatically downloading files?
Is there anything that can be done about this? | true | 27,577,032 | 1.2 | 0 | 0 | 3 | Yes. Cloudflare can block bots from downloading files. Blocking is usually done by detecting the user-agent or including javascript in a webpage. I would examine the pdf file in notepad and see what it contains also try adding a user-agent option in your python code. | 0 | 76 | 0 | 1 | 2014-12-20T04:49:00.000 | python,html | Can a website stop a program from downloading files automatically? | 1 | 1 | 1 | 27,577,078 | 0 |
0 | 0 | I have tried searching the official documentation, but no result. Does Chrome Web Driver needs to be in client's system for a python script using Selenium to run? I basically want to distribute compiled or executable file versions of the application to the end user. How do i include Chrome Web driver with that package? | false | 27,577,584 | 0 | 0 | 0 | 0 | chromedriver needs to be installed on the machine that will launch the instance of the Chrome browser.
If the machine that launches the Chrome instance is the same machine as where the Python script resides, then the answer to your question is "yes".
If the machine that launches the Chrome instance is a machine different from the machine that runs your Python script, then the answer to your question is "no". | 0 | 179 | 0 | 0 | 2014-12-20T06:13:00.000 | python,selenium | Does Chrome Web Drivers needs to be in Client's System while using Selenium | 1 | 1 | 2 | 27,580,663 | 0 |
0 | 0 | I am using amazon dynamodb and accessing it via the python boto query interface. I have a very simple requirement
I want to get 1000 entries. But I don't know the primary keys beforehand. I just want to get 1000 entries. How can I do this? ...I know how to use the query_2 but that requires knowing primary keys beforehand.
And maybe afterwards I want to get another, different 1000 and go on like that. You can consider it as sampling without replacement. How can I do this?
Any help is much appreciated. | true | 27,584,321 | 1.2 | 0 | 0 | 2 | Use Table.scan(max_page_size=1000) | 0 | 1,903 | 0 | 2 | 2014-12-20T21:05:00.000 | python,amazon-dynamodb,boto,sample | python dynamodb get 1000 entries | 1 | 1 | 2 | 27,599,549 | 0 |
1 | 0 | So I've been wondering regarding a spider, if maybe some of my requests might be getting filtered because they are to the url endpoint but with different body arguments (form data). Does the dont_filter=True make sense with FormRequest object? | true | 27,600,761 | 1.2 | 0 | 0 | 1 | If the request has the dont_filter attribute set, the offsite middleware will allow the request even if its domain is not listed in allowed domains. | 0 | 134 | 0 | 1 | 2014-12-22T10:24:00.000 | python,scrapy | scrapy: does it make sense to disable filtering on a formrequest? | 1 | 1 | 1 | 27,602,635 | 0 |
0 | 0 | I have created a little screen scraper and everything seems to be working great, the information is being pulled and saved in a db. The only problem I am having is sometimes Python doesn't use the driver.back() so it then trys to get the information on the wrong page and crashes. I have tried adding a time.sleep(5) but sometimes it still isn't working. I am trying to optimise it to take as little time as possible. So making it sleep for 30 seconds doesn't seem to be a good solution. | false | 27,626,783 | 0 | 0 | 0 | 0 | Try relocating elements each time it goes back to previous page. That will surely work but it is time consuming. | 0 | 58,493 | 0 | 44 | 2014-12-23T19:28:00.000 | python,selenium | Python selenium browser driver.back() | 1 | 1 | 2 | 70,415,232 | 0 |
0 | 0 | I'm trying to build a simple python server that a client can connect to without the client having to know the exact portnumber. Is that even possible? The thought is to choose a random portnumber and using it for clients to connect.
I know you could use bind(host, 0) to get a random port number and socket.getsockname()[1] within the server to get my portnumber. But how could my client get the portnumber?
I have tried socket.getnameinfo() but I don't think I understand how that method really works. | false | 27,628,753 | 0.066568 | 0 | 0 | 1 | In order to do that the server must listen on a certain port(s).
This means the client(s) will need to interact on these ports with it.
So... no it is impossible to do that on some random unknown port. | 0 | 689 | 0 | 1 | 2014-12-23T22:10:00.000 | python,network-programming,client-server,port,server | How to give a python client a port number from a python server | 1 | 3 | 3 | 27,628,809 | 0 |
0 | 0 | I'm trying to build a simple python server that a client can connect to without the client having to know the exact portnumber. Is that even possible? The thought is to choose a random portnumber and using it for clients to connect.
I know you could use bind(host, 0) to get a random port number and socket.getsockname()[1] within the server to get my portnumber. But how could my client get the portnumber?
I have tried socket.getnameinfo() but I don't think I understand how that method really works. | false | 27,628,753 | 0 | 0 | 0 | 0 | Take a look at Zeroconf, it seems to be the path to where you are trying to get to. | 0 | 689 | 0 | 1 | 2014-12-23T22:10:00.000 | python,network-programming,client-server,port,server | How to give a python client a port number from a python server | 1 | 3 | 3 | 27,630,673 | 0 |
0 | 0 | I'm trying to build a simple python server that a client can connect to without the client having to know the exact portnumber. Is that even possible? The thought is to choose a random portnumber and using it for clients to connect.
I know you could use bind(host, 0) to get a random port number and socket.getsockname()[1] within the server to get my portnumber. But how could my client get the portnumber?
I have tried socket.getnameinfo() but I don't think I understand how that method really works. | false | 27,628,753 | 0 | 0 | 0 | 0 | You need to advertise the port number somehow. Although DNS doesn't do that (well, you could probably cook up some resource record on the server object, but that's not really done) there are many network services that do. LDAP like active directory (you need write rights), DNS-SD dns service discovery, universal plug and play, service location protocol, all come to mind. You could even record the port number on some web page somewhere and have the client read it. | 0 | 689 | 0 | 1 | 2014-12-23T22:10:00.000 | python,network-programming,client-server,port,server | How to give a python client a port number from a python server | 1 | 3 | 3 | 27,628,855 | 0 |
1 | 0 | I have about 600k images url (in a list) and I would like to achieve the following :
Download all of them
generate a thumbnail of specific dimensions
upload them to Amazon s3
I have estimated my images average to about 1mb which would be about 600gb of data transfer for downloads. I don't believe my laptop and my Internet connection can take it.
Which way should I go? I'd like preferably to have a solution that minimizes the cost.
I was thinking of a Python script or a JavaScript job, run in parallel if possible to minimize the time needed.
Thanks! | true | 27,633,641 | 1.2 | 0 | 0 | 1 | I'd suggest spinning up one or more EC2 instances and running your thumbnail job there. You'll eliminate almost all of the bandwidth costs (transfer from EC2 instances in the right region to S3 is free), and certainly the transfer speed will be faster within the AWS network.
For 600K files to process, you may want to consider loading each of those 'jobs' into an SQS queue, and then have multiple EC2 instances polling the queue for 'work to do' - this will allow you to spin up as many ec2 instances as you want to run in parallel and distribute the work.
However, the work to setup the queue may or may not be worth it depending on how often you need to do this, and how quickly it needs to finish - i.e. if this is a one time thing, and you can wait a week for it to finish, a single instance plugging away may suffice. | 0 | 83 | 0 | 0 | 2014-12-24T08:09:00.000 | javascript,python,amazon-s3 | Bulk download of web images | 1 | 1 | 1 | 27,647,682 | 0 |
1 | 0 | I have an api with publishers + subscribers and I want to stop a publisher from uploading a lot of data if there are no subscribers. In an effort to avoid another RTT I want to parse the HTTP header, see if there are any subscribers and if not return an HTTP error before the publisher finishes sending all of the data.
Is this possible? If so, how do I achieve it. I do not have post-buffering enabled in uwsgi and the data is being uploaded with a transfer encoding of chunked. Therefore, since uWSGi is giving me a content-length header, it must have buffered the whole thing somewhere previously. How do I get it to stop?
P.S. uWSGI is being sent the data via nginx. Is there some configuration that I need to set there too perhaps? | false | 27,645,172 | 0 | 1 | 0 | 0 | The limit here is nginx. It cannot avoid buffering the input (unless it is in websockets mode). You may have more luck with Apache or the uWSGI HTTP router (although I suppose they are not a viable choice) | 0 | 244 | 0 | 0 | 2014-12-25T07:32:00.000 | python,nginx,uwsgi | Can uWSGI return a response before the POST data is uploaded? | 1 | 1 | 1 | 27,646,090 | 0
1 | 0 | I'm trying, in Python, using the requests library, to get the HTML for a website that automatically redirects to another one. How do I avoid this and get the HTML for the original site, if possible? I know it exists and has HTML for it because I have accessed it via the Chrome view-source function. Any help appreciated. | false | 27,651,530 | 0.099668 | 0 | 0 | 1 | Basically you don't. If the web server returns a 302, then unless it decides to include the old HTML in the response body (which would be very odd), you are basically out of luck.
Now if you hit it with a web browser and it doesn't redirect you, then perhaps it is doing something like user-agent sniffing and redirecting based on that. In that case you would need your code to claim to be that user agent. | 0 | 66 | 0 | 0 | 2014-12-25T23:26:00.000 | python,http,python-requests | If a website gives a 302 HTTP response code, can I get the original link's raw HTML still? | 1 | 1 | 2 | 27,651,589 | 0
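If it is the redirect handling (rather than user-agent sniffing) that gets in the way, requests can be told not to follow it, so you can at least inspect whatever body the server attaches to the 302; a small sketch:
import requests

resp = requests.get("http://example.com/redirecting-page", allow_redirects=False)  # hypothetical URL
print(resp.status_code)                 # 302
print(resp.headers.get("Location"))     # where it wanted to send you
print(resp.text)                        # body sent with the 302, often empty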
0 | 0 | I am trying to use python to scrape a website implemented with infinite scroll. Actually, the web is pinterest. I know how to use selenium webdriver to scrape a web with infinite scroll. However, the webdriver basically imitates the process of visiting the web and is slow, much slower than using BeautifulSoup and urllib for scraping. Do you know any time efficient ways to scrape a web with infinite scroll? Thanks. | true | 27,682,975 | 1.2 | 0 | 0 | 3 | The infinite scroll is probably using an Ajax query to retrieve more data as you scroll. Use your browser's dev tools to inspect the request structure and try to hit the same endpoint directly. In this way you can get the data you need, often in json or xml format.
In chrome open the dev tools (Ctrl + shift + I in windows) and switch to the network tab. Then begin to scroll, when more content is loaded you should see new network activity. Specifically a Ajax request, you can filter by "xhr". Click on the new network item and you will get detailed info on the request such as headers, the request body, the structure of the response, and the url (endpoint) the request is hitting. Scraping this url is the same as scraping a website except there will be no html to parse through just formatted data.
Some websites will try to block this type of behavior. If that happens I suggest using phantomjs without selenium. It can be very fast (in comparison to selenium) for mimicking human interaction on websites. | 0 | 3,050 | 0 | 0 | 2014-12-29T03:17:00.000 | python,web-scraping | Is there any fast ways to scrape a website with infinite scroll? | 1 | 1 | 1 | 27,683,149 | 0 |
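A hedged sketch of hitting such an Ajax endpoint directly; the URL, parameters and response layout below are placeholders - copy the real ones from the request you see in the Network tab.
import requests

resp = requests.get(
    "https://www.example.com/resource/feed/",        # hypothetical endpoint found in dev tools
    params={"page_size": 25, "offset": 0},           # hypothetical paging parameters
    headers={"X-Requested-With": "XMLHttpRequest"},  # some sites check for this header
)
items = resp.json()                                  # formatted data, no HTML parsing needed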
1 | 0 | I am running test scripts in Robot framework using Google Chrome browser.
But when I run scripts consecutively two times, no message log gets generated in message log section in Run tab. This problem is being encountered only while using Chrome.
Can anyone help me on this as why it is occurring. | false | 27,717,059 | 0 | 1 | 0 | 0 | Have you tried running the test with 'pybot -L TRACE' ? | 0 | 580 | 0 | 0 | 2014-12-31T06:21:00.000 | python-2.7,selenium-webdriver,robotframework | Using Robot framework with Google Chrome browser | 1 | 1 | 1 | 27,816,296 | 0 |
0 | 0 | I am having a bit of trouble understanding API calls and the URLs I'm supposed to use for grabbing data from Imgur. I'm using the following URL to grab JSON data, but I'm receiving old data: http://imgur.com/r/wallpapers/top/day.json
But if I strip the .json from the end of the URL, I see the top pictures from today.
All I want is the JSON data for today's top posts from Imgur, but I keep getting data that refers to Dec 18th, 2014.
I'm using the call in a Python script. I have a token from Imgur to do the stuff, and reading the API documentation, I see a lot of the examples start with https://api. instead of http://imgur.
Which one should I use? | true | 27,746,182 | 1.2 | 0 | 0 | 0 | Imgur updated their docs, so the new and correct form of the URL I used was:
r = requests.get("https://api.imgur.com/3/gallery/r/earthporn/top/") | 1 | 627 | 0 | 0 | 2015-01-02T17:37:00.000 | python,json,api,imgur | Correct API call to request JSON-formatted data from Imgur? | 1 | 1 | 2 | 28,462,956 | 0 |
0 | 0 | I need to open link in new tab using Selenium.
So is it possible to perform ctrl+click on an element in Selenium to open it in a new tab? | false | 27,775,759 | 0 | 0 | 0 | 0 | By importing the Keys class, we can open a page in a new tab or a new window by sending CONTROL+ENTER or SHIFT+ENTER to the element:
driver.find_element_by_xpath('//input[@name="login"]').send_keys(Keys.CONTROL,Keys.ENTER)
or
driver.find_element_by_xpath('//input[@name="login"]').send_keys(Keys.SHIFT,Keys.ENTER) | 0 | 102,116 | 0 | 26 | 2015-01-05T08:27:00.000 | python,selenium,selenium-webdriver,functional-testing | Send keys control + click in Selenium with Python bindings | 1 | 1 | 6 | 55,522,157 | 0 |
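For the literal ctrl+click the question asks about, ActionChains can hold the modifier key while clicking; a sketch (the locator is an assumption):
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
link = driver.find_element_by_xpath('//a[@id="some-link"]')   # hypothetical element
ActionChains(driver).key_down(Keys.CONTROL).click(link).key_up(Keys.CONTROL).perform()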
1 | 0 | I'm working on a script to create epub from html files, but when I check my epub I have the following error : Mimetype entry missing or not the first in archive
The Mimetype is present, but it's not the first file in the epub. Any idea how to put it in first place in any case using Python ? | true | 27,799,692 | 1.2 | 0 | 0 | 0 | The solution I've found:
delete the previous mimetype file
when creating the new archive, create a new mimetype file before adding anything else: zipFile.writestr("mimetype", "application/epub+zip")
Why does it work: the mimetype is the same for all epubs ("application/epub+zip"), so there is no need to use the original file. | 0 | 1,098 | 0 | 0 | 2015-01-06T13:28:00.000 | python,zip,epub,epub3 | epub3 : how to add the mimetype at first in archive | 1 | 1 | 2 | 28,436,076 | 0
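A sketch of that second step with the standard zipfile module; note that EPUB readers also expect the mimetype entry to be stored uncompressed, hence ZIP_STORED (the other file names are assumptions):
import zipfile

with zipfile.ZipFile("book.epub", "w") as zf:
    zf.writestr("mimetype", "application/epub+zip", compress_type=zipfile.ZIP_STORED)  # must be the first entry
    zf.write("META-INF/container.xml", compress_type=zipfile.ZIP_DEFLATED)
    zf.write("OEBPS/content.opf", compress_type=zipfile.ZIP_DEFLATED)
    zf.write("OEBPS/chapter1.xhtml", compress_type=zipfile.ZIP_DEFLATED)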
1 | 0 | I have a python script that sends a mail from Yahoo including an attachment. This script runs on a Freebsd 7.2 client and uses firefox : Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.0.10) Gecko/2009072813 Firefox/3.0.10. The script fails with the error - Element xpath=//input[@value="Send"] not found. Checked the Page source, the x-Path exists. However, it is not visible in the compose page.
Kindly help me sort out this issue. | false | 27,814,764 | 0 | 1 | 0 | 0 | Check if the button is inside an iframe. If it is then switch to frame and try it again. | 0 | 673 | 0 | 0 | 2015-01-07T07:54:00.000 | python,firefox,freebsd,yahoo-mail | Send button in yahoo mail page is not not visible - Firefox, Freebsd 7.2 | 1 | 1 | 1 | 27,815,406 | 0 |
0 | 0 | Is it possible to add new methods to a standard asyncio transport?
e.g: Adding a send method to the SSL transport that serializes a protocol buffer, constructs a frame and uses the transports own write method to do a buffered write to the underlying socket.
There are plenty of asyncio server/client examples out there, but I have not been able to find ones that implement their own transport or extends an already existing one. | false | 27,949,714 | 0.197375 | 0 | 0 | 1 | No. You cannot add a new method or inherit from existing asyncio transport.
Consider transports as final or sealed, like sockets.
You should never want to inherit from socket but make your class that embeds the socket instance inside, right?
The same for transport. See asyncio.streams as example of building new API layer on top of transport/protocol pair. | 0 | 253 | 0 | 1 | 2015-01-14T18:14:00.000 | python,python-3.4,python-asyncio | Adding methods to an asyncio Transport | 1 | 1 | 1 | 27,988,750 | 0 |
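A hedged sketch of that composition approach for the use case in the question: wrap the StreamWriter in your own class instead of subclassing the transport (the length-prefix framing and the protobuf message object are assumptions).
import asyncio
import struct

class FramedProtobufSender:
    def __init__(self, writer):
        self.writer = writer                      # asyncio.StreamWriter from open_connection()

    @asyncio.coroutine
    def send(self, message):
        payload = message.SerializeToString()     # protobuf serialization
        frame = struct.pack(">I", len(payload)) + payload   # simple 4-byte length prefix
        self.writer.write(frame)
        yield from self.writer.drain()            # respect flow control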
1 | 0 | I'm trying to find a way to test whether a URL is absolute or relative in Python. In the href attribute of an HTML tag, is the lack of a scheme (e.g. http, ftp, etc.) sufficient to mark a URL as relative, or is it possible to have an absolute URL as a href attribute without explicitly specifying the scheme (e.g. 'www.google.com')? I'm getting the scheme by using urlparse.urlparse('some url').scheme. | true | 27,955,463 | 1.2 | 0 | 0 | 2 | If you don't include the URI scheme (http://, https://, //, etc) then the browser will assume it to be a relative URL.
You should be aware of scheme-relative URLs like //www.google.com for your script. In short, you should look for a leading double forward slash // to figure out whether or not a URL will be treated as relative. | 0 | 173 | 0 | 0 | 2015-01-15T01:29:00.000 | python,html,tags,uri,url-scheme | Is it possible to have an absolute href attribute with no scheme? | 1 | 1 | 2 | 27,955,487 | 0
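A small sketch of the check, covering the scheme-relative case mentioned above (urlparse module on Python 2, urllib.parse on Python 3):
from urlparse import urlparse   # Python 2; use "from urllib.parse import urlparse" on Python 3

def is_relative(url):
    parts = urlparse(url)
    return not parts.scheme and not parts.netloc   # no scheme AND no //host part

print(is_relative("/images/logo.png"))        # True
print(is_relative("//www.google.com"))        # False: scheme-relative, still absolute
print(is_relative("http://www.google.com"))   # False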
1 | 0 | Let's say I have a billing agreement, which I just executed on the callback from the PayPal site:
resource = BillingAgreement.execute(token)
The resource returned does not have any payer information (name, email, etc). I can use the ID to load the full BillingAgreement:
billing_agreement = BillingAgreement.find(resource.id)
That returns successfully, but the resulting object also lacks an payer info.
This seems like a critical oversight in the REST API's design. If I just got a user to sign up for a subscription, don't I need to know who they are? How else will I send them a confirmation email, allow them to cancel later, etc?
Thanks for the help! | false | 27,972,459 | 0.379949 | 0 | 0 | 2 | Got a reply from PayPal support. Apparently you can take the same token you pass to BillingAgreement.execute() and pass it to GetExpressCheckoutDetails in their classic API. I tried it and it works. It means you have to use both APIs (which we weren't planning to do) and store both API auth info, which is annoying. Hopefully they'll fix it someday, but if it's been high-priority for two months I'm not holding my breath. | 0 | 555 | 0 | 1 | 2015-01-15T20:32:00.000 | python,rest,paypal | In the PayPal REST API, how can I get payer information from a newly executed BillingAgreement? | 1 | 1 | 1 | 27,989,541 | 0 |
0 | 0 | The documentation of recv_exit_status() in paramiko/channel.py says
"If no exit status is provided by the server, -1 is returned.".
In which situation can it happen that the server (Unix) is providing no exit status? | false | 27,985,812 | 0 | 0 | 0 | 0 | For example, when the connection was unexpectedly dropped. | 0 | 233 | 0 | 0 | 2015-01-16T14:11:00.000 | python,unix,ssh,paramiko,robotframework | When does recv_exit_status() return -1? | 1 | 1 | 1 | 27,986,741 | 0 |
0 | 0 | Given that every library is a python code itself, I think its logical that instead of using the import command, we can actually copy the whole code of that library and paste it to the top of our main.py.
I'm working on a remote pc, I cannot install libraries, can I use a library by just doing this?
Forgive me if this a very silly question.
Thanks | false | 28,028,695 | 0 | 0 | 0 | 0 | Most modules are actually written in C, like Pygame for example. Python itself is based on C. You can't jump to conclusions, but if the library is pure Python, I'd suggest copying the package into your project directory and importing, rather than copying and pasting code snippets. | 1 | 231 | 0 | 1 | 2015-01-19T16:00:00.000 | python | Manually adding libraries | 1 | 1 | 3 | 28,028,959 | 0 |
1 | 0 | Client side tool to extract values such as common name serial number and public key from pfx file which is loaded by client, and then sign the public key and send to server..
I have completed the backend Python code, which imports modules from the OpenSSL.Crypto library.
How do I execute the same on the client side? I.e., the signing operation should be done client side.
On Google I found that Brython, Skulpt and Pyjamas can help with this, but I am confused about where to start. Any suggestions? | false | 28,038,384 | 0.197375 | 0 | 0 | 1 | So, first things first: it is not usual to have the same code run on both the server side and the client side.
Second thing: be aware that no authentication (or "signing") done on client side can be regarded as safe. At most, the client side can take care of closely coupling the signing with the UI to give the user dynamic feedback - but since whatever requests the client side send to the server can very easily be impersonated by an script, authentication must be performed server side for each request - for example, a variable saying the current user has authenticated correctly can just be sent as "True", regardless of usernames and passwords actually known.
Third thing: despite all this, since there are these frameworks for using Python or a Python like language client side, it is indeed possible to have some modules in a code base that are used both client side and server side. Of those, I am most familiar with Brython which has achieved a nice level of Python 3.x compatibility and I indeed have a project sharing code client side and server side.
The re-used code would have to be refactored to abstract any I/O (like getting user input, or getting values from a database), of course, since these are fundamentally different things on the server side and the client side. Still, some core logic can be in place that is reused on both sides.
However third party python modules, like Pycrypto won't work client side (you could probably code a xmlrpc/jsonrpc like call to use it server side) -
and although Python's hashlib is not implemented now in Brython, the project has got a momentum that opening a feature request for at least the same codecs that exist in javascript could be fulfilled in a matter of days.
(bugs/features requests can be open at github.com/brython-dev/brython)
(PS. I just found out, to my dismay, that currently there is no standard way to compute any hash, not even md5, in JavaScript without third-party modules - that just emphasizes the utility of frameworks like Brython or CoffeeScript, which can bring in a bundle of functionality in a clean way) | 0 | 146 | 0 | 1 | 2015-01-20T05:30:00.000 | python,python-2.7,openssl,pycrypto,brython | Python Client side tool( should work in browser) to extract values from a pfx file and sign it | 1 | 1 | 1 | 28,065,519 | 0
0 | 0 | Socket module has a socket.recv_into method, so it can use user-defined bytebuffer (like bytearray) for zero-copy. But perhaps BaseEventLoop has no method like that. Is there a way to use method like socket.recv_into in asyncio? | false | 28,044,269 | 0.066568 | 0 | 0 | 1 | You may implement your own asyncio transport which utilizes the .recv_into() function, but yes, for now asyncio has no way to use .recv_into() out of the box.
Personally I doubt there is a very big speedup: when you develop in C, zero-copy is extremely important, but for high-level languages like Python the benefits are much smaller. | 1 | 1,555 | 0 | 5 | 2015-01-20T11:29:00.000 | python,python-asyncio | is there any operation like socket.recv_into in python 3 asyncio? | 1 | 2 | 3 | 28,050,981 | 0
0 | 0 | Socket module has a socket.recv_into method, so it can use user-defined bytebuffer (like bytearray) for zero-copy. But perhaps BaseEventLoop has no method like that. Is there a way to use method like socket.recv_into in asyncio? | false | 28,044,269 | 0.066568 | 0 | 0 | 1 | The low-level socket operations defined for BaseEventLoop require a socket.socket object to be passed in, e.g. BaseEventLoop.sock_recv(sock, nbytes). So, given that you have a socket.socket, you could call sock.recv_into(). Whether it is a good idea to do that is another question. | 1 | 1,555 | 0 | 5 | 2015-01-20T11:29:00.000 | python,python-asyncio | is there any operation like socket.recv_into in python 3 asyncio? | 1 | 2 | 3 | 28,045,430 | 0 |
0 | 0 | I was looking at how to send a TCP FIN using python socket.
I tried using the socket.socket.close and it sends out a TCP RST and not a TCP FIN. Can someone please let me know what is the API? | false | 28,057,383 | 0.664037 | 0 | 0 | 4 | From the python doc:
Note
close() releases the resource associated with a connection but does
not necessarily close the connection immediately. If you want to close
the connection in a timely fashion, call shutdown() before close(). | 0 | 3,320 | 0 | 3 | 2015-01-20T23:56:00.000 | python,sockets,network-programming | how to send a TCP FIN using python sockets | 1 | 1 | 1 | 28,057,748 | 0 |
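A minimal sketch of that sequence (the peer address is an assumption):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 9000))     # hypothetical peer
sock.sendall(b"last data")
sock.shutdown(socket.SHUT_WR)         # this is what actually sends the TCP FIN
# optionally keep reading here until the peer closes its side
sock.close()                          # then release the descriptor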
0 | 0 | I'm trying to learn how socket module works and I have a dumb question:
Where is socket.send()'s sent data stored before being cleared by socket.recv()?.
I believe there must be a buffer somewhere in the middle awaiting for socket.recv() calls to pull this data out.
I just made a test with a server which sent a lot of data at once, and then connected to a client that (intentionally) pulled data very slowly. The final result was that data was sent within a fraction of a second and, on the other side, it was entirely received in small chunks of 10 bytes .recv(10), which took 20 seconds long .
Where has this data been stored meanwhile??, what is the default size of this buffer?, how can it be accessed and modified?
thanks. | false | 28,057,552 | 0 | 0 | 0 | 0 | Learn about OSI layers, and different connections such as TCP and UDP. socket.send implements a TCP transmission of data. If you look into the OSI layers, you will find out that the 4th layer (i.e. transport layer) will buffer the data to be transmitted. The default size of the buffer depends on the implementation. | 0 | 184 | 0 | 0 | 2015-01-21T00:14:00.000 | python,sockets,buffer,recv | Trying to understand buffering in Python socket module | 1 | 2 | 2 | 28,057,641 | 0 |
0 | 0 | I'm trying to learn how socket module works and I have a dumb question:
Where is socket.send()'s sent data stored before being cleared by socket.recv()?.
I believe there must be a buffer somewhere in the middle awaiting for socket.recv() calls to pull this data out.
I just made a test with a server which sent a lot of data at once, and then connected to a client that (intentionally) pulled data very slowly. The final result was that data was sent within a fraction of a second and, on the other side, it was entirely received in small chunks of 10 bytes .recv(10), which took 20 seconds long .
Where has this data been stored meanwhile??, what is the default size of this buffer?, how can it be accessed and modified?
thanks. | true | 28,057,552 | 1.2 | 0 | 0 | 0 | The OS (kernel) buffers the data.
On Linux the buffer size parameters may be accessed through the /proc interface. See man 7 socket for more details (towards the end.) | 0 | 184 | 0 | 0 | 2015-01-21T00:14:00.000 | python,sockets,buffer,recv | Trying to understand buffering in Python socket module | 1 | 2 | 2 | 28,057,652 | 0 |
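You can also inspect (and request changes to) those kernel buffer sizes from Python itself; a small sketch:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))    # kernel send-buffer size in bytes
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))    # kernel receive-buffer size in bytes
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)    # a request; the kernel may adjust it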
0 | 0 | I have written a python server that does a task depending on the input given by the user through a client. Unfortunately, this requires the user to use the terminal.
I'd like the user to use a browser instead to send the data to the server. How would I go on about this?
Does anyone here have suggestions? Perhaps even an example?
Thank you all in advance, | false | 28,109,630 | 0 | 0 | 0 | 0 | Answering my question for future reference or other people with similar requests.
All of the requirements for this can be found in the standard module BaseHTTPServer | 0 | 231 | 0 | 1 | 2015-01-23T12:10:00.000 | python,browser,client,server | Python - Server and browser-client | 1 | 1 | 2 | 28,124,355 | 0 |
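A minimal Python 2 sketch with the BaseHTTPServer module named above (the form field name and port are assumptions):
import BaseHTTPServer
import urlparse

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write('<form method="POST"><input name="cmd"><input type="submit"></form>')

    def do_POST(self):
        length = int(self.headers.getheader("Content-Length"))
        fields = urlparse.parse_qs(self.rfile.read(length))   # e.g. {"cmd": ["do_task"]}
        self.send_response(200)
        self.end_headers()
        self.wfile.write("received: %r" % fields)

BaseHTTPServer.HTTPServer(("", 8080), Handler).serve_forever()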
1 | 0 | In my Django app I need to send an HTTP PUT request to a url.
What is the proper syntax for that? | false | 28,115,111 | 0 | 0 | 0 | 0 | Found the answer:
req = urllib2.Request(url=your_url, data=your_data)
req.get_method = lambda: 'PUT' | 0 | 831 | 0 | 0 | 2015-01-23T17:10:00.000 | django,python-2.7 | How to send an HTTP PUT request using urllib2 or requests | 1 | 1 | 1 | 28,115,962 | 0 |
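The requests equivalent mentioned in the title is shorter; a sketch (URL and payload are assumptions):
import requests

resp = requests.put("http://example.com/api/item/1/", data={"name": "new value"})
print(resp.status_code)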
0 | 0 | I am using an API that returns articles in my language in certain categories. This API limits me to 100 calls in each 60-minute interval.
I don't want to make 100 calls straight away and make my script wait until 60 minutes has passed.
I could then shoot an API call every 36 seconds, but I also don't want the API calls to be shot evenly.
What is a feasible way to make my script make 100 API calls at random intervals of time, as long as the 100 fits in 60 minutes?
I thought of making a function that would generate 100 timestamps in this 60 minutes interval, and then at the right time of each timestamp, it would shoot an API call, but I think that'd be overkill, and I'm not sure how I could do that either. | true | 28,124,450 | 1.2 | 0 | 0 | 2 | What you could do is choose a min/max interval of how long you want to wait. Keep a note of how many requests have been made in the last 60 minutes and if you're still below the quota, download a document and wait for rand(min, max). This is not very fancy and doesn't distribute the wait times across the whole 60 minutes interval, but it's easy to implement.
Another way would be to randomly choose 100 numbers between 0 and 60*60. These are the seconds on which you make requests. Sort them and as you progress through the array, each time you wait for next - current seconds. (or even use the scheduler module to simplify it a bit) | 0 | 303 | 0 | 0 | 2015-01-24T10:20:00.000 | python,python-2.7 | How can I distribute 100 API calls randomly in a 60 minutes time interval? | 1 | 1 | 1 | 28,124,506 | 0 |
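A small sketch of that second approach (make_api_call is a hypothetical stand-in for the real request):
import random
import time

offsets = sorted(random.uniform(0, 3600) for _ in range(100))   # 100 random points inside the hour
start = time.time()
for offset in offsets:
    delay = start + offset - time.time()
    if delay > 0:
        time.sleep(delay)
    make_api_call()   # hypothetical function that performs one API request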
0 | 0 | Which is the more resource-friendly way to collect SNMP traps from a Cisco router via python:
I could use a manager on a PC running a server, where the Cisco SNMP traps are sent to in case one occurs
I could use an agent to send a GET/GETBULK request every x timeframe to check if any new traps have occurred
I am looking for a way to run the script so that it uses the least resources as possible. Not many traps will occur so the communication will be low mostly, but as soon as one does occur, the PC should know immediately. | false | 28,152,579 | 0 | 1 | 0 | 0 | Approach 1 is better from most perspectives.
It uses a little memory on the PC due to running a trap collecting daemon, but the footprint should be reasonably small since it only needs to listen for traps and decode them, not do any complex task.
Existing tools to receive traps include the net-snmp suite which allows you to just configure the daemon (i e you don't have to do any programming if you want to save some time).
Approach 2 has a couple of problems:
No matter what polling interval you choose, you run the risk of missing an alarm that was only active on the router for a short time.
Consumes CPU and network resources even if no faults are occurring.
Depending on the MIB of the router, some types of event may not be stored in any table for later retrieval. For Cisco, I would not expect this problem, but you do need to study the MIB and make sure of this. | 0 | 2,151 | 0 | 2 | 2015-01-26T14:54:00.000 | python,cisco,pysnmp | Collecting SNMP traps with pySNMP | 1 | 1 | 1 | 48,458,525 | 0 |
0 | 0 | I'm using Google Drive's API to download a video from my Drive account. However, I'm using Partial Download, receiving the file by chunks. This is done by a Python script.
If I try to play it using an HTML5 video element, it won't work as I want because it only plays up to the size the file had before the page loaded and then it stops, without waiting for the next chunk or updating to the current file size.
Is there a way to play it as if it was streaming?
In the near future I also need to stream this way to chromecast and other devices (android, ps4, etc). | true | 28,154,828 | 1.2 | 0 | 0 | 1 | What you are looking to do is called progressive download, or pseudo-streaming. It is possible with the mp4 container if the moov atom is before the mdat atom. This is sometimes called faststart | 0 | 369 | 0 | 2 | 2015-01-26T16:53:00.000 | python,video,google-drive-api,html5-video,chromecast | How to play a video being downloaded as if it was a stream? | 1 | 1 | 1 | 28,157,049 | 0 |
0 | 0 | After adding a user,
how can I reload the Dial-plan XML dynamical using script. | true | 28,170,340 | 1.2 | 0 | 0 | 2 | Using the EventSocket or simply a shell script calling fscli -x "reloadxml" | 0 | 864 | 0 | 0 | 2015-01-27T12:16:00.000 | javascript,python,lua,freeswitch | In freeswitch is there any possibility to reload xml using script | 1 | 1 | 2 | 28,245,548 | 0 |
0 | 0 | I have an embedded system connected with an ethernet port to one of my 2 ethernet interfaces, but I have the problem that my python code for the socket connection does not know where to connect to the embedded.
I mean, sometimes I get the connection and sometimes I just don't (and I have to change the cable to the other interface), because I don't know how the socket functionality is getting the right ethernet port in which it has to connect.
is there anything I can do on my python code to know the correct ethernet port in which the embedded is connected? (in order to know every time I connect it without changing the cable to another interface) | false | 28,174,487 | 0.53705 | 0 | 0 | 3 | Unplug one and if it stops working - you found the right one.
If it does not stop working it is the other one. | 0 | 172 | 0 | 1 | 2015-01-27T15:50:00.000 | python,sockets,ethernet | I have 2 ethernet interfaces on the same PC, how to know which interface is connected in python? | 1 | 1 | 1 | 28,174,528 | 0 |
0 | 0 | Hi, I want to make a web crawler that checks URLs for data. If I make a simple GUI that would make it easier to look for variables in that data, would adding code for the GUI make my web crawler less efficient?
I need the crawler to be as efficient as possible, to be able to process data as fast as possible. Would making a GUI for this Python script hinder the performance of the web crawler?
I would suggest you first create your web crawler and forget the GUI. Profile and optimize it if you think it's too slow. Build it with the goal of it being an importable library. Once that works from the command line, then create your GUI front-end and bind your web crawling library functions and classes to the buttons and fields in your GUI. That clear separation should ensure that the GUI logic doesn't interfere with the performance of the web crawling. In particular, try to avoid updating the GUI with status information in the middle of the web crawling. | 0 | 150 | 0 | 0 | 2015-01-27T18:49:00.000 | python,user-interface,tkinter | If I make a simple gui for my python script, would it affect its efficiency? | 1 | 1 | 1 | 28,177,945 | 0 |
0 | 0 | I have some git commands scripted in python and occasionally git fetch can hang for hours if there is no connectivity. Is there a way to time it out and report a failure instead? | true | 28,180,013 | 1.2 | 1 | 0 | 1 | No you can't, you need to timeout the command using a wrapper. | 0 | 2,127 | 0 | 1 | 2015-01-27T21:07:00.000 | python,git,timeout | Can you specify a timeout to git fetch? | 1 | 1 | 2 | 29,615,466 | 0 |
0 | 0 | I am trying to extract mails from gmail using python. I noticed that I can get mails from "[Gmail]/All Mail", "[Gmail]/Drafts","[Gmail]/Spam" and so on. However, is there any method to retrieve mails that are labeled with "Primary", "Social", "Promotions" etc.? These tags are under the "categories" label, and I don't know how to access it.
By the way, I am using imaplib in python. Do I need to access the "categories" with some pop library? | false | 28,183,527 | 0 | 1 | 0 | 0 | Unfortunately the categories are not exposed to IMAP. You can work around that by using filters in Gmail to apply normal user labels. (Filter on, e.g., category:social.) | 0 | 596 | 0 | 0 | 2015-01-28T02:18:00.000 | python,email,gmail | How to extract mail in "categories" label in gmail? | 1 | 2 | 2 | 28,184,142 | 0 |
0 | 0 | I am trying to extract mails from gmail using python. I noticed that I can get mails from "[Gmail]/All Mail", "[Gmail]/Drafts","[Gmail]/Spam" and so on. However, is there any method to retrieve mails that are labeled with "Primary", "Social", "Promotions" etc.? These tags are under the "categories" label, and I don't know how to access it.
By the way, I am using imaplib in python. Do I need to access the "categories" with some pop library? | false | 28,183,527 | 0 | 1 | 0 | 0 | Yes, categories are not available in IMAP. However, rather than filters, I found that gmail api is more favorable for me to get mail by category. | 0 | 596 | 0 | 0 | 2015-01-28T02:18:00.000 | python,email,gmail | How to extract mail in "categories" label in gmail? | 1 | 2 | 2 | 28,218,594 | 0 |
0 | 0 | In C socket programming at the server-side, after a connection is accepted we can get a handle of the new socket (who is transmitting the data) by "connfd" which is the return value of "accept".
Now I'm trying to implement a web server with Python, I have a handler based on BaseHttpRequestHandler, who handles the requests with the do-Get method.
How can I get ahold of the socket that is transmitting data now (the socket created after the accept and not the one created after bind)?
The reason I need the socket is that I need to read TCP_info from getsockopt with it.
Thanks! | false | 28,205,527 | 0.379949 | 0 | 0 | 2 | Found it! It is "self.request"!! Confusing style of naming. | 0 | 141 | 1 | 1 | 2015-01-29T01:25:00.000 | python,sockets,http,webserver | where's the equivalent of connfd (created socket in C) in python's BaseHTTPRequestHandler | 1 | 1 | 1 | 28,205,847 | 0 |
1 | 0 | I am having a simple python script scraping some data from html page and writing out results to a csv file. How can I automate the scraping, i.e. kick it off every five minutes under Windows.
Thanks
Peter | false | 28,215,153 | 0.291313 | 0 | 0 | 3 | If you are using Windows i would suggest using Windows Task Scheduler, it's quite simple thanks to the UI and from there Trigger your Python code.
For a server environment like Linux you could set up a Cron task. | 0 | 11,625 | 0 | 3 | 2015-01-29T12:46:00.000 | python | Run python script every 5 minutes under Windows | 1 | 1 | 2 | 28,215,366 | 0 |
0 | 0 | I am sending 20000 messages from a DEALER to a ROUTER using pyzmq.
When I pause 0.0001 seconds between each messages they all arrive but if I send them 10x faster by pausing 0.00001 per message only around half of the messages arrive.
What is causing the problem? | false | 28,246,973 | 1 | 1 | 0 | 6 | What is causing the problem?
A default setup of the ZMQ IO-thread - that is responsible for the mode of operations.
I would dare to call it a problem, the more if you invest your time and dive deeper into the excellent ZMQ concept and architecture.
Since early versions of the ZMQ library, there were some important parameters, that help the central masterpiece ( the IO-thread ) keep the grounds both stable and scalable and thus giving you this powerful framework.
Zero SHARING / Zero COPY / (almost) Zero LATENCY are the maxims that do not come at zero-cost.
The ZMQ.Context instance has quite a rich internal parametrisation that can be modified via API methods.
Let me quote from a marvelous and precious source -- Pieter HINTJENS' book, Code Connected, Volume 1.
( It is definitely worth spending time and step through the PDF copy. C-language code snippets do not hurt anyone's pythonic state of mind as the key messages are in the text and stories that Pieter has crafted into his 300+ thrilling pages ).
High-Water Marks
When you can send messages rapidly from process to process, you soon discover that memory is a precious resource, and one that can be trivially filled up. A few seconds of delay somewhere in a process can turn into a backlog that blows up a server unless you understand the problem and take precautions.
...
ØMQ uses the concept of HWM (high-water mark) to define the capacity of its internal pipes. Each connection out of a socket or into a socket has its own pipe, and HWM for sending, and/or receiving, depending on the socket type. Some sockets (PUB, PUSH) only have send buffers. Some (SUB, PULL, REQ, REP) only have receive buffers. Some (DEALER, ROUTER, PAIR) have both send and receive buffers.
In ØMQ v2.x, the HWM was infinite by default. This was easy but also typically fatal for high-volume publishers. In ØMQ v3.x, it’s set to 1,000 by default, which is more sensible. If you’re still using ØMQ v2.x, you should always set a HWM on your sockets, be it 1,000 to match ØMQ v3.x or another figure that takes into account your message sizes and expected subscriber performance.
When your socket reaches its HWM, it will either block or drop data depending on the socket type. PUB and ROUTER sockets will drop data if they reach their HWM, while other socket types will block. Over the inproc transport, the sender and receiver share the same buffers, so the real HWM is the sum of the HWM set by both sides.
Lastly, the HWM-s are not exact; while you may get up to 1,000 messages by default, the real buffer size may be much lower (as little as half), due to the way libzmq implements its queues. | 0 | 1,622 | 0 | 4 | 2015-01-31T00:52:00.000 | python,zeromq,pyzmq | ZMQ DEALER ROUTER loses message at high frequency? | 1 | 1 | 1 | 28,255,012 | 0 |
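In pyzmq the high-water marks are socket options that must be set before connect/bind; a sketch (values and endpoint are assumptions). Remember the ROUTER side is the one that silently drops, so raise zmq.RCVHWM on that socket as well.
import zmq

ctx = zmq.Context()
dealer = ctx.socket(zmq.DEALER)
dealer.setsockopt(zmq.SNDHWM, 100000)    # raise the send HWM before connecting
dealer.setsockopt(zmq.RCVHWM, 100000)
dealer.connect("tcp://127.0.0.1:5555")   # hypothetical ROUTER endpoint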
0 | 0 | Suppose I have to create a graph with 15 nodes and certain nodes. Instead of feeding the nodes via coding, can the draw the nodes and links using mouse on a figure? Is there any way to do this interactively? | false | 28,260,051 | 0.197375 | 0 | 0 | 1 | No.
Sorry. In principle it could be possible to create a GUI which interfaces with networkx (and maybe some people have), but it's not built directly into networkx. | 0 | 105 | 0 | 0 | 2015-02-01T06:14:00.000 | python,graph,networkx | Python : Is there a way to interactively draw a graph in NetworkX? | 1 | 1 | 1 | 28,291,575 | 0 |
1 | 0 | I'm trying to send a file via a POST request to a server on localhost. I'm using HttpRequester in Firefox (also tried Postman in Chrome and Tasker on Android) to submit the request.
The problem is that request.FILES is always empty. But when I try to print request.body it shows some non-human-readable data which in particular includes the data from the file I want to upload (it's a database). So it makes sense to me that the file somehow arrives at the server.
From Django docs:
Note that FILES will only contain data if the request method was POST
and the <form> that posted to the request had
enctype="multipart/form-data". Otherwise, FILES will be a blank
dictionary-like object.
There was an error 'Invalid boundary in multipart: None' when I tried to set Content-type of request to 'multipart/form-data'. An error disappeared when I added ';boundary=frontier' to Content-type.
Another approach was to set enctype="multipart/form-data".
Therefore I have several questions:
Should I use exactly multipart/form-data content-type?
Where can I specify enctype? (headers, parameters, etc)
Why does the file's data appear in request.body while request.FILES is empty?
Thanks | false | 28,295,059 | 0.53705 | 0 | 0 | 3 | Should I use exactly multipart/form-data content-type?
Django supports only multipart/form-data, so you must use that content-type.
Where can I specify enctype? (headers, parameters, etc)
in normal HTML just put enctype="multipart/form-data" as one of the attributes of your form element. In HttpRequester it's more complicated, because I think it lacks support for multipart/form-data by default. http://www.w3.org/TR/html4/interact/forms.html#h-17.13.4.2 has more details about multipart/form-data; it should be possible to build such a request in HttpRequester by hand.
Why does the file's data appear in request.body while request.FILES is empty?
You've already answered that:
Note that FILES will only contain data if the request method was POST and the <form> that posted to the request had enctype="multipart/form-data". Otherwise, FILES will be a blank dictionary-like object. | 0 | 3,002 | 0 | 2 | 2015-02-03T09:08:00.000 | python,django,forms,post | Empty request.FILES in Django | 1 | 1 | 1 | 28,295,685 | 0
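For comparison, this is how a client would send a proper multipart/form-data body; requests builds the boundary for you (URL and field name are assumptions, and the field name must match what the Django view reads from request.FILES):
import requests

with open("backup.db", "rb") as fh:
    resp = requests.post("http://localhost:8000/upload/",   # hypothetical endpoint
                         files={"db_file": fh})             # becomes request.FILES["db_file"]
print(resp.status_code)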
0 | 0 | Since my scaper is running so slow (one page at a time) so I'm trying to use thread to make it work faster. I have a function scrape(website) that take in a website to scrape, so easily I can create each thread and call start() on each of them.
Now I want to implement a num_threads variable that is the number of threads that I want to run at the same time. What is the best way to handle those multiple threads?
For example: suppose num_threads = 5; my goal is to start 5 threads, grab the first 5 websites in the list and scrape them, then if thread #3 finishes, it will grab the 6th website from the list to scrape immediately, not wait until the other threads end.
Any recommendation for how to handle it? Thank you | false | 28,308,285 | 0 | 0 | 0 | 0 | It depends.
If your code is spending most of its time waiting for network operations (likely, in a web scraping application), threading is appropriate. The best way to implement a thread pool is to use concurrent.futures in 3.4. Failing that, you can create a threading.Queue object and write each thread as an infinite loop that consumes work objects from the queue and processes them.
If your code is spending most of its time processing data after you've downloaded it, threading is useless due to the GIL. concurrent.futures provides support for process concurrency, but again only works in 3.4+. For older Pythons, use multiprocessing. It provides a Pool type which simplifies the process of creating a process pool.
You should profile your code (using cProfile) to determine which of those two scenarios you are experiencing. | 1 | 1,243 | 0 | 0 | 2015-02-03T20:38:00.000 | python,multithreading | Python what is the best way to handle multiple threads | 1 | 1 | 2 | 28,308,422 | 0 |
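A sketch of the concurrent.futures route for the exact behaviour described in the question - a free worker picks up the next URL as soon as it finishes (the URL list is an assumption; scrape() is the user's existing function):
from concurrent.futures import ThreadPoolExecutor

websites = ["http://example.com/a", "http://example.com/b"]   # hypothetical list of URLs
num_threads = 5

with ThreadPoolExecutor(max_workers=num_threads) as pool:
    results = list(pool.map(scrape, websites))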
1 | 0 | I am using PhantomJS via Python through Selenium+Ghostdriver.
I am looking to load several pages simultaneously and to do so, I am looking for an async method to load pages.
From my research, PhantomJS already lives in a separate thread and supports multiple tabs, so I believe the only missing piece of the puzzle is a method to load pages in a non-blocking way.
Any solution would be welcome, be it a simple Ghostdriver method I overlooked, bypassing Ghostdriver and interfacing directly with PhantomJS or a different headless browser.
Thanks for the help and suggestions.
Yuval | true | 28,319,579 | 1.2 | 0 | 0 | 2 | If you want to bypass ghostdriver, then you can directly write your PhantomJS scripts in JavaScript or CoffeeScript. As far as I know there is no way of doing this with the selenium webdriver except with different threads in the language of your choice (python).
If you are not happy with it, there is CasperJS which has more freedom in writing scripts than with selenium, but you will only be able to use PhantomJS or SlimerJS. | 0 | 691 | 0 | 1 | 2015-02-04T10:55:00.000 | python,selenium-webdriver,phantomjs,headless-browser,ghostdriver | Opening pages asynchronously in headless browser (PhantomJS) | 1 | 1 | 2 | 28,319,699 | 0 |
1 | 0 | I am currently authenticating via a RESTful http api that generates a token which is then used for subsequent request.
The api server is written with python twisted and works great
the auth token generation works fine in browsers
When requesting from software written in pyqt
the first request hands over a token to the pyqt app
while subsequent request from the pyqt app fails because the remote twisted server believes it is another browser entirely.
javascript ajax does this too but is solvable by sending xhrFields: {withCredentials: true} along with the request.
How do I resolve this in PyQt? | false | 28,324,393 | 0.197375 | 0 | 0 | 1 | So i figured out that Qt isn't sending the TWISTED_SESSION cookie back with subsequent requests.
all i did was send the cookie along with subsequent requests and it worked fine.
i had to sqitch to python's request to ease things | 0 | 49 | 0 | 1 | 2015-02-04T14:56:00.000 | python,qt,pyqt,twisted,twisted.web | IQtNetwork.QHttp request credential issue | 1 | 1 | 1 | 28,327,369 | 0 |
0 | 0 | I am doing a digital signage project using Raspberry-Pi. The R-Pi will be connected to HDMI display and to internet. There will be one XML file and one self-designed HTML webpage in R-Pi.The XML file will be frequently updated from remote terminal.
My idea is to parse the XML file using Python (lxml) and pass this parsed data to my local HTML webpage so that it can display this data in R-Pi's web-browser.The webpage will be frequently reloading to reflect the changed data.
I was able to parse the XML file using Python (lxml). But what tools should I use to display this parsed contents (mostly strings) in a local HTML webpage ?
This question might sound trivial but I am very new to this field and could not find any clear answer anywhere. Also there are methods that use PHP for parsing XML and then pass it to HTML page but as per my other needs I am bound to use Python. | false | 28,329,977 | 0.099668 | 1 | 0 | 1 | I think there are 3 steps you need to make it work.
Extracting only the data you want from the given XML file.
Using simple template engine to insert the extracted data into a HTML file.
Use a web server to service the file create above.
Step 1) You are already using lxml which is a good library for doing this so I don't think you need help there.
Step 2) Now there are many python templating engines out there but for a simple purpose you just need an HTML file that was created in advance with some special markup such as {{0}}, {{1}} or whatever that works for you. This would be your template. Take the data from step 1 and just do find and replace in the template and save the output to a new HTML file.
Step 3) To make that file accessible using a browser on a different device or a PC you need to serve it using a simple HTTP web server. Python provides the http.server library, or you can use a 3rd-party web server; just make sure it can access the file created in step 2. | 0 | 4,898 | 0 | 2 | 2015-02-04T19:40:00.000 | python,html,xml,parsing,raspberry-pi | Parse XML file in python and display it in HTML page | 1 | 1 | 2 | 28,361,971 | 0
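A hedged sketch of steps 2 and 3; the placeholder syntax, file names and port are assumptions:
# step 2: naive find-and-replace templating
with open("template.html") as f:
    page = f.read()
for i, value in enumerate(parsed_strings):        # parsed_strings comes from the lxml step
    page = page.replace("{{%d}}" % i, value)
with open("www/index.html", "w") as f:
    f.write(page)

# step 3: serve the www directory, e.g. from a shell on the R-Pi:
#   python3 -m http.server 8000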
0 | 0 | How can I use some 'extra' python module which is not located locally but on a remote server? Something like using maven dependencies with Java | true | 28,379,325 | 1.2 | 1 | 0 | 1 | You would be better off installing those modules locally.
If you can create packages of those modules (either using pip or something similar), then you can distribute them to your local box.
As far as I know, there is nothing similar. | 0 | 32 | 1 | 0 | 2015-02-07T06:30:00.000 | python-3.x,python-import,python-module | How to use remote python modules | 1 | 1 | 1 | 29,157,556 | 0
0 | 0 | How can I recognize SSL packets when I sniff in scapy?
I know that SSL packets are going through port 443, can I assume that all the TCP packets that go through port 443 are SSL packets? | false | 28,406,798 | 0.291313 | 0 | 0 | 3 | You can neither assume that all traffic using port 443 is SSL and also that SSL can only be found on port 443. To detect SSL traffic you might try to look at the first bytes, i.e. a data stream starting with \x16\x03 followed by [\x00-\x03] might be a ClientHello for SSL 3.0 ... TLS 1.2. But of course it might also be some other protocol which just uses the same initial byte sequence. | 0 | 6,880 | 0 | 1 | 2015-02-09T09:47:00.000 | python,ssl,packet,scapy | Scapy sniffing SSL | 1 | 1 | 2 | 28,407,181 | 0 |
0 | 0 | I have a local path like C:\Users\some_user\Desktop\some_folder\attachments\20150210115. I need to generate a network path with python, like \\PC_NAME\C:\Users\some_user\Desktop\some_folder\attachments\20150210115 or something like this, to get this folder from another windows pc. Is it possible to do this in python automatically, or do I just need to hardcode the local path, replacing the pc name and other stuff?
UPDATE
Sorry, I'm not so familiar with Windows paths as I live in Linux. I just need to generate a network path for a local path and send it to another device.
EDIT: as the others pointed out, there are ways to get your machines network name, but you'll have to build the network path with that manually, and I think your question was about automatically matching a file to it's windows share "URL". | 0 | 1,113 | 1 | 1 | 2015-02-10T12:55:00.000 | python,python-2.7 | How to get network path from local path? | 1 | 1 | 3 | 28,432,344 | 0 |
0 | 0 | I have a lot of data files (almost 150) in binary structure created according the .proto scheme of Protocol Buffer. Is there any efficient solution how to merge all the files to just one big binary data file without losing any information? | false | 28,497,204 | 0 | 0 | 0 | 0 | Thanks to @Likor, I can just combine the binaries using cat proto1.bin proto1.bin > combined_proto.bin then de-serialize the binary to string. | 0 | 720 | 0 | 3 | 2015-02-13T10:15:00.000 | python,protocol-buffers | Protocol Buffer - merge binary data files with the same .proto file to the one file | 1 | 1 | 2 | 71,591,118 | 0 |
0 | 0 | I am iterating a list of links for screen scraping. The pages have JavaScript so I use Selenium. I have a defined a function to get the source for each page.
Should I instantiate the WebDriver inside that function, which will happen once per loop?
Or should I instantiate outside the function and pass the WebDriver in?
Or assign the WebDriver to a variable that will be visible from inside the function, without explicitly passing it? | true | 28,514,950 | 1.2 | 0 | 0 | 1 | Each instantiation of WebDriver launches a new browser, which is a very costly operation, so option 1 is not what you want to do.
I would also not do option 3 because it is not good coding practice to depend on global variables when it can easily be avoided.
This leaves you option 2: instantiate WebDriver once and pass the instance to your function(s). | 0 | 197 | 0 | 2 | 2015-02-14T11:03:00.000 | python,selenium | Where should I instantiate my WebDriver instance when looping? | 1 | 1 | 1 | 28,515,299 | 0 |
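A small sketch of that option 2 (the link list and page-source usage are assumptions):
from selenium import webdriver

def get_source(driver, url):
    driver.get(url)
    return driver.page_source

driver = webdriver.Firefox()          # one browser launch for the whole run
try:
    sources = [get_source(driver, link) for link in links]   # links: your list of URLs
finally:
    driver.quit()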