Dataset columns (name, dtype, min, max; for string columns the min/max are string lengths):

Web Development                      int64           0      1
Data Science and Machine Learning    int64           0      1
Question                             stringlengths   28     6.1k
is_accepted                          bool            2 classes
Q_Id                                 int64           337    51.9M
Score                                float64         -1     1.2
Other                                int64           0      1
Database and SQL                     int64           0      1
Users Score                          int64           -8     412
Answer                               stringlengths   14     7k
Python Basics and Environment        int64           0      1
ViewCount                            int64           13     1.34M
System Administration and DevOps     int64           0      1
Q_Score                              int64           0      1.53k
CreationDate                         stringlengths   23     23
Tags                                 stringlengths   6      90
Title                                stringlengths   15     149
Networking and APIs                  int64           1      1
Available Count                      int64           1      12
AnswerCount                          int64           1      28
A_Id                                 int64           635    72.5M
GUI and Desktop Applications         int64           0      1
0
0
I want to implement a Python script to act as my OAuth2 endpoint, since I'm trying to write a Soundcloud app. Part of the authentication process involves visiting a Soundcloud page where you can sign in and grant access to the given Soundcloud application. I'd like to be able to open that webpage in a browser using Python 3, which you can do with the webbrowser object. You can see on the documentation that launching a text-based browser blocks execution; I want to block execution whilst the webpage is open in a GUI-based browser. Does anyone know whether this is possible?
true
24,706,850
1.2
0
0
0
This might be impossible to do portably. For example, if Firefox is already running on Linux, a second invocation of firefox http://url will find out that an instance using the same profile is already running, send a message to that process to open the URL in a tab, and then exit immediately. However, you could accomplish the same thing by sending the authentication tokens to a server, and simultaneously polling that server for the credentials in the python script.
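A minimal sketch of the blocking idea, assuming a hypothetical redirect URI of http://localhost:8080/callback registered with the Soundcloud app (the auth URL and client id below are placeholders): Python 3's http.server waits for the single redirect request, so the script blocks until the user finishes in their GUI browser.

    import webbrowser
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    AUTH_URL = "https://soundcloud.com/connect?client_id=YOUR_ID&response_type=code"  # hypothetical

    class CallbackHandler(BaseHTTPRequestHandler):
        code = None
        def do_GET(self):
            # The provider redirects here with ?code=... after the user grants access
            CallbackHandler.code = parse_qs(urlparse(self.path).query).get("code", [None])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"You can close this window now.")

    server = HTTPServer(("localhost", 8080), CallbackHandler)
    webbrowser.open(AUTH_URL)        # returns immediately in a GUI browser
    server.handle_request()          # blocks until the browser hits the redirect URI
    print("authorization code:", CallbackHandler.code)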
0
225
0
0
2014-07-11T21:29:00.000
python,flask,soundcloud
How can I make Python's 'webbrowser' block execution?
1
1
1
24,741,514
0
1
0
So, do they? That is the question. I'm not seeing any increase in my coverage reports with my integration tests done with selenium.
false
24,766,819
0.197375
0
0
2
Well, they increase coverage if they execute code which is not executed by your other tests. However, that won't show up in your reports unless you figure out a way to capture which lines are executed during selenium testing, and add that into the data about coverage.
0
826
0
0
2014-07-15T19:41:00.000
python,django,selenium,integration-testing,code-coverage
Do selenium tests in django applications increase coverage?
1
1
2
24,768,403
0
1
0
I've searched thoroughly and still can't find the answer to this question. I finally figured out how to prefill a form in an iframe using splinter, but it only works in firefox on my computer, while not working in another browser, let alone a mobile device. I've tried importing webdriver from selenium, etc. Still, nothing. So far, the webbrowser works both on the pc and my android device to easily pull up a website; unfortunately, I can't get it to prefill forms in iframes. Can anybody help??? Thank you!!!
false
24,846,309
0
0
0
0
You can provide it as the argument when you instantiate the splinter browser object.
0
128
0
0
2014-07-20T00:45:00.000
android,python,iframe,selenium,splinter
How can I force splinter to use a default browser?
1
1
1
30,110,730
0
0
0
I have two servers: Golang and Python (2.7). The Python (Bottle) server has a computation intensive task to perform and exposes a RESTful URI to start the execution of the process. That is, the Go server sends: HTTP GET to myserver.com/data The python server performs the computation and needs to inform the Go server of the completion of the processing. There are two ways in which I see this can be designed: Go sends a callback URL/data to Python and python responds by hitting that URL. E.g: HTTP GET | myserver.com/data | Data{callbackURI:goserver.com/process/results, Type: POST, response:"processComplete"} Have a WebSocket based response be sent back from Python to Go. What would be a more suitable design? Are there pros/cons of doing one over the other? Other than error conditions (server crashed etc.,) the only thing that the Python server needs to actually "inform" the client is about completing the computation. That's the only response. The team working on the Go server is not very well versed with having a Go client based on websockets/ajax (nor do I. But I've never written a single line of Go :) #1 seems to be easier but am not aware of whether it is an accepted design approach or is it just a hack? What's the recommended way to proceed in this regard?
true
24,873,787
1.2
0
0
2
It depends on how often these signals are being sent. If it's many times per second, keeping a websocket open might make more sense. Otherwise, use option #1 since it will have less overhead and be more loosely coupled.
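For illustration, option #1 on the Python side is just one HTTP request back to the callback URI the Go server supplied (the URL and payload field below are taken from the question and are illustrative); this sketch assumes the requests library is installed.

    import json
    import requests

    def notify_completion(callback_uri):
        # Tell the Go server that the computation finished
        payload = json.dumps({"response": "processComplete"})
        resp = requests.post(callback_uri, data=payload,
                             headers={"Content-Type": "application/json"})
        resp.raise_for_status()

    notify_completion("http://goserver.com/process/results")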
0
1,211
0
1
2014-07-21T20:08:00.000
python,rest,go,websocket
HTTP Callback URL vs. WebSocket for asynchronous response?
1
1
2
24,876,825
0
0
0
I want to establish one session at the start of the suite. That session should stay open for a long time, across multiple test cases, and should end only at the very end. The session should be implemented in Selenium WebDriver using the unittest framework in the Python language. Please, can anyone suggest any methods or how to implement it?
false
24,890,579
0
1
0
0
The simplest way to achieve this is not to use the Setup() and TearDown() methods, or more specifically not to create a new instance of the WebDriver object at the start of each test case, and not to use the Quit() method at the end of each test case. In your first test case create a new instance of the WebDriver object and use this object for all of your test cases. At the end of your last test case use the Quit() method to close the browser.
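One way to express that with Python's unittest is sketched below, using setUpClass/tearDownClass so the browser is created once per test class rather than once per test (Firefox and the URLs are assumptions for the example):

    import unittest
    from selenium import webdriver

    class MySuite(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            cls.driver = webdriver.Firefox()   # one browser session for the whole class

        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()                  # closed only after the last test

        def test_first_page(self):
            self.driver.get("http://example.com")
            self.assertIn("Example", self.driver.title)

        def test_second_page(self):
            self.driver.get("http://example.com/other")  # reuses the same session

    if __name__ == "__main__":
        unittest.main()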
0
739
0
0
2014-07-22T14:47:00.000
python,session,selenium,webdriver,python-unittest
how to establish a session throughout the selenium webdriver suite by using python in firefox
1
1
2
24,891,062
0
1
0
I used to create web apps with the server and the client on the same computer, but if the server and the client are not on the same computer, how can we access the web page? I mean, for example, I have an HTML form and an "ok" button: if the server and the client are on the same computer, in action = " " we put localhost/file.py, but if the server and the client are not on the same computer, how do we do this? Because the client can't have localhost in his web browser (URL).
false
24,890,739
0.099668
0
0
1
The "action" part of a form is an url, and If you don't specify the scheme://host:port part of the URL, the client will resolve it has the current page one. IOW: just put the path part of your script's URL and you'll be fine. FWIW hardcoding the scheme://host:port of your URLs is an antipattern, as you just found out.
0
136
0
0
2014-07-22T14:54:00.000
python,url,client-server,client,webclient
make a Client-Server application
1
2
2
24,893,137
0
1
0
I used to create web apps with the server and the client on the same computer, but if the server and the client are not on the same computer, how can we access the web page? I mean, for example, I have an HTML form and an "ok" button: if the server and the client are on the same computer, in action = " " we put localhost/file.py, but if the server and the client are not on the same computer, how do we do this? Because the client can't have localhost in his web browser (URL).
true
24,890,739
1.2
0
0
0
Your script is supposed to be run as a CGI script by a web server, which sets environment variables like REMOTE_ADDR, REQUEST_METHOD ... You are running the script by yourself, and these environment variables are not available. That's why you get the KeyError.
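As a small illustration of guarding against that when the script is run by hand, os.environ.get avoids the KeyError (the fallback values here are just placeholders):

    import os

    # Present when run as a CGI script by a web server, absent when run directly
    remote_addr = os.environ.get("REMOTE_ADDR", "unknown")
    method = os.environ.get("REQUEST_METHOD", "GET")
    print("client:", remote_addr, "method:", method)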
0
136
0
0
2014-07-22T14:54:00.000
python,url,client-server,client,webclient
make a Client-Server application
1
2
2
24,907,187
0
0
0
For example when I have an ids lookup and want to search one by one to see if the document already exists or not. One of two things happens: First -> the first search request returns the correct doc and all the calls after that return the same doc as the first one even though I was searching for different Ids. Second -> the first search request returns the correct doc and all the calls after that return an empty hits array even though I was searching for different Ids. The search metadata does tell me that "total" was one for this request but no actual hits are returned. I have been facing this weird behaviour with ElasticSearch.py and using raw http requests as well. Could it be the firewall that is causing some sort of weird caching behaviour? Is there any way to force the results? Any ideas are welcome at this point. Thanks in advance!
true
24,956,390
1.2
0
0
0
It was the firewall caching that was causing the havoc! Once the caching was disabled for certain endpoints, the issue resolved itself. Painful!
0
112
0
1
2014-07-25T13:02:00.000
python,elasticsearch,firewall,python-3.4
ElasticSearch doesn't return hits when making consecutive calls - Python
1
1
1
25,247,538
0
1
0
I'd like to get the data from inspect element using Python. I'm able to download the source code using BeautifulSoup but now I need the text from inspect element of a webpage. I'd truly appreciate if you could advise me how to do it. Edit: By inspect element I mean, in google chrome, right click gives us an option called inspect element which has code related to each element of that particular page. I'd like to extract that code/ just its text strings.
false
25,027,339
0
0
0
0
BeautifulSoup could be used to parse the html document, and extract anything you want. It's not designed for downloading. You could find the elements you want by their class and id.
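A minimal sketch of that download-then-parse split, assuming requests is used for the download; the URL, id and class names below are made up for the example:

    import requests
    from bs4 import BeautifulSoup

    html = requests.get("http://example.com").text            # downloading
    soup = BeautifulSoup(html, "html.parser")                 # parsing

    main = soup.find(id="content")                            # by id (hypothetical)
    prices = soup.find_all("span", class_="price")            # by class (hypothetical)
    print(main.get_text() if main else "not found")
    print([p.get_text() for p in prices])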
0
34,323
0
6
2014-07-30T01:11:00.000
python,html,extract
How to get data from inspect element of a webpage using Python
1
1
4
25,027,442
0
0
0
I'm developing a client-server game in python and I want to know more about port forwarding. What I'm doing, for instance, is going to my router (192.168.0.1) and configuring it to allow requests for my real IP address to be redirected to my local address 192.168.0.X. It works really well. But I'm wondering if I can do it by coding something automatically? I think skype works as a kind of p2p and I can see in my router that skype is automatically port forwarded to my pc address. Can I do it in Python too?
false
25,069,005
0.099668
0
0
2
So your application needs to do TCP/UDP networking if I understand correctly. That means that at least one of the connecting clients needs a properly open port, and if both of them are behind NAT (a router) and have no configured open ports, your clients cannot connect. There are several possible solutions for this, but not all are reliable: UPnP, as suggested here, can open ports on demand but isn't supported (or enabled) on all routers (and it is a security threat), and P2P solutions are complex and still require open ports on some clients. The only reliable solution is to have a dedicated server that all clients can connect to that will negotiate the connections, and possibly proxy between them.
0
9,022
0
6
2014-07-31T21:11:00.000
python,port,forwarding
Python port forwarding
1
1
4
25,069,162
0
1
0
There is a webpage which when loaded uses a random placement of forms / controls / google ads. However, the set is closed--from my tests there are at least three possible variations, with two very common and the third very rare. I would like to be able to classify this webpage according to each variation. I tried analyzing the html source of each variation, but the html of all the variations is exactly the same, according to both Python string equals and the Python difflib. There doesn't seem to be any information specifying where to put the google ads or the controls. For an example, consider a picture with two boxes, a red one (call it box A) and a blue one (call it box B). The boxes themselves never change position, but what takes their position does. Now consider two possible variations, one of which is chosen everytime the webpage is loaded / opened. Variation 1: Suppose 50% of the time, the google ad is positioned at box A (the red one) and the website control is thus placed at box B (the blue one). Variation 2: Suppose also 50% of the time, the google ad is positioned at box B (the blue one) and the website control is thus placed at box A (the red one). So if I load the webpage, how can I classify it based on its variation?
true
25,090,619
1.2
0
0
0
If the HTML is definitely the same every time, the variations are probably being done on the client side using javascript. The answer depends on what you mean by "classify." If you just want to know, on any given load of the page, where the widgets are, you will probably have to use something like Selenium that actually opens the page in a browser and runs javascript, rather than just fetching the HTML source. Then you will need to use Selenium to eval some javascript that detects the widget locations. There is a selenium module for python that is fairly straightforward to use. Consider hooking it up to PhantomJS so you don't have to have a browser window up.
0
34
0
0
2014-08-02T00:57:00.000
python,html,ads,adsense
Classify different versions of the same webpage
1
1
1
25,090,652
0
0
0
My RDS is in a VPC, so it has a private IP address. I can connect my RDS database instance from my local computer with pgAdmin using SSH tunneling via EC2 Elastic IP. Now I want to connect to the database instance in my code in python. How can I do that?
false
25,112,648
0
0
1
0
Point your python code at the same address and port you're using for the tunnelling. If you're not sure, check the pgAdmin destination in the configuration and just copy it.
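For illustration, if the SSH tunnel forwards local port 5433 to the RDS endpoint's 5432 (the port, database name and credentials here are hypothetical), the Python side simply connects to localhost:

    import psycopg2

    # Same host/port that pgAdmin uses through the tunnel, not the RDS private address
    conn = psycopg2.connect(
        host="localhost",
        port=5433,
        dbname="mydb",
        user="myuser",
        password="secret",
    )
    cur = conn.cursor()
    cur.execute("SELECT version()")
    print(cur.fetchone())
    conn.close()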
0
667
0
2
2014-08-04T06:08:00.000
postgresql,python-2.7,amazon-web-services,psycopg2,amazon-vpc
AWS - Connect to RDS via EC2 tunnel
1
1
1
25,115,502
0
0
0
When using the recv() method, sometimes we can't receive as much data as we want, just like when using send(). But we can use sendall() to solve the problem of sending data; what about receiving data? Why is there no such recvall() method?
false
25,120,761
1
0
0
8
send has extra information that recv doesn't: how much data there is to send. If you have 100 bytes of data to send, sendall can objectively determine if fewer than 100 bytes were sent by the first call to send, and continually send data until all 100 bytes are sent. When you try to read 1024 bytes, but only get 512 back, you have no way of knowing if that is because the other 512 bytes are delayed and you should try to read more, or if there were only 512 bytes to read in the first place. You can never say for a fact that there will be more data to read, rendering recvall meaningless. The most you can do is decide how long you are willing to wait (timeout) and how many times you are willing to retry before giving up. You might wonder why there is an apparent difference between reading from a file and reading from a socket. With a file, you have extra information from the file system about how much data is in the file, so you can reliably distinguish between EOF and some other condition that may have prevented you from reading the available data. There is no such source of metadata for sockets.
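When the application protocol itself tells you how many bytes to expect, the usual workaround is a small helper that keeps calling recv() until that count has arrived; a sketch (the helper name is made up):

    def recv_exactly(sock, nbytes):
        """Read exactly nbytes from sock, or raise if the peer closes early."""
        chunks = []
        remaining = nbytes
        while remaining > 0:
            chunk = sock.recv(remaining)
            if not chunk:                      # connection closed before we got it all
                raise EOFError("socket closed with %d bytes missing" % remaining)
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)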
0
16,283
0
17
2014-08-04T14:13:00.000
python,sockets,python-2.7
why does python socket library not include recvall() method like sendall()?
1
2
3
25,121,548
0
0
0
When using the recv() method, sometimes we can't receive as much data as we want, just like when using send(). But we can use sendall() to solve the problem of sending data; what about receiving data? Why is there no such recvall() method?
false
25,120,761
0
0
0
0
Because recvall is fundamentally confusing: your assumption was that it would read exactly-N bytes, but I would have thought it would completely exhaust the stream, based on the name. An operation that completely exhausts the stream is a dangerous API for a bunch of reasons, and the ambiguity in naming makes this a pretty unproductive API.
0
16,283
0
17
2014-08-04T14:13:00.000
python,sockets,python-2.7
why does python socket library not include recvall() method like sendall()?
1
2
3
25,121,894
0
0
0
I would like to be able to use the multiprocessing library in python to continuously stream from multiple live web apis with python-request (Using the Stream option). Would this be possible on a dual-core Linux system or am I better off running them as single programs in multiple screen sessions? Would I also want to use a pool of workers? Thank you for the help, let me know if the question is unclear.
false
25,126,843
0.099668
0
0
1
Figure out what your bottleneck is going to be before designing a solution; the first thing to look at is probably network bandwidth. If one stream can saturate your network, downloading more than one together won't be faster. The second thing is disk write throughput. Can your disk and OS handle all these concurrent writes? If you want to do transcoding, you might also run into computational limits.
1
218
0
0
2014-08-04T20:16:00.000
python,multiprocessing,python-requests
Is it possible to process X amount of streaming API streams with multiprocessing?
1
1
2
25,128,782
0
0
0
I'm trying to write a server for a chat program. I want the server to have a tcp connection with every chat user. Is there way for the server to have multiple tcp connections at the same time without creating a socket for every connection? And if yes, how?
true
25,171,044
1.2
0
0
2
No. Unlike UDP sockets, TCP sockets are connection-oriented. Whatever data is written into a socket "magically" appears to come out of the socket at the other end as a stream of data. For that, both sockets maintain a virtual connection, a state. Among other things, the state defines both endpoints of the connection - IP and port numbers of both sockets. So a single TCP socket can only talk to a single TCP socket at the other end. UDP sockets on the other hand operate on a per-packet basis (connectionless), allowing you to send and receive packets to/from any destination using the same socket. However, UDP does not guarantee reliability and in-order delivery. Btw, your question has nothing to do with python. All sockets (except raw sockets) are operating system sockets that work in the same way in all languages.
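So for a chat server the listening socket hands you one new connected socket per user from accept(); a minimal threaded sketch (port number and echo behaviour are just for illustration, a real chat server would broadcast to the other clients):

    import socket
    import threading

    def handle_client(conn, addr):
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break               # client disconnected
                conn.sendall(data)      # echo back for the example

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 5000))
    server.listen(5)
    while True:
        conn, addr = server.accept()    # a new socket object for every user
        threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()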
0
1,292
0
1
2014-08-06T22:05:00.000
python,sockets,python-3.x,tcp
Multiple TCP connections in python
1
1
1
25,171,183
0
1
0
I want my search tool to have a similar behaviour to Google Search when all of the elements entered by the user are excluded (eg.: user input is -obama). In those cases, Google returns an empty result. In my current code, my program just makes an empty Solr query, which causes error in Solr. I know that you can enter *:* to get all the results, but what should I fill in my Solr query so that Solr will return an empty search result? EDIT: Just to make it clearer, the thing is that when I have something like -obama, I want Solr to return an empty search result. If you google -obama, that's what you get, but if you put -obama on Solr, it seems that the result is everything (all the documents), except for the ones that have "obama"
false
25,193,192
0
0
0
0
You shouldn't query solr when there is no term being looked for (and I seriously doubt google looks over its searchable indexes when a search term is empty). This logic should be built into whatever mechanism you use to parse the user supplied query terms before constructing the solr query. Let's say the user's input is represented as a simple string where each word is treated as a unique query term. You would want to split the string on spaces into an array of strings, map over the array and remove strings prefixed by "-", and then construct the query terms from what remains in the array. If flattening the array yields an empty string, return an empty array instead of querying solr at all.
0
660
0
0
2014-08-07T22:18:00.000
python,solr,lucene,solr-query-syntax
How do I make Solr return an empty result?
1
1
2
25,193,866
0
0
0
I've setup an XML-RPC server/client communication under Windows. What I've noticed is that if exchanged data volume become huge, there's a difference in starting the server listening on "localhost" vs. "127.0.0.1". If "127.0.0.1" is set, the communication speed is faster than using "localhost". Could somebody explain why? I thought it could be a matter on naming resolving, but....locally too?
false
25,199,405
1
0
0
10
Every domain name gets resolved. There is no exception to this rule, including with regard to a local site. When you make a request to localhost, localhost's IP gets resolved by the hosts file every time it gets requested. In Windows, the hosts file controls this. But if you make a request to 127.0.0.1, the IP address is already resolved, so any request goes directly to this IP.
0
5,148
1
15
2014-08-08T08:47:00.000
python,windows,ip,xml-rpc
"localhost" vs "127.0.0.1" performance
1
1
1
25,221,564
0
0
0
I am looking for libraries that can help in displaying a progress message so a user knows how far along the task is. I found Celery, and I was wondering if it is possible to use Celery to display a progress message that updates on the page during the load. For example: "Loaded x amount of data" and enumerate x.
false
25,212,475
0
0
0
0
Celery is meant to do background jobs. That means it cannot directly interact with your web page to update it. To achieve something like what you described, you would need to store the progress data externally (say, in a database) and then query this data source from the webpage via AJAX. Yeah, it is a bit complicated.
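A sketch of the "store progress externally" idea, here using Celery's own result backend (via update_state) instead of a separate database table; the task name, the process() helper and the Redis URLs are assumptions for the example, and the web view would poll the task id via AJAX as the answer describes.

    from celery import Celery

    app = Celery("tasks", broker="redis://localhost", backend="redis://localhost")

    @app.task(bind=True)
    def load_data(self, items):
        for i, item in enumerate(items, 1):
            process(item)  # hypothetical unit of work
            self.update_state(state="PROGRESS",
                              meta={"loaded": i, "total": len(items)})
        return {"loaded": len(items)}

    # In the Django view answering the AJAX poll (sketch):
    #   result = load_data.AsyncResult(task_id)
    #   progress = result.info   # e.g. {"loaded": x, "total": y} while in PROGRESS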
0
61
0
0
2014-08-08T21:24:00.000
python,django,celery
Using celery to update a progress message?
1
1
1
25,213,065
0
0
0
I am trying to run a simple Python TCP server on my EC2, listening on port 6666. I've created an inbound TCP firewall rule to open port 6666, and there are no restrictions on outgoing ports. I cannot connect to my instance from the outside world however, testing with telnet or netcat can never make the connection. Things do work if I make a connection from localhost however. Any ideas as to what could be wrong?

    #!/usr/bin/env python
    import socket

    TCP_IP = '127.0.0.1'
    TCP_PORT = 6666
    BUFFER_SIZE = 20  # Normally 1024, but we want fast response

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((TCP_IP, TCP_PORT))
    s.listen(1)
    conn, addr = s.accept()
    print 'Connection address:', addr
    while 1:
        data = conn.recv(BUFFER_SIZE)
        if not data: break
        print "received data:", data
        conn.send(data)  # echo
    conn.close()
false
25,234,336
1
0
0
10
Your TCP_IP is only listening locally because you set your listening IP to 127.0.0.1. Set TCP_IP = "0.0.0.0" and it will listen on "all" interfaces including your externally-facing IP.
0
3,240
1
5
2014-08-11T00:30:00.000
python,amazon-web-services,tcp,amazon-ec2,firewall
Simple Python TCP server not working on Amazon EC2 instance
1
1
1
25,234,530
0
1
0
Using Boto, you can create an S3 bucket and configure a lifecycle for it; say expire keys after 5 days. I would like to not have a default lifecycle for my bucket, but instead set a lifecycle depending on the path within the bucket. For instance, having path /a/ keys expire in 5 days, and path /b/ keys to never expire. Is there a way to do this using Boto? Or is expiration tied to buckets and there is no alternative? Thank you
true
25,245,710
1.2
0
1
0
After some research in the boto docs, it looks like using the prefix parameter in the lifecycle add_rule method allows you to do this.
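A sketch of what that looks like with boto 2's lifecycle classes (bucket and rule names are illustrative): one rule with prefix "a/" expiring keys after 5 days, and no rule at all for "b/" so those keys never expire.

    import boto
    from boto.s3.lifecycle import Lifecycle, Expiration

    conn = boto.connect_s3()
    bucket = conn.get_bucket("my-bucket")

    lifecycle = Lifecycle()
    # Only keys under the "a/" prefix are affected by this rule
    lifecycle.add_rule("expire-a", prefix="a/", status="Enabled",
                       expiration=Expiration(days=5))
    bucket.configure_lifecycle(lifecycle)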
0
41
0
0
2014-08-11T14:28:00.000
python,amazon-web-services,amazon-s3,boto
Setting a lifecycle for a path within a bucket
1
1
1
25,245,827
0
1
0
I am doing a data extraction project where I am required to build a web scraping program written in python, using selenium and the phantomjs headless webkit as the browser, for scraping public information like friend lists on facebook. The program starts fairly fast, but after a day of running it gets slower and slower and I cannot figure out why. Can anyone give me an idea why it is getting slower? I am running on a local machine with pretty good specs: 4 GB RAM and a quad-core processor. Does FB provide any API to find friends of friends?
false
25,255,414
0
0
0
0
We faced the same issue. We resolved this by closing the browser automatically after a particular time interval. Clear the temporary cache, open a new browser instance and continue the process.
0
204
0
3
2014-08-12T02:49:00.000
python-2.7,facebook-graph-api,selenium-webdriver,phantomjs,facebook-sdk-3.0
Web Crawler gets slower with time
1
1
1
25,302,804
0
0
0
I am trying to create a multiplayer game for iPhone (cocos2d), which is almost finished, but the multiplayer part is left. I have searched the web for two days now, and can't find anything that answers my question. I have created a search room (tcp socket on port 2000) that matches players searching for a quick match to play. After two players have been matched, the server disconnects them from the search room to leave space for incoming searchers (clients/players). Now I'm wondering how to create the play room (where two players interact and play)? I was thinking I could create a new tcp/udp socket on a new port and let the matched (matched in the search room) players connect to that socket and then have a perfectly isolated room for the two to interact with one another. Or do I need a new server (machine/hardware) and then create a new socket on that one and let the peered players connect to it. Or maybe there is another way of doing this. OBS. I am not going to have the game running on the server to deal with cheaters for now, because this would be too much load for the server cpu on my setup.
true
25,261,552
1.2
0
0
1
I was thinking I could create a new tcp/udp socket on a new port and let the matched(matched in the search room) players connect to that socket and then have a perfect isolated room for the two to interact with each another. Yes, you can do that. And there shouldn't be anything hard about it at all. You just bind a socket on the first available port, pass that port to the two players, and wait for them to connect. If you're worried about hackers swooping in by, e.g., portscanning for new ports opening up, there are ways to deal with that, but given that you're not attempting any cheap protection I doubt it's an issue. Or do I need a new server (machine/hardware) and than create a new socket on that one and let the peered players connect to it. Why would you need that? What could it do for you? Sure, it could take some load off the first server, but there are plenty of ways you could load-balance if that's the issue; doing it asymmetrically like this tends to lead to one server at 100% while the other one's at 5%… Or maybe there is another way of doing this. One obvious way is to not do anything. Just let them keeping talking to the same port they're already talking to, just attach a different handler (or a different state in the client state machine, or whatever; you haven't given us any clue how you're implementing your server). I don't know what you think you're getting out of "perfect isolation". But even if you want it to be a different process, you can just migrate the two client sockets over to the other process; there's no reason to make them connect to a new port. Another way to do it is to get the server out of the way entirely—STUN or hole-punch them together and let them P2P the game protocol. Anything in between those two extremes doesn't seem worth doing, unless you have some constraints you haven't explained. OBS. I am not going to have the game running on the server to deal with cheaters for now. Because this will be to much load for the server cpu on my setup. I'm guessing that if putting even minimal game logic on the server for cheat protection is too expensive, spinning off a separate process for every pair of clients may also be too expensive. Another reason not to do it.
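For the "bind a socket on the first available port" step the answer mentions, a small sketch: binding to port 0 lets the OS pick a free port, which the matchmaking server can then read back and hand to the two matched players.

    import socket

    room = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    room.bind(("", 0))                 # port 0 = let the OS choose a free port
    room.listen(2)
    port = room.getsockname()[1]       # tell both matched players to connect here
    print("play room listening on port", port)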
0
902
0
1
2014-08-12T10:04:00.000
python,sockets,tcp,udp,multiplayer
Multi multiplayer server sockets in python
1
1
1
25,261,988
0
0
0
I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server, and then, prior to them all finishing, I edited one of those files. How can I update the file on the server to the file on my local computer? Bonus points if you tell me how I can link the file on my local computer to auto update on the server when I connect (if possible of course)
false
25,265,148
0
1
0
0
Have you considered Dropbox or SVN ?
0
2,518
0
0
2014-08-12T13:08:00.000
python,ssh
update file on ssh and link to local computer
1
2
3
25,265,180
0
0
0
I may be being ignorant here, but I have been researching for the past 30 minutes and have not found how to do this. I was uploading a bunch of files to my server, and then, prior to them all finishing, I edited one of those files. How can I update the file on the server to the file on my local computer? Bonus points if you tell me how I can link the file on my local computer to auto update on the server when I connect (if possible of course)
false
25,265,148
0
1
0
0
I don't know your local computer OS, but if it is Linux or OSX, you can consider LFTP. This is an FTP client which supports SFTP://. This client has the "mirror" functionality. With a single command you can mirror your local files against a server. Note: what you need is a reverse mirror. in LFTP this is mirror -r
0
2,518
0
0
2014-08-12T13:08:00.000
python,ssh
update file on ssh and link to local computer
1
2
3
25,265,274
0
1
0
I am writing a web crawler, but I have a problem with the function which recursively follows links. Let's suppose I have a page: http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind. I am looking for all links, and then opening each link recursively, downloading all links again, etc. The problem is that some links, although they have different urls, lead to the same page, for example: http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind#mw-navigation gives the same page as the previous link. And I have an infinite loop. Is there any possibility to check if two links lead to the same page without comparing the whole content of these pages?
false
25,275,654
0.099668
0
0
1
You can store the hash of the content of pages previously seen and check if the page has already been seen before continuing.
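A small sketch of that idea: hash the fetched body and skip URLs whose content you have already seen. Note that pages with dynamic bits (timestamps, rotating ads) can hash differently even when they are "the same" page.

    import hashlib

    seen_hashes = set()

    def is_new_page(html):
        digest = hashlib.md5(html.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            return False          # same content as an already-crawled URL
        seen_hashes.add(digest)
        return True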
0
52
0
1
2014-08-12T23:19:00.000
python,url,web-crawler,urllib2
Predict if sites returns the same content
1
1
2
25,275,720
0
0
0
I have a bit of a problem with the ftplib from python. It seems that it uses, by default, two connections (one for sending commands, one for data transfer?). However my ftpserver only accepts one connection at any given time. Since the only file that needs to be transferred is only about 1 MB large, the reasoning of being able to abort in-flight commands does not apply here. Previously the same job was done by the windows commandline ftp client. So I could just call this client from python, but I would really prefer a complete python solution. Is there a way to tell ftplib that it should limit itself to a single connection? In filezilla I'm able to "limit the maximum number of simultaneous connections"; ideally I would like to reproduce this functionality. Thanks for your help.
true
25,302,979
1.2
0
0
0
"It seems that it uses, by default, two connections (one for sending commands, one for data transfer?)."

That's how ftp works. You have a control connection (usually port 21) for commands, and a data connection on a dynamic port for data transfer, file listings, etc.

"However my ftpserver only accepts one connection at any given time."

The ftpserver might have a limit on multiple control connections, but it must still accept data connections. Could you please show from tcpdump, wireshark, logfiles etc. why you think multiple connections are the problem?

"In filezilla I'm able to limit the maximum number of simultaneous connections"

This is for the number of control connections only. Does it work with filezilla? Because I doubt that ftplib opens multiple control connections.
0
691
1
0
2014-08-14T08:04:00.000
python,ftp,ftplib
python ftpclient limit connections
1
1
1
25,303,266
0
0
0
I have written an application in Python 2.7 and I'm using UDP sockets to implement networking capabilities. Though my application is not a game, I would consider it a game for networking purposes because the screen is redrawn 60 times per second. I do not need extreme precision, so I don't need to send a ton of packets per second, but the way I have implemented networking causes fellow users to look "choppy" if there aren't a fair amount of packets sent per second. After some research and fiddling, I've decided to send one packet every 50 milliseconds. This makes the other users look fairly "smooth" for a while, but after about a minute they get more and more choppy, eventually to the point of no updates happening. How am I supposed to implement networking like the networking done in video games? It seems like I am fundamentally missing something.
false
25,334,643
0
0
0
0
It looks to me that the network is getting congested, transmitting old packets that got into the routers' queues and dropping the new packets. Programming UDP is something of a black art — it requires reacting to network congestion when needed, and slowing down your sending rate. A simple solution would be to have the receiver send a periodic summary of received packets (say, once per RTT), and reduce your sending rate when you're seeing too many losses. Ideally, you'd combine that with a precise RTT estimator and reduce your sending rate preemptively when the RTT suddenly increases.
0
769
0
3
2014-08-15T21:43:00.000
python,sockets,networking,udp,frame-rate
How often should UDP packets be sent?
1
1
1
26,517,593
0
1
0
I'm using the following code to click a button on a page but the XPath keeps changing so the code keeps breaking: mydriver.find_element_by_xpath("html/body/div[2]/div[3]/div[1]/div/div[2]/div[2]/div[4]/div/form[2]/span/span/input").click() Is there a better way I should be doing this? Here is the code for the button I am trying to click: <input class="a-button-input" type="submit" title="Button 2" name="submit.button2-click.x" value="Button 2 Click"/>
false
25,355,287
0.099668
0
0
1
I'd use driver.find_element_by_name("submit.button2-click.x").click(), or use driver.find_element_by_css_selector("selector").click().
0
531
0
0
2014-08-18T01:36:00.000
python,selenium,selenium-webdriver
Python - Issues with selenium button click using XPath
1
1
2
25,355,323
0
1
0
I made a local GUI which requires the users to enter their usernames and passwords. Once they click submit, I want to have a pop out window which directs them to a website with their personal information through POST, which requires a request. I know that there is webbrowser.open() to open a website, but it doesn't take any requests; how would I be able to do what I want it to do? I am using django 1.6 and python 2.7
true
25,368,199
1.2
0
0
1
Solution #1) skip all this and see Rjzheng's link below -- it's much simpler. Solution #2) Since webbrowser.open() doesn't take POST args: 1) write a javascript page which accepts args via GET, then does an Ajax POST 2) have webbbrowser.open() open URL from step #1 Not glamorous, but it'll work :) Be careful with security, you don't want to expose someone's password in the GET URL!
0
2,534
0
0
2014-08-18T16:48:00.000
python,django,post,browser
How to use webbrowser.open() with request in python
1
1
1
25,370,027
0
0
0
I am looking for a recipe for writing and reading the raw data generated by a requests transaction from files rather than a socket. By "raw data" I mean the bytes just before they are written to or read from the underlying socket. I've tried: Using "hooks". This seems to be mostly deprecated as the only remaining hook is "response". mount()ing a custom Adapter. Some aggressive duck-typing here provides access to the underlying httplib.HTTPConnection objects, but the call stack down there is complicated and quite brittle. The final solution does not need to be general-purpose as I am only interested in vanilla HTTP functionality. I won't be streaming or using the edgier parts of the protocol. Thanks!
false
25,396,421
0
0
0
0
Spawn a thread (import threading). Run an HTTP server in there. You can generate a unique port on demand by binding to port 0, e.g. socket.socket().bind(("", 0)). In the HTTP server, just write the incoming data to a file (perhaps named by timestamp and incoming port number). Then send your requests there.
0
107
0
0
2014-08-20T03:48:00.000
python,http,python-requests
Performing a python-requests Request/Response transaction using files rather than a socket
1
1
1
25,396,973
0
1
0
I decided to use social media benefits on my page and currently I'm implementing Google+ Sign-In. One of the pages on my website should be accessible for logged in users only (adding stuff to the page). I am logging user to website via JavaScript. I'm aware that javascript is executed on client-side but I am curious is it possible to restrict access to the certain page using only javascript.
true
25,412,094
1.2
0
0
1
You cannot perform reliable access control using only client-side javascript. This is because since the javascript is executed on the user's browser, the user will be able to bypass any access control rule you've set there. You must perform your access control on server-side, in your case in Python code. Generally, people also perform some kind of access control check on the client side, not to prevent access, but for example to hide/disable buttons that the user cannot use.
0
73
0
0
2014-08-20T18:39:00.000
javascript,python,google-app-engine,google-plus,google-signin
Google+ Sign-In - Page accessible for logged in users only
1
1
2
25,419,890
0
0
0
How to structure a GET 'review link' request from the Vimeo API? New to python and assume others might benefit from my ignorance. I'm simply trying to upload via the new vimeo api and return a 'review link'. Are there current examples of the vimeo-api in python? I've read the documentation and can upload perfectly fine. However, when it comes to the http GET I can't seem to figure it out. I'm using python2.7.5 and have tried the requests library. I'm ready to give up and just go back to PHP because it's documented so much better. Any python programmers out there familiar?
true
25,464,255
1.2
1
0
1
EDIT: Since this was written the vimeo.py library was rebuilt. This is now as simple as taking the API URI and requesting vc.get('/videos/105113459') and looking for the review link in the response. The original: If you know the API URL you want to retrieve this for, you can convert it into a vimeo.py call by replacing the slashes with dots. The issue with this is that in Python attributes (things separated by the dots), are syntax errors. With our original rule, if you wanted to see /videos/105113459 in the python library you would do vc.videos.105113459() (if you had vc = vimeo.VimeoClient(<your token and app data>)). To resolve this you can instead use python's getattr() built-in function to retrieve this. In the end you use getattr(vc.videos, '105113459')() and it will return the result of GET /videos/105113459. I know it's a bit complicated, but rest assured there are improvements that we're working on to eliminate this common workaround.
0
435
0
0
2014-08-23T16:56:00.000
python,vimeo,vimeo-api
How to structure get 'review link' request from Vimeo API?
1
1
1
25,668,644
0
1
0
Wasn't able to find anything in the docs/SO relating to my question. So basically I'm crawling a website with 8 or so subdomains. They are all using Akamai/CDN. My question is: if I can find IPs of a few different Akamai data centres, can I somehow explicitly say this subdomain should use this IP for the host name, etc.? So basically override the automatic dns resolving... This would allow greater efficiency and, I would imagine, make it less likely to be throttled, as I'd be distributing the crawling. Thanks
false
25,521,837
0.197375
0
0
1
You can just set your DNS names manually in your hosts file. On windows this can be found at C:\Windows\System32\Drivers\etc\hosts and on Linux in /etc/hosts
0
953
0
0
2014-08-27T07:59:00.000
python,scrapy,web-crawler
Scrapy - use multiple IP Addresses for a host
1
1
1
25,521,907
0
0
0
I used pip to install all python packages, and the path is PYTHONPATH="/usr/local/lib/python2.7/site-packages". I found all the packages I tried to install were installed under this path, but when I tried to import them, it always said module not found.

    MacBook-Air:~ User$ pip install tweepy
    Requirement already satisfied (use --upgrade to upgrade): tweepy in /usr/local/lib/python2.7/site-packages

    import tweepy
    Traceback (most recent call last):
      File "", line 1, in
    ImportError: No module named tweepy

I tried with tweepy, httplib2, oauth and some others; none of these work. Can anyone tell me how I can solve this problem? Thanks!!!!
false
25,535,192
0
1
0
0
Feels like the comment has too many things... So as @Zahir Jacobs said, this problem is because pip is installing all packages in a different path. After I moved all the packages to the 'which python' path, I can import these modules now. But the follow-up question is: if I still want to use pip for installation in the future, I have to move them manually again. Is there any way to change the path for pip? I tried to move the pip package, but it returned:

    MacBook-Air:~ User$ pip install tweepy
    Traceback (most recent call last):
      File "/usr/local/bin/pip", line 5, in
        from pkg_resources import load_entry_point
    ImportError: No module named pkg_resources
1
770
0
0
2014-08-27T19:11:00.000
python,module,pip
Python2.7 package installed in right path but not found
1
1
1
25,537,762
0
0
0
I'm wondering what the advantages/disadvantages of one over the other are? I'm running against Selenium Server on a headless remote instance with Xvfb acting as the display. Both methods work fine and the resulting screen capture file (if I convert the base64 one and save it as an image file) are identical file size and look identical. So why would I want to use/not use one over the other?
true
25,565,737
1.2
0
0
3
With get_screenshot_as_file, the screenshot gets saved into a binary file, while get_screenshot_as_base64 will return you a base64-encoded version of that screenshot. So, why would anyone use the base64 version? The whole idea behind base64 is that it allows you to create an ASCII representation of binary data, which will increase the data size but will also allow you to actually work with it. For example, if you tried to send a stream of binary data over a socket without encoding it, then unless the server was prepared to handle binary data, the result is hard to predict. As a result, the data transferred may be malformed, the transfer may be cut early, and many other results that are almost impossible to predict can occur. For example, if you were to run a very simple socket server that just prints out everything it receives to std::out, receiving a binary file would most likely corrupt your console terminal (you can try it on your very own Linux box). Of course, if the server is designed to receive and handle binary data then this will not be an issue, but most often the server end will interpret user input as a string, which makes using base64 a wise choice.
0
3,273
0
0
2014-08-29T09:46:00.000
python,selenium,screenshot,xvfb
Selenium get_screenshot_as_file vs get_screenshot_as_base64?
1
1
1
25,566,080
0
1
0
I've been tasked with creating a cookie audit tool that crawls the entire website and gathers data on all cookies on the page and categorizes them according to whether they follow user data or not. I'm new to Python but I think this will be a great project for me, would beautifulsoup be a suitable tool for the job? We have tons of sites and are currently migrating to Drupal so it would have to be able to scan Polopoly CMS and Drupal.
false
25,565,774
-0.066568
0
0
-1
I don't think you need BeautifulSoup for this. You could do this with urllib2 for connection and cookielib for operations on cookies.
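A sketch of that combination in Python 2 (matching the question's environment): cookielib collects whatever cookies the site sets, and the attributes on each cookie are what an audit would log. The URL is a placeholder.

    import cookielib
    import urllib2

    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
    opener.open("http://example.com/")

    for cookie in jar:
        # name, domain, expiry, secure flag etc. are what a cookie audit would record
        print cookie.name, cookie.domain, cookie.expires, cookie.secure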
0
4,870
0
0
2014-08-29T09:48:00.000
python,drupal,cookies,web-crawler
BeautifulSoup crawling cookies
1
1
3
25,565,899
0
0
0
How to obtain the public ip address of the current EC2 instance in python ?
false
25,589,259
0.158649
1
0
4
If you are already using boto you can also use the boto.utils.get_instance_metadata function. This makes the call to the metadata server, gathers all of the metadata and returns it as a Python dictionary. It also handles retries.
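For illustration, assuming boto is available on the instance, the returned dictionary mirrors the EC2 metadata service paths, so the public address sits under 'public-ipv4':

    from boto.utils import get_instance_metadata

    metadata = get_instance_metadata()
    public_ip = metadata.get("public-ipv4")
    print(public_ip)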
0
7,397
0
8
2014-08-31T05:41:00.000
python,amazon-web-services,amazon-ec2,boto
How to get the public ip of current ec2 instance in python?
1
1
5
25,592,181
0
0
0
Basically I need a way to check my internet connectivity in a sense. I've been having trouble with my net dropping out randomly and know it's not my end. But the ISP wants a little more proof. Basically I need something that can check latency and if its connecting at all on roughly an hourly basis and recording this information to a text file that I can view (and read back to them when I call them up next.) I was originally thinking of using python but my python is dodgy at best. But if another way is easier (either using a different scripting language or some program) I'm happy to use that as well. EDIT: I'm not sure if that was clear so I'll summarize. It needs to ping, then record the response and the time it was pinged in a text document in a readable way. It must ping roughly every hour.
false
25,599,932
0
0
0
0
You could just redirect constant ping results to a txt file. From the command prompt (as admin): ping (address) -t > log.txt
0
166
0
0
2014-09-01T06:24:00.000
python,networking,scripting
In need of a way to ping a specific IP address and record the time of each ping as well as the result
1
2
3
25,599,980
0
0
0
Basically I need a way to check my internet connectivity in a sense. I've been having trouble with my net dropping out randomly and know it's not my end. But the ISP wants a little more proof. Basically I need something that can check latency and if its connecting at all on roughly an hourly basis and recording this information to a text file that I can view (and read back to them when I call them up next.) I was originally thinking of using python but my python is dodgy at best. But if another way is easier (either using a different scripting language or some program) I'm happy to use that as well. EDIT: I'm not sure if that was clear so I'll summarize. It needs to ping, then record the response and the time it was pinged in a text document in a readable way. It must ping roughly every hour.
false
25,599,932
0
0
0
0
If you mean associating a time with the pings - you could write it as a batch file where you call time (with a /T so it doesn't ask for input), do a ping (don't add the -t there so it runs just the standard 4) and then loop. Or you could consider running a tracert (in a loop), which would give a longer but more meaningful output as to where the failure might be happening (maybe you're getting out past your router but not getting DNS, that type of thing).
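Since the question mentions Python, here is a rough sketch of the same idea there: ping once an hour via the system ping command and append the timestamp plus the output to a text file. The target address is a placeholder, and the -c flag is the Linux form (use -n on Windows).

    import subprocess
    import time

    TARGET = "8.8.8.8"          # hypothetical address to ping
    LOGFILE = "ping_log.txt"

    while True:
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        try:
            output = subprocess.check_output(["ping", "-c", "4", TARGET],
                                             stderr=subprocess.STDOUT)
            result = output.decode("utf-8", "replace")
        except subprocess.CalledProcessError as exc:
            result = "PING FAILED\n" + exc.output.decode("utf-8", "replace")
        with open(LOGFILE, "a") as log:
            log.write("=== %s ===\n%s\n" % (stamp, result))
        time.sleep(3600)         # roughly every hour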
0
166
0
0
2014-09-01T06:24:00.000
python,networking,scripting
In need of a way to ping a specific IP address and record the time of each ping as well as the result
1
2
3
25,600,187
0
0
0
Recently, I wrote some code that connects to workstations with different usernames (thanks to a private key), based on paramiko. I never had any issues with it, but today I get this: SSHException: Error reading SSH protocol banner. This is strange because it happens randomly on any connection. Is there any way to fix it?
false
25,609,153
0
1
0
0
Well, I was also getting this with one of the juniper devices. The timeout didn't help at all. When I used pyex with this, it created multiple ssh/netconf sessions with the juniper box. Once I changed the ssh connection limit from 5 to 10 with "set system services ssh connection-limit 10", it started working.
0
118,288
0
77
2014-09-01T15:36:00.000
python,paramiko
Paramiko : Error reading SSH protocol banner
1
2
7
71,871,218
0
0
0
Recently, I wrote some code that connects to workstations with different usernames (thanks to a private key), based on paramiko. I never had any issues with it, but today I get this: SSHException: Error reading SSH protocol banner. This is strange because it happens randomly on any connection. Is there any way to fix it?
false
25,609,153
0.085505
1
0
3
paramiko seems to raise this error when I pass a non-existent filename to the key_filename keyword argument. I'm sure there are other situations where this exception is raised nonsensically.
0
118,288
0
77
2014-09-01T15:36:00.000
python,paramiko
Paramiko : Error reading SSH protocol banner
1
2
7
68,010,453
0
1
0
I need to take a screenshot of a particular given dom element including the area inside scroll region. I tried to take a screen shot of entire web page using selenium and crop the image using Python Imaging Library with the dimensions given by selenium. But I couldnt figure out a way to capture the are under scroll region. for example I have a class element container in my page and it is height is dynamic based on the content. I need to take screenshot of it entirely. but the resulting image skips the region inside scrollbar and the cropped image results with just the scroll bar in it Is there any way to do this? Solution using selenium is preferable, if it cannot be done with selenium alternate solution will also do.
false
25,643,996
0
0
0
0
You can scroll with the driver.execute_script method and then take a screenshot. I scroll some modal windows this way with jQuery: driver.execute_script("$('.ui-window-wrap:visible').scrollTop(document.body.scrollHeight);")
0
1,055
0
0
2014-09-03T12:12:00.000
python,selenium,screenshot,python-imaging-library
Taking screenshot of particular div element including the area inside scroll region using selenium and python
1
1
1
25,644,516
0
1
0
My problem is that I want to create a data base of all of the questions, answers, and most importantly, the tags, from a certain (somewhat small) Stack Exchange. The relationships among tags (e.g. tags more often used together have a strong relation) could reveal a lot about the structure of the community and popularity or interest in certain sub fields. So, what is the easiest way to go through a list of questions (that are positively ranked) and extract the tag information using Python?
false
25,644,432
0
0
0
0
Visit the site to find the URL that shows the information you want, then look at the page source to see how it has been formatted.
0
191
0
0
2014-09-03T12:32:00.000
python,tags,extract
How to scrape tag information from questions on Stack Exchange
1
1
3
25,644,601
0
1
0
I am needing to use the ENTER key in Safari. Turns out Webdriver does not have the Interactions API in the Safari driver. I saw some code from a question about this with a java solution using Robot, and was wondering if there is a purely Python way to do a similar thing.
false
25,694,785
0.197375
0
0
1
Darth, Mac osascript has libraries for Python. Be sure to 'import os' to gain access to the Mac osascript functionality. Here is the command that I am using: cmd = """ osascript -e 'tell application "System Events" to keystroke return' """ os.system(cmd) This does a brute force return. If you're trying to interact with system resources such as a Finder dialog, or something like that, make sure you give it time to appear and go away once you interact with it. You can find out what windows are active (as well as setting Safari or other browsers 'active', if it hasn't come back to front) using Webdriver / Python. Another thing that I have to do is to use a return call after clicking on buttons within Safari. Clicks are a little busted, so I will click on something to select it (Webdriver gets that far), then do an osascript 'return' to commit the click. I hope this helps. Best wishes, -Vek If this answer appears on ANY other site than stackoverflow.com, it is without my authorization and should be reported
0
95
0
1
2014-09-05T22:24:00.000
python,safari,selenium-webdriver
Getting Around Webdriver's Lack of Interactions API in Safari
1
1
1
26,345,400
0
0
0
Is there any method to return the SOAP Request XML before triggering a call to SOAP method in Suds library? The client.last_sent() returns request XML after triggered the call. But I want to see before triggering the call.
false
25,697,623
0
0
0
0
Yes this is possible and seems to be used in different "fixer" implementations to take care of buggy servers. Basically you should write a MessagePlugin and implement the sending method.
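A sketch of such a plugin: suds calls the plugin's sending() hook with the outgoing envelope just before it goes over the wire, so you can capture (or alter) it there. The class name and WSDL URL below are made up.

    from suds.client import Client
    from suds.plugin import MessagePlugin

    class CaptureRequest(MessagePlugin):
        def __init__(self):
            self.last_request = None

        def sending(self, context):
            # context.envelope holds the raw request XML about to be sent
            self.last_request = str(context.envelope)

    capture = CaptureRequest()
    client = Client("http://example.com/service?wsdl", plugins=[capture])
    # after calling a service method, inspect capture.last_request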
0
219
0
0
2014-09-06T06:45:00.000
python,soap,suds
Request XML from SUDS python
1
1
1
27,094,874
0
0
0
I have an external server that I can SSH into. It runs bots on reddit. Whenever I close the terminal window the bot is running in, the process stops, which means the bot stops as well. I've tried using nohup python mybot.py but it doesn't work - when I close the window and check the processes (ps -e), python does not show up. Are there any alternatives to nohup? Ideally, that print the output to the terminal, instead of an external file.
true
25,731,886
1.2
0
0
1
Have you considered using tmux/screen? They have lots of features and can help you detach a terminal and re-attach to it at a later date without disrupting the running process.
0
45
0
0
2014-09-08T19:40:00.000
python,ssh
Keeping a program going on an external server
1
1
1
25,732,718
0
0
0
I need to build a server which can connect to the Chrome browser (the browser needs to be the client) over the TCP protocol, get some URL from the browser and check if the file it received exists on my computer. The code below works only when I use a regular client that I built, not a browser client. My question is: how and where can I send data from the client browser to the server? How do I connect to the browser from the server?
true
25,747,431
1.2
0
0
0
In order to connect the browser to the server (your app), your URL should look like this: 127.0.0.1:12397. I am just not sure how you are planning to send the GET request, but now the request will be intercepted by your app once sent.
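A bare-bones sketch of that server side: accept the browser's connection, read the raw GET request line, pull out the path and check whether it exists locally. The port number and response format are just for illustration.

    import os
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 12397))
    server.listen(1)

    while True:
        conn, addr = server.accept()
        request = conn.recv(4096).decode("utf-8", "replace")
        # The first line of an HTTP request looks like: GET /some/file.txt HTTP/1.1
        path = request.split("\r\n")[0].split(" ")[1].lstrip("/")
        body = "file exists" if os.path.isfile(path) else "file not found"
        response = "HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s" % (len(body), body)
        conn.sendall(response.encode())
        conn.close()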
0
381
0
0
2014-09-09T14:38:00.000
python,sockets,client
Using the browser as a client in python sockets
1
1
1
25,748,541
0
0
0
I'm using Python to send mass emails. It seems that I send so many, and so fast, that I am getting SMTPRecipientsRefused(senderrs) errors. I used:

    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()

Any ideas? Thanks!
false
25,778,029
0
1
0
0
It turns out that I was sending to empty recipients.
0
361
0
0
2014-09-11T02:20:00.000
python,smtp
Getting SMTPRecipientsRefused(senderrs) because of sending too many?
1
1
1
25,808,100
0
0
0
I am trying to create a python application that can continuously receive data from a webserver. This python application will be running on multiple personal computers, and whenever new data is available it will receive it. I realize that I can do this either by long polling or web sockets, but the problem is that sometimes data won't be transferred for days and in that case long polling or websockets seem to be inefficient. I won't be needing that long of a connection but whenever data is available I need to be able to push it to the application. I have been reading about webhooks and it seems that if I can have a url to post that data to, I won't need to poll. But I am confused as to how each client would have a callback url because in that case a client would have to act as a server. Are there any libraries that help in getting this done? Any kind of resources that you can point me to would be really helpful!
false
25,797,494
0
0
0
0
There is no way to send data to the client without having some kind of connection, e.g. either websockets or (long) polling done by the client. While it would be possible in theory to open a listener socket on the client and let the web server connect and send the data to this socket, this will not work in reality. The main reason for this is that the client is often inside an internal network not reachable from outside, i.e. a typical home setup with multiple computers behind a single IP, or a corporate setup with a firewall in between. In this case it is only possible to establish a connection from inside to outside, but not the other way.
0
191
0
0
2014-09-11T21:48:00.000
python,sockets,real-time,long-polling,webhooks
Is it possible to implement webhooks on regular clients?
1
1
2
25,800,249
0
0
0
I'm writing a script that puts a large number of xml files into mongodb, so when I execute the script multiple times the same object is added many times to the same collection. I looked for a way to stop this behavior by checking the existence of the object before adding it, but can't find a way. Help!
false
25,807,271
0
0
1
0
You can index on one or more fields (not _id) of the document/xml structure. Then make use of the find operator to check if a document containing that indexed_field:value is present in the collection. If it returns nothing then you can insert new documents into your collection. This will ensure only new docs are inserted when you re-run the script.
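A sketch of that with pymongo (2.x-era API, as of the question's date): a unique index on a field identifying the source XML file, plus an existence check before insert. The database, collection and field names are made up for the example.

    from pymongo import MongoClient

    collection = MongoClient()["mydb"]["xml_docs"]
    collection.ensure_index("source_file", unique=True)   # index on a non-_id field

    def add_document(doc):
        # Only insert if no document with this source_file already exists
        if collection.find_one({"source_file": doc["source_file"]}) is None:
            collection.insert(doc)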
1
491
0
0
2014-09-12T11:28:00.000
python,mongodb,pymongo,upsert
add if no duplicate in collection mongodb python
1
1
2
25,807,361
0
1
1
I am currently searching for the best solution + environment for a problem I have. I'm simplifying the problem a bit, but basically: I have a huge number of small files uploaded to Amazon S3. I have a rule system that matches any input across all file content (including file names) and then outputs a verdict classifying each file. NOTE: I cannot combine the input files because I need an output for each input file. I've reached the conclusion that Amazon EMR with MapReduce is not a good solution for this. I'm looking for a big data solution that is good at processing a large number of input files and performing a rule matching operation on the files, outputting a verdict per file. Probably will have to use ec2. EDIT: clarified 2 above
true
25,818,198
1.2
0
0
1
The problem with Hadoop is that when you get a very large number of files that you do not combine with CombineFileInputFormat, it makes the job less efficient. Spark doesn't seem to have a problem with this, though; I've had jobs run without problems with tens of thousands of input files and output tens of thousands of files. I haven't tried to really push the limits, and I'm not sure there even is one!
0
148
0
0
2014-09-12T23:14:00.000
python,amazon-ec2,bigdata,amazon-sqs
What big data solution can I use to process a huge number of input files?
1
1
1
25,824,846
0
0
0
I created a graph in Networkx by importing edge information in through nx.read_edgelist(). This all works fine and the graph loads. The problem is when I print the neighbors of a node, I get the following for example... [u'own', u'record', u'spending', u'companies', u'back', u'shares', u'their', u'amounts', u'are', u'buying'] This happens for all calls to the nodes and edges of the graph. It is obviously not changing the names of the nodes seeing as it is outside of the quotations. Can someone advise me how to get rid of these 'u's when printing out the graph nodes. I am a Python novice and I'm sure it is something very obvious and easy.
false
25,872,043
0.066568
0
0
1
You don't need to get rid of them; they don't do anything other than indicate that the strings are unicode objects (Python 2 shows this as a u prefix in the printed representation). The prefix is not part of the node name itself and is harmless.
0
539
0
1
2014-09-16T14:54:00.000
python,graph,networkx
Networkx appends 'u' before node names after reading in from an edge list. How to get rid?
1
1
3
25,872,307
0
0
0
I have a /64 IP subnet, I need to subnet that /64 and I need to get 100 /126 ip subnets from it. I am trying to use Python netaddr library to do it. Can anyone help? Thanks
false
25,902,674
0.197375
0
0
1
You do not want to break a /64 into smaller networks. See RFC 5375, IPv6 Unicast Address Assignment Considerations: "Using a subnet prefix length other than a /64 will break many features of IPv6..." RFC 6164, Using 127-Bit IPv6 Prefixes on Inter-Router Links, allows /127 point-to-point links: "Routers MUST support the assignment of /127 prefixes on point-to-point inter-router links." And, of course, you are allowed to use /128 for loopback addresses. All that said, you should only take a single /127 or /128 out of a /64. Subdividing a /64 into multiple subnets is unnecessary and just asking for trouble. We need to change our mindset from IPv4 scarcity to IPv6 plenty, since there is no problem getting as many /64 blocks as you need; anyone can request and get a /48, which is 65536 /64 networks.
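That said, if you still need a handful of small prefixes out of a /64 (for example for point-to-point links), netaddr can generate them lazily; the prefix below is from the documentation range, not a real allocation:

    from itertools import islice
    from netaddr import IPNetwork

    block = IPNetwork("2001:db8:abcd:12::/64")   # example prefix (RFC 3849 documentation range)

    # subnet(126) yields /126 networks one at a time; islice avoids walking all 2**62 of them.
    first_hundred = list(islice(block.subnet(126), 100))

    for net in first_hundred[:3]:
        print(net)          # 2001:db8:abcd:12::/126, ...::4/126, ...::8/126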
0
520
0
0
2014-09-18T01:11:00.000
python-2.7,python-3.x,ipv6,subnet
Subnet IPv6 subnet /64 into /126 using python netaddr library
1
1
1
25,939,078
0
0
0
In my project, I have bridged a C++ module with Python using plain (Windows) sockets, with protocol buffers as the serializing/de-serializing mechanism. Below are the requirements of this project: 1) The C++ module will have 2 channels. Through one it will accept requests and send appropriate replies to the Python module. Through the other it will send updates which it gets from the backend to the Python side. 2) Today we are proposing it for 100 users (i.e. request/reply for 100 users + updates, with each message around 50 bytes). But I want to make sure it works fine later even with 100K users. Also, I am planning to use ZMQ with this, but I don't know much about its performance / latency / bottlenecks. Can anyone please suggest whether it's an appropriate choice OR whether there are better tools available? Thanks in advance for your advice.
true
25,904,218
1.2
1
0
0
Instead of doing IPC you can simply call Python from C++. You can do this either using the Python C API, or perhaps easier for some people, Boost.Python. This is referred to as "embedding" Python within your application, and it will allow you to directly invoke functions and pass data around without undue copying between processes.
0
114
0
0
2014-09-18T04:32:00.000
python,c++
C++ and python communication
1
1
1
25,904,243
0
0
0
I have an existing chat socket server built on top of twisted's NetstringReceiver class. There exist Android/iOS clients that work with it fine. Apparently web sockets use a different protocol and so it is unable to connect to my server. Does a different chat server need to be written to support web sockets?
false
25,921,478
0
0
0
0
“Yes.” If you'd like a more thorough answer, you'll have to include more information in your question.
0
151
0
1
2014-09-18T20:08:00.000
python,sockets,websocket,twisted
do i need to rewrite my twisted chat server if i want to support web sockets?
1
1
1
25,923,833
0
0
0
Say, If I am hosting a website say www.mydomain.com on EC2 instance then apache would be running on port 80. Now If I want to host a RESTful API (say mydomain.com/MyAPI) using a python script(web.py module). How can I do that? Wouldn't running a python script cause a port conflict?
false
25,949,382
0.53705
1
0
3
No. Apache is your doorman. Your python scripts are the workers inside the building. The public comes to the door and talks to the doorman. The doorman hands everything to the workers inside the building, and when the work is done, hands it back to the appropriate person. Apache manages the coming and going of individual TCP/IP messages, and delegates the work that each request needs to do to your script. If the request asks for the API it hands it to the API script; if the request asks for the website it hands it to the website script. Your script passes the response back to Apache, which handles the job of giving it to the client over port 80. As @Lafada comments: you can have a backdoor (another port), but Apache is still the doorman.
0
607
0
2
2014-09-20T13:39:00.000
python,web-services,api,amazon-web-services,amazon-ec2
Amazon AWS EC2 : How to host an API and a website on EC2 instance
1
1
1
25,949,451
0
0
0
How can you search Instagram by hashtag and filter it by searching again using its location? I checked the Instagram API, but I don't see a solution.
false
25,957,739
0.197375
1
0
1
If you can't find a ready-made solution for this in the API, try searching by location first. Once you receive the location search results, you can manually filter them by hashtags. The idea is that every photo has tag and location attributes you can filter by. But everything depends on your specific task here.
0
1,078
0
2
2014-09-21T09:24:00.000
python,geolocation,instagram,hashtag
Search hashtag in instagram then filter by location
1
1
1
26,262,484
0
1
0
One of the Scrapy spiders (version 0.21) I'm running isn't pulling all the items I'm trying to scrape. The stats show that 283 items were pulled, but I'm expecting well above 300 here. I suspect that some of the links on the site are duplicates, as the logs show the first duplicate request, but I'd like to know exactly how many duplicates were filtered so I'd have more conclusive proof. Preferably in the form of an additional stat at the end of the crawl. I know that the latest version of Scrapy already does this, but I'm kinda stuck with 0.21 at the moment and I can't see any way to replicate that functionality with what I've got. There doesn't seem to be a signal emitted when a duplicate url is filtered, and DUPEFILTER_DEBUG doesn't seem to work either. Any ideas on how I can get what I need?
false
25,965,606
0
0
0
0
You can maintain a list of URLs that you have already crawled; whenever you come across a URL that is already in the list, you can log it and increment a counter.
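A minimal sketch of that bookkeeping inside a 0.21-era spider; the helper method name and log message are illustrative, not part of the Scrapy API:

    from scrapy.spider import BaseSpider
    from scrapy.http import Request

    class MySpider(BaseSpider):
        name = "scrape_site"

        def __init__(self, *args, **kwargs):
            super(MySpider, self).__init__(*args, **kwargs)
            self.seen_urls = set()
            self.duplicate_count = 0

        def maybe_request(self, url, callback):
            # Return a Request only for unseen URLs; otherwise count the duplicate.
            if url in self.seen_urls:
                self.duplicate_count += 1
                self.log("duplicate url skipped: %s (total %d)" % (url, self.duplicate_count))
                return None
            self.seen_urls.add(url)
            return Request(url, callback=callback)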
0
108
0
1
2014-09-22T01:33:00.000
python,scrapy
Display the number of duplicate requests filtered in the post-crawler stats
1
1
1
25,967,819
0
0
0
I was wondering if, from an XML file, I could launch a Python file and, based on what Python returns, run different parts of the XML code. Something like, in the XML file: run the Python file; if the file returns A, run the A code of this XML file, else run the B code of this XML file. Thanks!
true
26,076,796
1.2
0
0
2
An XML file can't run a Python script, since XML is a descriptive language used to represent data. However, you can do the opposite, that is, use Python to read the XML file and do something. By the way, when you say "run the A code of this XML file", I'm not sure you're doing it right, since XML is not supposed to contain "active code".
0
633
0
0
2014-09-27T16:33:00.000
python,xml
launch python file from an xml file
1
1
1
26,076,987
0
0
1
I have a Stack Overflow data dump file in .xml format, nearly 27GB, and I want to convert it to a .csv file. Can somebody please point me to tools, or a Python program, that convert XML to CSV?
false
26,081,880
0
0
0
0
Use one of the Python XML modules to parse the .xml file. Unless you have much more than 27GB of RAM, you will need to do this incrementally, so limit your choices accordingly. Use the csv module to write the .csv file. Your real problem is this: CSV files are lines of fields; they represent a rectangular table. XML files, in general, can represent more complex structures: hierarchical databases, and/or multiple tables. So your real problem is to understand the data dump format well enough to extract records to write to the .csv file.
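For the incremental approach, xml.etree.ElementTree's iterparse keeps memory flat; the sketch below assumes the usual Stack Overflow dump layout where each record is a <row> element with attributes, and the chosen columns are only an example:

    import csv
    import xml.etree.ElementTree as ET

    # Columns to extract; adjust to whichever attributes the dump file actually carries.
    FIELDS = ["Id", "CreationDate", "Score", "Title"]

    with open("posts.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(FIELDS)
        # iterparse streams the file instead of loading all 27 GB into memory.
        for event, elem in ET.iterparse("Posts.xml", events=("end",)):
            if elem.tag == "row":
                writer.writerow([elem.get(f, "") for f in FIELDS])
                elem.clear()            # free the element so memory stays bounded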
0
704
0
2
2014-09-28T05:16:00.000
python-3.x,data-dump
How to convert xml file of stack overflow dump to csv file
1
1
2
26,089,798
0
0
0
telnet server ←————→ device ↑ | SSH ↓ localhost (me) I have a device that is connected with one server computer and I want to talk to the device by ssh'ing into the server computer and send telnet commands to my device. How do I setup things in Python to make this happen?
false
26,112,179
0
1
0
0
Use tunnelling by setting up an SSH session that tunnels the Telnet traffic: ssh from localhost to the server with the option -L xxx:deviceip:23 (xxx is a free port on localhost, deviceip is the IP address of "device", 23 is the telnet port). Then open a telnet session from your localhost to localhost:xxx. SSH will tunnel it to the device and the response is sent back to you.
0
2,315
0
2
2014-09-30T03:11:00.000
python,ssh,telnet,forwarding
How to run telnet over ssh in python
1
2
2
44,082,308
0
0
0
telnet server ←————→ device ↑ | SSH ↓ localhost (me) I have a device that is connected with one server computer and I want to talk to the device by ssh'ing into the server computer and send telnet commands to my device. How do I setup things in Python to make this happen?
true
26,112,179
1.2
1
0
3
You can use Python's paramiko package to launch a program on the server via ssh. That program would in turn receive commands (perhaps via stdin) and return results (via stdout) to the controlling program. So basically you'll use paramiko.SSHClient to connect to the server and run a second Python program which itself uses e.g. telnetlib to talk to the device.
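A rough sketch of that split; the host names, credentials and file paths are placeholders. The first part runs locally, the second part is the helper script that lives on the server:

    # --- local side: run the helper on the server over SSH ---
    import paramiko

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("server.example.com", username="me", password="secret")   # placeholders
    stdin, stdout, stderr = ssh.exec_command("python /home/me/talk_to_device.py 'show status'")
    print(stdout.read())
    ssh.close()

    # --- server side (talk_to_device.py): forward the command to the device over telnet ---
    import sys
    import telnetlib

    tn = telnetlib.Telnet("device.local", 23, timeout=10)   # device address is a placeholder
    tn.write(sys.argv[1].encode("ascii") + b"\n")
    print(tn.read_until(b"#", timeout=5).decode("ascii", "replace"))
    tn.close()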
0
2,315
0
2
2014-09-30T03:11:00.000
python,ssh,telnet,forwarding
How to run telnet over ssh in python
1
2
2
26,112,392
0
0
0
I am trying to remove null padding from UDP packets sent from a Linux computer. Currently it pads the size of the packet to 60 bytes. I am constructing a raw socket using AF_PACKET and SOCK_RAW. I created everything from the ethernet frame header, ip header (in which I specify a packet size of less than 60) and the udp packet itself. I send over a local network and the observed packet in wireshark has null padding. Any advice on how to overcome this issue?
true
26,113,419
1.2
0
0
2
This is pretty much impossible without playing around with the Linux drivers. This isn't the best answer, but it should point anyone else looking to do this in the right direction. Type sudo ethtool -d eth0 to see if your driver has pad-short-packets enabled.
0
1,669
1
0
2014-09-30T05:33:00.000
python,sockets,udp,padding
Removing padding from UDP packets in python (Linux)
1
1
1
26,326,206
0
0
0
I want to write a simple Python script using Selenium to scrape information from a website, but in collaboration with a (standing-by) human user who at some point will provide information in the browser. How do I get the following behaviour from the script: wait until the human user enters information (such as login details) and submits it, then (and only then) do something with the page loaded after the human's submission.
false
26,133,396
0
0
0
0
Let's pretend that it isn't a human user who enters the information and submits it, but that it all happens automatically with some delay. How would you handle it then? I guess you would wait for a certain element to appear, or for the page to change in some way. The same procedure works with a human user; the wait times may just be a bit longer.
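In Selenium terms that is an explicit wait: block until something that only exists on the post-login page appears. The element id below is purely illustrative:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("https://example.com/login")

    # Wait up to 10 minutes for the human to log in; the presence of a post-login
    # element (here the hypothetical id "dashboard") signals that submission happened.
    WebDriverWait(driver, 600).until(
        EC.presence_of_element_located((By.ID, "dashboard"))
    )

    # ...only now start scraping the page that loaded after the human submitted the form.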
0
151
0
0
2014-10-01T02:53:00.000
python,selenium
Selenium to interact with human user to provide login information
1
1
1
26,134,128
0
0
0
I'm wondering if it's possible to combine coverage.xml files into one file to see a global report in HTML output. I've got my unit/functional tests running as one command and integration tests as a second command. That means the coverage report from one run gets overwritten by the other. It would be great to have a solution for that problem, mainly by combining those files into one.
false
26,214,055
0.039979
1
0
1
I had a similar case where I had multiple packages, each with its own tests run by its own test runner, and I could combine all the coverage XML by following these steps. Individually generate the coverage reports: you need to navigate to each package and generate the report in that package. This creates a .coverage file. You can also add [run] parallel=True to your .coveragerc to create coverage files suffixed with the machine name and process id. Aggregate all the reports: copy all the .coverage files for these packages to a separate folder (you might want a batch or shell script to copy them). Run combine: navigate to the folder where you have all the report files and run coverage combine. This deletes the individual coverage files and combines them into one .coverage file. Now you can run coverage html and coverage xml.
0
19,392
0
28
2014-10-06T10:10:00.000
python,unit-testing,code-coverage,coverage.py,python-coverage
combine python coverage files?
1
1
5
63,373,136
0
0
0
Is there a way to change your twitter password via the python-twitter API or the twitter API in general? I have looked around but can't seem to find this information...
true
26,224,243
1.2
1
0
5
Fortunately not! Passwords are very confidential information which only Twitter itself wants to handle. Think about a third-party developer offering to change your Twitter password: would you trust them and let them see your new password? How could you make sure they are really going to set your new password and not another one?
0
1,863
0
4
2014-10-06T20:21:00.000
python,twitter,python-twitter
Password Change with the Twitter API
1
1
2
26,224,375
0
0
0
I am trying to send a string from Windows to a Linux VM (VMware) on the same machine. I did the following: opened a socket on 127.0.0.1 port 50000 on the Linux machine and read from the socket in a while loop. My programming language is Python 2.7. Then I sent a command using nc (netcat) to 127.0.0.1 port 50000 from the Windows machine (using Cygwin). However, I don't receive anything on the Linux machine, although the command sent from Windows/Cygwin succeeds. I am using NAT (sharing the host's IP address) on the VMware machine. Where could the problem be?
false
26,232,798
0.099668
0
0
1
When you use NAT, the host machine has no way to directly contact the client machine. All you can do is use port forwarding to tell VMware that all traffic directed to the designated ports on the host is to be delivered to the client; this is intended for installing a server on the client machine that can be accessed from outside the host machine. If you want to test network operation between the host and the client, you should configure a host-only adapter on the client machine. It is a virtual network between the host and the client(s) (more than one client can share the same host-only network, of course with different addresses). I generally configure 2 network adapters on my client machines: one NAT adapter to give the client machine access to the outside world, and one host-only adapter to have a private network between host and clients and allow them to communicate with any protocol on any port. You can also use a bridged interface on the client. In this mode, the client machine has an address on the same network as the external network of the host: it combines both previous modes.
0
2,245
1
2
2014-10-07T09:28:00.000
python,linux,sockets,python-2.7,vmware
Send a string from windows to vmware-ubuntu over socket using python
1
1
2
26,802,000
0
1
0
I have a java web application that needs to make use of a python script. The web application will give the script a file as input and take some text as output. I have two options: Wrap the python script with HTTP and call it from the web application as a REST service Simply execute command line from the web application Which option should I take and why? This script won't be used by any other application.
false
26,235,884
0
0
0
0
I would personally recommend wrapping the Python code in an HTTP layer and turning it into a REST web service. I don't doubt that many successful applications have been written by calling scripts from the command line, but I think there are a couple things that are really nice when it comes to the freedoms of a web service. It definitely seems like putting the Python app in a web service would be the more 'Service-oriented' way to do it, so it seems reasonable to expect that doing so would give you the typical benefits of SOA. This may or may not apply to your situation, but if none of these considerations apply that seems to point towards this being a 'neither choice will be that bad' situation. The biggest thing I see going for using the web service is scalability. If the command line application can chew up a lot of server resources, it would be good to have it running on a separate server from the web application. That way, other users who aren't using the part of the application that interacts with this Python app won't have their experience reduced when other users do things that cause the Python app to be called. Putting the Python code behind a web service would also make it easier to cluster. If it's a situation where you could get some benefit from caching, it would also be easy to take advantage of the caching mechanisms in HTTP and your proxy servers. Another thing that might be nice is testability. There's a lot of good tools out there for testing the common cases of a web application talking to a web service, whereas thoroughly testing your applications when they're just calling command line applications might be a little more work.
0
663
0
1
2014-10-07T12:21:00.000
java,python,web-services,architecture
Execute the script in command line or in Rest client
1
2
2
28,648,892
0
1
0
I have a java web application that needs to make use of a python script. The web application will give the script a file as input and take some text as output. I have two options: Wrap the python script with HTTP and call it from the web application as a REST service Simply execute command line from the web application Which option should I take and why? This script won't be used by any other application.
false
26,235,884
0
0
0
0
Try both, but start by executing it on the command line so you can see the output line by line; you can find any error at the exact line where it occurs, whereas a REST service will only return an HTTP response.
0
663
0
1
2014-10-07T12:21:00.000
java,python,web-services,architecture
Execute the script in command line or in Rest client
1
2
2
26,236,230
0
0
0
I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received. Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data? Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission.
true
26,269,514
1.2
1
0
1
What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security) It depends on your threat model, but intrasite VPN is sometimes used to tunnel traffic like this. If you want to move up in the protocol stack, then mutual authentication with the client pinning the server's public key would be a good option. In contrast, I used to perform security architecture work for a US investment bank. They did not use anything - they felt the leased line between data centers provided enough security.
0
697
0
0
2014-10-09T02:50:00.000
python,ssl,network-programming,network-security
Most secure server to server connection
1
3
3
26,291,779
0
0
0
I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received. Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data? Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission.
false
26,269,514
0.066568
1
0
1
Transmission and encryption need not happen together. You can get away with just about any delivery method, if you encrypt PROPERLY! Encrypting properly means using a large, randomly generated keys, using HMACs (INSIDE! the encryption) and checking for replay attacks. There may also be a denial of service attack, timing attacks and so forth; though these may also apply to any encrypted connection. Check for data coming in out of order, late, more than once. There is also the possibility (again, depending on the situation) that your "packets" will leak data (e.g. transaction volumes, etc). DO NOT, UNDER ANY CIRCUMSTANCES, MAKE YOUR OWN ENCRYPTION SCHEME. I think that public key encryption would be worthwhile; that way if someone collects copies of the encrypted data, then attacks the sending server, they will not have the keys needed to decrypt the data. There may be standards for your industry (e.g. banking industry), to which you need to conform. There are VERY SERIOUS PITFALLS if you do not implement this sort of thing correctly. If you are running a bank, get a security professional.
0
697
0
0
2014-10-09T02:50:00.000
python,ssl,network-programming,network-security
Most secure server to server connection
1
3
3
26,269,821
0
0
0
I need to build a Python application that receives highly secure data, decrypts it, and processes & stores in a database. My server may be anywhere in the world so direct connection is not feasible. What is the safest/smartest way to securely transmit data from one server to another (think government/bank-level security). I know this is quite vague but part of the reason for that is to not limit the scope of answers received. Basically, if you were building an app between two banks (this has nothing to do with banks but just for reference), how would you securely transmit the data? Sorry, I should also add SFTP probably will not cut it since this python app must fire when it is pinged from the other server with a secure data transmission.
false
26,269,514
0
1
0
0
There are several details to be considered, and I guess the question is not detailed enough to provide a single straight answer. But yes, I agree, the VPN option is definitely a safe way to do it, provided you can set up a VPN. If not, the SFTP protocol (not FTPS) would be the next best choice, as it is PCI-DSS compliant (secure enough for banking) and HIPAA compliant (secure enough to transfer hospital records) and - unlike FTPS - the SFTP protocol is a subsystem of SSH and it only requires a single open TCP port on the server side (22).
0
697
0
0
2014-10-09T02:50:00.000
python,ssl,network-programming,network-security
Most secure server to server connection
1
3
3
26,320,472
0
0
0
I have built Python with a FIPS-capable OpenSSL. Everything seems to work fine, but the call to wrap_socket fails with the error "Invalid SSL protocol variant specified" when FIPS mode is enabled. The call succeeds when not in FIPS mode. Debugging through the code, I found that the call to SSL_CTX_new(SSLv3_method()) in _ssl.c returns null in FIPS mode, as a result of which the above-mentioned error occurs. Any idea what might be causing this? Is it possible that some non-FIPS components are getting called?
false
26,299,460
0
0
0
0
I don't think SSLv3 is supported in FIPS mode. Try using SSLv23_server_method instead of SSLv3_method.
0
593
0
0
2014-10-10T12:28:00.000
python,python-2.7,openssl,fips
call to ssl.wrap_socket fails with the error Invalid SSL protocol variant specified
1
1
3
26,355,082
0
0
0
I'm looking for a reliable method of sending data from multiple computers to one central computer that will receive all the data, process and analyse it. All computers will be on the same network. I will be sending text from the machines so ideally it will be a file that I need sending, maybe even an XML file so I can parse it into a database easily. The main issue I have is I need to do this in near enough real-time. For example if an event happens on pc1, I need to be able to send that event plus any relevant information back to a central pc so it can be used and viewed almost immediately. The plan is to write a python program that possibly acts as a sort of client that detects the event and sends it off to a server program on the central pc. Does anyone have suggestions on a reliable way to send data from multiple computers to one central computer/server all on the same network and preferably without going out onto the web.
true
26,306,159
1.2
0
0
2
There are many possible solutions, but my recommendation is to solve this with a database. I prefer MySQL since it is free and easy to setup. It is immediate and you can avoid simultaneous update file locking problems because Innodb feature of MySQL automatically handles row locking. It's actually easier to setup a database than to try to write your own solution using files or other communication mechanisms (unless you already have experience with other techniques). Multiple computers is also not an issue and security is also built-in. Just setup your MySQL server and write a client application to update the data from multiple computer to your server. Then you can write your server application to process the input and that program can reside on your MySQL server or any other computer. Python is an excellent programming language and provides full support through readily available modules for MySQL. This solution is also scalable, because you can start with basic command line programs... then create desktop user interface with pyqt and later you could add web interfaces if you desire.
0
252
0
1
2014-10-10T18:53:00.000
python,windows,python-3.x
method of sending data from multiple computers too a central location in real-time using python
1
3
4
26,306,346
0
0
0
I'm looking for a reliable method of sending data from multiple computers to one central computer that will receive all the data, process and analyse it. All computers will be on the same network. I will be sending text from the machines so ideally it will be a file that I need sending, maybe even an XML file so I can parse it into a database easily. The main issue I have is I need to do this in near enough real-time. For example if an event happens on pc1, I need to be able to send that event plus any relevant information back to a central pc so it can be used and viewed almost immediately. The plan is to write a python program that possibly acts as a sort of client that detects the event and sends it off to a server program on the central pc. Does anyone have suggestions on a reliable way to send data from multiple computers to one central computer/server all on the same network and preferably without going out onto the web.
false
26,306,159
0
0
0
0
I assume you are working only on a secure LAN, since you do not mention security, only low latency. In fact there are many solutions, each with its advantages and drawbacks. Simple messages over UDP: very low overhead on network, client and server. Drawback: with UDP you can never be sure that a message won't be lost. Use case: small pieces of information that can be generated at any time with high frequency, where it is not important if one is lost. Messages over a pre-established TCP connection: higher overhead on server, client and network, because each client will establish and maintain a connection to the server, and the server will simultaneously listen to all its clients. Drawback: you need to reopen a connection if it breaks, the server has to multiplex I/O, and you have to implement a protocol to separate messages. Use case: you cannot afford to lose any message, they must be sent as soon as possible, and each client must serialize its own messages. Messages over TCP, each message using its own connection: medium overhead. Drawback: as a new connection is established per message, it may introduce latency and overhead for high-frequency events. Use case: low-frequency events, but a single client PC may send multiple messages simultaneously.
0
252
0
1
2014-10-10T18:53:00.000
python,windows,python-3.x
method of sending data from multiple computers too a central location in real-time using python
1
3
4
26,308,107
0
0
0
I'm looking for a reliable method of sending data from multiple computers to one central computer that will receive all the data, process and analyse it. All computers will be on the same network. I will be sending text from the machines so ideally it will be a file that I need sending, maybe even an XML file so I can parse it into a database easily. The main issue I have is I need to do this in near enough real-time. For example if an event happens on pc1, I need to be able to send that event plus any relevant information back to a central pc so it can be used and viewed almost immediately. The plan is to write a python program that possibly acts as a sort of client that detects the event and sends it off to a server program on the central pc. Does anyone have suggestions on a reliable way to send data from multiple computers to one central computer/server all on the same network and preferably without going out onto the web.
false
26,306,159
0
0
0
0
Netcat over TCP for reliability, low overhead and simplicity.
0
252
0
1
2014-10-10T18:53:00.000
python,windows,python-3.x
method of sending data from multiple computers too a central location in real-time using python
1
3
4
26,306,417
0
0
0
I'm building an installation that will run for several days and needs to get notifications from a GMail inbox in real time. The Gmail API is great for many of the features I need, so I'd like to use it. However, it has no IDLE command like IMAP. Right now I've created a GMail API implementation that polls the mailbox every couple of seconds. This works great, but times out after a while (I get "connection reset by peer"). So, is it reasonable to turn off the sesson and restart it every half an hour or so to keep it active (like with IDLE)? Is that a terrible, terrible hack that will have google busting down my door in the middle of the night? Would the proper solution be to log in with IMAP as well and use IDLE to notify my GMail API module to start up and pull in changes when they occur? Or should I just suck it up and create an IMAP only implementation?
true
26,307,256
1.2
1
0
3
I would definitely recommend against IMAP; note that even with the IMAP IDLE command it isn't real time--it's just polling every few (5?) seconds under the covers and then pushing out to the connection. (Experiment yourself and see the delay there.) Querying history.list() frequently is quite cheap and should be fine. If this is for a sizeable number of users you may want to do a little optimization, like intelligent backoff for inactive mailboxes (e.g. every time there are no updates, back off by an extra 5s up to some maximum like a minute or two). Google will definitely not bust down your door or likely even notice unless you're doing it every second with 1M users. :) Real push notifications for the API is definitely something that's called for.
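A rough polling loop along those lines with the google-api-python-client bindings; the backoff numbers, the handle() callback and the assumption that service is an already-authorized Gmail API client are mine, not part of the answer:

    import time

    def poll_history(service, start_history_id, user_id="me"):
        delay = 5                                   # start at 5s between polls
        history_id = start_history_id
        while True:
            resp = service.users().history().list(
                userId=user_id, startHistoryId=history_id).execute()
            changes = resp.get("history", [])
            if changes:
                history_id = resp["historyId"]
                delay = 5                           # activity: reset to fast polling
                handle(changes)                     # hypothetical callback for new mail
            else:
                delay = min(delay + 5, 120)         # idle: back off up to 2 minutes
            time.sleep(delay)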
0
1,369
0
2
2014-10-10T20:06:00.000
python,imap,long-polling,gmail-api
Long polling with GMail API
1
1
2
26,307,963
0
0
0
I have created Python classes from an XML schema file using generateDS 2.12a. I am using these classes to create XML files. My module works well in a Python 2.7 environment. Now, for some reason, my environment has changed to Python 3.0.0, and when I try to export the XML object it throws the following error: Function: export(self, outfile, level, namespace_='', name_='rootTag', namespacedef_='', pretty_print=True) Error: s1 = (isinstance(inStr, basestring) and inStr or NameError: global name 'basestring' is not defined Is there a change I need to make to export XML in Python 3.0.0, or is there a newer version of generateDS to be used for Python 3.0.0?
false
26,409,544
0
1
0
0
You could run generateDS to get your Python file, then run, e.g., "2to3 -w your_python_file.py" to generate a Python 3 version of your generateDS file. I went through the same process and had luck with this; it seems to work just fine.
0
866
0
0
2014-10-16T16:36:00.000
python,python-3.x,xml,python-generateds
What version of generateDS is to be used for Python 3.0.0?
1
1
1
26,551,274
0
0
0
A Linkedin friend's full profile is not viewable without login to our Linkedin account. Is it possible to use cookie or any other alternative way without a browser to do that? Any tips are welcome!
false
26,413,483
0.197375
0
0
2
Thanks @Anzel, but the related-profile-views API allows you to read past profile views, not trigger a new view (and therefore notify the user that I visited their profile programmatically). Unless I pop up a new window and load their profile, but then I'll need a browser to do it. Ideally I wanted to achieve it via the backend, cURL and cookies, but it's not as simple as it sounds.
0
686
0
0
2014-10-16T20:37:00.000
python,curl,network-programming,linkedin
How to use Python/Curl to access LinkedIn and view a friend's full profile?
1
1
2
35,821,074
0
1
0
I've created a websocket avatar chat application where a user is given an avatar and they can move around with this avatar and send messages. I want to design a login which connects to my database (already has several accounts stored). When a user has logged in with the correct details, I'd like for their username to be shown on a chatlog i.e. "Damien has logged in". Of course there'd be several more features I'd be able to finally work on when I implement the login with the application but I'm not sure how I can. I'm presuming it will involve adding perhaps a user array list in the room? The websocket server is created in python, client in html5 and javascript. Any suggestions?
true
26,438,022
1.2
0
0
0
Cookies are available with websockets. Just log in and store a session/cookie for the user as normal; then you will know who it is. Or just send the cookie as the first message after connecting.
0
83
0
1
2014-10-18T09:04:00.000
javascript,python,websocket
Creating login for a websocket application?
1
1
1
26,441,656
0
1
0
I need to create a REST server for a Python module/API of a BCI, so that the application can be accessed from my Drupal website. Will I need to create and host the REST server on a Python-based website or CMS so that it can be accessed by my Drupal website, or can the API and REST server be uploaded and hosted directly on my web hosting server? If so, what is the simplest Python CMS for creating a REST server for an already available Python module/API?
true
26,444,590
1.2
0
0
0
The beauty of REST is precisely that it doesn't matter where your API is, as long as its accessible from your Drupal server, or from the client if you have a javascript API client. If it's a simple application and you have admin access to your Drupal server, there's nothing preventing you from hosting the Python webservice side-by-side. They may even share the same HTTP Server, like Apache or Nginx, although depending on your demands on each one it might be better to keep them separate. If you're new to Python, the Flask framework is a decent option to write a simple REST-like API interfacing with a Python module.
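As a rough sketch of that last suggestion, here is a tiny Flask wrapper exposing one endpoint; the bci_module import and the /api/signal route are hypothetical placeholders for the actual BCI API:

    from flask import Flask, jsonify, request
    # import bci_module                    # hypothetical: the existing Python BCI API

    app = Flask(__name__)

    @app.route("/api/signal", methods=["GET"])
    def signal():
        # result = bci_module.read_signal(request.args.get("channel"))
        result = {"channel": request.args.get("channel"), "value": 0.0}   # placeholder
        return jsonify(result)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

The Drupal site can then call this endpoint over HTTP like any other web service.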
0
141
0
1
2014-10-18T21:09:00.000
python,rest,content-management-system,hosting
Do rest servers need to be hosted on a website or CMS?
1
1
1
26,446,258
0
1
0
My situation is: a server sends a request to me whose content type is 'text/xml', and the request content is XML. First I need to get the request content, but when I use 'web.input()' in the 'POST' function, I can't get any message; the result is just ''. I know web.py can get form data from a request, so how can I get the message from the request when the content type is 'text/xml' in the POST function? Thanks!
false
26,479,903
0.197375
0
0
1
Use web.data().
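For context, web.data() returns the raw request body, which is what you want for a text/xml POST. A minimal handler might look like the sketch below (the URL mapping and class name are illustrative):

    import web
    import xml.etree.ElementTree as ET

    urls = ("/notify", "Notify")
    app = web.application(urls, globals())

    class Notify:
        def POST(self):
            raw = web.data()                 # raw request body, regardless of Content-Type
            root = ET.fromstring(raw)        # parse the XML payload
            return "received root element: %s" % root.tag

    if __name__ == "__main__":
        app.run()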
0
86
0
0
2014-10-21T06:04:00.000
python,web.py
web.py how to get message from request when the contentType is 'text/xml'
1
1
1
26,498,875
0
1
0
So I have a list of public IP addresses and I'd like to see if they are a public IP that is associated with our account. I know that I can simply paste each IP into the search box in the AWS EC2 console. However I would like to automate this process via a Python program. I'm told anything you can do in the console, you can do via CLI/program, but which function do I use to simply either return a result or not, based on whether it's a public IP that's associated with our account? I understand I may have to do two searches, one of instances (which would cover non-EIP public IPs) and one of EIPs (which would cover disassociated EIPs that we still have). But how?
false
26,493,207
0
1
0
0
Here's the method I have come up with to look up all IPs and see if they are EIPs associated with our AWS account: (1) get a list of all our EIPs; (2) get a list of all instances; (3) build a list of all public IPs of the instances; (4) merge the lists / use the same list; (5) check the desired IPs against this list. Comments welcome.
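A sketch of those steps with the classic boto 2.x EC2 bindings mentioned in the question; the region name is an assumption and credentials are expected to come from the usual boto config or environment:

    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")     # region is a placeholder

    # Steps 1-2: Elastic IPs owned by the account (associated or not).
    owned = set(addr.public_ip for addr in conn.get_all_addresses())

    # Steps 3-4: public IPs currently attached to instances (covers non-EIP public addresses).
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            if instance.ip_address:
                owned.add(instance.ip_address)

    # Step 5: check the IPs you were given against the merged set.
    def is_ours(ip):
        return ip in owned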
0
1,555
0
1
2014-10-21T18:00:00.000
python,amazon-web-services,amazon-ec2,boto
How to tell if my AWS account "owns" a given IP address in Python boto
1
1
2
26,495,168
0
1
0
I'm having an issue with Ghost.py. The site I am trying to crawl has links for a paginated list that work with javascript, rather than direct hrefs. When I click the links, I can't really wait for selectors because the selectors are the same on each page, so ghost doesn't wait since the selector is already present. I can't assume I know what text will be on the next page, so waiting for text will not work. And waiting for page loaded won't work either. It's almost as though the javascript is not being executed. Ghost.py seems to have minimal documentation (if you can call the examples on the website documentation) so it is really difficult to work out what I can do, and what tools are available to me. Can anybody with more experience help me out?
true
26,493,370
1.2
0
0
1
I solved my problem. There is an optional parameter to the Ghost class's click() method called expect_loading; when set to True it sets an internal boolean self.loaded = False and then calls wait_for_page_loaded(), which then works, I guess because of the loaded boolean.
0
565
0
0
2014-10-21T18:10:00.000
javascript,python,ghost.py
Ghost.py links through javascript
1
1
1
26,505,780
0
0
0
Is there a way to create a Kerberos ticket in Python if you know the username/password? I have MIT Kerberos in place and I can do this interactively through KINIT but want to do it from Python.
true
26,534,348
1.2
1
0
3
From what I learned when working with Kerberos (although in my work I used C), you can hardly replace kinit. There are two ways you can simulate kinit behaviour in code: calling the kinit shell command from Python with the appropriate arguments, or (as I did) calling one method that pretty much does everything: krb5_get_init_creds_password(k5ctx, &cred, k5princ, password, NULL, NULL, 0, NULL, NULL); This is a C primitive, but you should find an equivalent for Python that does the same. Basically this method receives the Kerberos context, a principal (built from the username) and the password. In order to fully replace kinit behaviour you need a bit more than this (start the context, build the principal, etc.). Sorry, since I did not work with Python my answer may not be exactly what you want, but I hope it sheds some light. Feel free to ask any conceptual question about how kerberized applications work.
0
9,619
0
3
2014-10-23T17:57:00.000
python,kerberos,mit-kerberos
How can I get a Kerberos ticket in Python
1
1
2
26,548,064
0
0
0
I have a survey that went out to 100 recipients via the built-in email collector. I am trying to design a solution in python that will show only the list of recipients (email addresses) who have not responded (neither "Completed" nor "Partial"). Is there any way to get this list via the SurveyMonkey API? One possible solution is to store the original list of 100 recipients in my local database, get the list of recipients who have already responded using the get_respondent_list api, and then do a matching to find the people who have not responded. But I would prefer to not approach it this way since it involves storing the original list of recipients locally. Thanks for the help!
true
26,596,396
1.2
1
0
0
There is currently not a way to do this via the SurveyMonkey API - it sounds like your solution is the best way to do things. I think your best bet is to go with your solution and email [email protected] and ask them about the feasibility of adding this functionality in future. It sounds like what you need is a get_collector_details method that returns specific details from a collector, which doesn't currently exist.
0
165
0
0
2014-10-27T20:37:00.000
python,api,email,automation,surveymonkey
Getting list of all recipients in an email collector for a survey via SurveyMonkey API
1
1
1
26,597,085
0
1
0
I wrote a web scraper in Scrapy that I call with scrapy crawl scrape_site and a twitter bot in Twython that I call with python twitter.py. I have all the proper files in a directory called ScrapeSite. When I execute these two commands on the command line while in the directory ScrapeSite they work properly. However, I would like to move the folder to a server and have a job run the two commands every fifteen minutes. I've looked into doing a cron job to do so, but the cronjobs are located in a different parent directory, and I can only call scrapy in a directory with Scrapy files (e.g. ScrapeSite). Can I make a cron job to run a file in the ScrapeSite directory that in turn can call the two commands at the proper level? How can I programmatically execute command line commands at a different leveled directory at a certain time interval?
true
26,621,636
1.2
1
0
1
Use a cron job. This allows you to run a command line command at a time interval. You can combine commands with && and as a result can change directories with the normal bash command cd. So in this case you can call cd /directory/folder && scrapy crawl scrape_site && python twitter.py To run this every fifteen minutes make the cron job run like */15 * * * * So the full cron job would look like */15 * * * * cd /directory/folder && scrapy crawl scrape_site && python twitter.py
0
190
0
0
2014-10-29T02:00:00.000
python,command-line,cron,scrapy,twython
How can I programmatically execute command line commands at a different leveled directory at a certain time interval?
1
1
1
26,708,022
0
1
0
I work on a project in which I need a python web server. This project is hosted on Amazon EC2 (ubuntu). I have made two unsuccessful attempts so far: run python -m SimpleHTTPServer 8080. It works if I launch a browser on the EC2 instance and head to localhost:8080 or <ec2-public-IP>:8080. However I can't access the server from a browser on a remote machine (using <ec2-public-IP>:8080). create a python class which allows me to specify both the IP address and port to serve files. Same problem as 1. There are several questions on SO concerning Python web server on EC2, but none seems to answer my question: what should I do in order to access the python web server remotely ? One more point: I don't want to use a Python web framework (Django or anything else): I'll use the web server to build a kind of REST API, not for serving HTML content.
true
26,625,845
1.2
1
0
6
You should open the 8080 port and relax the IP limitation in the security groups, for example: All TCP, TCP, 0 - 65535, 0.0.0.0/0. The last item means this server will accept every request from any IP and port. you also
0
1,858
0
4
2014-10-29T08:37:00.000
python,amazon-web-services,amazon-ec2,webserver
Accessing Python webserver remotely on Amazon EC2
1
1
2
26,626,875
0
0
0
I am connecting to a host for the first time using its private key file. Do I need to call the load_host_keys function before connecting to the host, or can I just skip it? I have AutoAddPolicy for missing host keys, but how does Python know the location of the host key file? Hence my question: when should load_host_keys be used?
false
26,626,347
-0.379949
1
0
-2
Load host keys from a local host-key file. Host keys read with this method will be checked after keys loaded via load_system_host_keys, but will be saved back by save_host_keys (so they can be modified). The missing host key policy AutoAddPolicy adds keys to this set and saves them, when connecting to a previously-unknown server. This method can be called multiple times. Each new set of host keys will be merged with the existing set (new replacing old if there are conflicts). When automatically saving, the last hostname is used. Read a file of known SSH host keys, in the format used by openssh. This type of file unfortunately doesn't exist on Windows, but on posix, it will usually be stored in os.path.expanduser("~/.ssh/known_hosts").
0
3,777
0
4
2014-10-29T09:09:00.000
python,ssh,paramiko
When and why to use load_host_keys and load_system_host_keys?
1
1
1
26,708,001
0
0
0
The gevent library documentation suggests using the gevent.monkey.patch_all() function to make standard library modules cooperative. As I understand it, this method only works for my own code, because I can explicitly monkey-patch the standard library before importing standard library modules. What about third-party libraries (a websocket client, for example) which import the threading and socket modules internally? Is there a way for those libraries to use the patched versions of the threading and socket modules?
false
26,638,048
0.197375
0
0
1
Monkey patch at the earliest possible moment in your code (i.e. before any of your third party modules have been imported). Then, when the third party modules are imported, they will use the monkey-patched versions of the standard libraries.
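Concretely, the very first lines of the entry-point module would look like the sketch below; the third-party imports shown are just examples of libraries that will then pick up the patched socket/threading modules:

    from gevent import monkey
    monkey.patch_all()          # must run before anything below imports socket/threading/etc.

    import websocket            # example third-party client; now uses the patched modules
    import requests             # ditto

    # ... rest of the program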
0
194
0
0
2014-10-29T18:30:00.000
python,gevent
gevent.patch_all() and third-party libraries
1
1
1
27,197,021
0
1
0
I am using web.py to run a server. I need to get a request from a remote server, however, the request sends me a data with Chunked Transfer Coding. I can use web.ctx.env['wsgi.input'].read(1000) to get the data. But this is not what I need since I don't know the length of the data (because it is chunked). But if I use web.ctx.env['wsgi.input'].read() the server would crash. Can anybody tell me how to get the chunked data in a request?
false
26,649,495
-0.099668
0
0
-1
web.py runs CherryPy as the web server and it has support for handling requests with chunked transfer coding. Have you misread the documentation?
0
599
0
1
2014-10-30T09:38:00.000
python,web.py
Python: how to read 'Chunked Transfer Coding' from a request in web.py server
1
1
2
26,651,096
0
1
0
I'm developing an app (with Python and Google App Engine) that requires to load some content (basically text) stored in a bucket inside the Google Cloud Storage. Everything works as expected but I'm trying to optimize the application performance. I have two different options: I can parse the content via the urllib library (the content is public) and read it or I can load the content using the cloudstorage library provided by Google. My question is: in terms of performance, which method is better? Thank you all.
false
26,652,617
0
0
0
0
They will likely be very close. The App Engine cloud storage library uses the URL fetch service, just like urllib. Nonetheless, like any performance tuning, I'd suggest measuring it yourself.
0
199
1
0
2014-10-30T12:08:00.000
python,google-app-engine,parsing,google-cloud-storage,urllib
urllib vs cloud storage (Google App Engine)
1
1
2
26,660,144
0
0
0
I am pretty new to TCP networking and would like to use TCP for real-time transfer of data. Essentially, I need my PC-side Python to send data (a single character) to an Android application. The data to be sent changes in real time, and upon a change to the data (usually 0.5 - 1 sec apart), the new data has to be sent to the Android app and displayed immediately. My questions are: 1) If I am using TCP, is it possible to keep the socket connection open even after sending one piece of data, in anticipation of subsequent transfers? Or do I need to close the connection after every single transfer and set up another socket connection? 2) What is the latency of TCP if I am doing something like this? Any advice is greatly appreciated!
true
26,661,358
1.2
0
0
0
Most TCP implementations delay sending small buffers for a short period of time (~0.2 seconds) hoping that more data will be presented before incurring the expense of sending the TCP segment. You can use the TCP_NODELAY option to (mostly) eliminate that delay. There are several other factors that get in the way, such as where the stack happens to be in an ACK sequence, but it's a reasonably good way to get prompt delivery. Latency depends on many factors, including other traffic on the line and whether a given segment is dropped and needs to be retransmitted; I'm not sure what a solid number would be. Sometimes real-time data is "better never than late", making UDP datagrams a good option. Update: a TCP connection stays open until you close it with shutdown(), a client- or server-level socket timeout hits, or the underlying stack finally gets bored and closes it. So normally you just connect and send data periodically over time. A common way to deal with a timed-out socket is to reconnect if you hit a send error.
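Setting TCP_NODELAY from Python is a one-liner on the connected socket; the address below is a placeholder:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("192.168.0.10", 5000))                         # placeholder address
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle's delay

    # Each small payload now goes out promptly instead of waiting to be coalesced.
    sock.sendall(b"A")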
0
851
0
0
2014-10-30T19:07:00.000
android,python,sockets,tcp,real-time
Use of TCP for real time small packet data over a period of time
1
1
1
26,661,653
0
0
0
What I want to do is read an XML file that contains some Python code, and then run that code. For example, the XML file contains print 'Hello World'. I want to use a function such as def RunXML(xml): which will read the code from the XML file and execute it. In this case, RunXML will print 'Hello World'. I'd appreciate any help.
true
26,672,090
1.2
0
0
1
To execute Python code from Python you can use the exec function; as for reading from XML, you didn't provide any information about the XML file's structure.
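A bare-bones sketch of the RunXML idea, assuming (since the question doesn't say) that the code sits in the text of a <code> element; note that exec runs arbitrary code, so only do this with XML you trust:

    import xml.etree.ElementTree as ET

    def run_xml(path):
        root = ET.parse(path).getroot()
        # assumed layout: <root><code>print('Hello World')</code></root>
        snippet = root.findtext("code")
        if snippet:
            exec(snippet)

    run_xml("program.xml")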
0
222
0
0
2014-10-31T10:10:00.000
python,xml,compiler-construction
How to read a xml file and covert it to python code and run?
1
1
1
26,672,298
0
0
0
I am using ZeroRPC for a project, where there may be multiple instances running on the same machine. For this reason, I need to abe able to auto-assign unused port numbers. I know how to accomplish this with regular sockets using socket.bind(('', 0)) or with ZeroMQ using the bind_to_random_port method, but I cannot figure out how to do this with ZeroRPC. Since ZeroRPC is based on ZeroMQ, it must be possible. Any ideas?
false
26,700,204
0
0
0
0
Having read the details about ZeroRPC-python's current state, the safest option for this task would be to create a central LotterySINGLETON that would, upon an instance's request, <-REQ/REP-> send back the next free port number. This approach is isolated from ZeroRPC-dev/mod(s) modifications of the otherwise stable ZeroMQ API, and gives you full control over the port numbers pre-configured/included-in/excluded-from the LotterySINGLETON's draws. The other way around would be to try to bypass the ZeroRPC layer and ask ZeroMQ directly for the next random port, but the ZeroRPC documentation discourages bypassing its own controls imposed on the (otherwise pure) ZeroMQ framework elements (which is quite reasonable to emphasise, as doing so erodes the consistency of the ZeroRPC layer's add-on operations & services, so it should rather be "obeyed" than "challenged" by trial and error...).
0
346
0
3
2014-11-02T14:08:00.000
python,sockets,port,zeromq
ZeroRPC auto-assign free port number
1
1
2
26,716,452
0
0
0
I'm using Tweepy and I don't find any option to add a delay between each request to make sure I'm not getting banned from Twitter APIs. I think 1 request each 5 seconds should be fine. How can I do this using StreamListener?
true
26,703,279
1.2
0
0
0
Solved. You can never get banned from the Streaming API :)
0
397
0
0
2014-11-02T19:17:00.000
python,twitter,tweepy
How to add a delay between each request in Tweepy StreamListener?
1
1
2
27,247,595
0
0
0
I have a Linux environment with more than 50 servers, which is monitored by Nagios. We create new servers using a Python web-based GUI and currently have to add them to the Nagios server manually. We would now like to add new servers to Nagios automatically. Is there any method to do this? Thanks in advance
false
26,708,916
0.099668
0
0
1
I think it's not possible to automatically add a client to the monitoring system, but you can add it through a web browser using NConf.
0
815
0
1
2014-11-03T06:28:00.000
python,linux,nagios
automatically add clients to nagios server
1
2
2
26,887,365
0
0
0
I have a Linux environment with more than 50 servers, which is monitored by Nagios. We create new servers using a Python web-based GUI and currently have to add them to the Nagios server manually. We would now like to add new servers to Nagios automatically. Is there any method to do this? Thanks in advance
false
26,708,916
0.099668
0
0
1
Nagios doesn't support this out of the box. That said, it would be fairly easy to create a Python script to automate the task. At the least, you'd have to provide it with a list of machine names and IP addresses, then let the script do all the .cfg file updates. I did this with Perl, taking less than a day to write; it added the host and several basic infra checks.
0
815
0
1
2014-11-03T06:28:00.000
python,linux,nagios
automatically add clients to nagios server
1
2
2
26,962,382
0
1
0
I am using Flask for a very simple app. The response is correct but split into multiple TCP packets; it seems that Flask puts every HTTP header in its own TCP packet. Why is the Flask response split into multiple TCP packets, and how do I disable this behaviour?
false
26,709,879
0.197375
0
0
1
Flask is a microframework used for developing web apps and related things. Flask builds its responses in its own way, and you cannot control response packets from Flask. What you are asking about belongs to a different layer: it is a networking-layer concern.
0
299
0
2
2014-11-03T07:47:00.000
python,flask
How to make flask response in one tcp packet?
1
1
1
26,717,670
0