Dataset schema (one row per answer; ranges are the min/max observed in this split):
Question: string, 28 to 6.1k chars
is_accepted: bool, 2 classes
Q_Id: int64, 337 to 51.9M
Score: float64, -1 to 1.2
Users Score: int64, -8 to 412
Answer: string, 14 to 7k chars
ViewCount: int64, 13 to 1.34M
Q_Score: int64, 0 to 1.53k
CreationDate: string, 23 chars
Tags: string, 6 to 90 chars
Title: string, 15 to 149 chars
Available Count: int64, 1 to 12
AnswerCount: int64, 1 to 28
A_Id: int64, 635 to 72.5M
Topic flags (int64, 0 or 1 each): Web Development, Data Science and Machine Learning, Other, Database and SQL, Python Basics and Environment, System Administration and DevOps, Networking and APIs, GUI and Desktop Applications
1
0
I am trying to use BeautifulSoup. Despite using the import statement from bs4 import BeautifulSoup, I am getting the error ImportError: cannot import name BeautifulSoup. A plain import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup, and importing bs4 and then creating a BeautifulSoup object with bs4.BeautifulSoup(). Any guidance would be appreciated.
false
29,907,405
1
0
0
6
I found out, after numerous attempts to solve the ImportError: cannot import name 'BeautifulSoup4', that the name is actually BeautifulSoup, so the import should be: from bs4 import BeautifulSoup
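A minimal sketch of the working import, assuming the beautifulsoup4 package is installed (pip install beautifulsoup4):

    from bs4 import BeautifulSoup  # the class is BeautifulSoup, the package is bs4

    soup = BeautifulSoup("<html><p>hello</p></html>", "html.parser")
    print(soup.p.text)  # -> hello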
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
60,569,945
0
1
0
Is it possible to use Django for communication with some kind of server process? For example, on my Django website I want to have a form where I enter connection details (host and port); after connecting I want to send requests or events to another server process (something simple like a slider moving or a button click). Can I use Python socket programming for this, or is there an easier way?
true
29,915,865
1.2
0
0
1
With Django you can use any Python package, just as in any "normal" Python program. If you have a module that communicates with your server, use that; if not, you will have to write one on your own, possibly with socket programming.
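A minimal sketch of what such a helper could look like, using only the standard library; the host/port form fields and the message format are assumptions:

    import socket

    def send_event(host, port, payload):
        """Open a TCP connection, send one message, return the reply."""
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(payload.encode("utf-8"))
            return sock.recv(4096).decode("utf-8")

    # inside a Django view you would call it with the form data, e.g.:
    # reply = send_event(form.cleaned_data["host"], form.cleaned_data["port"], "slider:42")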
0
409
0
0
2015-04-28T09:45:00.000
python,django
Socket communication with Django
1
1
1
29,916,039
0
0
0
I am trying to go through the tweets of a particular user and get all replies to each tweet. I found that Twitter API v1.1 does not directly support this. Is there a hack or a workaround for getting the replies to a particular tweet? I am using the Python Streaming API.
false
29,928,638
1
1
0
6
Here's a workaround to fetch the replies to a tweet made by "username", using the REST API via tweepy: 1) find the tweet_id of the tweet whose replies you want to fetch; 2) using the API's search method, query q="@username" with since_id=tweet_id and retrieve all tweets since tweet_id; 3) the results whose in_reply_to_status_id matches tweet_id are the replies to the post.
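A rough sketch of those steps against the old tweepy/v1.1 search endpoint (the method was later renamed); the handle, tweet_id and credentials are placeholders:

    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    username = "someuser"   # author of the original tweet
    tweet_id = 1234567890   # id of the tweet whose replies we want

    replies = []
    for status in tweepy.Cursor(api.search, q="@" + username, since_id=tweet_id).items():
        if status.in_reply_to_status_id == tweet_id:
            replies.append(status)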
0
18,865
0
17
2015-04-28T19:52:00.000
python,twitter,tweepy,tweets,twitter-streaming-api
Getting tweet replies to a particular tweet from a particular user
1
1
5
31,647,823
0
0
0
I have a Python client behind a NAT and a Python server with a public IP address. My job is to send a pcap file (a few MB in size) from the client to the server, as well as a dictionary with some data. Is there any easy way of doing this without resorting to third-party libraries (e.g. twisted, tornado)? If not, what's the easiest alternative? I thought I could send the pcap file over HTTP so that it would be easier to read on the server side, and perhaps do the same with the dictionary by first pickling it. Would that be a good solution? (I have complete control of the server, where I can install whatever.)
false
29,945,960
0.066568
0
0
1
If you can install software on the server, and the server allows HTTP connections, you can write your own simple HTTP server (Python's standard library has modules for doing that). If not, the answer depends on what services are available on the server.
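A stdlib-only sketch of the client side of that upload; the URL is a placeholder, and note that the server should only unpickle data from trusted sources:

    import pickle
    import urllib.request

    with open("capture.pcap", "rb") as f:
        body = pickle.dumps({"meta": {"sensor": "A"}, "pcap": f.read()})

    req = urllib.request.Request(
        "http://server.example.com:8000/upload",          # placeholder URL
        data=body,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        print(resp.status)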
0
635
0
2
2015-04-29T13:58:00.000
python,file,client,server
Python: send file to a server without third-party libraries?
1
1
3
29,946,100
0
0
0
I am using Python 3 and mime.multipart to send an attachment, and I was able to send the attachment successfully. But today I get an error saying the file does not exist, when I can see in WinSCP that it clearly does. Is this a permissions issue? Also, when I list the contents of the directory, the file does NOT show up. What is going on?
true
29,946,610
1.2
1
0
1
I wasn't closing the stream after writing to the file, so the code couldn't find the file. When the script finished, the stream was closed by force, and only then would I see the file in the folder.
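The usual guard against this is a with block, which flushes and closes the file even on errors; generate_report is a placeholder:

    # writing inside a context manager guarantees the file is
    # flushed and closed before anything tries to attach it
    with open("report.txt", "w") as f:
        f.write(generate_report())

    # ...the file is now safe to reopen and attach to the email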
0
115
0
0
2015-04-29T14:25:00.000
python,python-3.x,file-upload,email-attachments,mime-message
Python cannot find file to send as email attachment
1
1
1
29,947,447
0
1
0
I am trying to understand how to use websockets correctly and seem to be missing some fundamental part of the puzzle. Say I have a website with 3 different pages: newsfeed1.html, newsfeed2.html, newsfeed3.html. When a user goes to one of those pages they get a feed specific to the page, i.e. newsfeed1.html = sport, newsfeed2.html = world news, etc. There is a CoreApplication.py that does all the handling of getting data and parsing etc. Then there is a WebSocketServer.py, using say Autobahn. All the examples I have looked at, and that is a lot, only seem to react to a message from the client (browser) within WebSocketServer.py - think chat echo examples: a client browser sends a chat message and it is echoed back or broadcast to all connected client browsers. What I am trying to figure out is, given the two components CoreApplication.py and WebSocketServer.py, how best to make CoreApplication.py communicate with WebSocketServer.py for the purpose of sending messages to connected users. Normally, should CoreApplication.py simply send command messages to WebSocketServer.py as a client? For example like this: CoreApplication.py connects to WebSocketServer.py as a normal client and sends a JSON command message (like "broadcast message X to all users" or "send message Y to a specific remote client"), and WebSocketServer.py determines how to process the incoming message depending on which client is connected to which feed and sends to the corresponding remote client browsers. OR should CoreApplication.py connect programmatically with WebSocketServer.py? I cannot seem to find any examples of doing this with Autobahn or other simple websocket libraries, since once the WebSocketServer is instantiated it seems to run in a loop and does not accept external sendMessage requests. So to sum up the question: what is the best practice? To simply make CoreApplication.py interact with WebSocketServer.py as a client (with special command data), or for CoreApplication.py to use an already running instance of WebSocketServer.py (both of which are on the same machine) through some more direct method to send messages without having to make a full websocket connection to the WebSocketServer.py server first?
false
29,967,612
0
0
0
0
It depends on your software design: if you decide the logic from WebSocketServer.py and CoreApplication.py belongs together, merge them. If not, you need some kind of inter-process communication (IPC). You can use websockets for this IPC, but I would suggest something simpler; for example, you can use JSON-RPC over TCP or a Unix domain socket to send control messages from CoreApplication.py to WebSocketServer.py.
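A bare-bones sketch of such a control channel over a Unix domain socket; the socket path and message shape are assumptions:

    import json
    import socket

    SOCK_PATH = "/tmp/wsserver.ctl"  # assumed path; both processes must agree on it

    def send_control(command, payload):
        """CoreApplication.py side: push one JSON control message."""
        msg = json.dumps({"cmd": command, "data": payload}).encode("utf-8")
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(SOCK_PATH)
            s.sendall(msg)

    # e.g. broadcast to everyone watching feed 1:
    # send_control("broadcast", {"feed": "newsfeed1", "text": "goal!"})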
0
704
0
2
2015-04-30T12:19:00.000
python,websocket
WebSockets best practice for connecting an external application to push data
1
1
1
29,967,827
0
1
0
My Python program outputs a set of URL links. When I run it in PyCharm, I can click the links directly and they open in the browser. However, when I run the Python file by double-clicking the .py file, the links are not clickable. I want the links to be clickable so they take me to the browser directly. Please support solutions with explanations, as I am still learning. Thanks!
false
30,026,870
0.197375
0
0
1
As outlined above, you need to use a terminal that supports clicking URLs. On Linux most terminals do this, e.g. GNOME Terminal, Terminator, etc. On Mac, try iTerm2.
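For terminals that support the OSC 8 hyperlink escape sequence (an assumption to verify for your terminal; many ignore it harmlessly), a link can also be emitted explicitly:

    # print an explicit terminal hyperlink (OSC 8); terminals that don't
    # support the sequence just show the plain URL text
    url = "https://example.com"
    print("\033]8;;{}\033\\{}\033]8;;\033\\".format(url, url))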
0
3,643
0
4
2015-05-04T09:35:00.000
python,python-3.x,hyperlink,pycharm
Clickable links in terminal output
1
1
1
40,915,552
0
0
0
I was wondering what the difference is between pkt.time and pkt[IP].time, since they give different times for the same packet. I was also wondering how to interpret a packet time such as 1430123453.564733. If anyone has an idea, or knows where I can find such information, it would be very helpful. Thanks.
false
30,036,175
0
0
0
0
pkt.time gives you the epoch timestamp that is included in the FRAME layer of the packet in Wireshark, i.e. when the frame was captured. pkt[IP].time would give you a time included in the IP layer of the packet, but the IP layer has no time field, so I don't think that expression will work.
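To interpret a value such as 1430123453.564733: it is seconds since the Unix epoch, with sub-second precision, and converts like this:

    from datetime import datetime

    t = 1430123453.564733                 # scapy pkt.time: seconds since 1970-01-01 UTC
    print(datetime.utcfromtimestamp(t))   # -> 2015-04-27 08:30:53.564733 (UTC)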
0
501
0
3
2015-05-04T17:26:00.000
python-2.7,packet,scapy
Scapy packet time interpretation
1
1
1
47,197,343
0
1
0
So I see this issue on the Google Selenium site, but it has not been resolved yet: when you element.send_key('12345') it will return '123' - the 5 is parsed as backspace. Is there a workaround for this? Using the latest Selenium, Chrome, chromedriver, Python 2.7, Ubuntu 12.04.
false
30,073,603
0
0
0
0
As @Epiwin mentioned, it is a horrible bug in TightVNC, or in the combination of chromedriver and it. I removed TightVNC completely and installed TigerVNC instead, and eventually it worked. By the way, I don't know why, but the speed of the remote connection also increased after migrating to TigerVNC, which was another good point for me.
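If replacing the VNC server isn't an option, one common sidestep (an alternative technique, not part of the fix above) is to bypass synthetic key events entirely and set the field's value via JavaScript; the locator is a placeholder:

    # set the value directly, avoiding key events altogether
    element = driver.find_element_by_name("quantity")   # placeholder locator
    driver.execute_script("arguments[0].value = arguments[1];", element, "12345")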
0
4,525
0
6
2015-05-06T10:07:00.000
python,selenium
Python: Selenium send_key can't type numbers like 5 or 6
1
1
7
66,779,985
0
0
0
To experiment with a recommender, I want a good amount of data consisting of a users-to-bookmarks mapping, optionally with some user info, page tags, etc. I am trying to use pydelicious for that, but without success. Following a book, I am trying to run get_popular(), but every time it returns a result whose description is "something went wrong".
false
30,104,305
0.197375
0
0
1
I had the same problem; this works for me. Before installing the package, change this line: rss = http_request('url').read() to: rss = http_request('http://feeds.delicious.com/v2/rss').read() It is located in __init__.py. Then you can install the package by running python setup.py install
0
366
0
1
2015-05-07T14:39:00.000
python,dataset,mahout-recommender,delicious-api
How to use pydelicious to get bookmark data with user mapping
1
1
1
32,162,113
0
1
0
A question about testing proper nesting of XML tags. I have a list of tags, extracted top to bottom from an XML file; closing tags are clearly indicated by a forward slash. The /to and /lastname tags are incorrectly nested - they should be switched, since /lastname should sit inside the to, /to parent tags. tag_list = ['note', 'to', 'firstname', '/firstname', 'lastname', '/to', '/lastname', '/note'] What would be the code, or a direction, to spot that the /lastname tag is outside its parent to, /to pair? Cheers.
true
30,114,069
1.2
0
0
1
Make an empty stack, then iterate through the list: if you find a start tag, push it onto the stack; if you find an end tag, compare it to the entry on top of the stack - if the stack is empty or the top doesn't match, fail; if it matches, pop the stack and continue. At the end of the iteration: if the stack is empty, declare success; otherwise fail.
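A direct translation of that stack check, using the tag_list from the question:

    def check_nesting(tags):
        stack = []
        for tag in tags:
            if tag.startswith('/'):          # end tag
                if not stack or stack[-1] != tag[1:]:
                    return False             # empty stack or mismatched top
                stack.pop()
            else:                            # start tag
                stack.append(tag)
        return not stack                     # success only if everything was closed

    tag_list = ['note', 'to', 'firstname', '/firstname',
                'lastname', '/to', '/lastname', '/note']
    print(check_nesting(tag_list))   # -> False (the /to and /lastname overlap)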
0
79
0
0
2015-05-08T00:54:00.000
python,xml,nested
Detecting incorrect nesting of XML tags in Python
1
1
2
30,114,393
0
1
0
Say I have a typical web server that serves standard HTML pages to clients, and a websocket server running alongside it used for realtime updates (chat, notifications, etc.). My general workflow is when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection. My concern is, if I want to scale things up a bit, and add another realtime server, it seems my only options are: Have the main server keep track of which realtime server the client is connected to. When that client receives a notification/chat message, the main server forwards that message along to only the realtime server the client is connected to. The downside here is code complexity, as the main server has to do some extra bookkeeping. Or instead have the main server simply pass that message along to every realtime server; only the server the client is connected to would actually do anything with it. This would result in a number of wasted messages being passed around. Am I missing another option here? I'm just trying to make sure I don't go too far down one of these paths and realize I'm doing things totally wrong.
true
30,114,325
1.2
0
0
1
If the scenario is: a) the main web server raises a message upon an action (let's say a record is inserted), and b) it notifies the appropriate real-time server - you could decouple these two steps by using an intermediate pub/sub architecture that forwards the messages to the intended recipient. An implementation would be: 1) you have a Redis pub/sub channel, and upon a client connecting to a real-time socket, you start listening on that channel; 2) when the main app wants to notify a user via the real-time server, it pushes a message to the channel; the real-time server gets it and forwards it to the intended user. This way you decouple the realtime notification from the main app, and you don't have to keep track of where the user is.
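A minimal sketch with redis-py; the one-channel-per-user naming scheme is an assumption:

    import json
    import redis

    r = redis.Redis()

    # main app side: publish a notification for user 42
    r.publish("user:42", json.dumps({"type": "chat", "text": "hi"}))

    # realtime server side: subscribe when the websocket connects
    pubsub = r.pubsub()
    pubsub.subscribe("user:42")
    for message in pubsub.listen():
        if message["type"] == "message":
            payload = json.loads(message["data"])
            # forward payload over the user's websocket here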
0
559
0
2
2015-05-08T01:26:00.000
python,websocket,real-time,scalability
Scaling a decoupled realtime server alongside a standard webserver
1
2
4
30,188,812
0
1
0
Say I have a typical web server that serves standard HTML pages to clients, and a websocket server running alongside it used for realtime updates (chat, notifications, etc.). My general workflow is when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection. My concern is, if I want to scale things up a bit, and add another realtime server, it seems my only options are: Have the main server keep track of which realtime server the client is connected to. When that client receives a notification/chat message, the main server forwards that message along to only the realtime server the client is connected to. The downside here is code complexity, as the main server has to do some extra bookkeeping. Or instead have the main server simply pass that message along to every realtime server; only the server the client is connected to would actually do anything with it. This would result in a number of wasted messages being passed around. Am I missing another option here? I'm just trying to make sure I don't go too far down one of these paths and realize I'm doing things totally wrong.
false
30,114,325
0
0
0
0
Changed the answer because a reply indicated that the "main" and "realtime" servers are already load-balanced clusters and not individual hosts. The central scalability question seems to be: "when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection." Emphasis on the word "related". Assume you have 10 "main" servers and 50 "realtime" servers, and an event occurs on main server #5: which of the websockets would be considered related to this event? The worst case is that any event on any "main" server would need to propagate to all websockets. That is O(N^2) complexity, which counts as a severe scalability impairment. This O(N^2) complexity can only be prevented if you can group the related connections into groups that don't grow with the cluster size or the total number of connections. Grouping requires state memory, to store which group(s) a connection belongs to. Remember that there are three ways to store state: global memory (memcached / redis / DB, ...); sticky routing (load balancer configuration); client memory (cookies, browser local storage, link/redirect URLs). Option 3 counts as the most scalable one because it omits a central state store. For passing the messages from the "main" to the "realtime" servers, that traffic should by definition be much smaller than the traffic towards the clients. There are also efficient frameworks to push pub/sub traffic.
0
559
0
2
2015-05-08T01:26:00.000
python,websocket,real-time,scalability
Scaling a decoupled realtime server alongside a standard webserver
1
2
4
30,170,295
0
0
0
I apologize that I couldn't find a proper title; let me explain what I'm working on. I have a Python IRC bot, and I want to keep track of how long users have been idle in the channel and let them earn things (tied to Skype/Minecraft/my website) for each x hours they're idle in the channel. I already have everything to keep track of each user and have them validated with the site and so on, but I am not sure how to keep track of the time they're idle. I capture join/leave/part messages. How can I set up a timer when a user joins, keep that timer running alongside the timers for all the other users in the channel, and each hour they've been idle (not all at the same time) do something and then restart that user's timer?
true
30,118,631
1.2
1
0
1
Two general ways: 1) create a separate timer for each user when he joins, do something when the timer fires, and destroy it when the user leaves; or 2) have one timer that fires, say, every second (or ten seconds) and iterate over all the users when it fires to see how long each has been idle. A more precise answer would require deeper insight into your architecture, I'm afraid.
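A sketch of the second approach (a single sweep timer); reward is a placeholder for your site/Skype/Minecraft hook:

    import time
    import threading

    idle_since = {}          # nick -> timestamp of join / last activity

    def on_join(nick):
        idle_since[nick] = time.time()

    def on_part(nick):
        idle_since.pop(nick, None)

    def sweep():
        now = time.time()
        for nick, since in list(idle_since.items()):
            if now - since >= 3600:          # one full idle hour
                reward(nick)                 # placeholder hook
                idle_since[nick] = now       # restart this user's clock
        threading.Timer(10, sweep).start()   # check again in ten seconds

    sweep()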
0
103
0
0
2015-05-08T07:52:00.000
python,timer
Keep track of items in array with timer
1
1
1
30,118,684
0
0
0
The Grooveshark music streaming service has been shut down without prior notice. I had many playlists that I would like to recover (playlists I made over several years). Is there any way I could recover them? A script or something automated would be awesome.
false
30,124,541
0.462117
0
0
5
You can access some information left in your browser by checking the localStorage variables: 1) go to grooveshark.com; 2) open the dev tools (right click -> Inspect Element); 3) go to Resources -> LocalStorage -> grooveshark.com; 4) look for the library variables: recentListens, library and storedQueue; 5) parse those variables to extract your songs. This might not give you your playlists, but it can help you retrieve some of your collection.
0
12,131
0
4
2015-05-08T13:00:00.000
python,backup,playlist,recovery,grooveshark
How can I make a script to recover my Grooveshark playlists now that the service has been shut down?
1
1
2
30,194,984
0
1
0
I am trying to import the web module of Python to create a simple Hello World program in my browser using web.py. When I run it from the command line I get errors about the web.py package; if I run it from IDLE, it works fine.
false
30,155,400
0
0
0
0
Sometimes the problem is in the PYTHONPATH: the IDE modifies the environment variables, so when you run the script from there you don't have a problem.
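A quick way to compare the two environments is to print the interpreter and search path from both the command line and IDLE:

    import sys

    print(sys.executable)   # which python binary is actually running
    for p in sys.path:      # where it will look for the 'web' package
        print(p)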
1
83
0
1
2015-05-10T18:54:00.000
python,python-2.7,module,web.py
Unable to import python module 'web'?
1
1
1
53,768,539
0
0
0
I have the OpenStack UI running. I made some changes in the local_settings.py file, restarted the Horizon service using service httpd restart, and tried to hit the OpenStack UI, but it returns the error "HTTP 400 Bad request". When I revert all the changes and restart the service, the error is still there. Please help!
false
30,174,910
0
0
0
0
If you are running Horizon on Apache, then you need to check the Apache logs to identify the issue. The Horizon logs will not contain anything if Apache was not able to execute the WSGI script for Horizon.
0
808
1
0
2015-05-11T18:21:00.000
python,openstack,openstack-horizon
Unabe to Get OpenStack Login prompt
1
2
2
30,254,113
0
0
0
I have the OpenStack UI running. I made some changes in the local_settings.py file, restarted the Horizon service using service httpd restart, and tried to hit the OpenStack UI, but it returns the error "HTTP 400 Bad request". When I revert all the changes and restart the service, the error is still there. Please help!
false
30,174,910
0.099668
0
0
1
Totally agree with the above comments: check your apache or httpd log files. Most probably the error is caused by improper info added (for CentOS) under the file /etc/openstack-dashboard/local_settings, e.g.: ALLOWED_HOSTS = ['XX.XX.XX.XX', 'ControllerName', 'localhost'] Hopefully this resolves the issue.
0
808
1
0
2015-05-11T18:21:00.000
python,openstack,openstack-horizon
Unabe to Get OpenStack Login prompt
1
2
2
52,743,233
0
0
0
I'm using the time and datetime modules in server and client scripts. When I receive data containing a date from the server, it is wrong and differs from the client because of a wrong timezone and wrong time on the server. What is the best way to deal with this? Thanks
true
30,211,422
1.2
0
0
1
You should either convert the time to the time zone of the client, or show the time zone. I.e., for a client in Warsaw show "2015-05-14 11:40" or "2015-05-14 09:40 GMT" or similar. Which one you want depends a lot on what data you are showing. How to figure out the client timezone is a long topic that is probably covered elsewhere here, and again depends on the application: you can convert it with JavaScript, or have a setting, or make a guess based on country and IP address, etc. Loads of options.
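A sketch of the server side of that advice on Python 3.9+ (zoneinfo): keep timestamps in UTC and convert only for display; the client zone is hard-coded here as an example:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo    # Python 3.9+; older versions can use pytz

    utc_now = datetime.now(timezone.utc)            # store/transfer times in UTC
    warsaw = utc_now.astimezone(ZoneInfo("Europe/Warsaw"))

    print(utc_now.strftime("%Y-%m-%d %H:%M %Z"))    # e.g. 2015-05-14 09:40 UTC
    print(warsaw.strftime("%Y-%m-%d %H:%M %Z"))     # e.g. 2015-05-14 11:40 CEST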
0
197
0
1
2015-05-13T09:57:00.000
python,date,datetime,time,timezone
Server and client are python scripts, are there a way to synchronize server to client time without changing system dates anywere?
1
1
1
30,234,068
0
1
0
How can I send an XML file to a Django app server from a command-line interface? I tried using curl, but the data I get on the Django side becomes a dictionary and doesn't look like XML. In addition, I need some basic authorization mechanism. I tried curl examples, but to no avail. Maybe I am expecting the wrong thing. Can someone guide me to the right tutorial or examples?
true
30,215,568
1.2
0
0
0
request.body should contain the XML as a string. "I tried using curl, but the data I get on the Django side becomes a dictionary" - can you please share the code blocks you use to receive/print the data?
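For reference, a stdlib sketch of posting raw XML with HTTP Basic auth; the URL and credentials are placeholders:

    import base64
    import urllib.request

    xml_bytes = open("payload.xml", "rb").read()
    token = base64.b64encode(b"user:password").decode("ascii")

    req = urllib.request.Request(
        "http://localhost:8000/api/upload/",
        data=xml_bytes,
        headers={"Content-Type": "application/xml",
                 "Authorization": "Basic " + token})
    print(urllib.request.urlopen(req).status)

    # on the Django side the raw document is then available as request.body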
0
238
0
0
2015-05-13T13:00:00.000
python,xml,django,curl
Send a XML file with authorization to a django app
1
1
1
30,215,688
0
1
0
I am new to the server side, but I have gotten a chance to design and implement a server that will cover around 2000~3000 clients. I am thinking of using Python and WebSockets, though I don't know whether this choice is appropriate. At this point I am curious how to design the server; I think there must be some architecture normally used depending on the capacity the server handles. Otherwise, could I use a WebSocket server offered by a Python package like Tornado or Django? I hope I can get some information on this. Any advice?
false
30,221,772
0
0
0
0
One possible solution is Pyramid, sockjs, gunicorn, and gevent. Nginx probably suits better as a frontend than Apache, but of course if you do not have any lengthy processing on the backend, any decent asynchronous Python server with websocket and sockjs support (not sure about socket.io as an alternative) will work for you out of the box. Lengthy processing should be offloaded to queue workers anyway, so an asynchronous server will fit the bill. Just check whether all the datastore/database adapters you use are compatible with your server solution, be it asynchronous or multi-threaded.
0
117
0
0
2015-05-13T17:39:00.000
python,websocket,server,capacity
Server architecture depending on the capacity
1
1
2
30,222,997
0
0
0
Is there a way to "cache" the requests I make with the python "requests" module in a way that even if I go offline the module still returns the webpage as if I was online? How can I achieve something like this? Currently I only found caching libraries that just cache the webpage but you still have to be online...
true
30,233,406
1.2
0
0
-1
For anyone else searching: the module that does the job is called "vcrpy".
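A minimal vcrpy sketch: the first run records the HTTP interaction to a cassette file, and later runs (even offline) replay it; the cassette path is arbitrary:

    import vcr
    import requests

    @vcr.use_cassette("fixtures/example.yaml")
    def fetch():
        # online: performs the request and records it to fixtures/example.yaml
        # offline: replays the recorded response from the cassette
        return requests.get("http://example.com").text

    print(fetch()[:60])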
0
568
0
2
2015-05-14T09:09:00.000
python-2.7,caching,python-requests
Caching python web requests for offline use
1
1
2
30,299,626
0
0
0
I'm making several requests using threads, and I am not concerned about the delay required to get the responses; the most important thing for me is receiving the response at all. I set these values (anything bigger returns errors): c.setopt(pycurl.CONNECTTIMEOUT, 3600000000000000000) c.setopt(pycurl.TIMEOUT, 3600000000000000000) But I still got the same error just after receiving some responses: error: (7, 'Failed to connect to localhost port 4000: Connection timed out') Please can someone help me. Thank you very much.
false
30,270,075
0
0
0
0
Looking at the error ('Failed to connect to localhost port 4000: Connection timed out'): are you sure the domains/URLs you are trying to connect to do not include localhost, or a domain that resolves to localhost (i.e. 127.0.0.1)? Also, port 4000 is not a typical port for a web server to listen on.
0
259
0
0
2015-05-15T23:10:00.000
python-2.7
Connection timed out (pycurl)
1
1
1
35,987,562
0
0
0
I'm querying the Mixpanel API pretty consistently, but every so often, the request does not go through and I am given this error: urllib2.URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known> I did some searching and there might be some caching issues, so I tried this in terminal: dscacheutil -flushcache I tried the above last night and it worked, but now when I am greeted with the same error and I try to flush the cache, I am still given the same error. There haven't been any code changes that would have given me this error. Any thoughts why this is happening? P.S. Yes, I know urllib2 blows. I would prefer to be using requests, but the urllib2 calls are in a mixpanel client and I'd prefer not to mess around with it.
true
30,281,349
1.2
0
0
0
The problem seemed to be solved by a combination of Ajay's comment - "Try after installing this: pip install pyopenssl ndg-httpsclient pyasn1, if you are using Python 2" - and the OS X Yosemite version of DNS cache flushing: sudo discoveryutil mdnsflushcache
0
126
0
0
2015-05-16T22:19:00.000
python,urllib2
urllib2 request randomly stops working without code changes
1
1
1
30,281,634
0
0
0
I was wondering if there is a way to detect, from a Python script, whether the computer is connected to the internet via a tethered cell phone. I have a Python script which runs a bunch of network measurement tests. Ideally, depending on the type of network the client is using (ethernet, wifi, LTE, ...), the tests should change. The challenge is how to get this information from the Python client without asking the user to provide it - especially detecting tethering.
false
30,330,543
0.099668
1
0
1
Normally not - from the computer's perspective the tethered cell phone is simply another wifi router/provider. You might be able to detect some of the phone carrier networks from the traceroute info to some known servers (DNS names, or even IP address ranges of network nodes - they don't change that often). If you have control over the phone's tethering you could also, theoretically, use the phone's wifi SSID (or even IP address ranges) to identify tethering via individual/specific phones (not 100% reliable either, unless you know that you can't get those parameters from other sources).
0
605
0
1
2015-05-19T15:58:00.000
python,wifi,tethering
Detect if connected to the internet using a tethered phone in Python
1
1
2
30,333,512
0
1
0
I am using GoogleScraper for some automated searches in Python. GoogleScraper keeps the search results for queries in its database, named google_scraper.db. E.g. if I searched site:*.us engineering books and, due to an internet issue while GoogleScraper was writing the JSON file, a result was missed and the JSON file is not what it should be, then when I search that command again with GoogleScraper it gives the same result even though the internet is now working fine. In other words, GoogleScraper maintains a database entry for each query it has searched and does not search it again; when I repeat a query whose result is stored in the database, it does not give a new result but returns the previously stored one.
true
30,347,571
1.2
0
1
0
I solved the issue of GoogleScraper keeping old searches in its database: first run the following command: GoogleScraper --clean This command cleans all cached results, and we can search again and get fresh results. Regards!
0
192
0
0
2015-05-20T10:54:00.000
python,bash,web-scraping
GoogleScraper keeps searches in database
1
1
1
30,433,834
0
0
0
I have both Python 2.7 and 3.4 installed on my Ubuntu 14.04 machine. I want to install the 'requests' module so it is accessible from Py3.4. When I issued pip install requests on my terminal cmd line I got back: "Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages" How can I direct pip to install requests for 3.4 even though it is already in 2.7?
false
30,362,600
0
0
0
0
I just reinstalled pip and it works, but I still want to know why it happened... I used apt-get remove --purge python-pip, then apt-get install python-pip, and it works - but don't ask me why...
1
316,112
1
51
2015-05-21T00:29:00.000
python-3.x,pip,python-requests
How to install requests module in Python 3.4, instead of 2.7
1
2
6
53,805,505
0
0
0
I have both Python 2.7 and 3.4 installed on my Ubuntu 14.04 machine. I want to install the 'requests' module so it is accessible from Py3.4. When I issued pip install requests on my terminal cmd line I got back: "Requirement already satisfied (use --upgrade to upgrade): requests in /usr/lib/python2.7/dist-packages" How can I direct pip to install requests for 3.4 even though it is already in 2.7?
false
30,362,600
0
0
0
0
I was facing the same issue with Beautiful Soup, and I solved it with this command; your issue should get rectified too. You are unable to install requests for Python 3.4 because your Python libraries are not updated. Use this command: apt-get install python3-requests Just run it; it will ask you to allow roughly 222 MB of disk space - press Y and wait for the process to complete. After the whole process ends, your problem should be resolved.
1
316,112
1
51
2015-05-21T00:29:00.000
python-3.x,pip,python-requests
How to install requests module in Python 3.4, instead of 2.7
1
2
6
53,132,450
0
0
0
Is there any way to find the local port Firefox uses when it opens a URL? Platform: Ubuntu or any other Linux distro.
true
30,383,356
1.2
0
0
0
Practically speaking, no, for a couple of reasons. First, multiple connections: when you open, say, http://www.example.com/, 99.9% of the time the first page will have HREFs to other pages, often more than one. Firefox will typically open multiple additional connections to simultaneously pull down those different pages, so there isn't "a port" but multiple ports. Second, there's simply no clean way to find out which port Firefox is using. Every time Firefox opens a connection, it simply creates a socket and connects; the kernel dynamically allocates an unused port for that socket. Firefox may not even be aware of the port number itself (it could get that info if it wanted, but I can't see why it would). The information is derivable via lsof(8), but that wouldn't give it to you in real time; i.e., for most URLs, by the time you ran lsof and decoded the information it would be stale (the connection would be closed). The port could also (potentially) be gotten by ptrace(2)-ing Firefox (or letting strace(1) do that), but you would probably drastically affect the performance of Firefox by doing so, and decoding the output would be quite complicated.
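For a best-effort snapshot (with the staleness caveat above), psutil can list Firefox's current sockets; this is a sketch, and the process name match is an assumption:

    import psutil

    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == "firefox":
            for conn in proc.connections(kind="inet"):
                # laddr is the (ip, port) pair the kernel allocated locally
                print(proc.pid, conn.laddr, "->", conn.raddr, conn.status)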
0
78
0
0
2015-05-21T20:05:00.000
python,linux,firefox,browser
Python script to dynamically find the port to which firefox listens on opening a url?
1
1
1
30,383,779
0
0
0
I'm using libtorrent in Python, but it doesn't recognize a magnet link that looks like magnet:?... with only the sha1 hash; it needs the &dn parameter / tracker info to resolve the torrent. By the way, qBittorrent, which uses the same libtorrent library, can recognize such a hash-only magnet link.
false
30,406,945
0.197375
0
0
1
If the magnet link doesn't have any trackers in it, then the client needs to have the DHT active and get peers that way instead.
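A sketch of enabling the DHT in the Python bindings (this follows the API of older libtorrent releases, and the exact calls may differ between versions; the router hosts are the usual bootstrap nodes):

    import libtorrent as lt

    ses = lt.session()
    ses.add_dht_router("router.bittorrent.com", 6881)   # DHT bootstrap nodes
    ses.add_dht_router("dht.transmissionbt.com", 6881)
    ses.start_dht()

    params = {"save_path": "."}
    handle = lt.add_magnet_uri(ses, "magnet:?xt=urn:btih:<infohash>", params)
    # once the DHT finds peers, handle.has_metadata() becomes True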
0
310
0
1
2015-05-22T22:48:00.000
python,libtorrent
I can't receive magnet-link metadata via libtorrent-python, without specifying udp-protocol of tracker
1
1
1
30,408,824
0
1
0
I'm looking into what it would take to add a feature to my site, so this is a pretty naive question. I'd like to connect buyers and sellers via an email message once the buyer clicks "buy". I can see how I could do this in JavaScript, querying the user database and sending an email with both parties involved. What I'm wondering is whether there's a better way, playing monkey-in-the-middle so they only receive an email from my site, which is then automatically forwarded to the other party. That way they don't have to remember to hit reply-all, just reply, and their email addresses remain anonymous. Again, assuming I generate a unique subject line with the transaction ID, I could apply some rules to automatically forward the email from one party to the other - but is there an API or library which can already do this for you?
false
30,427,300
0
1
0
0
What you're describing would be handled largely by your backend. If this were my project, I would choose the following simple route: store the messages the buyers/sellers send in your own database, then simply send notification emails when messages are sent, and have the parties reply to each other on your own site, like Facebook and eBay do. An example flow (after gathering the buyer's and seller's email addresses via registration): 1) the buyer enters a message and clicks the 'Send Message' button on the seller's page; 2) the form is posted (via AJAX or a plain POST) to a backend script; 3) your backend code generates an email message and sets the 'To' field to the seller; 4) the seller gets an email alert showing the buyer's message; 5) the seller logs on to your site (made easy by a URL in the email) to respond; 6) the seller enters a response message on your site and hits 'Reply'; 7) the buyer gets an email notification with the message body and a link to your site where they can compose a reply; ...and so on. So replies are authored on-site rather than as an email 'reply'. If you choose this route, there are some simple third-party "transactional email" providers - that's the phrase to search for. I've personally used SendGrid and found it easy to set up and use; it has a simple plugin for every major framework. There is also Mandrill, which is newer but gets good reviews as well.
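The notification step itself needs nothing more than the standard library; a sketch with smtplib, where the SMTP host, addresses and site URL are placeholders:

    import smtplib
    from email.mime.text import MIMEText

    def notify(to_addr, message_body, thread_url):
        msg = MIMEText(message_body + "\n\nReply here: " + thread_url)
        msg["Subject"] = "New message about your listing"
        msg["From"] = "noreply@yoursite.example"
        msg["To"] = to_addr
        with smtplib.SMTP("localhost") as smtp:   # or your provider's SMTP host
            smtp.send_message(msg)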
0
74
0
0
2015-05-24T19:12:00.000
javascript,php,python,email
Email API for connecting in a marketplace?
1
1
1
30,427,829
0
1
0
I'm using the Facebook Graph API to get all the posts on my own timeline, and there is a problem. Because it was my birthday yesterday and all of the posts were made on my profile by others (my friends), these posts are shown clubbed into one. This is the response I get for one of the posts when I make an API call to /v2.1/me/feed: "189 friends posted on your timeline for your birthday." But I want to read (get) the posts separately. How can I do that?
true
30,429,084
1.2
0
0
1
Facebook aggregates all your friends' birthday posts into a single post. Once the posts are aggregated there is no way to retrieve the individual posts, as they no longer exist.
0
67
0
0
2015-05-24T22:43:00.000
python,facebook,facebook-graph-api
How to get Facebook posts of same type grouped into one, separately?
1
1
1
30,502,981
0
0
0
Making a very simple tic-tac-toe game in Python using a P2P architecture with sockets. Currently my GUI has a 'Create' button that opens and draws a new game-board window, creates a socket, binds, listens, and accepts a connection. The 'Join' button opens and draws a new game board and connects to that 'server'. When you create a game, I'm trying to show a message saying 'Waiting for player...', with a cancel button to stop and go back to the main menu, and have it disappear on its own once a connection has been accepted. I tried using tkMessageBox, but the script stops until the user clears the message, so there's no way for me to listen/accept until the user presses something. What other way is there for me to accomplish this? Thanks!
true
30,430,162
1.2
0
0
1
Sounds like a threading issue. I'm unfamiliar with Tk graphics, but I'd imagine what you need to do is start the window showing the "waiting for player" message, and have that window loop waiting for something to happen. While the message displays, the "listening" must be done on another thread, which signals back to the main window when someone has connected, using a semaphore or a queue. In your main GUI thread's loop: check the queue or semaphore for values, and if there's a value you expect, close the box. This check needs to be non-blocking so that the GUI thread can still check for input from the user; checking for user input is probably done using callback functions, though.
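A compact sketch of that pattern with Tkinter and a queue; the accept logic is reduced to a stub and the port is arbitrary:

    import queue
    import socket
    import threading
    import tkinter as tk

    events = queue.Queue()

    def wait_for_player():
        srv = socket.socket()
        srv.bind(("", 5000))
        srv.listen(1)
        conn, addr = srv.accept()        # blocks on the worker thread only
        events.put(("connected", addr))

    root = tk.Tk()
    tk.Label(root, text="Waiting for player...").pack()
    tk.Button(root, text="Cancel", command=root.destroy).pack()

    def poll():
        try:
            events.get_nowait()          # a player connected
            root.destroy()               # close the waiting window
        except queue.Empty:
            root.after(100, poll)        # check again soon; GUI stays responsive

    threading.Thread(target=wait_for_player, daemon=True).start()
    root.after(100, poll)
    root.mainloop()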
0
339
0
4
2015-05-25T01:35:00.000
python,sockets,p2p
Python: Show message 'Waiting for player...' while socket listens for connection
1
1
1
30,430,588
1
0
0
I have a web.py program acting as a web service, delivering responses according to the files in its directory. Now I want to build a network of servers, each running an instance of the program. Additionally, each server has a list of addresses to other servers. The motivation behind this is to allow different servers to store different files, delivering a different part of the (now distributed) service. So far so good, each server looks at its own files, does its thing to create the responses and then concatenates to that all responses he gets from his neighbors in the network. The problem is that this can cause circular requests if two servers have each other on their list (directly or indirectly). How can I prevent this? I was thinking about appending a list of server addresses to each request. A server will add itself to the list when receiving the request. It will only request a response from those neighbors that are not on the list already. Is this a feasible approach? Is there something built into http (requests) to collect responses from several servers without creating endless loops?
true
30,485,902
1.2
0
0
0
Your approach should work. It might be faster to check whether the server sees itself on the request list - that means it already processed the request, so all the other servers in its list should already be in the response list, and there is no need to check each and every one of them. Or maybe use that as a redundant sanity check? Unless your system allows duplicate requests coming in simultaneously on multiple servers. AFAIK HTTP handles only client/server (i.e. point-to-point) communication in a 1:1 request/response model - no loops.
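A sketch of the visited-list idea with the requests library; the /query endpoint, the JSON shape and local_answer are assumptions:

    import requests

    MY_ADDRESS = "http://server-a.example.com"
    NEIGHBORS = ["http://server-b.example.com", "http://server-c.example.com"]

    def handle_query(payload):
        visited = set(payload.get("visited", []))
        if MY_ADDRESS in visited:
            return []                          # already processed here: break the cycle
        visited.add(MY_ADDRESS)

        responses = [local_answer(payload)]    # local_answer is a placeholder
        for peer in NEIGHBORS:
            if peer not in visited:
                r = requests.post(peer + "/query",
                                  json={**payload, "visited": sorted(visited)})
                responses.extend(r.json())
        return responses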
0
155
0
0
2015-05-27T14:47:00.000
python,http,webserver,web.py,circular-reference
How can I prevent circular http requests in a network of servers?
1
1
1
30,489,207
0
0
0
I'm trying to do some debugging of a Celery task using rdb, but despite the fact that I can connect to the socket using telnet, it doesn't give me any data; it looks like Pdb is broken and doesn't have any data to inspect. It starts with this line: c:\python27\lib\contextlib.py(21)__exit__() -> def __exit__(self, type, value, traceback): (Pdb) My setup is RabbitMQ and Celery running on localhost, together with a Python virtualenv. Any idea what may be wrong?
true
30,486,773
1.2
0
0
2
For anyone who has this problem and finds their way here from Google: I can't explain why rdb drops you into the middle of contextlib, but I have found that if you execute the r(eturn) command a couple of times, you'll work your way back into the function you made your rdb.set_trace() call from.
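For reference, the usual way to plant the breakpoint (celery.contrib.rdb; the telnet port defaults to 6899):

    from celery import task
    from celery.contrib import rdb

    @task()
    def add(x, y):
        rdb.set_trace()   # pauses here; connect with: telnet localhost 6899
        return x + y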
0
666
1
1
2015-05-27T15:25:00.000
python,celery
Remote Debug of Celery using rdb
1
1
1
34,818,641
0
0
0
How do I convert "YANG" data model to "JSON"? As there is many many docs available in web, in that they changed in YANG synatx to JSON but how the value for the leaf or leaf list they are getting? from where and how it will get actual data in JSON from YANG?
false
30,503,356
-1
0
0
-4
YANG is a modeling language, not a data generation language. What you are asking for is a simulator that contains the same (or pseudo) logic as your application, to generate data.
1
13,938
0
5
2015-05-28T10:04:00.000
python,json,python-2.7,data-modeling,ietf-netmod-yang
how to convert YANG data model to JSON data?
1
1
2
30,657,742
0
0
0
What command should I use in the command prompt to install the requests module for Python 3.4? pip install requests did not help: while running the script I still get the error ImportError: No module named 'requests'.
false
30,536,946
0.033321
0
0
1
If you have problems with the python command, you need to add the route C:\Python34 (or wherever you have Python installed) to your PATH: right click on "My Computer", click "Properties", click "Advanced system settings" in the side panel, click "Environment Variables", then click "New" below the system variables (or find the Path variable and edit it) and add ;C:\Python34, with the semicolon. Now you can run these commands: cd C:\Python34 python -m pip install requests
1
108,064
0
20
2015-05-29T18:48:00.000
pip,python-3.4
How to install requests module in python 3.4 version on windows?
1
4
6
48,930,745
0
0
0
What command should I use in the command prompt to install the requests module for Python 3.4? pip install requests did not help: while running the script I still get the error ImportError: No module named 'requests'.
true
30,536,946
1.2
0
0
58
python -m pip install requests or py -m pip install requests
1
108,064
0
20
2015-05-29T18:48:00.000
pip,python-3.4
How to install requests module in python 3.4 version on windows?
1
4
6
30,537,052
0
0
0
What command should I use in the command prompt to install the requests module for Python 3.4? pip install requests did not help: while running the script I still get the error ImportError: No module named 'requests'.
false
30,536,946
0.099668
0
0
3
On Windows, I found that navigating to my Python folder via CMD worked: cd C:\Python36\ and then running the command line: python -m pip install requests
1
108,064
0
20
2015-05-29T18:48:00.000
pip,python-3.4
How to install requests module in python 3.4 version on windows?
1
4
6
42,662,073
0
0
0
What command should I use in the command prompt to install the requests module for Python 3.4? pip install requests did not help: while running the script I still get the error ImportError: No module named 'requests'.
false
30,536,946
0
0
0
0
After installing Python (which comes with pip), open a command prompt (cmd.exe) and input pip install requests. That should do it.
1
108,064
0
20
2015-05-29T18:48:00.000
pip,python-3.4
How to install requests module in python 3.4 version on windows?
1
4
6
64,150,710
0
0
0
I'm looking for the best way to invoke a Python script on a remote server, poll its status, and fetch the output. Preferably the implementation would also be in Python. The script takes a very long time to execute, and I want to poll intermittently to see which actions are being performed. Any suggestions?
true
30,565,824
1.2
1
0
1
There are so many options. 'Polling' is generally a bad idea, as it needlessly occupies the CPU. You could have your script send you status changes, or you could have your script write its actual status into a (remote) file (either overwriting it or appending to a log file) and look into that file - this is probably the easiest way; you can monitor the file with tail -f file over the link. And there are many more - and more complicated - options.
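A sketch of the log-file approach: the remote script appends timestamped status lines, and anything (including tail -f over ssh) can follow them; the path is arbitrary:

    import time

    STATUS_FILE = "/tmp/job.status"

    def log_status(message):
        with open(STATUS_FILE, "a") as f:
            f.write("%s %s\n" % (time.strftime("%Y-%m-%d %H:%M:%S"), message))

    log_status("started")
    # ... long-running work, calling log_status("step N done") as it goes ...
    log_status("finished")

    # from the client: ssh user@server tail -f /tmp/job.status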
0
235
0
0
2015-06-01T04:53:00.000
python,polling,remote-server
Invoke a python script on a remote server and poll the status
1
1
1
30,566,324
0
0
0
I have two PCs and I want to monitor the internet connectivity on both of them, and make it available on a page as to whether they're currently online and running. How can I do that? I'm thinking of a cron job executed every minute that sends a POST to a file located on a server, which in turn writes the connectivity status "online" to a file. In the page where the statuses are displayed, read from both status files and display whether they're online or not. But this feels like a sloppy idea. What alternative suggestion do you have? (The answer doesn't necessarily have to be code; I'm not looking for copy-paste solutions. I just want an idea, a nudge in the right direction.)
true
30,567,284
1.2
0
0
1
I would suggest just a GET request (you only need a ping to indicate that the PC is on), sent periodically to, say, a Django server; when you query a page on the Django server, it shows a webpage indicating the status of each PC. In the Django server, record the time each GET is received; if the time between the last GET and the current time is too large, set a flag to false. That flag is later visible when the URL is queried, via the views. I don't think this ends up sloppy - it's just a trivial solution where you don't really have to dig too deep to make it work.
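A sketch of both halves; the URL and the 90-second staleness threshold are arbitrary choices:

    # on each monitored PC, run from cron every minute:
    import urllib.request
    urllib.request.urlopen("http://monitor.example.com/ping?host=pc1")

    # on the server, conceptually:
    import time
    last_seen = {}                       # host -> timestamp, updated by the ping view

    def is_online(host, max_age=90):
        ts = last_seen.get(host)
        return ts is not None and time.time() - ts < max_age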
0
80
0
1
2015-06-01T06:48:00.000
python,http,cron,connection,monitor
How to monitor the Internet connectivity on two PCs simultaneously?
1
1
2
30,567,385
0
0
0
I am making a class in Python that relates a lot of nodes and edges together. I also have other operations that can take two separate objects and merge them into a single object of the same type, and so on. However, I need a way to give every node a unique ID for easy lookup. Is there a "proper way" to do this, or do I just have to keep an external ID variable that I increment and pass into my class methods every time I add more nodes to any object? I also considered generating a random string for each node upon creation, but there is still a risk of collision error (even if this probability is near-zero, it still exists and seems like a design flaw, if not a longwinded overengineered way of going about it anyway).
false
30,580,929
0
0
0
0
Pretty much both of your solutions are what is done in practice. The first solution, incrementing a number, will give you uniqueness as long as you don't overflow (with Python's big integers this isn't really a problem). The disadvantage of this approach is that if you introduce concurrency, you have to use locking to prevent data races when incrementing and reading your external value. The other approach, generating a random number, works well in the concurrency situation: the larger the number of bits you use, the less likely a collision is. In fact, you can pretty much guarantee you won't have collisions if you use, say, 128 bits for your ID. An approach you can use to further guarantee you don't have collisions is to make your unique IDs something like TIMESTAMP_HASHEDMACHINENAME_PROCESSID/THREADID_UNIQUEID; then you pretty much can't have collisions unless you generate two of the same UNIQUEID on the same process/thread within one second. MongoDB does something like this, where they just increment the UNIQUEID. I am not sure what they do in the case of an overflow (which I assume doesn't happen too often in practice); one solution might be to just wait until the next second before generating more IDs. This is probably overkill for what you are trying to do, but it is a somewhat interesting problem indeed.
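Both options in a few lines of stdlib Python: a locked counter, and uuid4 for the 128-bit random variant:

    import itertools
    import threading
    import uuid

    # option 1: a process-wide counter, guarded for use across threads
    _counter = itertools.count(1)
    _lock = threading.Lock()

    def next_id():
        with _lock:
            return next(_counter)

    # option 2: 128 random bits; collisions are negligible in practice
    def random_id():
        return uuid.uuid4().hex

    print(next_id(), next_id(), random_id())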
0
1,452
0
2
2015-06-01T18:46:00.000
python,class,tree,nodes
Giving unique IDs to all nodes?
1
1
4
30,581,078
0
0
0
I want to write a script which consumes data over the internet and places the data, pulled every n seconds, into a queue/list; then x threads, created at the start of the script, pick up and process the data as it is added to the queue. My questions are: how can I create such a global variable (list/queue) in my script that is accessible to all my threads? In each thread, I plan to check whether the queue has data in it; if so, retrieve the data, release the lock and start processing it. Once the thread has finished the task, it goes back to the start and keeps checking the queue; if there is no data in the queue, it sleeps for a specified amount of time and then checks the queue again.
false
30,581,807
0
0
0
0
If you want your app to be really multithreaded, consider using a standalone queue (like ActiveMQ or ZeroMQ) and consuming it from scripts running in different OS processes, because of the GIL (with a standalone queue it is easy to use it even over the network - a plus for scalability).
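Within a single process, though, the stdlib queue module already covers the pattern from the question - and a blocking get() removes the need for the sleep-and-retry loop; process is a placeholder:

    import queue
    import threading

    tasks = queue.Queue()          # module-level: visible to all threads

    def worker():
        while True:
            item = tasks.get()     # blocks until data arrives; no busy-wait
            process(item)          # placeholder for your processing
            tasks.task_done()

    for _ in range(4):             # x worker threads
        threading.Thread(target=worker, daemon=True).start()

    # producer side, every n seconds:
    # tasks.put(fetch_data())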
1
1,389
0
4
2015-06-01T19:36:00.000
python,multithreading
Having a global queue (or list) that is available to all threads
1
1
2
30,582,034
0
1
0
I'm currently trying to write a Python script that overnight turns off all of our EC2 instances, so that in the morning my QA team can go to a webpage and press a button to turn the instances back on. I have written my Python script that turns the servers off using boto, and I also have a function which, when run, turns them back on. I have an HTML doc with buttons on it; I'm just struggling to work out how to get these buttons to call the function. I'm using bottle rather than flask, and I have no JavaScript experience, so I would like to avoid Ajax if possible. I don't mind if the whole page has to reload after the button is pressed - after the single press the webpage isn't needed anyway.
false
30,592,411
0.53705
0
0
3
What I ended up doing to fix this was to use bottle to expose a URL which performs the needed function, and then make an HTML button that links to the relevant URL.
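A minimal sketch of that wiring in bottle; start_instances is a placeholder for the boto call:

    from bottle import route, run

    @route('/start')
    def start():
        start_instances()          # placeholder for the boto start-up function
        return "Instances starting - you can close this page."

    @route('/')
    def index():
        return '<a href="/start"><button>Turn servers on</button></a>'

    run(host='0.0.0.0', port=8080)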
0
617
0
1
2015-06-02T09:36:00.000
python,html,boto,bottle
html buttons calling python functions using bottle
1
1
1
30,595,791
0
0
0
I have a Python script on a network machine; I can open the network share through Explorer and have access to the script. From that script on the network share, I want to create some folders/files and read/write some files on my local host. Is there a way to do this in Python?
false
30,611,649
0
0
0
0
The good part is that you have read access to the script. What you need is a Python installation on your local machine, and preferably a drive-letter mapping to the script's folder. If it isn't already mapped, this can be scripted with net use <letter>: \\<remote host>\<shared folder>. Then it's as easy as cd <letter>:\<path> followed by python <script>.py. As for the output of the script: apparently it creates files. Can you supply the target folder on the script's command line? In that case, just supply a local path.
0
2,012
1
1
2015-06-03T05:39:00.000
python
How to run python from a network machine to local host?
1
1
1
30,611,798
0
0
0
I have created a service login module in Python, and I want to rate-limit logins. Limiting based on failed attempts per IP address might not work well, and would annoy users connected to the internet through a local network, since they'll share the same external IP address. Is there a way to uniquely identify such users? Thanks a lot!
false
30,638,640
0
0
0
0
I don't think there is a bulletproof solution if the users are behind NAT. To differentiate those users you would need the private IP address, which you cannot get at the IP level: if the users are behind NAT, you can only see the public/external IP, which is the same for all those clients. You could try to get the private IP at the application level (client side), but that would be tricky (and if the private address is obtained via DHCP, it can change between requests). The other solution I can think of is identifying users via cookies: an HTTP response to each failed login request would contain a cookie which uniquely identifies that client, letting you differentiate users with the same IP. You would have to sign the cookie values to preserve their integrity. However, this does not help if the client deletes cookies after each failed request.
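A sketch of the cookie-signing part with stdlib hmac; SECRET must be a server-side secret that never reaches clients:

    import hashlib
    import hmac
    import uuid

    SECRET = b"change-me"   # server-side secret

    def make_client_token():
        client_id = uuid.uuid4().hex
        sig = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
        return client_id + "." + sig          # store this in the cookie

    def verify_client_token(token):
        try:
            client_id, sig = token.split(".")
        except ValueError:
            return None
        expected = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
        return client_id if hmac.compare_digest(sig, expected) else None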
0
103
0
0
2015-06-04T08:23:00.000
python,authentication,ip
How to uniquely identify users with the same external IP address?
1
1
1
30,776,062
0
0
0
I have a proxy traffic server which adds an extra hop on a network and handles large quantities of traffic. I would like to measure, in seconds, how long the proxy server takes to receive an incoming request, process it and forward it on. I had been planning to write a Python script that runs a tcpdump and somehow times packets from when they enter the server until they leave; I would probably capture for a certain period of time and then analyse the capture to calculate times. Is this a good way of achieving what I want, or would there be a more elegant solution?
true
30,647,336
1.2
0
0
1
I always found it easier to use a switch's 'port mirror' to copy all data in and out of the proxy's switch port to a separate port that connects to a dedicated capture box, which does the tcpdump work for you. If your switches have this capability, it reduces the load on the busy proxy. If they don't, then yes, tcpdump full packets to a file: tcpdump -i interface -s 0 -w /path/to/file. You can then (on a different machine) throw together some code to examine and report on anything you want, or even open the file in Wireshark for detailed analysis.
0
57
1
0
2015-06-04T14:57:00.000
python,unix,networking,network-programming,server
Timing packets on a traffic server
1
1
2
30,647,917
0
0
0
Can someone point me in the right direction? I just need some documentation. I manually configured a proxy, but I think PhantomJS might be bypassing it. I want to test my script to see whether it is actually going through my proxy with PhantomJS; it looks like I successfully went through it, but I am still hitting a few bugs. Is there a way to print out the proxy it is using on the command line?
true
30,761,095
1.2
0
0
0
No. Nothing about this is documented, and I see no indication in the code of a way to get this information. As a workaround, simply run Wireshark or tcpdump to capture the traffic and look at where the requests go. It should be easy to see whether they go to the target server or to the proxy server, provided you know their IP addresses (or you can look at the DNS query in Wireshark to see which IP address it is).
0
230
0
0
2015-06-10T15:41:00.000
python,proxy,phantomjs
Using phantomjs print proxy it used to access website
1
1
2
30,762,932
0
1
0
I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out?
false
30,801,879
1
0
0
11
I guess it may depend on which version of the client library you are using, because in Java driver.navigate().back() works well.
0
25,243
0
9
2015-06-12T11:25:00.000
android,python,navigation,ui-automation,appium
How to automate the android phone back button using appium
1
4
8
40,546,232
0
1
0
I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out?
false
30,801,879
1
0
0
13
Yes, try driver.back(); it simulates the system back function.
0
25,243
0
9
2015-06-12T11:25:00.000
android,python,navigation,ui-automation,appium
How to automate the android phone back button using appium
1
4
8
31,707,929
0
1
0
I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out?
false
30,801,879
0
0
0
0
driver.sendKeyEvent(AndroidKeyCode.BACK); does the job in Java
0
25,243
0
9
2015-06-12T11:25:00.000
android,python,navigation,ui-automation,appium
How to automate the android phone back button using appium
1
4
8
38,353,103
0
1
0
I am working on test automation for a hybrid mobile application on Android using Appium(python client library). I haven't been able to figure out any means to automate or create a gesture for using the Phone back button to go back to the previous page of the app. Is there any driver function that can be used? I tried my luck with self.driver.navigate().back() [hoping this would simulate the same behaviour as in Selenium to navigate to the previous URL] but to no avail. Can anyone suggest a way out?
false
30,801,879
0.024995
0
0
1
For appium-python-client, to go back you should call this method: driver.press_keycode(4)
0
25,243
0
9
2015-06-12T11:25:00.000
android,python,navigation,ui-automation,appium
How to automate the android phone back button using appium
1
4
8
50,558,943
0
0
0
I'm creating a code to demonstrate how to consume a REST service in Python, but I don't want my API keys to be visible to people when I push my changes to GitHub. How can I hide such information?
false
30,821,218
0.049958
1
0
1
Consider storing this kind of data in a config file that isn't tracked by git.
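A minimal sketch of that idea; the file name and key name are assumptions. Add the config file to .gitignore (here, a line reading config.json) and load the secret at runtime:

```python
# Assumes a local, untracked config.json such as {"api_key": "..."}.
import json

with open('config.json') as f:
    config = json.load(f)

api_key = config['api_key']  # never hard-coded, never committed
```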
1
2,480
0
0
2015-06-13T16:55:00.000
python,git,github
How can I hide sensitive data before commiting to GitHub (or any other Git repo)?
1
1
4
30,821,244
0
0
0
I've made a server (python, twisted) for my online game. Started with TCP, then later added constant updates with UDP (saw a big speed improvement). But now, I need to connect each UDP socket client with each TCP client. I'm doing this by having each client first connect to the TCP server, and getting a unique ID. Then the client sends this ID to the UDP server, connecting it also. I then have a main list of TCP clients (ordered by the unique ID). My goal is to be able to send messages to the same client over both TCP and UDP. What is the best way to link a UDP and TCP socket to the same client? Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Finally, if anyone with knowledge of TCP/UDP could tell me (i'm new!), will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this, to secure my server, but I don't want to accidentally block some fair users)
true
30,875,143
1.2
0
0
1
Answering your last question: no. Because: if a client is behind NAT, and the gateway (with NAT) has more than one IP, every connection can be seen by you as a connection from a different IP. Another problem is when a few different clients behind the same NAT connect to your server: you will have more than one pair of TCP-UDP clients, and it will be impossible to match up the correct pairs. Your method seems to be a good solution for the problem.
0
1,578
0
1
2015-06-16T18:15:00.000
python,sockets,networking,tcp,udp
UDP and TCP always use same IP for one client?
1
3
3
30,875,280
0
0
0
I've made a server (python, twisted) for my online game. Started with TCP, then later added constant updates with UDP (saw a big speed improvement). But now, I need to connect each UDP socket client with each TCP client. I'm doing this by having each client first connect to the TCP server, and getting a unique ID. Then the client sends this ID to the UDP server, connecting it also. I then have a main list of TCP clients (ordered by the unique ID). My goal is to be able to send messages to the same client over both TCP and UDP. What is the best way to link a UDP and TCP socket to the same client? Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Finally, if anyone with knowledge of TCP/UDP could tell me (i'm new!), will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this, to secure my server, but I don't want to accidentally block some fair users)
false
30,875,143
0.066568
0
0
1
1- Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? NO in the general case, but ... 2- Is it necessary for the client to connect twice, once for TCP and once for UDP? NO, definitively. 3- Will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? YES, except in special cases. You really need some basic knowledge of the TCP, UDP and IP protocols to go further, and ideally of the OSI model. Basics (but you should read articles on Wikipedia to get a deeper understanding): TCP and UDP are 2 protocols over IP. IP is a routable protocol: it can pass through routers. TCP is a connected protocol: it can pass through gateways or proxies (firewalls and NATs). UDP is a non-connected protocol: it cannot pass through gateways. A single machine may have more than one network interface (hardware slot): each will have a different IP address. A single interface may have more than one IP address. In the general case, client machines have only one network interface and one IP address - anyway, you can require that a client presents the same address for TCP and UDP when connecting to your server. Network Address Translation is when there is a gateway between a local network and the wild internet that always presents its own IP address and keeps track of TCP connections in order to send packets back to the correct client. In fact, the most serious problem is if there is a gateway between the client and your server. While the client and the server are two (virtual) machines to which you have direct keyboard access, no problem, but corporate networks are generally protected by a firewall acting as a NAT, and many domestic ADSL routers also include a firewall and a NAT. In that case, just forget UDP. It is possible to instruct a domestic router to pass all UDP traffic to a single local IP, but it is not necessarily an easy job. In addition, that means that if one of your users has more than one machine at home, he will be allowed to use only one at a time and will have to reconfigure his router to switch to another one!
0
1,578
0
1
2015-06-16T18:15:00.000
python,sockets,networking,tcp,udp
UDP and TCP always use same IP for one client?
1
3
3
30,876,002
0
0
0
I've made a server (python, twisted) for my online game. Started with TCP, then later added constant updates with UDP (saw a big speed improvement). But now, I need to connect each UDP socket client with each TCP client. I'm doing this by having each client first connect to the TCP server, and getting a unique ID. Then the client sends this ID to the UDP server, connecting it also. I then have a main list of TCP clients (ordered by the unique ID). My goal is to be able to send messages to the same client over both TCP and UDP. What is the best way to link a UDP and TCP socket to the same client? Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Finally, if anyone with knowledge of TCP/UDP could tell me (i'm new!), will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this, to secure my server, but I don't want to accidentally block some fair users)
false
30,875,143
0.066568
0
0
1
First of all, when you send data with TCP or UDP you have to give the port. If your client connects with TCP and afterwards your server sends a response with UDP, the packet will be rejected by the client. Why? Because you have to register a port for the connection and you cannot be sure the port is correctly open on the client. So when you begin a connection in TCP, the client opens a port to send data and receive the response. You have to do the same with UDP. When the client initiates all communication with the server, you can be sure all the necessary ports are open. Don't forget to send data on the port on which the connection was opened. Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)? Why don't you want to create 2 connections? You have to use UDP for movement, for example: if you create an FPS you may send the player's position every 50ms, so it's really important to use UDP. It's not just a question of a better connection. If you want a really good connection between client and server you need to use an async connection and use STREAM. But if you use a stream, your TCP socket does not signal the end of a message, though you get better transmission. So you have to write something to mark the packet end (for example <EOF>). But you have a problem with this: for every socket you receive, you have to analyze the data and split on the <EOF>, which can take a lot of processor time. With UDP the packet always has an end signal, but you need to implement a security check.
0
1,578
0
1
2015-06-16T18:15:00.000
python,sockets,networking,tcp,udp
UDP and TCP always use same IP for one client?
1
3
3
32,045,545
0
0
0
I was trying to implement a multiuser chat (group chat) with sockets in Python. It basically works like this: each message that a user sends is received by the server, and the server sends it back to the rest of the users. The problem is that if the server closes the program, it crashes for everyone else. So, how can you handle the departure of the server? Should you change the server somehow, or is there another way around it? Thank you
true
30,880,392
1.2
0
0
0
Could you make your server log heartbeats, and also post heartbeats to the clients on the socket? If so, have a monitor check for the server heartbeats and restart the server application if the gap between heartbeats exceeds the threshold value. Also, check for heartbeats on the client and reestablish the connection when you do not hear a heartbeat.
0
560
0
0
2015-06-17T00:25:00.000
python,sockets
Creating Multi-user chat with sockets on python, how to handle the departure of the server?
1
1
1
30,880,515
0
0
0
I'm using PyQt4 to enter credentials into a domain login page and pull data from several additional pages in the domain. Everything works exactly as expected when supplying login or search credentials from within the code. When I open up raw_input to allow the user to enter information, it causes hang-ups when trying to download one of the web pages. I can't provide information on the page itself because it is on a corporate network, but it doesn't make sense that simply using raw_input would cause problems with QWebPage loads. The QNetworkManager throws 1 of the expected 3 or 4 .finished signals and the QWebPage frame never throws the .loadFinished signal, so it just hangs. (I've tried flushing stdin as well as seek(0), which gives me a bad file descriptor error.) Has anyone run into such a problem before?
true
30,926,097
1.2
0
0
1
raw_input uses synchronous/blocking IO without giving Qt a chance to continue processing events in the background. Qt isn't really prepared for its processing to be halted in this way. In theory it should just resume when raw_input is finished, but maybe in the meantime a timeout occurred or something like that. You really should use signal/event based input when using Qt. If GUI interaction is OK, you should try QInputDialog::getText, because it looks like a blocking call from the outside but internally lets Qt continue processing background jobs.
0
285
0
1
2015-06-18T21:27:00.000
python,pyqt4,stdin,raw-input,qwebpage
Using raw_input causes problems with PyQt page loading
1
1
1
30,926,485
1
1
0
My next project requires me to develop both a mobile and a website application. To avoid duplicating code, I'm thinking about creating an API that both of these applications would use. My questions regarding this are: Is this approach sensible? Are there any frameworks to help me with this? How would I handle authentication? Does this have an effect on scalability?
false
30,928,370
0
0
0
0
Actually, it doesn't make great sense. From my experience I know that mobile apps and web pages, even if they use the same backend, very often require completely different sets of data, and - (I know premature optimization is the root of all evil) - the number of calls should be minimized for mobile applications to make them run smoothly. I'd separate the mobile API from the classic REST API, even with prefixes, e.g. /api/m/ and /api/. There are really many frameworks in a number of technologies, e.g. Spring, django-rest-framework, express.js. Whatever you like. Token authentication will be the best choice, for both web and mobile - for REST in general. It shouldn't be a matter for you now.
0
288
0
0
2015-06-19T01:12:00.000
php,python,rest,authentication
Creating a centric REST API for a mobile and website application
1
1
2
30,932,400
0
1
0
I'm having a problem with the JSON output of Scrapy. The crawler works fine and the CLI output works without a problem. The XML item exporter works without a problem too: output is saved with the correct encoding and text is not escaped. I tried using pipelines and saving the items directly from there, and using Feed Exporters and the JSON encoder from the json library; these won't work as my data includes sub-branches. Unicode text in the JSON output file is escaped like this: "\u00d6\u011fretmen S\u00fcleyman Yurtta\u015f Cad." But in the XML output file it is correctly written: "Öğretmen Süleyman Yurttaş Cad." I even changed the Scrapy source code to include ensure_ascii=False for ScrapyJSONEncoder, but to no avail. So, is there any way to force the Scrapy JSON encoder not to escape while writing to file? Edit1: Btw, I'm using Python 2.7.6, as Scrapy does not support Python 3.x. This is a standard Scrapy crawler: a spider file, a settings file and an items file. First the page list is crawled starting from the base URL, then the content is scraped from those pages. Data pulled from the pages is assigned to variables defined in items.py of the Scrapy project, encoded in UTF-8. There's no problem with that, as everything works fine in the XML output. scrapy crawl --nolog --output=output.json -t json spidername. XML output works without a problem with this command: scrapy crawl --nolog --output=output.xml -t xml spidername. I have tried editing scrapy/contrib/exporter/init.py and scrapy/utils/serialize.py to insert the ensure_ascii=False parameter into json.JSONEncoder. Edit2: Tried debugging again. There's no problem up to the Python2.7/json/encoder.py code: data is intact and not escaped. After that, it gets hard to debug as Scrapy works asynchronously and there are lots of callbacks. Edit3: A bit of a dirty hack, but after editing Python2.7.6/lib/json/encoder.py and changing the ensure_ascii parameter to False, the problem seems to be solved.
false
30,948,736
0.099668
0
0
1
As I don't have your code to test, can you try using codecs? Try: import codecs; f = codecs.open('yourfilename', 'your_mode', 'utf-8'); f.write('whatever you want to write'); f.close()
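A minimal sketch combining the codecs suggestion above with json's ensure_ascii flag (the flag is an assumption beyond the answer itself): on Python 2, json.dumps(..., ensure_ascii=False) returns a unicode string when fed unicode data, which a codecs-opened file can write out as UTF-8 without escaping:

```python
# -*- coding: utf-8 -*-
import codecs
import json

item = {u'address': u'\u00d6\u011fretmen S\u00fcleyman Yurtta\u015f Cad.'}

f = codecs.open('output.json', 'w', 'utf-8')
f.write(json.dumps(item, ensure_ascii=False))  # keeps the raw UTF-8 text
f.close()
```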
0
2,012
0
1
2015-06-19T23:34:00.000
python,json,unicode,utf-8,scrapy
Unicode on Scrapy Json output
1
1
2
30,964,853
0
0
0
I am doing some research on spam detection on Twitter. My program is dynamic enough that it can build a tree data structure of the metadata of a user and his tweets, taking just a Screen_name or Twitter id as a parameter, but collecting legitimate user names and spammer names is a manual task. (If there is any other way, please suggest it to me.)
false
31,005,233
0
1
0
0
Using the Streaming API could help you. You can collect the real-time information there, and applying some clustering algorithms and data-mining techniques can solve it.
0
73
0
0
2015-06-23T14:13:00.000
python,twitter
Is there any way to get current legitimate user list from Twitter in 5000 to 10000
1
1
1
31,005,344
0
0
0
I am new to Python. I want to read JSON data from a URL, and if there is any change in the JSON data on the server, I want to update the JSON file on my client. How can I do that with Python? Actually, I am plotting a graph in Django using JSON data which is on another server, and that JSON data is updated frequently, so I want to update my charts based on the updated JSON data. For that I have to listen to the URL for changes. How can I do that? I know I could do it with the select() system call, but I need another way.
false
31,024,852
0
0
0
0
There's no way to "listen" for changes other than repeatedly requesting that URL to check if something has changed; i.e. classic "pull updates". To get actual live notifications of changes, that other server needs to offer a way to push such a notification to you. If it's just hosting the file and is not offering any active notifications of any sort, then repeatedly asking is the best you can do. Try not to kill the server when doing so.
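A minimal polling sketch of the "pull updates" approach described above, using the requests library (an assumption; any HTTP client works). The URL, file name and interval are placeholders; the conditional GET via ETag only helps if the server supports it:

```python
import time
import requests

url = 'http://example.com/data.json'   # hypothetical remote JSON
etag = None

while True:
    headers = {'If-None-Match': etag} if etag else {}
    resp = requests.get(url, headers=headers)
    if resp.status_code == 200:        # changed content (or first fetch)
        etag = resp.headers.get('ETag')
        with open('local.json', 'w') as f:
            f.write(resp.text)
    # status 304 means "not modified": nothing to do
    time.sleep(60)                     # poll once a minute; be gentle
```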
1
190
0
0
2015-06-24T11:04:00.000
python
Asynchronous way to listen web service for data change
1
1
1
31,025,251
0
0
0
I'm using tweepy to search for a keyword in all tweets from the last 5 minutes, but I couldn't find a way to do this. I saw the since_id and max_id arguments, but they only work if I already know the tweet_id from 5 minutes ago.
false
31,028,239
0
1
0
0
There is no specific GET feature that would allow that. What you'll need to do is create a search for that keyword with GET search/tweets, use the since_id and max_id like you have, look at created_at in the JSON, and filter again using that. The problem with the previous step is that you're limited to 180 requests per 15 minutes. Another solution is to use the streaming API, since it'll give you recent information (most of it within 5 minutes) and you'll be able to filter by keywords as well.
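A minimal sketch of the first approach with tweepy's REST search (credentials and the keyword are placeholders). Tweets are filtered client-side on created_at, which tweepy exposes as a UTC datetime:

```python
import datetime
import tweepy

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth)

cutoff = datetime.datetime.utcnow() - datetime.timedelta(minutes=5)
for tweet in api.search(q='keyword', count=100):
    if tweet.created_at >= cutoff:     # keep only the last 5 minutes
        print(tweet.text)
```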
0
3,078
0
2
2015-06-24T13:40:00.000
python,json,tweepy
Using tweepy: how to retrieve all tweets within past 5 minutes
1
1
1
31,028,615
0
1
0
I want to do a unit test on a html page which is returned as a byte string in an HttpResponse object... e.g. "find_elements_by_tag_name". Is the solution simply to xml.dom.minidom.parseString the bytes of response.content? I couldn't find any examples of people doing this online or in Django manuals or tutorials, which makes me wonder if there's a reason for not doing it this way? If it's bad practice and there's a better way to do this please can you say why and what?
false
31,035,318
0
0
0
0
Yes, that's a way to parse HTML into a DOM tree. If other people don't do that, they might have other requirements. In general your idea is not bad; it might require more CPU time than other testing methods (for example regular expressions), but if it fits your needs for testing, just do it. Performance is rarely a problem at testing time.
0
627
0
0
2015-06-24T19:27:00.000
python,django,httpresponse
Django get a DOM object from an HttpResponse
1
1
2
31,035,689
0
1
0
I prepare test suites for an e-commerce web site, so I use Selenium2Library, which requires a running browser on a display. I am able to run these tests on my local machine, but I have to run them on a remote server which does not have an actual display. I tried to use Xvfb to create a virtual display but it did not work; I tried all the solutions in some answers here but nothing changed. Then I saw the pyvirtualdisplay library for Python, but it seems only helpful with tests written in Python. I'd like to know if I am able to run the test suites I wrote in Robot Framework (which are .txt formatted and can be run via pybot) via Python, so I can use pyvirtualdisplay. Sorry about my English; thanks, your answers are appreciated...
false
31,045,708
0.099668
1
0
1
Yes, there is, with Xvfb installed. In short: /usr/bin/Xvfb :0 -screen 0 1024x768x24 & export DISPLAY=:0; robot your_selenium_test
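Since the question asked about driving this from Python, here is a minimal sketch that wraps a pybot run in pyvirtualdisplay (Xvfb must still be installed; the suite path is a placeholder):

```python
import subprocess
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1024, 768))
display.start()                 # starts Xvfb and sets $DISPLAY
try:
    subprocess.call(['pybot', 'my_suite.txt'])
finally:
    display.stop()              # always tear down the virtual display
```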
0
2,600
0
3
2015-06-25T08:59:00.000
python,selenium,robotframework,xvfb
Is there any way to run Robot Framework tests without a display?
1
1
2
36,692,169
0
0
0
I just started using tweepy library to connect with streaming api of twitter. I encountered both on_status() and on_data() methods of the StreamListener class. What is the difference? Total noob here!
false
31,054,656
0.379949
0
0
4
If you're only concerned with tweets, use on_status(). This will give you what you need without the added information, and doing so will not hinder your limit. If you want detailed information, use on_data() - though that's rarely the case unless you're doing heavy analysis.
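A minimal sketch of the on_status() route (credentials and the track keyword are placeholders): on_status receives an already-parsed Status object, whereas on_data would hand you the raw JSON payload:

```python
import tweepy

class MyListener(tweepy.StreamListener):
    def on_status(self, status):
        # status is a parsed tweepy Status object
        print(status.text)

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')

stream = tweepy.Stream(auth, MyListener())
stream.filter(track=['python'])   # blocks, delivering matching tweets
```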
0
7,894
0
10
2015-06-25T15:30:00.000
python,twitter,tweepy
What is the difference between on_data and on_status in the tweepy library?
1
1
2
31,058,208
0
1
0
I have an iOS app that sends a POST request to my server, which then runs a function and returns some data back to the app. My question is: if my app has, let's say, 100 people using it at the same time making the POST request in quick succession, is this going to cause an error? I am on a shared server, if that matters.
false
31,066,456
0
0
0
0
No, it should not cause an error. You are requesting the server with an HTTP method (POST), and the server treats every request from a new user as a new request; because HTTP is a stateless protocol, it doesn't save user state (unless you are managing user sessions explicitly). So if 100 users are requesting at the same time, each request is different and so is the response for each user. And as others said, how the requests are handled is completely dependent on the server.
0
1,192
0
2
2015-06-26T06:33:00.000
php,python,ios,server,backend
Send multiple POST requests a once
1
1
2
31,067,192
0
1
0
I have a Java application that needs to run the proprietary software PowerWorld on the server and then return output to the client side Web Start window. Is this possible? How do I go about doing this? I am using Apache Tomcat to run the server. My Java code uses Runtime.exec() to run a Python script that runs PowerWorld. I made sure that the python script, powerworld file and java app are all in the same directory and reference each other using relative file paths
false
31,112,366
0.197375
0
0
1
Java Web Start will install a desktop application into the client's cache. That will run on the client, not on the server. However, you can easily create a web application as a service, e.g. on Tomcat. The webapp will be able to receive client requests, e.g. via RMI, a RESTful service or a web service, call the proprietary program, and return the results.
0
94
0
0
2015-06-29T09:37:00.000
java,python,tomcat,java-web-start
Can Java Web Start be used to run server-side programs?
1
1
1
31,113,115
0
0
0
I want to create an API that various clients can connect to (web, mobile platforms, etc.). My problem is that sometimes things are different for each client. For example, I use a different method of authentication for web and mobile platforms. My question is: do I have to create different files for each type of client, or use if-else statements to detect the client type and call the proper functions in the same class? I want to create a clean and standard API. I know this can have lots of answers and it's a broad question, but I just need a clue here.
false
31,129,801
0.066568
0
0
1
There is definitely not 'the one and only' way to create an API. However, checking the type of client is definitely not the way to go, as it would mean checking headers, which can be forged when sending the request. For authentication, if you want to use different methods, your best bet is probably to have different authentication strategies and try them one after the other: if the first fails, you run the next, etc. A common way to implement this is to add an authentication middleware, which tries each available strategy to authenticate the client, and stops the request if none succeed. This does mean that even if you want strategy A to work only for the browser, it could also be used to log in from a mobile app, but there is no way to prevent this, and anyway it should not lead to any security issue either. If it does, the issue is probably somewhere else.
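A minimal, framework-agnostic sketch of that "try each strategy" idea; the strategy functions are hypothetical placeholders that return a user or None, not any real library's API:

```python
def session_auth(request):
    # hypothetical: look up a user from a session cookie
    return getattr(request, 'user_from_session', None)

def token_auth(request):
    # hypothetical: look up a user from an API token header
    return getattr(request, 'user_from_token', None)

STRATEGIES = [session_auth, token_auth]

def authenticate(request):
    """Try each strategy in order; return the first matching user."""
    for strategy in STRATEGIES:
        user = strategy(request)
        if user is not None:
            return user
    return None   # caller should reject the request (e.g. with a 401)
```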
0
71
0
0
2015-06-30T04:50:00.000
javascript,php,python,node.js,api
Creating clean and standard API with different type of clients
1
1
3
31,129,923
0
0
0
I want to catch a packet my computer is sending, modify it and send it. I can't use sniff, because it gives me a copy of the packet. The packet itself is sent. I want to stop the sending of the packet, change it and then send it forward - MitM attack. How can I do it using scapy?
false
31,137,204
0.379949
0
0
2
What you need sounds more like a proxy. What kind of protocol are you trying to inject into? If it were HTTP it would be easy - take any HTTP proxy and MitM away. Or you can use something like socksify, but I am not aware of anything working on Windows. Otherwise you need something that works as a network driver. You cannot easily achieve this with scapy if the packets originate from YOUR computer. The scapy ARP MitM example performs a MitM on another computer's communication.
0
1,594
0
2
2015-06-30T11:37:00.000
python,scapy,man-in-the-middle
Python - Man in the Middle
1
1
1
31,139,789
0
0
0
I installed Selenium on my system using 'pip install selenium' and it works great on Mac Console. But when I tried using selenium in my project on Pycharm, I got an error that No module named Selenium exists. What am I doing wrong?
false
31,173,867
0
0
0
0
Maybe you have both Python 2.x and 3.x on your computer and you configured a different one in PyCharm than the one you use in the console? If so, pip could have installed Selenium into Python 2.x while in PyCharm you are using 3.x, or the other way round.
1
5,003
0
3
2015-07-02T00:39:00.000
python,selenium,pycharm
Selenium not working on Pycharm
1
2
2
31,180,054
0
0
0
I installed Selenium on my system using 'pip install selenium' and it works great on Mac Console. But when I tried using selenium in my project on Pycharm, I got an error that No module named Selenium exists. What am I doing wrong?
true
31,173,867
1.2
0
0
1
So I found what I was doing wrong. My Mac Terminal and PyCharm were using different Python installations on my system, so I changed the interpreter path in PyCharm to the local path where Selenium was installed.
1
5,003
0
3
2015-07-02T00:39:00.000
python,selenium,pycharm
Selenium not working on Pycharm
1
2
2
31,191,965
0
0
0
I'd like to import the "requests" library into my Kivy application. How do I go about that? Simply giving import requests is not working out. Edit: I'm using Kivy on Mac OS X, 10.10.3.
false
31,182,671
0.099668
0
0
1
If you're using Windows with Kivy's portable package, I think you can get a shell with Kivy's env by running the kivy executable. Assuming so, I think you can run pip install requests in this shell to install it into Kivy's environment. Edit: I see you've now noted you are using OS X, but something similar may be true. I don't know about this, though.
0
1,389
0
2
2015-07-02T10:57:00.000
python,kivy
Importing libraries in Kivy
1
1
2
31,183,217
1
1
0
Well, the title says it all... Is it possible to perform an XMLHttpRequest from Selenium/WebDriver and then render the output of that request in a browser instance? If so, can you enlighten me please?
false
31,212,059
0
0
0
0
Selenium is really designed to be an external control system for a web browser. I don't think of it as being the source of test data itself. There are other unit-testing frameworks which are designed for this purpose, but I see Selenium's intended purpose as being different.
0
1,233
0
3
2015-07-03T17:46:00.000
python,selenium,automation,webdriver
It's possible to do an XHR call and render the output with Selenium?
1
1
3
31,212,344
0
0
0
I am new to Python, so maybe this is an easy question, but I am planning a REST API application, and I need it to be very fast and to allow concurrent queries. Is there a way to write a Python application that allows concurrent execution, where each execution responds to a REST API request? I don't know if threads are the way to go in this case. I know threading is not the same as concurrency, but maybe it is a solution for this case.
true
31,244,471
1.2
0
0
2
If you use a WSGI-compliant framework (or even just plain WSGI as the "framework"), then concurrency is handled by the WSGI "container" (Apache + mod_wsgi, nginx + gunicorn, whatever), either as threads, processes, or a mix of both. All you have to do is write your code so it supports concurrency (i.e. no mutable global state, etc.).
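A minimal plain-WSGI sketch of this: the handler keeps no mutable global state, so the container (gunicorn here, an example choice) is free to run it across however many threads or processes it likes:

```python
def application(environ, start_response):
    """A tiny, concurrency-safe WSGI handler."""
    body = b'{"status": "ok"}'
    start_response('200 OK', [('Content-Type', 'application/json'),
                              ('Content-Length', str(len(body)))])
    return [body]

# Example deployment (assumed container and module name):
#   gunicorn --workers 4 --threads 8 myapp:application
```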
1
1,626
0
0
2015-07-06T11:17:00.000
python,multithreading,rest,concurrency
Python concurrent REST API
1
1
1
31,245,355
0
1
0
I am trying to get stock data from Yahoo! Finance. I have it installed (c:\ pip install yahoo-finance), but the import in the IPython console is not working. This is the error I get: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 4: invalid start byte. I am using Python 3.4 and Spyder 2.3.1. Has anyone else encountered this? Update: The Unicode error during import is gone, but it is now replaced with the following when trying to use the yahoo_finance tool... ImportError: html5lib not found, please install it. However, html5lib is listed when I run help('modules').
false
31,277,368
0.197375
0
0
1
I had the same error about html5lib with Python 3.4 in PyCharm 4.5.3, even though I installed html5lib. When I restarted PyCharm console (where I run the code), the error disappeared and options loaded correctly.
1
1,702
0
0
2015-07-07T19:12:00.000
python,pandas,scrape,yahoo-finance
Import Error related to Yahoo Finance tool / html5lib install
1
1
1
32,193,407
0
0
0
I created a packet sniffer using the pypcap Python library (in Linux). Using the .stats() method of the pypcap library, I see that from time to time few packets get dropped by the Kernel when the network is busy. Is it possible to increase the buffer size for the pypcap object so that less packets get dropped (like it is possible in tcpdump?).
true
31,289,288
1.2
0
0
0
I studied the source code of pypcap, and as far as I could see there was no way to set the buffer size from it. Because pypcap uses the libpcap library, I changed the default buffer size in the source code of libpcap and reinstalled it from source. That seems to have solved the problem. tcpdump sets the buffer size by calling the set_buffer_size() method of libpcap, but it seems that pypcap cannot do that. Edit: The buffer size variable is located in the pcap-linux.c file, and its name is opt.buffer_size. It is 2MB by default (2*1024*1024 in the source code).
0
840
1
2
2015-07-08T09:59:00.000
python,packet,sniffer
How to set buffer size in pypcap
1
1
2
31,293,136
0
0
0
I have a Twisted client/server application where a client asks multiple servers for additional work to be done using AMP. The first server to respond to the client wins -- the other outstanding client requests should be cancelled. Deferred objects support cancel() and a cancellor function may be passed to the Deferred's constructor. However, AMP's sendRemote() api doesn't support passing a cancellor function. Additionally, I'd want the cancellor function to not only stop the local request from processing upon completion but also remove the request from the remote server. AMP's BoxDispatcher does have a stopReceivingBoxes method, but that causes all deferreds to error out (not quite what I want). Is there a way to cancel AMP requests?
true
31,304,788
1.2
0
0
2
No. There is no way, presently, to cancel an AMP request. You can't cancel AMP requests because there is no way defined in AMP at the wire-protocol level to send a message to the remote server telling it to stop processing. This would be an interesting feature-addition for AMP, but if it were to be added, you would not add it by allowing users to pass in their own cancellers; rather, AMP itself would have to create a cancellation function that sent a "cancel" command. Finally, adding this feature would have to be done very carefully because once a request is sent, there's no guarantee that it would not have been fully processed; chances are usually good that by the time the cancellation request is received and processed by the remote end, the remote end has already finished processing and sent a reply. So AMP should implement asynchronous cancellation.
0
186
1
1
2015-07-08T22:15:00.000
python,twisted,deferred,asynchronous-messaging-protocol
How should a Twisted AMP Deferred be cancelled?
1
1
1
31,305,323
0
1
0
I am writing a program that opens an html form in a browser window. From there, I need to get the data entered in the form and use it in python code. This has to be done completely locally. I do not have access to a webserver or I would be using PHP. I have plenty of experience with Python but not as much experience with JavaScript and no experience with AJAX. Please help! If you need any more information to answer the question, just ask. All answers are greatly appreciated.
false
31,307,147
0.197375
0
0
1
The browser's security model prevents sending data to local processes. Your options are: Write a browser extension that calls a Python script. Run a local webserver. Most Python web development frameworks have a simple one included.
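A minimal sketch of the local-webserver option using only the Python 2 standard library (the question mentions Python; the port and form field names are assumptions). Point the HTML form's action at http://127.0.0.1:8000/ with method="post":

```python
import urlparse
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class FormHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.getheader('content-length'))
        fields = urlparse.parse_qs(self.rfile.read(length))
        print(fields)                 # form data is now available to Python
        self.send_response(200)
        self.end_headers()
        self.wfile.write('ok')

HTTPServer(('127.0.0.1', 8000), FormHandler).serve_forever()
```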
0
78
0
0
2015-07-09T02:35:00.000
javascript,python,html,forms,local
Sending data from JavaScript to Python LOCALLY
1
1
1
31,307,321
0
1
0
My server time is set to Asia/India. So whenever I try to post an image to an S3 bucket I get the following error: RequestTimeTooSkewed - The difference between the request time and the current time is too large. RequestTime: Thu, 09 Jul 2015 17:53:21 GMT, ServerTime: 2015-07-09T08:23:22Z, MaxAllowedSkewMilliseconds: 900000, RequestId: 68B8486508D2695A, HostId: g6EfiNV8uJi8JY/Y2JWCIBi7fROEa/Uw2Yaw3fw3pfAbI+ZtaFZV7PnHhZ6Yxw07. How can I change the AWS S3 bucket time zone to IST?
true
31,312,292
1.2
0
0
4
This has nothing to do with the timezone of the machine or the S3 bucket. Your machine's time is not correct, and if the machine time is off by more than 15 minutes, AWS will return an error for security reasons. Just check whether the time on the machine is correct.
0
8,668
0
1
2015-07-09T08:37:00.000
python,django,amazon-web-services,amazon-s3,boto
How to change amazon aws S3 time zone setting for a bucket
1
1
1
31,314,635
0
0
0
I'm using jira-python to automate a bunch of tasks in Jira. One thing that I find weird is that jira-python takes a long time to run. It seems like it's loading or something before sending the requests. I'm new to Python, so I'm a little confused as to what's actually going on. Before finding jira-python, I was sending requests to the Jira REST API using the requests library, and it was blazing fast (and still is, if I compare the two). Whenever I run the scripts that use jira-python, there's a good 15-second delay while 'loading' the library, and sometimes also a good 10-15 second delay sending each request. Is there something I'm missing with Python that could be causing this issue? Is there any way to keep a Python script running as a service so it doesn't need to 'load' the library each time it's run?
true
31,321,872
1.2
1
0
0
@ThePavoIC, you seem to be correct. I notice MASSIVE changes in speed if Jira has been restarted and re-indexed recently. Scripts that would take a couple minutes to run would complete in seconds. Basically, you need to make sure Jira is tuned for performance and keep your indexes up to date.
0
923
0
3
2015-07-09T15:26:00.000
python,jira-rest-api
Jira python runs very slowly, any ideas on why?
1
1
1
31,656,989
0
1
0
I have an SQS queue that is constantly being populated by a data consumer, and I am now trying to create the service that will pull this data from SQS using Python's boto. The way I designed it is that I will have 10-20 threads all trying to read messages from the SQS queue and then doing what they have to do with the data (business logic), before going back to the queue to get the next batch of data once they're done. If there's no data they will just wait until some data is available. I have two areas I'm not sure about with this design: Is it a matter of calling receive_message() with a long time_out value and, if nothing is returned in the 20 seconds (maximum allowed), just retrying? Or is there a blocking method that returns only once data is available? I noticed that once I receive a message, it is not deleted from the queue. Do I have to receive a message and then send another request after receiving it to delete it from the queue? That seems like a bit of overkill. Thanks
true
31,321,996
1.2
0
0
25
The long-polling capability of the receive_message() method is the most efficient way to poll SQS. If that returns without any messages, I would recommend a short delay before retrying, especially if you have multiple readers. You may even want to use an incremental delay so that each subsequent empty read waits a bit longer, just so you don't end up getting throttled by AWS. And yes, you do have to delete the message after you have read it or it will reappear in the queue. This can actually be very useful in the case of a worker reading a message and then failing before it can fully process the message. In that case, it would be re-queued and read by another worker. You also want to make sure the invisibility timeout of the messages is set long enough that the worker has enough time to process the message before it automatically reappears on the queue. If necessary, your workers can adjust the timeout as they are processing if it is taking longer than expected.
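A minimal boto (v2) sketch of that loop: long-poll, process, then delete; the region, queue name, and the process() function are placeholder assumptions:

```python
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.get_queue('my-queue')

def process(body):
    print(body)   # hypothetical stand-in for the business logic

while True:
    # wait_time_seconds=20 enables SQS long polling (the 20s maximum)
    messages = queue.get_messages(num_messages=10, wait_time_seconds=20)
    for msg in messages:
        process(msg.get_body())
        queue.delete_message(msg)   # delete only after successful processing
```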
0
19,484
0
25
2015-07-09T15:31:00.000
python,amazon-web-services,boto,amazon-sqs
Best practice for polling an AWS SQS queue and deleting received messages from queue?
1
1
3
31,322,766
0
0
0
I'm using requests to communicate with remote server over https. At the moment I'm not verifying SSL certificate and I'd like to fix that. Within requests documentation, I've found that: You can pass verify the path to a CA_BUNDLE file with certificates of trusted CAs. This list of trusted CAs can also be specified through the REQUESTS_CA_BUNDLE environment variable. I don't want to use system's certs, but to generate my own store. So far I'm grabbing server certificate with ssl.get_server_certificate(addr), but I don't know how to create my own store and add it there.
false
31,369,633
0.379949
0
0
2
This is actually trivial... CA_BUNDLE can be any file that you append certificates to, so you can simply append the output of ssl.get_server_certificate() to that file and it works.
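A minimal sketch of that (host and file names are placeholders): grab the server's PEM certificate, append it to a local bundle, and point requests at it. Note this only verifies cleanly when the fetched certificate can anchor the chain, e.g. a self-signed server cert:

```python
import ssl
import requests

pem = ssl.get_server_certificate(('www.example.com', 443))
with open('my_ca_bundle.pem', 'a') as f:    # append, building the bundle
    f.write(pem)

resp = requests.get('https://www.example.com/', verify='my_ca_bundle.pem')
print(resp.status_code)
```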
0
5,838
0
2
2015-07-12T15:49:00.000
python,ssl,https,python-requests,client-certificates
Adding server certificates to CA_BUNDLE in python
1
1
1
31,370,879
0
0
0
Can someone let me know: 1. How to write Selenium WebDriver code in Python that reads data from CSV files, inputs it into the fields of the application under test, and prints the results to a CSV file after execution. 2. I have written a capture-screenshot call for every statement to get the screenshots. Is there any way to capture the screenshots in a single go, like using loop statements etc.? If yes, can you post the code? Thanks for your time; any response is appreciated.
false
31,391,729
0.197375
0
0
1
Selenium tests in Python are just Python code. You can use the csv module and a normal loop to carry out these actions on the page and read the values from the new DOM. You can use loops just like normal Python to capture the screenshots, but no, I'm not going to write the code for you.
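For the shape of it, here is a minimal sketch; the CSV layout, URL, and the field locator 'q' are hypothetical placeholders, and one screenshot is saved per row inside the loop:

```python
import csv
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com/form')     # hypothetical app under test

with open('input.csv') as f:
    for i, row in enumerate(csv.reader(f)):
        field = driver.find_element_by_name('q')   # hypothetical field name
        field.clear()
        field.send_keys(row[0])                    # value from the CSV
        field.submit()
        driver.save_screenshot('step_%d.png' % i)  # one shot per iteration

driver.quit()
```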
0
629
0
0
2015-07-13T19:22:00.000
python,csv,selenium,screenshot
Selenium webdriver code for reading/Writing data from CSV in Python
1
1
1
31,391,856
0
0
0
I have set up an Import.io bulk extract that works great with say, 50 URLs. It literally zips through all of them in seconds. However, when I try to do an extract of 40,000 URLs, the extractor starts very fast for the first thousand or so, and then progressively keeps getting slower every incremental URL. By 5,000 it literally is taking about 4-5 seconds per URL. One solution that seems to work is breaking them into chunks of 1,000 URLs at a time and doing a separate bulk extract for each. However, this is very time consuming, and requires splicing back together all of the data at the end. Has anyone experienced this, and if so do they have a more elegant solution? Thanks, Mike
false
31,396,712
0.379949
1
0
4
One slightly less elegant solution would be to create a crawler, and before you run it, insert the 10k URLs in the "where to start crawling" box. Under advanced options set the crawl depth to zero; that way you will only get the pages you put in the "where to start crawling" input box. That should do the trick. Plus, the crawler has a bunch of other options like wait between pages, concurrent pages, etc.
0
155
0
4
2015-07-14T02:36:00.000
python,import.io
Import.io bulk extract slows down when more URLs are in list
1
2
2
31,427,417
0
0
0
I have set up an Import.io bulk extract that works great with say, 50 URLs. It literally zips through all of them in seconds. However, when I try to do an extract of 40,000 URLs, the extractor starts very fast for the first thousand or so, and then progressively keeps getting slower every incremental URL. By 5,000 it literally is taking about 4-5 seconds per URL. One solution that seems to work is breaking them into chunks of 1,000 URLs at a time and doing a separate bulk extract for each. However, this is very time consuming, and requires splicing back together all of the data at the end. Has anyone experienced this, and if so do they have a more elegant solution? Thanks, Mike
false
31,396,712
0
1
0
0
Mike, would you mind trying again? We have worked on the Bulk Extract; now it should be slightly slower at the beginning, but more constant. Possibly 40k is still too many, in which case you may try to split, but I did run 5k+ in a single run. Let me know how it goes!
0
155
0
4
2015-07-14T02:36:00.000
python,import.io
Import.io bulk extract slows down when more URLs are in list
1
2
2
32,215,417
0
0
0
Is there any way to constantly "listen" to a website and run some code when it updates? I'm working on earthquake data - specifically, parsing earthquake data from a site that updates and lists earthquake details in real time. So far, my only (and clunky) solution has been to use Task Scheduler to run every 30 minutes, which of course means a delay of 1-29 minutes depending on when the event happens within the 30-minute gap between runs of the code. I also thought of using some Twitter API, since the site also has an automated Twitter account tweeting details every time an earthquake happens, but again this would require constantly "listening" to the Twitter stream via Python as well. I would appreciate help, thanks.
true
31,412,919
1.2
1
0
0
is there any way to constantly "listen" to a website and run some code when it updates? If the site offers a feed or stream of updates, use it. However, if you are looking to scrape a page and trigger code on differences, then you need to poll the site like you are doing (clumsily) with TaskScheduler.
0
381
0
1
2015-07-14T17:01:00.000
python,python-2.7,twitter
constantly "listen" to website using python
1
1
1
31,413,075
0
0
0
In case sentry sends lots of similar messages, is there a way to stop it? We have a lot of clients and the sentry messages are basically all the same so sentry spams me.
true
31,423,456
1.2
1
0
0
If you're talking about notifications you can disable them per-account or entirely on a project via the integration. That said if all the messages are the same you should look into why they're not grouping. A common case would be you're using a logging integration and there's a variable in the log message itself.
0
1,004
0
0
2015-07-15T06:56:00.000
python,sentry
Stop sentry from sending messages
1
1
1
31,424,387
0
0
0
When creating an AWS security group rule for ICMP using Boto, I received the following error. I was specifying the port range as 0 to 65535, which is the way to specify all ports for TCP. ICMP code (65535) out of range (InvalidParameterValue) How do I address this?
false
31,464,223
0.664037
0
0
4
ICMP does not have ports in the protocol, unlike TCP. So when making the Boto call, use -1 for the source and destination ports to avoid the above error. AWS considers -1 to be All. Using 0 is also valid, however I haven't verified that it allows all traffic. It should, given that ICMP has no ports in the protocol.
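A minimal boto (v2) sketch of the corrected call; the region, group name, and CIDR are placeholder assumptions:

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
# For ICMP, -1/-1 means "all types/codes"; AWS treats -1 as All.
conn.authorize_security_group(group_name='my-group',
                              ip_protocol='icmp',
                              from_port=-1,
                              to_port=-1,
                              cidr_ip='0.0.0.0/0')
```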
0
840
0
1
2015-07-16T20:47:00.000
python,amazon-web-services,boto
Python Boto: ICMP code (65535) out of range (InvalidParameterValue)
1
1
1
31,464,224
0
0
0
Please bear with me as I have been reading and trying to understand HTTP and the different requests available in its protocol but there are still a few loose connections here and there. Specifically, I have been using Apache's HttpClient to send requests, but I'm unsure of a few things. When we make a request to a URI, how can we know before hand how to properly format say a PUT request? You might be trying to transmit data to fill out a form, or send an image, etc. How would you know if the server is capable of receiving that format of request?
false
31,466,493
0
0
0
0
If you try to PUT without any knowledge of the server, the request will "fail" (or not - it depends on the implementation; e.g. it can redirect you to the main page). Failure is indicated by the server response code along with headers, e.g. 405 Method Not Allowed or 400 Bad Request; or it may redirect you to the main page: 302 Found. You, as a client, must adapt to the server's API. Moreover, different requests to the same API may give you different specs: e.g. one response is gzipped, carries an ETag, and is cached while another is not; or a plain GET / will give you HTML while GET /?format=json will give you JSON.
0
25
0
0
2015-07-17T00:02:00.000
java,http,post,httpclient,python-requests
How to determine how to format an HTTP request to some server
1
1
1
31,466,587
0
1
0
My Oauth2Callback handler is able to access the Google API data I want - I want to know the best way to get this data to my other handler so it can use the data I've acquired. I figure I can add it to the datastore, or perform a redirect with the data. Is there a "best way" of doing this? For a redirect, is there a better way than adding it to the query string?
false
31,496,583
0.197375
0
0
1
I think I found a better way of doing it: I just use the OAuth callback to redirect only, with no data, and then in the redirect handler I access the API data.
0
37
1
0
2015-07-18T23:34:00.000
python-2.7,google-app-engine,oauth-2.0
How do I redirect and pass my Google API data after handling it in my Oauth2callback handler on Google App Engine
1
1
1
31,496,649
0
0
0
I am writing a Python script that imports the ssl library; I need to create an SSL socket. However, I find that I will need to use a modified version of the OpenSSL library. The author of the modified version told me that the underlying implementation of the ssl module uses the OpenSSL library. The author provided me with a file named ssl_lib.c. I searched the folder of the OpenSSL library that I installed, openssl-0.9.8k_X64, but I could not find any ssl_lib.c file. Also, the author refers to OpenSSL as openssl-1.0.1e, which is a different version than mine. My question: How can I make my Python script use a modified version of OpenSSL? Please consider that I am using a Windows x64 system and Python 2.7.
false
31,530,442
0.197375
0
0
1
You'll need to install the modified OpenSSL. Python merely has bindings, which will then call the functions in the compiled OpenSSL libraries. If the modified OpenSSL library is installed and in your path completely replacing the original OpenSSL library, then Python will "use" it. This assumes that the modified library is in fact compatible with the original OpenSSL. On a side-note, using modified cryptographic libraries is a terrible idea from a security perspective.
0
131
0
0
2015-07-21T04:33:00.000
python,python-2.7,ssl,openssl
How can I use a modified openssl library (written in C) in my python code?
1
1
1
31,530,967
0
0
0
I use WebSockets to connect 10,000 clients to the server. But when some of the clients disconnect, the server does not detect this and keeps the connection. So when clients connect to the server again, a new connection is established and the number of connections on the server grows very large. If I don't restart the server, the connection count keeps increasing...
false
31,553,081
0
0
0
0
Your server's websockets have callbacks for error as well as close. Are you monitoring both? Sockets do not have to tell the other end they are gone - naturally, because you can close a browser window and the server will never know. When you open them you can set a timeout that will cause the socket to close if it doesn't see activity in that time. You could also 'ping' the connections to force an error/close if they disconnect. Lastly, you could have a session GUID that both the client and the browser know about (cookie or localStorage). If a client reconnects and the GUID shows an active connection on the server, you can close that connection before opening a fresh one.
0
98
0
0
2015-07-22T02:55:00.000
python,websocket,apache-kafka
When using WebSocket connections, sometimes clients free the connection but the server doesn't
1
1
1
31,553,699
0
0
0
I am getting the following exception while trying to install using pip: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', error(111, 'Connection refused'))': /simple/<package>/ Where does pip install the packages from? How do I proxy or use alternate internal site to get these packages?
false
31,559,638
0.099668
0
0
1
This is what worked for me: unset all_proxy (optional, in case none are set), then pip install 'requests[socks]' inside the venv.
1
21,181
0
7
2015-07-22T09:49:00.000
python,pip,pypi
pip install and custom index url
1
1
2
51,655,921
0
1
0
Basically, I have a regex rule for following pages. Each page has 50 links. When I hit a link that is too old (based on a pre-defined date-time), I want to tell Scrapy to stop following more pages, but NOT stop entirely: it must continue to scrape the links it has already decided to scrape (complete all Request objects created), JUST not follow any more links. So the program will eventually grind to a stop (when it's done scraping all the links). Is there any way I can do this inside the spider?
true
31,565,422
1.2
0
0
0
Scrapy's CrawlSpider has an internal _follow_links member variable which is not yet documented (experimental as of now). Setting self._follow_links = False will tell Scrapy to stop following more links, but it will continue to finish all the Request objects it has already created.
0
433
0
1
2015-07-22T14:03:00.000
python,scrapy
How to tell scrapy crawler to STOP following more links dynamically?
1
1
3
31,590,603
0
0
0
Is there a way in Selenium web driver (python, Firefox) to check that the current window is in private mode(private window so the cookies won't be cached) or it is just a normal window?
false
31,576,343
0.379949
0
0
4
Selenium actually already runs in private mode by default: every time you start any driver via Selenium, it creates a brand new anonymous profile. This is, of course, unless you have specified an already-created profile.
0
452
0
1
2015-07-23T00:40:00.000
javascript,python,firefox,selenium
Selenium: Check if the current window is a private window or normal window?
1
1
2
31,576,491
0
0
0
Using python 2.7, I need to convert the following curl command to execute in python. curl -b /tmp/admin.cookie --cacert /some/cert/location/serverapache.crt --header "X-Requested-With: XMLHttpRequest" --request POST "https://www.test.com" I am relatively new to Python and are not sure how to use the urllib library or if I should use the requests library. The curl options are especially tricky for me to convert. Any help will be appreciated.
false
31,582,012
0
1
0
0
Can you stay on the command line? If yes, try the Python lib named "pexpect". It's pretty useful: it lets you run commands as if on a terminal from a Python program, and interact with the terminal!
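As a minimal sketch of that route, pexpect.run can execute the exact curl command from the question (paths and URL are as given there); this delegates all the tricky curl options to curl itself rather than translating them:

```python
import pexpect

output = pexpect.run('curl -b /tmp/admin.cookie '
                     '--cacert /some/cert/location/serverapache.crt '
                     '--header "X-Requested-With: XMLHttpRequest" '
                     '--request POST "https://www.test.com"')
print(output)   # raw response body (and any curl output) as a string
```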
0
205
0
1
2015-07-23T08:28:00.000
python,curl
curl and curl options to python conversion
1
1
1
31,584,263
0
0
0
I'm looking to backup a subreddit to disk. So far, it doesn't seem to be easily possible with the way that the Reddit API works. My best bet at getting a single JSON tree with all comments (and nested comments) would seem to be storing them inside of a database and doing a pretty ridiculous recursive query to generate the JSON. Is there a Reddit API method which will give me a tree containing all comments on a given post in the expected order?
false
31,600,249
0
1
0
0
The number of comments you get from the API has a hard limit, for performance reasons; to ensure you're getting all comments, you have to parse through the child nodes and make additional calls as necessary. Be aware that the subreddit listing will only include the latest 1000 posts, so if your target subreddit has more than that, you probably won't be able to obtain a full backup anyways.
1
429
0
2
2015-07-24T00:23:00.000
python,json,reddit
Get a JSON tree of all comments of a post?
1
1
1
31,689,325
0
0
0
I opened Python code from GitHub. I assumed it was Python 2.x and got the above error when I tried to run it. From the reading I've seen, Python 3 has deprecated urllib itself and replaced it with a number of libraries, including urllib.request. It looks like the code was written in Python 3 (a confirmation from someone who knows would be appreciated). At this point I don't want to move to Python 3 - I haven't researched what it would do to my existing code. Thinking there should be a urllib module for Python 2, I searched Google (using "python2 urllib download") and did not find one. (It might have been hidden in the many answers, since urllib includes downloading functionality.) I looked in my Python27/lib directory and didn't see it there. Can I get a version of this module that runs on Python 2.7? Where and how?
false
31,601,238
0.141893
0
0
5
Change from urllib.request import urlopen to from urllib import urlopen. I was able to solve this problem by changing it like this, for Python 2.7 on macOS 10.14.
0
108,467
0
22
2015-07-24T02:28:00.000
python,urllib
Python 2.7.10 error "from urllib.request import urlopen" no module named request
1
3
7
54,285,222
0