Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I've written my own mail.py module in Spyder (Anaconda). I want to import this .py file in other Python (Spyder) files just by doing 'import mail'.
I searched on the internet and couldn't find a clearly solution. | false | 33,409,424 | 0 | 0 | 0 | 0 | when calling any function from another file, it should be noted to not import any library inside the function | 1 | 43,941 | 0 | 11 | 2015-10-29T08:39:00.000 | python,anaconda | Import own .py files in anaconda spyder | 1 | 4 | 7 | 64,629,354 | 0 |
0 | 0 | I've written my own mail.py module in Spyder (Anaconda). I want to import this .py file in other Python (Spyder) files just by doing 'import mail'.
I searched on the internet and couldn't find a clearly solution. | false | 33,409,424 | 0.028564 | 0 | 0 | 1 | Searched for an answer for this question to. To use a .py file as a import module from your main folder, you need to place both files in one folder or append a path to the location. If you storage both files in one folder, then check the working directory in the upper right corner of the spyder interface. Because of the wrong working directory you will see a ModuleNotFoundError. | 1 | 43,941 | 0 | 11 | 2015-10-29T08:39:00.000 | python,anaconda | Import own .py files in anaconda spyder | 1 | 4 | 7 | 69,061,619 | 0 |
0 | 0 | I've written my own mail.py module in Spyder (Anaconda). I want to import this .py file in other Python (Spyder) files just by doing 'import mail'.
I searched on the internet and couldn't find a clearly solution. | false | 33,409,424 | 0 | 0 | 0 | 0 | There are many options, e.g.
Place the mail.py file alongside the other Python files (this works because the current working directory is on the PYTHONPATH).
Save a copy of mail.py in the anaconda python environment's "/lib/site-packages" folder so it will be available for any python script using that environment. | 1 | 43,941 | 0 | 11 | 2015-10-29T08:39:00.000 | python,anaconda | Import own .py files in anaconda spyder | 1 | 4 | 7 | 33,413,809 | 0 |
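A third option, sketched here as an assumption rather than part of the original answer, is to append the module's folder to sys.path before importing (the folder path below is only an example):

```python
import os
import sys

# Hypothetical location of the folder that contains mail.py -- adjust to your setup.
MODULE_DIR = os.path.expanduser("~/my_python_modules")

# Make the folder importable for this session, then import the module as usual.
if MODULE_DIR not in sys.path:
    sys.path.append(MODULE_DIR)

import mail  # now resolves to ~/my_python_modules/mail.py
```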
0 | 0 | I've written my own mail.py module in Spyder (Anaconda). I want to import this .py file in other Python (Spyder) files just by doing 'import mail'.
I searched on the internet and couldn't find a clearly solution. | false | 33,409,424 | 0.085505 | 0 | 0 | 3 | I had the same problem, my files were in same folder, yet it was throwing an error while importing the "to_be_imported_file.py".
I had to run "to_be_imported_file.py" separately before importing it into another file.
I hope it works for you too. | 1 | 43,941 | 0 | 11 | 2015-10-29T08:39:00.000 | python,anaconda | Import own .py files in anaconda spyder | 1 | 4 | 7 | 56,931,134 | 0 |
0 | 0 | Anyone who knows the port and host of a HBase Thrift server, and who has access to the network, can access HBase. This is a security risk. How can the client access to the HBase Thrift server be made secure? | false | 33,446,872 | 0 | 0 | 0 | 0 | My sysadmin told me that in theory he could install an HBase Thrift Server on one of the Hadoop edge nodes that are blocked off, and only open the port to my server via ACLs. He however has no intention of doing this (and I do not either). As this is not a suitable answer I'll leave the question open. | 0 | 929 | 0 | 1 | 2015-10-31T00:46:00.000 | python,hbase,thrift,happybase | How to secure client connections to an HBase Thrift Server? | 1 | 1 | 2 | 35,449,021 | 0 |
0 | 0 | I have made a simple IRC bot for myself in Python which works great, but now some friends have asked me if the bot can join their IRC channels too. Their IRC channels are very active (it is Twitch chat, an IRC wrapper), which means a lot of messages. I want them to use my bot, but I have no idea how it will perform; this is the first bot I've made.
Right now my code is like this:
Connect to IRC server & channel
while true:
Receive data from the socket (4096, max data to be received at once)
do something with data received
What changes should I do to make it perform better?
1. Should I have a sleep function in the loop?
2. Should I use threads?
3. Any general dos and don'ts?
Thank you for reading my post. | false | 33,447,032 | 0.197375 | 1 | 0 | 1 | Threading is one option but it doesn't scale beyond a certain point (google Python GIC limitation). Depending on how much scaling you want to do, then you need to do multi-process (launch multiple instances).
One pattern is to have a pool of worker threads that process a queue of things to do. There's a lot of overhead to creating and destroying threads in most languages. | 0 | 186 | 0 | 0 | 2015-10-31T01:13:00.000 | python,performance,scaling,bots,irc | IRC Bot, performance and scaling | 1 | 1 | 1 | 33,447,076 | 0 |
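A rough sketch of that worker-pool pattern (written for Python 3; the handler logic and pool size are placeholders, not from the original answer):

```python
import queue
import threading

work_queue = queue.Queue()

def handle_message(message):
    # Placeholder for your actual IRC message handling.
    print("handling:", message)

def worker():
    # Each worker pulls messages off the shared queue instead of
    # creating and destroying a thread per message.
    while True:
        message = work_queue.get()
        if message is None:  # sentinel used to shut a worker down
            break
        handle_message(message)
        work_queue.task_done()

# Start a small, fixed pool of workers once at startup.
workers = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for t in workers:
    t.start()

# The socket-reading loop then only needs to do: work_queue.put(data)
```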
0 | 0 | OK, I am trying to use client certificates to authenticate a python client to an Nginx server. Here is what I tried so far:
Created a local CA
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt
Created server key and certificate
openssl genrsa -des3 -out server.key 1024
openssl rsa -in server.key -out server.key
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
Used similar procedure to create a client key and certificate
openssl genrsa -des3 -out client.key 1024
openssl rsa -in client.key -out client.key
openssl req -new -key client.key -out client.csr
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
Add these lines to my nginx config
server {
listen 443;
ssl on;
server_name dev.lightcloud.com;
keepalive_timeout 70;
access_log /usr/local/var/log/nginx/lightcloud.access.log;
error_log /usr/local/var/log/nginx/lightcloud.error.log;
ssl_certificate /Users/wombat/Lightcloud-Web/ssl/server.crt;
ssl_certificate_key /Users/wombat/Lightcloud-Web/ssl/server.key;
ssl_client_certificate /Users/wombat/Lightcloud-Web/ssl/ca.crt;
ssl_verify_client on;
location / {
uwsgi_pass unix:///tmp/uwsgi.socket;
include uwsgi_params;
}
}
created a PEM client file
cat client.crt client.key ca.crt > client.pem
created a test python script
import ssl
import http.client
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.load_verify_locations("ca.crt")
context.load_cert_chain("client.pem")
conn = http.client.HTTPSConnection("localhost", context=context)
conn.set_debuglevel(3)
conn.putrequest('GET', '/')
conn.endheaders()
response = conn.getresponse()
print(response.read())
And now I get 400 The SSL certificate error from the server. What am I doing wrong? | true | 33,504,746 | 1.2 | 1 | 0 | 8 | It seems that my problem was that I did not create the CA properly and wasn't signing keys the right way. A CA cert needs to be signed and if you pretend to be top level CA you self-sign your CA cert.
openssl req -new -newkey rsa:2048 -keyout ca.key -out ca.pem
openssl ca -create_serial -out cacert.pem -days 365 -keyfile ca.key -selfsign -infiles ca.pem
Then you use ca command to sign requests
openssl genrsa -des3 -out server.key 1024
openssl req -new -key server.key -out server.csr
openssl ca -out server.pem -infiles server.csr | 0 | 16,933 | 0 | 9 | 2015-11-03T16:50:00.000 | python,ssl,nginx | Doing SSL client authentication is python | 1 | 1 | 1 | 33,526,221 | 0 |
0 | 0 | I'm having trouble importing tweepy. I've looked through so many previous questions and still can't find a correct solution. I think it has something to do with how tweepy is being downloaded when I install but I'm not sure. I get an import error saying that "tweepy is not a package".
I have the tweepy library connected to the interpreter and all that, but it is saved as a compressed EGG file instead of a folder like the rest of my packages. I think that has something to do with it, but I'm not too sure.
Also, tweepy works in my command line but not in eclipse. | false | 33,508,572 | 0 | 1 | 0 | 0 | If you recently installed the package maybe just reconfigure your pydev (Window->Preferences->PyDev->Interpreters->Python Interpreter: Quick Auto Configure). | 0 | 321 | 0 | 1 | 2015-11-03T20:35:00.000 | python,eclipse,tweepy | Python import error in eclipse (Package works in command line but not eclipse) | 1 | 1 | 1 | 33,510,764 | 0 |
1 | 0 | When I use Selenium I can see the Browser GUI, is it somehow possible to do with scrapy or is scrapy strictly command line based? | false | 33,510,814 | 0.066568 | 0 | 0 | 1 | Scrapy by itself does not control browsers.
However, you could start a Selenium instance from a Scrapy crawler. Some people design their Scrapy crawler like this. They might process most pages only using Scrapy but fire Selenium to handle some of the pages they want to process. | 0 | 1,636 | 0 | 1 | 2015-11-03T23:09:00.000 | python,selenium,scrapy | Can scrapy control and show a browser like Selenium does? | 1 | 1 | 3 | 33,521,521 | 0 |
0 | 0 | I have searched for long time but I could not find how to disable cookies for phantomjs using selenium with python . I couldn't understand the documentation of phantomjs.Please someone help me. | true | 33,514,313 | 1.2 | 0 | 0 | 4 | Documentation suggests this driver.cookies_enabled = False, you can use it. | 0 | 730 | 0 | 0 | 2015-11-04T05:26:00.000 | python,selenium,cookies,phantomjs | disabling Cookies on phantomjs using selenium with python | 1 | 1 | 1 | 33,516,899 | 0 |
1 | 0 | I am using Selenium2Library '1.7.4' and Robot Framework 2.9.2 (Python 2.7.8 on win32). If I try to give locator as jQuery, the following exception occurs: WebDriverException: Message: unknown error: jQuery is not defined. Please advise which version of Selenium2Library and 'Robot Framework' combination works to identify jQuery as a locator. | false | 33,529,029 | 0 | 0 | 0 | 0 | From Selenium 3.0 - Gecko driver is required to run automation scripts in firefox
Selenium version less than 3.0 works.
Try with the following versions:
robotframework (3.0.2)
robotframework-selenium2library (1.8.0)
selenium (2.53.1) | 0 | 4,119 | 0 | 3 | 2015-11-04T18:09:00.000 | jquery,python,selenium-webdriver,robotframework | WebDriverException: Message: unknown error: jQuery is not defined error in robot framework | 1 | 1 | 1 | 43,115,077 | 0 |
0 | 0 | Im trying to open firefox console through Selenium with Python. How can I open firefox console with python selenium? Is it possible to send keys to the driver or something like that? | false | 33,546,911 | 0.033321 | 0 | 0 | 1 | This works:
ActionChains(driver).key_down(Keys.F12).key_up(Keys.F12).perform()
Without firebug installed at least :) | 0 | 9,299 | 0 | 6 | 2015-11-05T14:19:00.000 | python,firefox,selenium,firefox-developer-tools | How to open console in firefox python selenium? | 1 | 1 | 6 | 33,547,119 | 0 |
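A self-contained version of that one-liner might look like this (the URL is just an example, not from the original answer):

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("https://example.com")  # example page to load first

# Press and release F12 to toggle the developer tools / console.
ActionChains(driver).key_down(Keys.F12).key_up(Keys.F12).perform()
```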
1 | 0 | I have access to a S3 bucket. I do not own the bucket. I need to check if new files were added to the bucket, to monitor it.
I saw that buckets can fire events and that it is possible to make use of Amazon's Lambda to monitor and respond to these events. However, I cannot modify the bucket's settings to allow this.
My first idea was to sift through all the files and get the latest one. However, there are a lot of files in that bucket and this approach proved highly inefficient.
Concrete questions:
Is there a way to efficiently get the newest file in a bucket?
Is there a way to monitor uploads to a bucket using boto?
Less concrete question:
How would you approach this problem? Say you had to get the newest file in a bucket and print its name, how would you do it?
Thanks! | true | 33,551,143 | 1.2 | 1 | 0 | 1 | You are correct that AWS Lambda can be triggered when objects are added to, or deleted from, an Amazon S3 bucket. It is also possible to send a message to Amazon SNS and Amazon SQS. These settings needs to be configured by somebody who has the necessary permissions on the bucket.
If you have no such permissions, but you have the ability to call GetBucket(), then you can retrieve a list of objects in the bucket. This returns up to 1000 objects per API call.
There is no API call available to "get the newest files".
There is no raw code to "monitor" uploads to a bucket. You would need to write code that lists the content of a bucket and then identifies new objects.
How would I approach this problem? I'd ask the owner of the bucket to add some functionality to trigger Lambda/SNS/SQS, or to provide a feed of files. If this wasn't possible, I'd write my own code that scans the entire bucket and have it execute on some regular schedule. | 0 | 4,530 | 0 | 2 | 2015-11-05T17:34:00.000 | python,api,amazon-web-services,amazon-s3,boto | How to monitor a AWS S3 bucket with python using boto? | 1 | 1 | 1 | 33,590,521 | 0 |
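If you do end up scanning the whole bucket on a schedule, one way to pick the newest object, sketched here with boto3 rather than the older boto and with a placeholder bucket name, is to compare the LastModified timestamps:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder name

newest = None
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        # Keep the object with the most recent LastModified timestamp.
        if newest is None or obj["LastModified"] > newest["LastModified"]:
            newest = obj

if newest:
    print(newest["Key"], newest["LastModified"])
```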
0 | 0 | I use tornado websocket send/recv message, the client send json message, and server recv message and json parse, but why the server get message which is mutil json message, such as {"a":"v"}{"a":"c"}, how to process this message | false | 33,562,499 | 0.099668 | 0 | 0 | 1 | Maybe you should delimit the messages you send so it is easy to split them up - in this case you could add a \n, obviously the delimiter mustn't happen within the message. Another way would be to prefix each message with its length in also a clearly-delimited way, then the receiver reads the length then that number of bytes and parses it. | 0 | 247 | 1 | 0 | 2015-11-06T08:32:00.000 | python,websocket,tornado | tornado websocket get multi message when on_message called | 1 | 1 | 2 | 33,562,632 | 0 |
0 | 0 | Running selenium tests in python and chrome. Between each test, it logs that its creating a new http connection.
Is it possible to create a connection pool so that a new one isn't created for each test? | false | 33,568,681 | 0 | 0 | 0 | 0 | Based on the low details provided on your question, and that you're mentioning that this happens only between tests, leads me to think that you have something in your setUp or tearDown methods.
Check what gets executed there and see whether that is the source of your problem. | 0 | 433 | 0 | 0 | 2015-11-06T14:09:00.000 | python,selenium,selenium-webdriver,webdriver,chrome-web-driver | Connection pool for selenium in python | 1 | 1 | 1 | 33,569,953 | 0 |
1 | 0 | So I'm trying to get some data from a search engine. This search engine returns some results and then, after for example 2 seconds, it changes its HTML and shows some approximate results.
I want to get these approximate results, but that's the problem: I use requests.get, which gets the first response and does not wait those 2 seconds. So I'm curious whether it is possible.
I don't want to use Selenium since it has to be as light as possible, because it will be part of a web page.
So my question is: Is it possible to make requests.get wait for another data? | false | 33,570,762 | 0 | 0 | 0 | 0 | No, since requests is just an HTTP client.
It looks like the page is being modified by JS after finishing other request. You should figure out, what request changes the page and use it (by network inspector in Chrome, for example). | 0 | 455 | 0 | 0 | 2015-11-06T15:59:00.000 | python,get,python-requests,wait | How to wait for a response | 1 | 1 | 1 | 33,570,899 | 0 |
0 | 0 | How can i use the command:
/python -m http.server and make the server reachable through my public ip? I know this is NOT safe, but I just wanna try some things out. | true | 33,597,798 | 1.2 | 0 | 0 | 0 | Use it as you normally do.
You'd connect to it by its external IP address. You might need to setup your router to allow that. Read up on port forwarding, for example. | 0 | 74 | 0 | 0 | 2015-11-08T19:01:00.000 | python,python-3.x | Using command python -m http.server and make server reachable by public ip? | 1 | 1 | 1 | 33,597,842 | 0 |
1 | 0 | I need to run a diff mechanism on two HTML page sources to kick out all the generated data (like user session, etc.).
I wondering if there is a python module that can do that diff and return me the element that contains the difference (So I will kick him in the rest of my code in another sources) | false | 33,608,604 | 0 | 0 | 0 | 0 | You can use the difflib module. It's available as a part of standard python library. | 0 | 308 | 0 | 0 | 2015-11-09T11:58:00.000 | python,diff,lxml,difflib | How to diff between two HTML codes? | 1 | 1 | 1 | 33,608,719 | 0 |
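A small sketch of what using difflib for this could look like (file names and variables are illustrative only):

```python
import difflib

# Read the two page sources as lists of lines.
html_a = open("page_a.html", encoding="utf-8").read().splitlines()
html_b = open("page_b.html", encoding="utf-8").read().splitlines()

# unified_diff yields only the lines that differ, which you can then map
# back to the elements that contain generated (session-specific) data.
for line in difflib.unified_diff(html_a, html_b, lineterm=""):
    print(line)
```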
0 | 0 | Dear Test Automation Engineers,
I am implementing Page Object Pattern in Python Language using Pycharm tool.
My concerns are followings:
Structure of Project - 2 tier:
Under the project folder I want 2 packages (folders, in Python/PyCharm): one folder should contain all the tests to execute, while the other package should contain element locators, etc. [I would appreciate it if you could share a screenshot of the project's structure hierarchy]
I am facing a problem calling element locators from the other package (folder).
Locators should be organized per page, not all of the project's locators in one file (that creates a mess! Please share best approaches).
IMP: I don't want locators files(.py) and Testcases in one folder, should be in separate folders.
I have gone through few examples on web but they are not 2-tier and don't follow Page object model structure project exactly. | false | 33,631,661 | 0 | 0 | 0 | 0 | Just to be clear, you'd like to use that package of locators and helpers on your current web app project and on future web app projects, right?
If so, then it's a case of Python Packaging. Not the simplest topic, but well-documented and very standard (nothing unique to PyCharm.)
If not, and you just want your current project to be able to import that folder, then you need to get that directory on your PYTHONPATH. In your PyCharm Run Configuration, add that path to the run configuration's PYTHONPATH. | 1 | 362 | 0 | 1 | 2015-11-10T13:56:00.000 | python,selenium-webdriver,automated-tests,pycharm,browser-automation | SeleniumWebdriver: Implement Page Object Pattern in Python using Pycharm tool | 1 | 1 | 1 | 33,639,639 | 0 |
0 | 0 | When stepping thru the example python code for creating and retrieving data using the youtube-reporting-api, when I get to the retrieve_report.py code that prompts me to:
Please enter the report URL to download:
What is supposed to be entered here? A specific example would be most helpful. | false | 33,660,279 | 0.379949 | 0 | 0 | 2 | Apologies, download url's are provided, not sure why it wasn't showing in the first place, maybe copy and pasted the code incorrectly. It is displayed out of the retrieve_reports function.
It looks like the actual problem is that I did not read the docs thoroughly! reports are not available for 48 hours!
It would be nice if we had a development mode or something, pointing at non-prod youtube environment where we could get reports immediately, am I missing something that could provide this? | 0 | 356 | 0 | 0 | 2015-11-11T21:35:00.000 | python,youtube,youtube-api,youtube-analytics-api | youtube-reporting-api retrieve_reports.py "report URL to download" | 1 | 1 | 1 | 33,699,139 | 0 |
1 | 0 | I'm writing some code to interact with an HP Helion Eucalyptus 4.2 cloud server.
At the moment I'm using boto 2.38.0, but I discovered that also exists
the boto3 version.
Which version should I use in order to keep the code up with the times?
I mean, It seems that boto3's proposal is a ground-up rewrite more focused
on the "official" Amazon Web Services (AWS). | false | 33,677,594 | 0 | 0 | 0 | 0 | 2.38 is the right version. boto3 is something totally different and I don't have experience with it. | 0 | 189 | 0 | 1 | 2015-11-12T17:28:00.000 | python,boto,eucalyptus | Proper version of boto for Eucalyptus cloud | 1 | 1 | 2 | 33,695,098 | 0 |
0 | 0 | I have 10+ test cases at the moment and planning on creating several more. That being said is there a way that I can modify the URL variable once and that would change the variable in all my other scripts? I have this in all my of test scripts:
class TestCase1(unittest.TestCase):
def setUp(self):
self.driver = webdriver.Firefox()
self.driver.implicitly_wait(30)
self.base_url = "http://URL"
self.verificationErrors = []
self.accept_next_alert = True
I want to be able to be able to modify self.base_url = http://URL. But I don't want to have to do that 10+ times. | true | 33,678,351 | 1.2 | 1 | 0 | 1 | There are 2 good approaches to doing this,
1) Using a Configuration Manager, a singleton object that stores all your settings.
2) Using a basetest, a single base test that all your tests inherit from.
My preference is towards a Configuration Manager. And within that configuration manager, you can put your logic for grabbing the base url form configuration files, system environments, command line params, etc... | 0 | 1,381 | 0 | 0 | 2015-11-12T18:09:00.000 | python,unit-testing,url,selenium | Python Selenium Having to Change URL Variable On All of My Test Cases | 1 | 1 | 2 | 36,340,048 | 0 |
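For illustration, a base-test version of the setup in the question might look like this (a sketch only; the URL is a placeholder you change in one place):

```python
import unittest
from selenium import webdriver

BASE_URL = "http://example.com"  # change this once; every test picks it up

class BaseTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = BASE_URL
        self.verificationErrors = []
        self.accept_next_alert = True

    def tearDown(self):
        self.driver.quit()

class TestCase1(BaseTest):
    def test_home_page(self):
        # Every test case inherits the shared setUp/tearDown and base_url.
        self.driver.get(self.base_url)
```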
0 | 0 | If a socket program runs on a port (say 6053) and the rule is not added in the firewall, the functions recv, read and recvfrom are blocked.
How do we check this in C or python and report Port not opened error on linux machines. | false | 33,692,949 | 0.291313 | 0 | 0 | 3 | Try to connect on that port using socket.connect(), if connection is not successful, then show message that Port not opened. | 0 | 1,294 | 0 | 4 | 2015-11-13T12:37:00.000 | python,c,sockets | How to know whether a port is open/close in socket programming? | 1 | 2 | 2 | 33,693,102 | 0 |
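A minimal sketch of that check in Python (the host and port below are just examples):

```python
import socket

def port_is_open(host, port, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        # connect_ex returns 0 on success instead of raising an exception.
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()

if not port_is_open("127.0.0.1", 6053):
    print("Port not opened")
```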
0 | 0 | If a socket program runs on a port (say 6053) and the rule is not added in the firewall, the functions recv, read and recvfrom are blocked.
How do we check this in C or python and report Port not opened error on linux machines. | false | 33,692,949 | 0.099668 | 0 | 0 | 1 | Tools like nmap can help in determining whether the particular port is open or closed.
TCP :
nmap uses techniques like the TCP SYN scan or TCP connect scan, where the server will reply with an ACK-RST packet to the SYN request in case of a closed port. You can notice that it is determined at the time of the 3-way handshake (connection establishment) itself.
UDP : nmap also facilitates the UDP scan, where ICMP based 'Port Unreachable' packet shall be returned in case the UDP packet arrives on a closed UDP port (This also depends on the stack in the OS). Unlike TCP, the UDP is not a connection-based protocol and ICMP is also connection-less, so you might need to send some significant number of UDP packets for a short interval and evaluate based on the responses or some similar logic.
You can arrive on similar technique/logic and determine whether the particular port is open or closed and flash appropriate message for user. | 0 | 1,294 | 0 | 4 | 2015-11-13T12:37:00.000 | python,c,sockets | How to know whether a port is open/close in socket programming? | 1 | 2 | 2 | 33,704,573 | 0 |
0 | 0 | I'm writing a function that receives a graph as input.
The very first thing I need to do is determine the order of the graph (that is, the number of vertices in the graph).
I mean, I could use g.summary() (which returns a string that includes the number of vertices), but then I'd have to parse the string to get at the number of vertices -- and that's just nasty.
To get the number of edges I'm using len(g.get_edgelist()), which works. But there is no g.get_vertexlist(), so I can't use the same method.
Surely there is an easy way to do this that doesn't involve parsing strings. | true | 33,706,378 | 1.2 | 0 | 0 | 25 | g.vcount() is a dedicated function in igraph that returns the number of vertices. Similarly, g.ecount() returns the number of edges, and it is way faster than len(g.get_edgelist()) as it does not have to construct the full edge list in advance. | 1 | 18,012 | 0 | 12 | 2015-11-14T08:02:00.000 | python,igraph | How do I find the number of vertices in a graph created by iGraph in python? | 1 | 1 | 3 | 33,711,279 | 0 |
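For instance, a tiny sketch using one of igraph's built-in generators:

```python
from igraph import Graph

g = Graph.Ring(10)   # example graph with 10 vertices in a cycle
print(g.vcount())    # number of vertices -> 10
print(g.ecount())    # number of edges    -> 10
```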
0 | 0 | I have a script that is using the Live Connect REST APIs to refresh an OAuth 2.0 access token. The script has been working without problems for a couple of years, but recently broke with an apparent change in Live Connect API URLs.
Originally, I used these URLs to perform OAuth authentication:
_https://login.live.com/oauth20_authorize.srf
_https://login.live.com/oauth20_token.srf
Yesterday, when attempting to run the script I received the error:
hostname 'login.live.com' doesn't match u'api.login.live.com'
So, I changed the url to "api.login.live.com" but then received a 404 during the request as _https://api.login.live.com/oauth20_token.srf doesn't seem to exist.
Interestingly, _https://login.live.com/oauth20_token.srf does yield the expected result when accessed via the browser.
Any ideas on what might be going on?
Potentially interesting data:
Browser is Chrome running on Windows 10
Script is written in Python 2.7 using the requests 1.0.4 package
(Note that my reputation doesn't allow for more than 2 links, thus the funky decoration). | true | 33,712,269 | 1.2 | 0 | 0 | 0 | Should someone find themselves in a similar situation, the fix was to add the parameter "verify=False" when calling requests.post. | 0 | 148 | 0 | 0 | 2015-11-14T19:13:00.000 | python,oauth,live-sdk | Live Connect: Unable to refresh OAuth 2.0 token due to SSL and 404 Errors | 1 | 1 | 1 | 34,280,063 | 0 |
0 | 0 | I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo? | true | 33,729,454 | 1.2 | 1 | 0 | 6 | Split them out into a configuration file that you don’t include, or replace them with placeholders and don’t commit the actual values, using git add -p. The first option is better.
The configuration file could consist of a basic .py file credentials.py in which you define the needed private credentials in any structure you consider best. (a dictionary would probably be the most suitable).
You can use the sensitive information by importing the structure in this file and accessing the contents. Others users using the code you have created should be advised to do so too.
The hiding of this content is eventually performed with your .gitignore file. In it, you simply add the filename in order to exclude it from being uploaded to your repository. | 0 | 1,206 | 0 | 0 | 2015-11-16T06:16:00.000 | python,git,github | GitHub public repo with sensitive information? | 1 | 3 | 5 | 33,729,510 | 0 |
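As a sketch of that layout (the file name and dictionary keys are just examples):

```python
# credentials.py  (listed in .gitignore, never committed)
CREDENTIALS = {
    "consumer_key": "xxx",
    "consumer_secret": "xxx",
    "access_token": "xxx",
    "access_token_secret": "xxx",
}

# main.py -- imports the secrets from the untracked file
from credentials import CREDENTIALS

api_key = CREDENTIALS["consumer_key"]
```

The matching `.gitignore` then simply contains the line `credentials.py`.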
0 | 0 | I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo? | false | 33,729,454 | 0.039979 | 1 | 0 | 1 | The twitter API keys are usually held in a JSON file. So when your uploading your repository you can modify the .gitignore file to hide the .json files. What this does is it will not upload those files to the git repository.
Your other option is obviously going for private repositories which will not be the solution in this case. | 0 | 1,206 | 0 | 0 | 2015-11-16T06:16:00.000 | python,git,github | GitHub public repo with sensitive information? | 1 | 3 | 5 | 33,729,574 | 0 |
0 | 0 | I'm developing a Python script but I need to include my public and secret Twitter API key for it to work. I'd like to make my project public but keep the sensitive information secret using Git and GitHub. Though I highly doubt this is possible, is there any way to block out that data in a GitHub public repo? | false | 33,729,454 | 0.158649 | 1 | 0 | 4 | No. Instead, load the secret information from a file and add that file to .gitignore so that it will not be a part of the repository. | 0 | 1,206 | 0 | 0 | 2015-11-16T06:16:00.000 | python,git,github | GitHub public repo with sensitive information? | 1 | 3 | 5 | 33,729,502 | 0 |
0 | 0 | Having the files websocket_server.py and websocket_service.py, to run the websocket server I do python websocket_server.py.
How show I configure custom file watcher so when I modify webscoket_service.py, my websocket server being restarted | false | 33,755,337 | 0.197375 | 0 | 0 | 1 | You can simply pass autoreload=True or debug=True (which does autoreload and some other things) to your Application constructor. | 0 | 96 | 0 | 0 | 2015-11-17T11:05:00.000 | python,python-2.7,websocket,tornado | Create file watcher that will restart websocket server when specific files modified | 1 | 1 | 1 | 33,763,753 | 0 |
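For example, a minimal sketch of passing autoreload to the Application (the handler here is a placeholder, not from the original question):

```python
import tornado.ioloop
import tornado.web
import tornado.websocket

class EchoSocket(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        self.write_message(message)

application = tornado.web.Application(
    [(r"/ws", EchoSocket)],
    autoreload=True,   # watches imported modules and restarts the server on change
)

application.listen(8888)
tornado.ioloop.IOLoop.current().start()
```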
0 | 0 | I'm making my first online game in Python. I decided to use Python because of its simplicity with sockets. I need to find a method to coordinate all the existing game objects, like players, items and monsters, with the server and the client computers.
Method 1:
Make every entity (player, monster, NPC) run in its own thread and use its own socket to communicate with the copies on the client computers. For example, if I am a player and attack a monster, the monster will send the order to drain health points to its corresponding copy on each computer and on the server.
Method 2:
Make one socket handled by the main function, its purpose would be to get the messages sent by the clients and process them and send an answer. For example, A monster has 0 health points so that character should die, it sends a request to the server, the server analyzes the request and sends an answer to all the clients.
I haven't finished either of these algorithms because I don't want to write code that won't be used, which is why I am asking which of them is better.
If it is possible I would like you to recommend other methods. | false | 33,818,028 | 0 | 0 | 0 | 0 | run all the players in threads, and have central threads that monitor game logic, and monster activities. otherwise, if one of your clients disconnects, the whole game crashes | 0 | 99 | 0 | 0 | 2015-11-20T02:32:00.000 | python,multithreading,sockets | python online game using socket library | 1 | 1 | 1 | 65,875,300 | 0 |
0 | 0 | Everyone's code online refers to sudo apt-get #whatever# but windows doesn't have that feature. I heard of something called Powershell but I opened it and have no idea what it is.
I just want to get a simple environment going and lxml so I could scrape from websites. | false | 33,818,770 | 0.26052 | 0 | 0 | 4 | Go to the regular command prompt and try pip install lxml. If that doesn't work, remove and reinstall python. You'll get a list of check marks during installation, make sure you check pip and try pip install lxml again afterwards.
pip stands for pip installs packages, and it can install some useful python packages for you. | 0 | 10,453 | 1 | 6 | 2015-11-20T04:01:00.000 | python,windows,lxml | No module named 'lxml' Windows 8.1 | 1 | 1 | 3 | 33,818,809 | 0 |
1 | 0 | I am working on a web socket application. From the front end there would be a single socket per application, but I am not sure about the back end. We are using Python and nginx with Flask-SocketIO and the socket.io client library. This architecture will be used to notify the front end that a change has occurred and that it should update its data.
Following are my doubts -
How many sockets and threads will be created on the server?
Can a socket be shared between different connections?
Is there any tool to analyze no of socket open ? | true | 33,820,103 | 1.2 | 0 | 0 | 0 | How many sockets and threads will be created on server?
As many sockets as there are inbound connections. As for threads, it depends on your architecture. Could be one, could be same as sockets, could be in between, could be more. Unanswerable.
Can a socket be shared between different connection?
No, of course not. The question doesn't make sense. A socket is an endpoint of a connection.
Is there any tool to analyze no of socket open?
The netstat tool. | 0 | 611 | 0 | 0 | 2015-11-20T06:11:00.000 | python,node.js,sockets,nginx,socket.io | Socket server performance | 1 | 1 | 2 | 33,837,539 | 0 |
0 | 0 | There is a netCDF file on a remote server. What I want to do is extract/crop the data from the file (I need only a specific variable for a specific period) and then move the file into my local directory.
With python, I ve used 'paramiko' module to access the remote server; is there any way to use 'Dataset' command to open the netcdf file after ssh.connect? Or any solution with python is welcome, thanks. | true | 33,825,080 | 1.2 | 0 | 0 | 0 | Solved: not by python modules/functions, just by executing a 'common' netcdf function to extract subset files on a remote server command line (i.e. myssh.exec_command("ncea -v %s %s %s" %(varname, remoteDBpath, remotesubsetpath) and them bring the files into a local server (i.e. myftp.get(remotesubsetpath, localsubsetpath). | 0 | 273 | 0 | 1 | 2015-11-20T11:02:00.000 | python,ssh,netcdf | How to subset NetCDF in a remote server and scp the subset file into a local server | 1 | 1 | 2 | 34,001,831 | 0 |
0 | 0 | I would like to know if a key exists in boto3. I can loop the bucket contents and check the key if it matches.
But that seems longer and an overkill. Boto3 official docs explicitly state how to do this.
May be I am missing the obvious. Can anybody point me how I can achieve this. | false | 33,842,944 | 0.008333 | 0 | 0 | 1 | Just following the thread, can someone conclude which one is the most efficient way to check if an object exists in S3?
I think head_object might win as it just checks the metadata which is lighter than the actual object itself | 0 | 266,735 | 0 | 283 | 2015-11-21T11:46:00.000 | python,amazon-s3,boto3 | check if a key exists in a bucket in s3 using boto3 | 1 | 1 | 24 | 59,685,923 | 0 |
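A commonly used sketch of the head_object approach (bucket and key names are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def key_exists(bucket, key):
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as e:
        # A 404 means the key does not exist; anything else is re-raised.
        if e.response["Error"]["Code"] == "404":
            return False
        raise

print(key_exists("my-example-bucket", "path/to/object.txt"))
```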
0 | 0 | I'm trying to do some home/life automation scripting and I'd like to be able to get the location of my Android phone through some API. This would be useful to do things such as turn on my home security camera, or to route my home calls to my phone if I'm away.
This would preferably be a RESTful one, or an API with good Python interop. However, I'm not averse to using any tool to get the job done.
I considered checking my router to see if my phone was connected, which will work for some things, but it would hinder me in implementing other things.
I know I could probably write an Android app that would phone home to do this, but I wanted to see if there were any alternatives first. My Google-Fu came up short on this one (if it exists).
Thanks in advance! | false | 33,881,172 | 0 | 1 | 0 | 0 | secrets of mobile hackers..what does google chrome use?
Google uses three approaches; the third approach is your option: find a site that does, or allows, triangulation using cell tower maps.
Or lets see mobile web page to get location via google services and than hook that into your python script on the server side | 0 | 58 | 0 | 0 | 2015-11-23T21:37:00.000 | android,python | Android Phone Location API | 1 | 1 | 1 | 33,881,298 | 0 |
0 | 0 | I have these values to connect to an IMAP sever:
hostname
username
password
I want to auto-detect the details with Python:
port
ssl or starttls
If the port is one of the well-known port numbers there should be not too many possible combinations.
Trying all would be a solution, but maybe there is a better way? | false | 33,946,694 | 0.197375 | 1 | 0 | 1 | You pretty much have to brute force it, but there's really only three setups, and it only requires two connects:
First connect on port 993 with regular SSL/TLS. If this works: 993/TLS. If this fails:
Connect to port 143, and check if CAPABILITY STARTTLS exists. If it does: try StartTLS. If this works: 143/STARTTLS. Else:
See if you can log in on port 143. if this fails, no good configuration. This wouldn't be secure anyway, so should be discouraged.
SMTP is a bit more complex: You can try 587 with StartTLS, 465 with TLS, or 25 with StartTLS, plain, or no authentication at all.
Note: autodetecting STARTTLS is dangerous, as it allows a MITM attack, where the attacker hides the STARTTLS capability so that you attempt to login without it. You may want to ask the user if they wish to connect insecurely, or provide a 'disable security' setting that must be opted into. | 0 | 237 | 0 | 0 | 2015-11-26T20:42:00.000 | python,imap | Autodetect IMAP connection details | 1 | 1 | 1 | 33,947,171 | 0 |
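A rough Python 3 sketch of that probing order with imaplib (the hostname is an example, and the exception handling here is an assumption rather than part of the original answer):

```python
import imaplib
import ssl

def detect_imap(host):
    # 1) Implicit SSL/TLS on 993.
    try:
        imaplib.IMAP4_SSL(host, 993).logout()
        return 993, "ssl"
    except (OSError, ssl.SSLError, imaplib.IMAP4.error):
        pass
    # 2) Plain connection on 143, upgraded with STARTTLS if advertised.
    try:
        conn = imaplib.IMAP4(host, 143)
        if "STARTTLS" in conn.capabilities:
            conn.starttls()
            conn.logout()
            return 143, "starttls"
        conn.logout()
        return 143, "plain"   # discouraged: credentials would travel unencrypted
    except OSError:
        return None

print(detect_imap("imap.example.com"))
```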
1 | 0 | Is there any way to have Boto seek for the configuration files other than the default location, which is ~/.aws? | true | 33,951,619 | 1.2 | 1 | 0 | 9 | It's not clear from the question whether you are talking about boto or boto3. Both allow you to use environment variables to tell it where to look for credentials and configuration files but the environment variables are different.
In boto3 you can use the environment variable AWS_SHARED_CREDENTIALS_FILE to tell boto3 where your credentials file is (by default, it is in ~/.aws/credentials. You can use AWS_CONFIG_FILE to tell it where your config file is (by default, it is in ~/.aws/config.
In boto, you can use BOTO_CONFIG to tell boto where to find its config file (by default it is in /etc/boto.cfg or ~/.boto. | 0 | 13,555 | 0 | 11 | 2015-11-27T06:43:00.000 | python,python-3.x,boto,boto3 | Boto3: Configuration file location | 1 | 1 | 3 | 33,959,240 | 0 |
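For example, a sketch of pointing boto3 at non-default files through those environment variables (the paths are placeholders):

```python
import os

# Must be set before boto3 reads its configuration.
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "/opt/myapp/aws/credentials"
os.environ["AWS_CONFIG_FILE"] = "/opt/myapp/aws/config"

import boto3  # imported after the environment is prepared

s3 = boto3.client("s3")
```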
0 | 0 | I'm a super noob in python and oauth2 but still I've wasted days on this one, so if you guys could give me hand, I would be eternally grateful :')
Goal: writing a script that download a file everything 5min from google drive
Achieved: Get the credentials with tokens and download it once
Problem: how do I refresh the token?
I achieved to get my tokens once but I don't understand what to do so that I don't need to rebuild a refresh token eveytime...
I don't really know if I'm getting oauth2 wrong, but I've read that it should be stored and (there is store method, right?)
Thanks :) | true | 34,004,281 | 1.2 | 0 | 0 | 0 | Okay, found it myself.
You gotta refresh you token everytime it expires, using httplib2.
Quick hint:
import httplib2
http = httplib2.Http()
http = credentials.authorize(http)
where credentials contains what you got from your first authorization flow.
Cheers | 0 | 336 | 0 | 0 | 2015-11-30T17:19:00.000 | python,oauth-2.0,token,google-api-python-client | Google API oauth2 - how to store credentials in order ot refresh token later | 1 | 1 | 1 | 34,023,824 | 0 |
0 | 0 | I am newbie in the real-time distributed search engine elasticsearch, but I would like to ask a technical question.
I have written a python module-crawler that parses a web page and creates JSON objects with native information. The next step for my module-crawler is to store the native information, using the elasticsearch.
The real question is the following.
Which technique is better for my occasion? The elasticsearch RESTful API or the python API for elastic search (elasticsearch-py) ? | false | 34,010,978 | 0 | 0 | 0 | 0 | You may also try elasticsearch_dsl it is a high level wraper of elasticsearch. | 0 | 520 | 0 | 1 | 2015-12-01T01:15:00.000 | python,api,rest,elasticsearch,elasticsearch-py | Elasticsearch HTTP API or python API | 1 | 1 | 2 | 45,581,138 | 0 |
1 | 0 | I'm trying to get the XPATH for Code Generator field form (Facebook) in order to fill it (of course before I need to put a code with "numbers").
In Chrome console when I get the XPATH I get:
//*[@id="approvals_code"]
And then in my test I put:
elem = driver.find_element_by_xpath("//*[@id='approvals_code']")
if elem: elem.send_keys("numbers")
elem.send_keys(Keys.ENTER)
With those I get:
StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
What means wrong field name. Does anyone know how to properly get a XPATH? | false | 34,018,450 | 0 | 0 | 0 | 0 | This error usually comes if element is not present in the DOM.
Or may be element is in iframe. | 0 | 231 | 0 | 1 | 2015-12-01T10:46:00.000 | python,facebook,selenium,xpath | Python selenium test - Facebook code generator XPATH | 1 | 1 | 1 | 34,019,477 | 0 |
0 | 0 | I have a script that worked away for 1hr to 1hr 18mins getting data from a web server until I got the error
NewConnectionError(': Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)
I'm using sessions and with requests.Session() as s: to reduce some of the overhead in my requests but is there more I can do as I still have more content to get but also don't want to get a permanent block | false | 34,092,384 | 0 | 0 | 0 | 0 | So I decided to tackle the problem a different way and run the script multiple times but use if not os.path.isfile(FileNameToWrite): from import os.path to check if I had already processed the webpage and saved it to a file. If not I submitted the requests and eventually got all the data I needed.
Incidentally I ran the same original code on a faster machine and it encountered the error sooner but having still processed the same number of files which incorporated roughly over 1000 requests | 0 | 65 | 0 | 0 | 2015-12-04T16:00:00.000 | python,python-requests | Is there a maximum period I should submit requests using python-requests to avoid blocking by the web server | 1 | 1 | 1 | 34,152,881 | 0 |
0 | 0 | I am writing a web scraper using Selenium for Python. The scraper is visiting the same sites many times per hour, therefore I was hoping to find a way to alter my IP every few searches. What is the best strategy for this (I am using firefox)? Is there any prewritten code/a csv of IP addresses I can switch through? I am completely new to masking IP, proxies, etc. so please go easy on me! | false | 34,097,386 | 0 | 0 | 0 | 0 | Your ISP will assign you your IP address. If you sign up for something like hidemyass.com, they will probably provide you with an app that changes your proxy, although I don't know how they do it.
But, if they have an app that cycles you through various proxies, then all your internet traffic will go through that proxy - including your scraper. There's no need for the scraper to know about these proxies or how hide my ass works - it'll connect through the proxies just like your browser or FTP client or .... | 0 | 7,693 | 0 | 6 | 2015-12-04T21:02:00.000 | python,selenium,web-scraping,ip | Selenium Python Changing IP | 1 | 1 | 2 | 39,429,048 | 0 |
0 | 0 | Currently, redis has maxclients limit of 10k
So, I cant spawn more than 10k celery workers (a celery worker with 200 prefork across 50 machines).
Without changing redis maxclient limit, what are some of the things I can do to accommodate more than 10k celery workers?
I was thinking setting up master-slave redis cluster but how would a celery daemon know to connect different slaves? | false | 34,141,600 | 0 | 0 | 0 | 0 | Don't. Your Redis command latency with over 10,000 connections will suffer, usually heavily. Even the basic Redis ping command shows this.
Step one: re-evaluate the 10k worker requirement. Chances are very high it is heavily inflated. What data supports it? Most of the time people are used to slow servers where concurrency is higher because each request takes orders of magnitude more time than Redis does. Consider this, a decently tuned Redis single instance can handle over a million requests per second. Do the math and you'll see that it is unlikely you will have the traffic workload to keep those workers busy without slamming into other limits such as the speed of light and Redis capacity.
If you truly do need that level of concurrency perhaps try integrating Twemproxy to handle the connections in a safer manner, though you will likely see the latency effects anyway if you really have the workload necessary to justify 10k concurrent connections.
Your other options are Codis and partitioning your data across multiple Redis instances, or some combination of the above. | 0 | 599 | 1 | 1 | 2015-12-07T19:29:00.000 | python,redis,celery | Celery w/ Redis broker: Is it possible to have more 10k connection | 1 | 1 | 1 | 34,169,982 | 0 |
1 | 0 | I want to classify a given set of web pages into different classes, mainly 3 classes (product page, index page and product-related items page). I think it can be done by analyzing their structure. I am looking to compare the web pages based on their DOM (Document Object Model) structure. I want to know whether there is a library in Python for solving this problem.
Thanks in advance. | false | 34,155,609 | 0 | 0 | 0 | 0 | First you would need to identify which elements in the page actually uniquely identify a page as being of a specific webpage-class.
Then you could use a library like BeautifulSoup to actually look through the document to see if those elements exist.
Then you would just need a series of if/elifs to determine if a page has the qualifying elements, if so classify it as the appropriate webpage-class. | 0 | 192 | 0 | 1 | 2015-12-08T12:13:00.000 | python,dom,data-science | Web page structure comparison using python | 1 | 1 | 1 | 34,160,545 | 0 |
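A sketch of that idea with BeautifulSoup (the selectors below are made-up placeholders; real ones depend on the site):

```python
from bs4 import BeautifulSoup

def classify(html):
    soup = BeautifulSoup(html, "html.parser")
    # Illustrative rules only: pick elements that uniquely identify each class.
    if soup.find("div", class_="product-detail"):
        return "product page"
    elif soup.find("ul", class_="related-items"):
        return "product-related items page"
    elif soup.find("nav", class_="category-index"):
        return "index page"
    return "unknown"
```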
0 | 0 | The data was captured from LTE network.
I don't know how to recognize and count the TCP retransmissions of a single TCP flow using Python.
Could I recognize the type of retransmission, i.e. whether it is because of congestion or packet loss?
Thanks. | false | 34,169,111 | 0.197375 | 0 | 0 | 1 | I did it in C but the general ideas is, you need to keep track of TCP sequence numbers (there are two streams for each TCP session, one from client to server, the other is from server to client). This is a little complex.
For each stream, you have a pointer to keep track of the consecutive sequence numbers sent so far and use a linked-list to keep track of the pairs ( sequence number + data length) that have a gap to the pointer. Each time you see a new data packet in the stream, you either update the pointer, or add to the linked-list. Note that after you update the pointer, you should check if the linked-list to see if some of the gaps are closed. You can tell retransmitted data packets this way.
Hope it helps. Good luck. | 0 | 1,227 | 0 | 0 | 2015-12-09T01:16:00.000 | python,tcp,wireshark,pcap | How could I find TCP retransmission and packet loss from pcap file? | 1 | 1 | 1 | 34,496,093 | 0 |
0 | 0 | Rarely, I see below message during execution of the script. Can someone tell me what is the reason for this? Is it due to script or because of the application I am working on?
[7192:4260:1209/122546:ERROR:latency_info.cc(157)] RenderWidgetHostImpl::OnSwapCompositorFrame, Late
ncyInfo vector size 323 is too big. | false | 34,172,598 | 0 | 0 | 0 | 0 | Which driver are you using for your script? I've run in to this problem with Chromedriver and assumed it was due to a buffer/latency issue when posting data to a text field.
By breaking this data up into smaller chunks, I resolved the issue. I did this manually as it was just one section, but you might want to create a function that takes larger data and splits it into smaller chunks if you do this a lot.
I've also heard it is specific to Chrome, but have not had a chance to test this in other browsers. | 0 | 1,344 | 0 | 1 | 2015-12-09T07:03:00.000 | python-3.x,selenium-webdriver | Warning messages during python selenium script : Late ncyInfo vector size | 1 | 1 | 1 | 40,609,613 | 0 |
0 | 0 | I am currently trying to write a small bot for a banking site that doesn't supply an API. Nevertheless, the security of the login page seems a little more ingenious than what I'd have expected, since even though I don't see any significant difference between Chrome and Python, it doesn't let requests made by Python through (I accounted for things such as headers and cookies)
I've been wondering if there is a tool to record requests in FireFox/Chrome/Any browser and replicate them in Python (or any other language)? Think selenium, but without the overhead of selenium :p | false | 34,173,343 | 0.099668 | 0 | 0 | 1 | You can use Selenium web drivers to actually use browsers to make the requests for you.
In such cases, I usually checkout the request made by Chrome from my dev tools "Network" tab. Then I right click on the request and copy the request as cURL to run it on command line to see if it works perfectly. If it does, then I can be certain it can be achieved using Python's requests package. | 0 | 779 | 0 | 1 | 2015-12-09T07:46:00.000 | python,web-scraping | Easily replicate browser requests with python? | 1 | 1 | 2 | 34,173,449 | 0 |
1 | 0 | I have a Django API which returns content fine on my localhost, but when I run it in production it gives me a 324 error [empty content response error].
I had printed the API response, which is fine. But even before the API runs to completion, the Chrome browser throws a 324 error.
When I researched a bit. it look like socket connection is dead in client side. I am not sure how to fix it. | false | 34,267,461 | -0.197375 | 0 | 0 | -1 | This could be specific to the server you are using. First try clearing your cookies, but if that does not work, that means you have a faulty server and I don't know how to fix that other than getting another one. | 0 | 649 | 0 | 0 | 2015-12-14T12:47:00.000 | python,django,google-chrome | 324 error::empty response, django | 1 | 1 | 1 | 37,083,575 | 0 |
0 | 0 | I am using the Python requests module (requests (2.7.0)) and tracking URL requests.
Most of these URLs are supposed to trigger a 301 redirect; however, for some of them the domain changes as well. For the URLs where the 301 causes a domain name change (i.e. x.y.com ends up as a.b.com) I get a certificate verify failed error. However, I have checked, and the cert on that site is valid and it is not a self-signed cert.
For the others where the domain remains the same I do not get any errors so it does not seem to be linked to SSL directly else the others would fail as well.
Also what is interesting is that if I run the same script using curl instead of requests I do not get any errors.
I know I can suppress the request errors by setting verify=False but I am wondering why the failure occurs only when there is a domain name change.
Regards,
AB | false | 34,341,396 | 0 | 1 | 0 | 0 | This seems to work now. I believe the issue was linked to an old version of openssl. Once I upgraded even the 301 for a different domain goes through with no errors and that was with verify set to True. | 0 | 188 | 0 | 0 | 2015-12-17T18:08:00.000 | python-3.x,python-requests | Python requests module results in SSL error for 301 redirects to a different domain | 1 | 1 | 1 | 34,343,323 | 0 |
0 | 0 | I need to ask about XBee packet size: is there any minimum size for an API packet?
I'm using an XBee S2 in API mode AP1; however, when I send the below frame from the router to the coordinator, the packet fails to arrive.
Packet : uint8_t payload[] = {'B',200,200,200,200};
However if i send :
Packet : uint8_t payload[] = {'B',200,200,200,200,200,200};
the packet arrived successfully .... weird :(
Test 3:
Packet : uint8_t payload[] = {'B',200,200,200};
the packet arrived successfully
Test 4:
uint8_t payload[] = {'B',200,200};
the packet fails to arrive :(
i don't know what is the problem | false | 34,483,983 | 0.197375 | 0 | 0 | 1 | There isn't a minimum size, but the module does make use of a "packetization timeout" setting (ATRO) to decide when to send your data. If you wait longer, you may find that the module sends the frame and it arrives at the destination.
I'm assuming you're using "AT Mode" even though you write "API Mode". If you are in fact using API mode, please post more of your code, and perhaps include a link to the code library you're using to build your API frames. Are you setting the length correctly? Does the library expect a null-terminated string for the payload? Try adding a 0 to the end of your payload array to see if that helps. | 0 | 134 | 1 | 0 | 2015-12-27T19:25:00.000 | python,c,arduino,wireless,xbee | Xbee API packet is failing to arrive from router to coordinator | 1 | 1 | 1 | 34,498,742 | 0 |
1 | 0 | Both belong to boto.ec2. From the documentation I found that get_all_reserved_instances returns all reserved instances, but I am not clear about get_all_reserved_instances_offerings. What is meant by an offering?
One other thing that I want to know is: what is recurring_charges?
Please clarify ? | true | 34,493,061 | 1.2 | 0 | 0 | 2 | The get_all_reserved_instance_offerings method in boto returns a list of all reserved instance types that are available for purchase. So, if you want to purchase reserved instances you would look through the list of offerings, find the instance type, etc. that you want and then you would be able to purchase that offering with the purchase_reserved_instance_offering method or via the AWS console.
So, perhaps a simple way to say it is get_all_reserved_instance_offerings tells you what you can buy and get_all_reserved_instances tells you what you have already bought. | 0 | 178 | 0 | 0 | 2015-12-28T11:54:00.000 | python,amazon-web-services,amazon-s3,boto,boto3 | What is the difference between get_all_reserved_instances and get_all_reserved_instances_offerings? | 1 | 1 | 1 | 34,494,713 | 0 |
0 | 0 | Is it possible to access local files via remote SSH connection (local files of the connecting client of course, not other clients)?
To be specific, I'm wondering if the app I'm making (which is designed to be used over SSH, i.e. user connects to a remote SSH server and the script (written in Python) is automatically executed) can access local (client's) files. I want to implement an upload system, where user(s) (connected to SSH server, running the script) may be able to upload images, from their local computers, over to other hosting sites (not the SSH server itself, but other sites, like imgur or pomf (the API is irrelevant)). So the remote server would require access to local files to send the file to another remote hosting server and return the link. | true | 34,500,111 | 1.2 | 1 | 0 | 2 | You're asking if you can write a program on the server which can access files from the client when someone runs this program through SSH from the client?
If the only program running on the client is SSH, then no. If it was possible, that would be a security bug in SSH. | 0 | 866 | 1 | 0 | 2015-12-28T20:14:00.000 | python,linux,ssh | Remote SSH server accessing local files | 1 | 1 | 1 | 34,500,718 | 0 |
0 | 0 | Suppose that there is one redis-client which is subscribing to channel c1 and another redis-client publishes "data" to channel c1.
At this point, is the published data held pending on the Redis server until the client subscribed to "c1" fetches it (by calling pubsub.listen() or pubsub.get_message()), or does it go directly to the client subscribed to channel c1 through the Redis server?
In other words, when a Redis client calls pubsub.get_message() or pubsub.listen(), does the client send a request to the Redis server to get the data, or does it just read the data from the local socket buffer?
When I read some documents, they say that pubsub.get_message() uses the select module on a socket internally.
That seems to mean that the subscribed data is already in the client's local buffer, not on the server.
Could you give me any suggestion? | false | 34,510,980 | 0.099668 | 0 | 0 | 1 | Redis pubsub is fire and forget.
In other words, when a publish command is sent, the message is received by online subscribers, and who wasn't subscribed/listening for messages when the publish command was sent, won't never get these messages. | 0 | 381 | 0 | 0 | 2015-12-29T12:21:00.000 | python,redis | How does redis publish work? | 1 | 1 | 2 | 34,513,168 | 0 |
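A small redis-py sketch of the subscribe/publish flow described above (assuming a local Redis server; in practice the publisher is usually a separate client):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Subscriber side: must be listening *before* the publish happens,
# because messages sent earlier are simply dropped.
p = r.pubsub()
p.subscribe("c1")

# Publisher side (typically another client/process).
r.publish("c1", "data")

# get_message() reads whatever has already arrived on the subscriber's socket.
msg = p.get_message()   # subscribe confirmation
msg = p.get_message()   # the actual "data" message, if it has arrived yet
print(msg)
```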
1 | 0 | Working on a small app that takes a Spotify track URL submitted by a user in a messaging application and adds it to a public Spotify playlist. The app is running with the help of spotipy python on a Heroku site (so I have a valid /callback) and listens for the user posting a track URL.
When I run the app through command line, I use util.prompt_for_user_token. A browser opens, I move through the auth flow successfully, and I copy-paste the provided callback URL back into terminal.
When I run this app and attempt to add a track on the messaging application, it does not open a browser for the user to authenticate, so the auth flow never completes.
Any advice on how to handle this? Can I auth once via terminal, capture the code/token and then handle the refreshing process so that the end-user never has to authenticate?
P.S. can't add the tag "spotipy" yet but surprised it was not already available | false | 34,520,233 | 0.197375 | 0 | 0 | 2 | I once ran into a similar issue with Google's Calendar API. The app was pretty low-importance so I botched a solution together by running through the auth locally in my browser, finding the response token, and manually copying it over into an environment variable on Heroku. The downside of course was that tokens are set to auto-expire (I believe Google Calendar's was set to 30 days), so periodically the app stopped working and I had to run through the auth flow and copy the key over again. There might be a way to automate that.
Good luck! | 0 | 1,073 | 1 | 6 | 2015-12-29T22:40:00.000 | python,api,heroku,oauth-2.0,spotify | Completing Spotify Authorization Code Flow via desktop application without using browser | 1 | 1 | 2 | 34,520,316 | 0 |
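A rough sketch of that idea with spotipy: capture the refresh token once via the interactive flow, store it (e.g. in a Heroku config var), and refresh it on the server so the end user never sees a browser. The client id/secret, redirect URI, user, playlist and track ids below are placeholders, and method names can differ slightly between spotipy versions:

import spotipy
from spotipy.oauth2 import SpotifyOAuth

oauth = SpotifyOAuth(client_id='CLIENT_ID', client_secret='CLIENT_SECRET',
                     redirect_uri='https://your-heroku-app.example.com/callback',
                     scope='playlist-modify-public')

# refresh token obtained once on the command line and stored server-side
token_info = oauth.refresh_access_token('STORED_REFRESH_TOKEN')
sp = spotipy.Spotify(auth=token_info['access_token'])

sp.user_playlist_add_tracks('your_user_id', 'PLAYLIST_ID', ['spotify:track:TRACK_ID'])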
0 | 0 | How can I let users purchase phone numbers without having credit on my account? I'd like the users to pay for the numbers directly themselves. Is it possible to have the users payment sent to my account as a credit and then use it to pay for the number? How can I do this with Twilio API? | true | 34,522,605 | 1.2 | 0 | 0 | 2 | Phone numbers are purchased from your account. So you need to have atleast 1$(i think 1$ is required to purchase a phone number) of twilio credit to purchase a number. If you didnt have credit, you cannot purchase a numnber. And i think no Api is available to credit your account.
Best way to implement twilio number purchase webportal are
1) Have some credit in your twilio account
2) Charge users when they purchase twilio number
3) Set a twilio recharge trigger so that your twilio account is recharged from your bank account when credit goes below a limit | 0 | 82 | 0 | 0 | 2015-12-30T03:44:00.000 | python,twilio,payment | How can I let users purchase Twilio numbers from my site? | 1 | 1 | 1 | 34,529,283 | 0 |
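A rough sketch of step 2's server side, purchasing the number against your own Twilio balance after the user has paid you, using a recent twilio-python client (the 2016-era library used TwilioRestClient instead); the account SID, auth token, country and area code are placeholders:

from twilio.rest import Client

client = Client('ACCOUNT_SID', 'AUTH_TOKEN')

# find an available number, then buy it from your account's credit
candidates = client.available_phone_numbers('US').local.list(area_code='510', limit=1)
if candidates:
    number = client.incoming_phone_numbers.create(phone_number=candidates[0].phone_number)
    print('Purchased', number.phone_number)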
0 | 0 | I'm developing a website to find the best politician of my state. On my website visitors can vote for their favorite politician.
I want to prevent a user from adding more than one vote to a politician, and also prevent a user from voting for more than one politician (a user can give one vote to any one politician).
How can I trace unique users? I'm not interested in adding a registration/login method, which would reduce user interaction. I tried to trace unique IP addresses, but the website can be accessed through proxy servers, so that method will not work.
Is there any way to trace users? | false | 34,542,440 | 0 | 0 | 0 | 0 | For this task, you have to use registration/login functionality. Otherwise, you will get incorrect data.
My suggested workflow:
User registers on site with his email id.
Confirmation mail goes to user.
User clicks on confirm link and proves that he is genuine.
After that he votes.
His vote gets recorded against his email id. | 0 | 96 | 0 | 2 | 2015-12-31T06:54:00.000 | javascript,java,php,python | unique user identification for a website | 1 | 1 | 1 | 34,542,478 | 0 |
1 | 0 | I need to upload an AudioSegment object to S3 after doing some editing. What I'm doing is edit the audio, export it, and then send it to S3.
However, exporting to mp3 takes about 2 seconds for a 2-minute song.
So I'm just wondering if it's possible to send the file to S3 without exporting it. Note: I see there is raw_data; however, I need to be able to play the saved clip. | true | 34,546,117 | 1.2 | 0 | 0 | 3 | The delay is caused by the transcoding step (converting the raw data to mp3). You can avoid that by exporting WAV files.
A WAV file is essentially just the raw data with some header information at the beginning so exporting with format="wav" will avoid the need to transcode, and should be significantly faster.
However, without any compression, the files will be larger (like 40MB instead of 5MB). You'll probably lose more than 2 seconds due to transferring 5 to 10 times more data over the network.
Some codecs are slower than others, so you may want to experiment with other encodings to strike a different speed/file size balance than mp3 and wav do (or you could try just using regular file compression like gzip, bz2, or a "zip" file on your wav output) | 0 | 674 | 0 | 2 | 2015-12-31T11:50:00.000 | python,amazon-s3,pydub | Python Pydub AudioSegment laggy export | 1 | 1 | 1 | 34,548,666 | 0 |
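A short sketch of skipping the mp3 step: export the edited AudioSegment as WAV into an in-memory buffer and stream it to S3 with boto3; the input file, bucket and key names are placeholders:

import io
import boto3
from pydub import AudioSegment

clip = AudioSegment.from_file('input.mp3')[:30 * 1000]   # some edit, e.g. keep the first 30 seconds

buf = io.BytesIO()
clip.export(buf, format='wav')        # no transcoding beyond writing the WAV header
buf.seek(0)

boto3.client('s3').upload_fileobj(buf, 'my-bucket', 'clips/clip.wav')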
0 | 0 | I am planning to invoke AWS Lambda function by modifying objects on AWS S3 bucket. I also need to send a large amount of data to the AWS Lambda function. How can I send the data to it in an efficient way? | true | 34,586,419 | 1.2 | 1 | 0 | 3 | I would use another S3 bucket to first send the data and then use it from the Lambda function | 0 | 1,571 | 0 | 1 | 2016-01-04T07:24:00.000 | python,amazon-web-services,amazon-s3,aws-lambda | Send large data to AWS Lambda function | 1 | 3 | 3 | 34,586,632 | 0 |
0 | 0 | I am planning to invoke AWS Lambda function by modifying objects on AWS S3 bucket. I also need to send a large amount of data to the AWS Lambda function. How can I send the data to it in an efficient way? | false | 34,586,419 | 0.066568 | 1 | 0 | 1 | Your Lambda function should just read from the database your large data resides in.
Assuming your modified object on S3 contains - inside the object or as the object name - some type of foreign key to the data you need out of your database:
A) If your Lambda has access to the database directly: then you can just make your lambda function query your database directly to pull the data.
B) If your Lambda does not have direct access to the database: then consider cloning the data as needed from the database to a secure S3 bucket, for access by your Lambdas when they are triggered and need it. Clone the data to S3 as JSON or some other easy-to-read format, as logical objects for your business case (orders, customers, whatever). This method will be the fastest/most efficient for the Lambda if it's possible for your use case. | 0 | 1,571 | 0 | 1 | 2016-01-04T07:24:00.000 | python,amazon-web-services,amazon-s3,aws-lambda | Send large data to AWS Lambda function | 1 | 3 | 3 | 34,967,438 | 0
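A minimal handler sketch for the "stage the data in S3" approach: the Lambda is triggered by the S3 event and reads the object the event points at; the JSON format of the staged data is an assumption:

import json
import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])

    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    payload = json.loads(body)          # the "large data" staged as JSON
    # ... process payload ...
    return {'processed': len(payload)}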
0 | 0 | I am planning to invoke AWS Lambda function by modifying objects on AWS S3 bucket. I also need to send a large amount of data to the AWS Lambda function. How can I send the data to it in an efficient way? | false | 34,586,419 | 0.066568 | 1 | 0 | 1 | I recently did this by gzipping the data before invoking the lambda function. This is super easy to do with most programming languages. Depending on your database content this will be a better or worse solution. The content of my database had a lot of data repetition and zipped very nicely. | 0 | 1,571 | 0 | 1 | 2016-01-04T07:24:00.000 | python,amazon-web-services,amazon-s3,aws-lambda | Send large data to AWS Lambda function | 1 | 3 | 3 | 38,183,446 | 0 |
1 | 0 | I am querying AWS using boto ec2 in Python. First I find all reserved instances with get_all_reserved_instances, and I can also find the total count of each instance_type via instance_count. Now I am trying to calculate the total number of reserved instances under tags.
E.g. we have two tags, group and name, and I want to show the total number of reserved instances of a particular type (e.g. i2.xlarge) under the group tag. How can I do this? I did not find this in the AWS console either. | false | 34,590,285 | 0.379949 | 0 | 0 | 2 | There isn't a straightforward way. More precisely, I would say there are a couple of things which aren't fully right in your assumptions.
The EC2 instances aren't directly tied to the Reserved Instances. A reservation is more of a pure billing concept: at the end of the month, AWS counts the number of instance hours, checks it against the number of reserved instance hours, and discounts the bill. So no instance reservation is linked or associated with the EC2 instances which are running.
Reserved Instances aren't tagging-enabled; only the EC2 instances have tagging support.
To answer your question on the approach, the following pseudo code would help:
Get the list of Reserved Instances Listings (instance-platform, size, availability zone)
Get the list of EC2 Instances and filter it by tags [group or name]
With the two retrieved lists [list of reservations & list of EC2 instances], matching records on (instance-platform, size, availability-zone) gives you, for each combination, the Reservations & the associated EC2 instances. | 0 | 986 | 0 | 4 | 2016-01-04T11:37:00.000 | python,amazon-web-services,amazon-ec2,boto,boto3 | How to find total number of reserved instances under particular tag? | 1 | 1 | 1 | 34,590,510 | 0
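A sketch of that pseudo code with boto 2; the 'group' tag value is a placeholder and the attribute names should be checked against your boto version:

from collections import Counter
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# what is reserved, keyed by (instance type, availability zone)
reserved = Counter()
for ri in conn.get_all_reserved_instances():
    if ri.state == 'active':
        reserved[(ri.instance_type, ri.availability_zone)] += int(ri.instance_count)

# what is running under the tag, keyed the same way
tagged = Counter()
for inst in conn.get_only_instances(filters={'tag:group': 'mygroup',
                                             'instance-state-name': 'running'}):
    tagged[(inst.instance_type, inst.placement)] += 1

for key in tagged:
    print(key, 'tagged:', tagged[key], 'reserved:', reserved.get(key, 0))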
1 | 0 | If we have multiple sites with different HTML structures, what is the better way to implement scrapy?
Should I create multiple spiders, one per site, in a single project?
Should I create multiple projects, one per site?
Or is there another way? Please explain. | false | 34,611,880 | 0.197375 | 0 | 0 | 2 | Different website -> different spider script in the same project if you are scraping the same kind of data; both scripts can then live in one project and use the same pipeline
Same website -> same project
Different website, different data -> different project
Same website, different data -> use two parse functions via callbacks | 0 | 297 | 0 | 0 | 2016-01-05T12:36:00.000 | python,python-2.7,scrapy | what is the better way to implement scrapy if we have multiple sites? | 1 | 2 | 2 | 34,612,388 | 0
1 | 0 | If we have multiple sites with different HTML structures, what is the better way to implement scrapy?
Should I create multiple spiders, one per site, in a single project?
Should I create multiple projects, one per site?
Or is there another way? Please explain. | true | 34,611,880 | 1.2 | 0 | 0 | 1 | Usually you should create multiple spiders in one project, one for each website, but this depends.
A scrapy spider decides how to jump from page to page, then it applies a parser callback; the parser callback method extracts the data from a page. Because the pages are not the same, you need a parser callback method for each kind of page.
The websites usually have different sitemaps, therefore you need multiple spiders, one for each website, each deciding how to jump from page to page. Then the spiders apply their callbacks, which decide how to scrape each page.
Usually you don't need to create multiple projects for multiple websites, but this depends.
If your websites share some logical characteristics, put them in one project so they can use the same scrapy settings. It is also easier this way: you can create base spiders and inherit common methods. | 0 | 297 | 0 | 0 | 2016-01-05T12:36:00.000 | python,python-2.7,scrapy | what is the better way to implement scrapy if we have multiple sites? | 1 | 2 | 2 | 34,612,562 | 0
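A minimal sketch of two spiders living in one project, each with its own parse callback; the URLs and CSS selectors are placeholders for each site's real structure:

import scrapy

class SiteASpider(scrapy.Spider):
    name = 'site_a'
    start_urls = ['http://site-a.example.com/']

    def parse(self, response):
        # site A's own HTML structure
        for title in response.css('h1.article::text').extract():
            yield {'site': 'a', 'title': title}

class SiteBSpider(scrapy.Spider):
    name = 'site_b'
    start_urls = ['http://site-b.example.com/']

    def parse(self, response):
        # site B's different HTML structure
        for title in response.css('div.post > h2::text').extract():
            yield {'site': 'b', 'title': title}

# both spiders share the project's settings and item pipeline:
#   scrapy crawl site_a
#   scrapy crawl site_b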
0 | 0 | I have read some introduction about how to create Google Chrome extensions. Is that possible to create the extension using Python instead of JS? | false | 34,625,232 | 0.53705 | 0 | 0 | 3 | A google chrome extension is simply a webpage with a few extra permissions and so on. So, what you're looking for is basically a method to use python as a scripting language inside a browser.
The problems with this:
Not all browsers are capable of using Python as a scripting language
By default JavaScript is enabled in most browsers; Python may not be.
Not enough tutorials on how to use python here. JS is way more popular.
But returning to how to do it. Popular methods:
Brython - They use HTML5 + text/python scripts to directly embed python inside a browser.
PyJS - here, they actually compile Python into Javascript. Somewhat more complicated, but finally the browser sees JS, so it is usable anywhere. | 0 | 3,959 | 0 | 4 | 2016-01-06T03:38:00.000 | python,google-chrome-extension | Can I use python to develop Google Chrome extensions? | 1 | 1 | 1 | 34,625,489 | 0 |
0 | 0 | I am working on a requirement in which user will ask query and that query will be routed to group of users. Group of users can directly communicate with seeker. All using XMPP python client and XMPP server(ejabberd)
Detailed scenario:
[email protected] asks a query and it is destined to [email protected]
[email protected] select a list of users(g1) from database and forwards query to them.
Each member of g1 replies individually to [email protected] even though message is sent from [email protected]
Step 1 is trivial XMPP and done already
Step 2 can be taken care of
Step 3 is the one I am doubtful can be done. What features of XMPP do I need to focus on? Please enlighten me.
PS: I am writing custom clients using xmpppy | false | 34,629,706 | 0 | 0 | 0 | 0 | I don't think step 3 can quite be done with plain XMPP, but if your XMPP server is your own and it supports plugins, one possible workaround would be to write a plugin that "fakes" this interaction by making [email protected] pretend to be [email protected] from the point of view of the other users when asking the question.
Another possibility would be to use the multi-user-chat, but that won't quite give the result you want, either. | 0 | 80 | 0 | 1 | 2016-01-06T09:33:00.000 | python,xmpp,xmpppy | Using XMPP developing a client which acts a mediator in two users | 1 | 1 | 1 | 39,726,299 | 0 |
0 | 0 | I'm running OS X 10.11, and I have created a web scraper using Python and Selenium. The scraper uses Firefox as the browser to collect data.
The Firefox window must remain active at all critical steps, for the scraper to work.
When I leave the computer with Firefox as the active window, when I return I often find that the active window focus has changed to something else. Some process is stealing the window focus.
Is there a way that I can programatically tell the OS to activate the Firefox window? If so, I can tell the script to do that before every critical action in the script.
Preferably, this is something that I would like to achieve using Python. But launching a secondary AppleScript to do this specific task may also be a solution.
Note: Atm, I'm not looking at rewriting my script to use a headless browser – just to make it work by forcing active window. | false | 34,705,575 | 0.197375 | 0 | 0 | 2 | tell application "Firefox" to activate
is the way to do it in AppleScript | 0 | 1,426 | 0 | 1 | 2016-01-10T12:52:00.000 | python,macos,applescript | Python: Activate window in OS X | 1 | 1 | 2 | 34,711,005 | 0 |
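A tiny Python sketch that shells out to osascript with that AppleScript line, so the scraper can re-activate Firefox before each critical step; no extra libraries needed:

import subprocess

def activate_firefox():
    # ask macOS (via AppleScript) to bring Firefox to the front
    subprocess.call(['osascript', '-e', 'tell application "Firefox" to activate'])

activate_firefox()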
0 | 0 | I need to access a smartplug device using socket programming. I have the MAC address and UDP port number of the device. Other information like SSID, password, Apps Id, Dev Id, and Cmd ID is also present.
Could you please let me know if this can be achieved using a Python or Java API? Is there a way in socket programming to access a device using its MAC address and get the information sent from a specific UDP port?
Thanks in advance for your help. | false | 34,784,149 | 0 | 0 | 0 | 0 | You can access the device via a UDP socket, provided you have the IP address of the device as well as the UDP port number.
Both Java and Python have socket APIs, so you can use either one. Just make sure you follow the network protocol defined by the device so you can read from / write to it properly. | 0 | 362 | 1 | 0 | 2016-01-14T07:49:00.000 | java,python,macos,sockets,udp | Smartplug socket programming using Mac Address and UDP port | 1 | 1 | 1 | 34,790,670 | 0
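A bare-bones Python UDP exchange as a sketch; note that you address the device by IP (a MAC address alone is not routable — you would look the IP up in your router's ARP/DHCP table). The IP, port and payload are placeholders for whatever the plug's protocol expects:

import socket

DEVICE_IP = '192.168.1.50'     # IP currently assigned to the smart plug
DEVICE_PORT = 9999             # UDP port the plug listens on
payload = b'...'               # command bytes defined by the plug's protocol

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(payload, (DEVICE_IP, DEVICE_PORT))

try:
    data, addr = sock.recvfrom(4096)
    print('reply from', addr, ':', data)
except socket.timeout:
    print('no reply from device')
finally:
    sock.close()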
1 | 0 | I'm using an AWS Lambda function (written in python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket and is using a multipart upload. Whenever I test out my code (within the Lambda code editor page) it seems to work fine and I only get a single email.
But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message ID's. I've tried different email addresses but each address receives exactly two, duplicate emails.
Can anyone guide me where could I be going wrong? I'm using the boto3 library that is imported with the sample python code to send the email. | false | 34,785,863 | 1 | 1 | 0 | 6 | I am also facing the same issue, in my case on every PUT event in S3 bucket a lambda should trigger, it triggers twice with same aws_request_id and aws_lambda_arn.
To fix it, keep track of the aws_request_id (this id is unique for each Lambda invocation) somewhere and check it in the handler: if the same aws_request_id already exists, do nothing; otherwise process as usual. | 0 | 5,712 | 0 | 10 | 2016-01-14T09:28:00.000 | python,amazon-web-services,amazon-s3,aws-lambda | AWS Lambda function firing twice | 1 | 2 | 2 | 41,511,055 | 0
1 | 0 | I'm using an AWS Lambda function (written in python) to send an email whenever an object is uploaded into a preset S3 bucket. The object is uploaded via the AWS PHP SDK into the S3 bucket and is using a multipart upload. Whenever I test out my code (within the Lambda code editor page) it seems to work fine and I only get a single email.
But when the object is uploaded via the PHP SDK, the Lambda function runs twice and sends two emails, both with different message ID's. I've tried different email addresses but each address receives exactly two, duplicate emails.
Can anyone guide me where could I be going wrong? I'm using the boto3 library that is imported with the sample python code to send the email. | true | 34,785,863 | 1.2 | 1 | 0 | 13 | Yes, we have this as well and it's not linked to the email, it's linked to S3 firing multiple events for a single upload. Like a lot of messaging systems, Amazon does not guarantee "once only delivery" of event notifications from S3, so your Lambda function will need to handle this itself.
Not the greatest, but doable.
Keep some form of cache with details of the previous few requests, so you can see whether you've already processed a particular event message or not. | 0 | 5,712 | 0 | 10 | 2016-01-14T09:28:00.000 | python,amazon-web-services,amazon-s3,aws-lambda | AWS Lambda function firing twice | 1 | 2 | 2 | 34,795,499 | 0
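One way to build that cache, as a sketch: a DynamoDB conditional write keyed on something stable in the event (here bucket, key and the object's eTag). The table name is a placeholder (its partition key is assumed to be 'id'), and this is only one of several possible de-duplication schemes:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('processed-s3-events')

def already_processed(record):
    s3_info = record['s3']
    dedup_key = '{}/{}/{}'.format(s3_info['bucket']['name'],
                                  s3_info['object']['key'],
                                  s3_info['object'].get('eTag', ''))
    try:
        table.put_item(Item={'id': dedup_key},
                       ConditionExpression='attribute_not_exists(id)')
        return False                      # first time we see this event
    except ClientError as err:
        if err.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return True                   # duplicate delivery, skip it
        raise

def lambda_handler(event, context):
    for record in event['Records']:
        if already_processed(record):
            continue
        # ... send the email exactly once ...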
1 | 0 | The flow is this: we get a request to www.oursite.com?secretinfo=banana,
then people do some stuff on that page and we send them to another site. Is it possible to remove the part "secretinfo=banana" from the referer in the header info?
We do this now by redirecting to another page without these parameters, which does another redirect via a meta-refresh to the other party. As you can imagine this is not very good for the user experience.
Doing it direct would be great but even doing it with a 302 or 303 redirect would be better but these don't change the referer.
We are using Python 3 with Flask or it can be with JavaScript. | true | 34,790,474 | 1.2 | 0 | 0 | 0 | Thanks to NetHawk it can be done using history.replaceState() or history.pushState(). | 0 | 141 | 0 | 1 | 2016-01-14T13:08:00.000 | javascript,python,redirect,http-referer | Is it possible to remove the get parameters from the referer in the header without meta-refresh? | 1 | 1 | 1 | 34,791,352 | 0 |
0 | 0 | A friend of mine wants to get some data from certain webpages. He wants it in XML, because he will feed them to some mighty application.
That's not a problem, any scripting language can do this. The problem is, that the content is "hidden" and can only be seen, when user is logged in. Which means, in whatever language I'll use, I have to find a way to simulate a web browser - to store cookies (session id), because without it, I won't be able to get data from restricted sections of the website.
I don't wish I have to write my own "web browser", but I am not certain if I need one. Also I think, there must be a library for this. Any ideas?
Yes, we asked them about API's, data dumps, etc. They don't want to cooperate.
Thanks for any tips. | false | 34,790,796 | 0.049958 | 0 | 0 | 1 | For ease of use, try Selenium.
Although it is slower compared to using headless browsers, the good thing is you don't need to use other libraries to enable Javascript since your script will simulate an actual human browsing a website. You can also check the behaviour of your script visually since it opens the website in your browser.
You can easily find boilerplate codes and tutorials about it too :) | 0 | 2,248 | 0 | 0 | 2016-01-14T13:25:00.000 | python,shell | "Datamining" from websites | 1 | 1 | 4 | 34,791,633 | 0 |
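A hedged Selenium sketch of that suggestion — log in once, then read protected pages as the authenticated user. The URL and form field names are assumptions about the target site, and older Selenium versions spell the lookups find_element_by_name(...):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get('https://example.com/login')

driver.find_element(By.NAME, 'username').send_keys('my_user')
driver.find_element(By.NAME, 'password').send_keys('my_password')
driver.find_element(By.NAME, 'submit').click()

# the session cookie now lives in the driver, so restricted pages render normally
driver.get('https://example.com/members/data')
html = driver.page_source      # feed this into whatever builds the XML output
driver.quit()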
1 | 0 | I've been attempting to use HTTrack to mirror a single page (downloading html + prerequisites: style sheets, images, etc), similar to the question "mirror single page with httrack". However, the accepted answer there doesn't work for me, as I'm using Windows (where wget "exists" but is actually a wrapper for Invoke-WebRequest and doesn't function at all the same way).
HTTrack really wants to either (a) download the entire website I point it at, or (b) only download the page I point it to, leaving all images still living on the web. Is there a way to make HTTrack download only enough to view a single page properly offline - the equivalent of wget -p? | false | 34,796,053 | 0.066568 | 0 | 0 | 1 | This is an old post so you might have figured it out by now. I just came across your post looking for another answer about using Python and HTTrack. I was having the same issue you were having and I passed the argument -r2 and it downloaded the images.
My arguments basically look like this:
cmd = [httrack, myURL,'-%v','-r2','-F',"Mozilla/5.0 (Windows NT 6.1; Win64; x64)",'-O',saveLocation] | 0 | 1,960 | 0 | 2 | 2016-01-14T17:33:00.000 | python,http,command-line,wget,httrack | Using HTTrack to mirror a single page | 1 | 2 | 3 | 41,248,744 | 0 |
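For completeness, a small sketch of running that command from Python; the httrack binary is assumed to be on PATH (on Windows you may need the full path to httrack.exe) and the URL and output directory are placeholders:

import subprocess

url = 'http://example.com/page.html'
save_location = '/tmp/mirror'

cmd = ['httrack', url, '-%v', '-r2',
       '-F', 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)',
       '-O', save_location]
subprocess.run(cmd, check=True)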
1 | 0 | I've been attempting to use HTTrack to mirror a single page (downloading html + prerequisites: style sheets, images, etc), similar to the question "mirror single page with httrack". However, the accepted answer there doesn't work for me, as I'm using Windows (where wget "exists" but is actually a wrapper for Invoke-WebRequest and doesn't function at all the same way).
HTTrack really wants to either (a) download the entire website I point it at, or (b) only download the page I point it to, leaving all images still living on the web. Is there a way to make HTTrack download only enough to view a single page properly offline - the equivalent of wget -p? | false | 34,796,053 | -0.066568 | 0 | 0 | -1 | Saving the page with your browser should download the page and all its prerequisites. | 0 | 1,960 | 0 | 2 | 2016-01-14T17:33:00.000 | python,http,command-line,wget,httrack | Using HTTrack to mirror a single page | 1 | 2 | 3 | 59,127,033 | 0 |
0 | 0 | I have a Python script that accepts text from a user, interprets that text and then produces a text response for that user. I want to create a simple web interface to this Python script that is accessible to multiple people at once. By this I mean that person A can go to the website for the script and begin interacting with the script and, at the same time, person B can do the same. This would mean that the script is running in as many processes/sessions as desired.
What would be a good way to approach this? | false | 34,833,974 | 0 | 0 | 0 | 0 | Maybe you should try a python web framework like django or flask .etc
Make a simple website that offers a webpage that contians a form to input text ,and when people visit the url, put their text in the form and submit, your code can handle it and return a webpage to show the result. | 0 | 44 | 0 | 0 | 2016-01-17T01:05:00.000 | python,html,web | How can one create an open web interface to a Python script that accepts and returns text? | 1 | 1 | 1 | 34,834,095 | 0 |
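A minimal Flask sketch of that idea: one page with a text form, where process_text stands in for whatever the existing script already does. Each request is handled independently, so several people can use it at the same time:

from flask import Flask, request

app = Flask(__name__)

def process_text(text):
    # placeholder for the existing script's interpret-and-respond logic
    return text.upper()

FORM = '''<form method="post">
            <input name="text"> <button type="submit">Send</button>
          </form>'''

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        return FORM + '<p>' + process_text(request.form['text']) + '</p>'
    return FORM

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)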
0 | 0 | I have coded a Python script for Twitter automation using Tweepy. When I run it on my own Linux machine as python file.py, it runs successfully and keeps running, because I have scheduled repeated tasks inside the script and I don't want to stop it. But since it is on my local machine, the script can stop when my internet connection is down or overnight, so I can't keep it running on my PC all day.
Is there any way, website, or method where I could deploy my script and have it execute there forever? I have heard about cron jobs in cPanel, which can help with repeated tasks, but in my case I want my script to keep running until I close it myself.
Are there any such solutions? Most Twitter bots I see run forever, meaning their script is executed somewhere 24x7. That's what I want to know: how is that possible? | false | 34,841,822 | 0 | 1 | 0 | 0 | You can add a systemd .service file, which can have the added benefit of:
logging (compressed logs at a central place, or over network to a log server)
disallowing access to /tmp and /home-directories
restarting the service if it fails
starting the service at boot
setting capabilities (ref setcap/getcap), disallowing file access if the process only needs network access, for instance | 0 | 1,461 | 0 | 2 | 2016-01-17T18:11:00.000 | python,python-2.7,tweepy | Is it Possible to Run a Python Code Forever? | 1 | 1 | 2 | 47,680,085 | 0 |
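For illustration only, a minimal unit file of the kind that answer describes — not Python, just the systemd config format; the service name, user and script path are placeholders you would adapt:

[Unit]
Description=Tweepy automation bot
After=network-online.target

[Service]
User=youruser
ExecStart=/usr/bin/python /home/youruser/file.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Saved as, say, /etc/systemd/system/twitterbot.service, it would be enabled with systemctl enable --now twitterbot.service; Restart=always restarts the script if it crashes, and the WantedBy line starts it again after a reboot.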
1 | 0 | I am trying to automate a native Android app using Robot Framework + Appium with AppiumLibrary and was able to open the application successfully. From there my struggle begins: I am not able to find any element on the screen through UI Automator Viewer, since the app I am testing uses a web-view context and shows up as a single frame (no elements in it are identified). I spoke to the dev team and they gave me some static HTML pages where I could see some element ids for the app. I used those ids, but whenever I run the test it throws an error that the element doesn't match. The same app works with a Java + Appium TestNG framework. The only difference I can see between the two is that with Java + Appium the complete HTML code is returned when we call the page source method on the Android driver object, but in Robot it returns the XML that was displayed in UI Automator Viewer (so this XML doesn't contain the HTML source with the element ids, Robot searches for the ids in this XML, and hence it fails). I am totally confused and stuck here. Can someone help me with this issue? | false | 34,903,359 | -0.379949 | 0 | 0 | -2 | Switching to the (webview) context resolved this issue. | 0 | 825 | 0 | 1 | 2016-01-20T14:57:00.000 | python,appium,robotframework | robot framework with appium ( not able identify elements ) | 1 | 1 | 1 | 34,927,269 | 0
1 | 0 | Question: yikyak.com returns some sort of "browser not supported" landing page when I try to view source code in chrome (even for the page I'm logged in on) or when I write it out to the Python terminal. Why is this and what can I do to get around it?
Edit for clarification: I'm using the Chrome webdriver. I can navigate around the Yik Yak website by clicking just fine. But whenever I try to see what HTML is on the page, I get the HTML for a "browser not supported" page.
Background: I'm trying to access yikyak.com with selenium for python to download yaks and do fun things with them. I know fairly little about web programming.
Thanks!
Secondary, less important question: If you're already here, are there particularly great free resources for a super-quick intro to the certification knowledge I need to store logins and stuff like that to use my logged in account? That would be awesome. | false | 34,915,058 | 0 | 0 | 0 | 0 | I figured it out. I was being dumb. I saved off the html as a file and opened that file with chrome and it displayed the normal page. I just didn't see the fact that it was a normal page looking at it directly. Thanks all 15 people for your time. | 0 | 62 | 0 | 0 | 2016-01-21T03:48:00.000 | python,google-chrome,selenium | Trying to view html on yikyak.com, get "browser out of date" page | 1 | 1 | 1 | 34,937,195 | 0 |
1 | 0 | Env: Windows 10 Pro
I installed python 2.7.9 and using pip installed robotframework and robotframework-selenium2library and it all worked fine with no errors.
Then I was doing some research and found that unless there is a reason for me to use 2.x versions of Python, I should stick with 3.x versions. Since 3.4 support already exists for selenium2library (read somewhere), so I decided to switch to it.
I uninstalled Python 2.7.9 and installed the Python 3.4 version. When I installed robotframework, I got the following:
C:\Users\username>pip install robotframework
Downloading/unpacking RobotFramework
Running setup.py (path:C:\Users\username\AppData\Local\Temp\pip_build_username\RobotFramework\setup.py) egg_info for package RobotFramework
no previously-included directories found matching 'src\robot\htmldata\testdata'
Installing collected packages: RobotFramework
Running setup.py install for RobotFramework
File "C:\Python34\Lib\site-packages\robot\running\timeouts\ironpython.py", line 57
raise self._error[0], self._error[1], self._error[2]
^
SyntaxError: invalid syntax
File "C:\Python34\Lib\site-packages\robot\running\timeouts\jython.py", line 56
raise self._error[0], self._error[1], self._error[2]
^
SyntaxError: invalid syntax
no previously-included directories found matching 'src\robot\htmldata\testdata'
replacing interpreter in robot.bat and rebot.bat.
Successfully installed RobotFramework
Cleaning up...
When I did pip list I do see robotframework is installed.
C:\Users\username>pip list
pip (1.5.4)
robotframework (3.0)
setuptools (2.1)
Should I be concerned and stick to Python 2.7.9? | false | 34,936,039 | 0 | 0 | 0 | 0 | With python 2.7.9 you can only install robotframework 2.9
With python 3.X you can install robotframework 3.x+ but as Bryan Oakley said, Selenium2Library is not yet supported ;) | 0 | 2,741 | 0 | 0 | 2016-01-21T23:02:00.000 | python-2.7,python-3.x,selenium-webdriver,robotframework | python version for robot framework selenium2library (Windows10) | 1 | 1 | 2 | 34,997,071 | 0 |
0 | 0 | I'm writing a Socket Server in Python, and also a Socket Client to connect to the Server.
The Client interacts with the Server in a way that the Client sends information when an action is invoked, and the Server processes the information.
The problem I'm having, is that I am able to connect to my Server with Telnet, and probably other things that I haven't tried yet. I want to disable connection from these other Clients, and only allow connections from Python Clients. (Preferably my custom-made client, as it sends information to communicate)
Is there a way I could set up authentication on connection to differentiate Python Clients from others?
Currently there is no code, as this is a problem I want to be able to solve before getting my hands dirty. | false | 34,946,330 | 0.099668 | 0 | 0 | 1 | When a new connection is made to your server, your protocol will have to specify some way for the client to authenticate. Ultimately there is nothing that the network infrastructure can do to determine what sort of process initiated the connection, so you will have to specify some exchange that allows the server to be sure that it really is talking to a valid client process. | 0 | 38 | 0 | 1 | 2016-01-22T12:05:00.000 | python,sockets,network-programming | Only allow connections from custom clients | 1 | 2 | 2 | 34,946,501 | 0 |
0 | 0 | I'm writing a Socket Server in Python, and also a Socket Client to connect to the Server.
The Client interacts with the Server in a way that the Client sends information when an action is invoked, and the Server processes the information.
The problem I'm having, is that I am able to connect to my Server with Telnet, and probably other things that I haven't tried yet. I want to disable connection from these other Clients, and only allow connections from Python Clients. (Preferably my custom-made client, as it sends information to communicate)
Is there a way I could set up authentication on connection to differentiate Python Clients from others?
Currently there is no code, as this is a problem I want to be able to solve before getting my hands dirty. | true | 34,946,330 | 1.2 | 0 | 0 | 0 | @holdenweb has already given a good answer with basic info.
If a (terminal) software sends the bytes that your application expects as a valid identification, your app will never know whether it talks to an original client or anything else.
A possible way to test for valid clients could be, that your server sends an encrypted and authenticated question (should be different at each test!), e.g. something like "what is 18:37:12 (current date and time) plus 2 (random) hours?"
Encryption/Authentication would be another issue then.
If you keep this algorithm secret, only your clients can answer it and validate themselves successfully. It can be hacked/reverse engineered, but it is safe against basic attackers. | 0 | 38 | 0 | 1 | 2016-01-22T12:05:00.000 | python,sockets,network-programming | Only allow connections from custom clients | 1 | 2 | 2 | 34,952,884 | 0 |
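A sketch of such a challenge–response using only the standard library: the server sends a random nonce and the client must answer with an HMAC computed from a secret baked into your custom client. The secret is a placeholder, and (as noted above) this only deters casual clients such as telnet, not someone who reverse-engineers your client:

import hmac, hashlib, os

SECRET = b'shared-secret-baked-into-your-client'

def expected(nonce):
    return hmac.new(SECRET, nonce, hashlib.sha256).hexdigest().encode()

# --- server side, right after accept() ---
def authenticate(conn):
    nonce = os.urandom(16).hex().encode()
    conn.sendall(nonce + b'\n')
    answer = conn.recv(128).strip()
    return hmac.compare_digest(answer, expected(nonce))   # drop the connection if False

# --- client side, right after connect() ---
def answer_challenge(sock):
    nonce = sock.recv(128).strip()
    sock.sendall(expected(nonce) + b'\n')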
1 | 0 | How do I only display the screen of my webdriver in a specific case using selenium?
I only want to display the window to the user when the title of the page is for exemple "XXX", so then he type something one the window and than the window close again, and the robot continue making what he should make on background.
Is it possible?
Thanks, | false | 34,950,994 | 0 | 0 | 0 | 0 | No, its not possible.
Only way is to use several browsers.
As example:
run phantomJS(headless browser)
run do needed actions
run firefox, used perform login
copy cookies after login
paste cookies to phantomJS
close firefox | 0 | 33 | 0 | 0 | 2016-01-22T16:07:00.000 | python,selenium,selenium-webdriver | How to display the webdriver window only in one specific condition using selenium (python)? | 1 | 1 | 1 | 34,969,465 | 0 |
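A rough sketch of the cookie hand-off between the visible browser and the headless one; site URL is a placeholder, PhantomJS is used as in the answer (newer setups would use headless Chrome/Firefox), and note the receiving driver must already be on the site's domain before add_cookie() accepts cookies:

from selenium import webdriver

site = 'https://example.com'

visible = webdriver.Firefox()
visible.get(site + '/login')
# ... the user types credentials in the visible window ...

headless = webdriver.PhantomJS()        # or any headless driver your setup provides
headless.get(site)                      # must be on the same domain first
for cookie in visible.get_cookies():
    headless.add_cookie(cookie)
visible.quit()

headless.get(site + '/protected')       # the session continues in the background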
0 | 0 | In my program, a serve-forever daemon is restarted in a subprocess.
The program itself is a web service, using port 5000 by default.
I don't know the detail of the start script of that daemon, but it seems to inherit the socket listening on port 5000.
So if I were to restart my program, I'll find that the port is already occupied by the daemon process.
Now I am considering to fine tune the subprocess function to close the inherited socket FD, but I don't know how to get the FD in the first place. | false | 34,988,678 | 0.099668 | 0 | 0 | 1 | It seems like a permission issue. The subprocess is probably running as an other user and therefore you will not have access to the process. Use sudo ps xauw |grep [processname] to figure as under what user the daemon process is running. | 0 | 429 | 1 | 0 | 2016-01-25T09:08:00.000 | python,linux,sockets,subprocess | How to get a socket FD according to the port occupied in Python? | 1 | 1 | 2 | 34,989,073 | 0 |
0 | 0 | I have multiple api which we have provided to android developers.
Like :
1) Creating Business card API
2) Creating Contacts API
These APIs work fine when the app is online. Our requirement is to handle creating business cards and contacts when the app is offline.
We are following these steps, but we are not sure about them:
1) The Android developer stores the business card while the app is offline and sends this data to the server using a separate offline business card API when the app comes online.
2) We do the same for creating contacts offline, using an offline contact API.
My problem is that I want a single API call that sends all the data to the server and performs the operations.
Is this approach right? Please also suggest the best approach to handle offline data, and how to handle syncing the data when the app comes back online.
Please let me know if I could provide more information. | false | 34,992,856 | 0 | 0 | 0 | 0 | I'm confused as to how you're approaching this. My understanding is that when the app is offline you want to "queue up" any API requests that are sent.
Your process seems fine, however without knowing exactly what "offline" means for the app it's hard to say whether this is best.
Assuming you mean the server(s) hosting the application are offline, you're correct that you want a process in the Android app that stores the request until the application comes back online. However, this can be dangerous for end users. They should receive a message that the application is offline and to "try again later", as it were. The fear is that they submit a request for x new contacts to be queued and then re-submit, not realizing the application was offline.
I would suggest building the Android app to either notify the user that the app is down, or provide a very visible notification that requests are queued locally on their phone until the application becomes available, and let them view/modify/delete those locally cached requests in the meantime. When the API becomes available, a notification can be sent so users release the queue on their device. | 0 | 1,524 | 0 | 3 | 2016-01-25T12:39:00.000 | python,django,django-rest-framework,offlineapps | how to design rest api which handle offline data | 1 | 1 | 1 | 34,996,085 | 0
0 | 0 | I'm trying to send an email using the Gmail API in python. I think I followed the relevant documentation and youtube vids.
I'm running into this error:
googleapiclient.errors.HttpError: HttpError 403 when requesting https://www.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Insufficient Permission"
Here is my script:
#!/usr/bin/env python
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
from email.mime.text import MIMEText
import base64
import errors
SCOPES = 'https://mail.google.com/'
CLIENT_SECRET = 'client_secret.json'
store = file.Storage('storage.json')
credz = store.get()
if not credz or credz.invalid:
flags = tools.argparser.parse_args(args=[])
flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES)
credz = tools.run_flow(flow, store, flags)
GMAIL = build('gmail', 'v1', http=credz.authorize(Http()))
def CreateMessage(sender, to, subject, message_text):
"""Create a message for an email.
Args:
sender: Email address of the sender.
to: Email address of the receiver.
subject: The subject of the email message.
message_text: The text of the email message.
Returns:
An object containing a base64url encoded email object.
"""
message = MIMEText(message_text)
message['to'] = to
message['from'] = sender
message['subject'] = subject
return {'raw': base64.urlsafe_b64encode(message.as_string())}
def SendMessage(service, user_id, message):
"""Send an email message.
Args:
service: Authorized Gmail API service instance.
user_id: User's email address. The special value "me"
can be used to indicate the authenticated user.
message: Message to be sent.
Returns:
Sent Message.
"""
try:
message = (service.users().messages().send(userId=user_id, body=message)
.execute())
print 'Message Id: %s' % message['id']
return message
except errors.HttpError, error:
print 'An error occurred: %s' % error
message = CreateMessage('[email protected]', '[email protected]', 'test_subject', 'foo')
print message
SendMessage(GMAIL, 'me', message)
I tried adding scopes, trying different emails, etc. I have authenticated by logging into my browser as well. (The [email protected] is a dummy email btw) | false | 34,999,194 | 0.197375 | 1 | 0 | 2 | Try deleting generated storage.json file and then try again afresh.
You might have run this script earlier with different scopes, so "storage.json" might contain credentials with the wrong permissions. | 0 | 2,231 | 0 | 1 | 2016-01-25T18:03:00.000 | python,api,email,gmail,send | 403 error sending email with gmail API (python) | 1 | 2 | 2 | 35,799,866 | 0
0 | 0 | I'm trying to send an email using the Gmail API in python. I think I followed the relevant documentation and youtube vids.
I'm running into this error:
googleapiclient.errors.HttpError: HttpError 403 when requesting https://www.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Insufficient Permission"
Here is my script:
#!/usr/bin/env python
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
from email.mime.text import MIMEText
import base64
import errors
SCOPES = 'https://mail.google.com/'
CLIENT_SECRET = 'client_secret.json'
store = file.Storage('storage.json')
credz = store.get()
if not credz or credz.invalid:
flags = tools.argparser.parse_args(args=[])
flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES)
credz = tools.run_flow(flow, store, flags)
GMAIL = build('gmail', 'v1', http=credz.authorize(Http()))
def CreateMessage(sender, to, subject, message_text):
"""Create a message for an email.
Args:
sender: Email address of the sender.
to: Email address of the receiver.
subject: The subject of the email message.
message_text: The text of the email message.
Returns:
An object containing a base64url encoded email object.
"""
message = MIMEText(message_text)
message['to'] = to
message['from'] = sender
message['subject'] = subject
return {'raw': base64.urlsafe_b64encode(message.as_string())}
def SendMessage(service, user_id, message):
"""Send an email message.
Args:
service: Authorized Gmail API service instance.
user_id: User's email address. The special value "me"
can be used to indicate the authenticated user.
message: Message to be sent.
Returns:
Sent Message.
"""
try:
message = (service.users().messages().send(userId=user_id, body=message)
.execute())
print 'Message Id: %s' % message['id']
return message
except errors.HttpError, error:
print 'An error occurred: %s' % error
message = CreateMessage('[email protected]', '[email protected]', 'test_subject', 'foo')
print message
SendMessage(GMAIL, 'me', message)
I tried adding scopes, trying different emails, etc. I have authenticated by logging into my browser as well. (The [email protected] is a dummy email btw) | false | 34,999,194 | 0.099668 | 1 | 0 | 1 | I had the same problem.
I solved it by re-running the quickstart.py that Google provides and changing SCOPES so that Google grants all the permissions you want. After that you don't need SCOPES or CLIENT_SECRET in your new code to send a message, just the get_credentials(), CreateMessage() and SendMessage() methods. | 0 | 2,231 | 0 | 1 | 2016-01-25T18:03:00.000 | python,api,email,gmail,send | 403 error sending email with gmail API (python) | 1 | 2 | 2 | 46,799,877 | 0
1 | 0 | If I create a Charge object via the Stripe API and the card is valid, but the charge is declined, what error does this cause? It doesn't look to be possible to simulate this error in the test sandbox and I'd like to be able to catch it (and mock it in tests), but the documentation isn't clear on this point. | true | 35,017,039 | 1.2 | 0 | 0 | 2 | That would trigger a declined charge which is a card_error and can be triggered with this card number: 4000000000000002: Charge will be declined with a card_declined code. | 0 | 1,120 | 0 | 1 | 2016-01-26T15:07:00.000 | python,stripe-payments | Catching API response for insufficient funds from Stripe | 1 | 1 | 1 | 35,018,652 | 0 |
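A sketch of catching that with the stripe Python library, using a documented Stripe decline test token (the 4000 0000 0000 0002 test card mentioned above works the same way); the API key is a placeholder and attribute names may vary slightly between library versions:

import stripe

stripe.api_key = 'sk_test_your_key'

try:
    stripe.Charge.create(amount=2000, currency='usd',
                         source='tok_chargeDeclined',   # test token that simulates a decline
                         description='test decline')
except stripe.error.CardError as err:
    body = err.json_body or {}
    code = body.get('error', {}).get('code')
    print('Charge declined:', code)       # e.g. 'card_declined'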
1 | 0 | I am doing a data scraping project in Python. For that I need to use Beautiful Soup and lxml. Should I install them globally or in a virtual environment? | false | 35,028,910 | 0 | 0 | 0 | 0 | That's a matter of personal preference; however, in most cases the benefits of installing libraries in a virtual environment far outweigh the costs.
Setting up virtualenv (and perhaps virtualenvwrapper), creating an environment for your project, and activating it will take 2-10 minutes (depending on your familiarity with the system) before you can start work on your project itself, but it may save you a lot of hassle further down the line. I would recommend that you do so. | 1 | 104 | 0 | 0 | 2016-01-27T04:29:00.000 | python-2.7,beautifulsoup,lxml | Do i need Virtual environment for lxml and beautiful soup in linux? | 1 | 2 | 2 | 35,028,988 | 0 |
1 | 0 | I am doing a data scraping project in Python. For that I need to use Beautiful Soup and lxml. Should I install them globally or in a virtual environment? | true | 35,028,910 | 1.2 | 0 | 0 | 3 | Using or not using a virtual environment is up to you, but it is always a best practice to use virtualenv and virtualenvwrapper, so that if something unusual happens with your project and its dependencies it won't hamper the Python residing at the system level.
It might happen that in the future you need to work on a different version of lxml or beautifulsoup; if you do not use a virtual environment you would have to upgrade or downgrade the libraries system-wide, and then your older project would no longer run because you changed everything in the system-level Python. Therefore it is wise to start using best practices as early as possible to save time and effort. | 1 | 104 | 0 | 0 | 2016-01-27T04:29:00.000 | python-2.7,beautifulsoup,lxml | Do i need Virtual environment for lxml and beautiful soup in linux? | 1 | 2 | 2 | 35,029,148 | 0
0 | 0 | This is really weird. I have a notebook server running remotely and could connect to it successfully until yesterday.
I can still connect to the notebook server using localhost or [ip] on the remote server itself. But when trying from a remote PC it always times out.
I ran netstat -antop | grep :port and saw jupyter listening on both localhost and any IP. I also ran tcpdump on the remote server and can see the web request coming in from my remote PC, retried two times. But the ipython notebook doesn't receive any requests (judging from the output in --debug mode).
Any clue why this happened? | true | 35,029,353 | 1.2 | 0 | 0 | 2 | Since you have ruled out all other possible causes, it is probably a firewall between you and the remote host blocking the port. | 1 | 1,081 | 0 | 0 | 2016-01-27T05:13:00.000 | ipython-notebook,jupyter-notebook | Cannot connect to remote ipython notebook server | 1 | 1 | 1 | 35,031,657 | 0 |
0 | 0 | I am interested in socket programming. I would like to write an IPv6 UDP server to send and receive on a Raspberry Pi (connected with an Ethernet cable and opened in PuTTY). After surfing a couple of sites I am confused about the IPv6 UDP host address. Which type of host address should I use to send and receive IPv6 UDP messages?
Is it the link-local address,
example:
host ='fe80::ba27:ebff:fed4:5691';//link local address to Tx and Rx from Raspberry
or
host = 'ff02::1:ffd4:5691'
Thank you so much.
Regards,
Mahesh | true | 35,042,006 | 1.2 | 1 | 0 | 1 | You can use host ='fe80::ba27:ebff:fed4:5691', assuming you only have one link.
Link-Local addresses (Link-Local scope) are designed to be used for addressing on a single link for purposes such as automatic address configuration, neighbor discovery or when no routers are present. Routers must not forward any packets with Link-Local source or destination addresses to other links.
So if you are sending data from a server to a Raspberry Pi (one link), you can use the link-local scope for your IPv6 address.
host = 'ff02::1:ffd4:5691' is a link-local multicast address; unless you have a reason to send multicast, there is no need for it. | 0 | 422 | 1 | 0 | 2016-01-27T15:52:00.000 | python,sockets,udp,raspberry-pi,ipv6 | Ipv6 UDP host address for bind | 1 | 1 | 1 | 35,063,138 | 0
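A small sketch of binding a UDP socket to that link-local address on the Pi; because fe80::/10 addresses need a scope (interface) id, the address is resolved with the interface name appended — eth0 and port 5005 are assumptions about the setup:

import socket

addr = 'fe80::ba27:ebff:fed4:5691%eth0'     # link-local address + interface scope
port = 5005

info = socket.getaddrinfo(addr, port, socket.AF_INET6, socket.SOCK_DGRAM)
family, socktype, proto, _, sockaddr = info[0]

sock = socket.socket(family, socktype, proto)
sock.bind(sockaddr)

data, sender = sock.recvfrom(1024)
print('received', data, 'from', sender)
sock.sendto(b'ack', sender)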
0 | 0 | I want to populate a list of commands stored on a server when tab is pressed; as I read, autocomplete can only be used with a local set of words. Is there a way to send ('\t') to the telnet server when the user hits tab on raw_input, which would at least return a list of possible commands and show them to the user? | false | 35,050,323 | 0 | 0 | 0 | 0 | If I understand correctly then no... You will have to come up with a workaround. | 0 | 81 | 0 | 0 | 2016-01-27T23:41:00.000 | python,telnet,raw-input | Send data via socket when user hits tab on raw_input in Python | 1 | 1 | 1 | 35,051,893 | 0
1 | 0 | I have a login page that sends a request to a Python script to authenticate the user. If the user is not authenticated it redirects to the login page; if they are, it redirects to an HTML form. The problem is that you can access the HTML form by typing its URL directly. How can I make sure, from my Python script, that the user came from the login form, without using extra modules (I can't install anything on my server)? I want it to be strictly Python; I can't use PHP. Is it possible? Can I use other methods to accomplish the task? | false | 35,094,371 | -0.07983 | 0 | 0 | -2 | Make sure to check the "referer" header in Python, and validate that the address is your login page. | 0 | 1,098 | 0 | 0 | 2016-01-29T21:29:00.000 | javascript,jquery,python,html | How to prevent a user from directly accessing my html page | 1 | 1 | 5 | 35,094,515 | 0
0 | 0 | Let's assume I have computer A and a variable x which is updated very frequently, and the update also takes some time (let's say it is asked to update every second and each update takes 0.5 sec).
Now, once a minute, computer B asks for x's value in an HTTP GET request, and A sends it a copy of x.
Because x might be in use by A at that moment, I need to make sure that nothing goes wrong.
How can I ensure that? What are my options for doing this? | false | 35,109,095 | 0 | 0 | 0 | 0 | I think you can return x in a callback function.
Alternatively, like Twisted's reactor.callLater, server B could ask again and wait for A to produce the latest result; but since x updates every second, that may lead to the other servers constantly re-requesting. | 0 | 52 | 0 | 0 | 2016-01-31T00:54:00.000 | python,multithreading,http,operating-system,communication | How to send constantly updated data in http request? | 1 | 1 | 1 | 35,109,280 | 0
1 | 0 | I have a blog on blogger.com, and they have a section where you can put html/javascript code in. I'm a total beginner to javascript/html but I'm somewhat adept at python. I want to open a listening socket on python(my computer) so that everytime a guest looks at my blog the javascript sends my python socket some data, like ip or datetime for example. I looked around on the internet, and ended up with the tornado module for my listening socket, but I have a hard time figuring out the javascript code.
Basically it involves no servers. | false | 35,146,632 | 0 | 0 | 0 | 0 | You would probably need to deploy the website on your own server, because listening for client connections is a server-side thing; blogger.com hosts the server, and the JavaScript section it provides to you is only for static pages.
It would only work if blogger.com provided you an API with some function like
app.on("connection",function(){
/*send data to your python program*/
}) | 0 | 93 | 0 | 1 | 2016-02-02T06:35:00.000 | javascript,python,sockets | Web Socket between Javascript and Python | 1 | 1 | 1 | 35,147,809 | 0 |
0 | 0 | I use Python to simply call the GitHub gist API. I tried urllib2 at first, which cost me about 10 seconds! With requests it takes less than 1 second.
I am on a corporate network, using a proxy. Do these two libs have different default behavior behind a proxy?
I also used Fiddler to check the network. In both situations the HTTP request finished in about 40 ms. So where does urllib spend the time? | true | 35,146,733 | 1.2 | 1 | 0 | 0 | It's most likely that DNS caching sped up the requests. DNS queries can take a lot of time on corporate networks; I don't know why, but I experience the same. The first time, when you sent the request with urllib2, DNS was queried (slow) and cached. The second time, when you sent the request with requests, DNS did not need to be queried and was just retrieved from the cache.
Clear up the DNS cache and change the order, i.e. request with requests first, see if there is any difference. | 0 | 241 | 0 | 4 | 2016-02-02T06:41:00.000 | python,python-requests,urllib | Is urllib2 slower than requests in python3 | 1 | 1 | 1 | 35,147,116 | 0 |
0 | 0 | I want to collect accelerometer data on my android phone and communicate it to my laptop over wifi.
A Python script collects data on the phone with Python for SL4A and another Python script receives the data on the laptop. Both devices are on the same wifi network.
The principle looks pretty straightforward, but I have no clue how to communicate between the two devices. Which should be the server, and which should be the client?
I'm not looking for a way to collect accelerometer data or for somebody to write my script; I just can't find info on the wifi part on the web.
Can anybody provide any help?
Thanks in advance | false | 35,163,521 | 0 | 1 | 0 | 0 | You mention two questions in your statement: 1) how to communicate over the same wifi network, and 2) which side should be the server.
1) I have tried communicating between two nodes using sockets and the multiprocessing manager; both are really helpful for that kind of over-the-network communication. You can connect two nodes using a manager or a socket; sockets also help you get the IP of a node on the network, while the manager simplifies the whole process.
2) If I were you, I would choose the laptop as the server, since it would listen on a certain port and bind to it to receive data. One reason to choose the laptop as the server is that it is more convenient if you later want to add more smartphones collecting data.
I don't know SL4A well, but I have done some projects communicating over a network; this is just a suggestion, and I hope it is helpful and not too late for you. | 0 | 346 | 0 | 0 | 2016-02-02T20:49:00.000 | android,python,wifi,sl4a | sl4a python communicate with pc over wifi | 1 | 1 | 1 | 36,829,122 | 0
0 | 0 | I am currently working on a server + client combo on python and I'm using TCP sockets. From networking classes I know, that TCP connection should be closed step by step, first one side sends the signal, that it wants to close the connection and waits for confirmation, then the other side does the same. After that, socket can be safely closed.
I've seen in python documentation function socket.shutdown(flag), but I don't see how it could be used in this standard method, theoretical of closing TCP socket. As far as I know, it just blocks either reading, writing or both.
What is the best, most correct way to close TCP socket in python? Are there standard functions for closing signals or do I need to implement them myself? | false | 35,174,394 | 0.197375 | 0 | 0 | 2 | shutdown is useful when you have to signal the remote client that no more data is being sent. You can specify in the shutdown() parameter which half-channel you want to close.
Most commonly, you want to close the TX half-channel, by calling shutdown(1). In TCP level, it sends a FIN packet, and the remote end will receive 0 bytes if blocking on read(), but the remote end can still send data back, because the RX half-channel is still open.
Some application protocols use this to signal the end of the message. Some other protocols find the EOM based on data itself. For example, in an interactive protocol (where messages are exchanged many times) there may be no opportunity, or need, to close a half-channel.
In HTTP, shutdown(1) is one method that a client can use to signal that a HTTP request is complete. But the HTTP protocol itself embeds data that allows to detect where a request ends, so multiple-request HTTP connections are still possible.
I don't think that calling shutdown() before close() is always necessary, unless you need to explicitly close a half-channel. If you want to cease all communication, close() does that too. Calling shutdown() and forgetting to call close() is worse because the file descriptor resources are not freed.
From Wikipedia: "On SVR4 systems use of close() may discard data. The use of shutdown() or SO_LINGER may be required on these systems to guarantee delivery of all data." This means that, if you have outstanding data in the output buffer, a close() could discard this data immediately on a SVR4 system. Linux, BSD and BSD-based systems like Apple are not SVR4 and will try to send the output buffer in full after close(). I am not sure if any major commercial UNIX is still SVR4 these days.
Again using HTTP as an example, an HTTP client running on SVR4 would not lose data using close() because it will keep the connection open after request to get the response. An HTTP server under SVR would have to be more careful, calling shutdown(2) before close() after sending the whole response, because the response would be partly in the output buffer. | 0 | 5,767 | 0 | 2 | 2016-02-03T10:25:00.000 | python,sockets,tcp | Proper way to close tcp sockets in python | 1 | 1 | 2 | 35,206,533 | 0 |
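A small client-side sketch of the usual pattern: send the request, shut down the TX half-channel so the peer sees EOF, keep reading until the peer closes its side, then close; host and port are placeholders:

import socket

sock = socket.create_connection(('example.com', 8000))
sock.sendall(b'request payload')

sock.shutdown(socket.SHUT_WR)        # sends FIN: "no more data from me", RX stays open

chunks = []
while True:
    data = sock.recv(4096)
    if not data:                     # peer finished sending and closed its side
        break
    chunks.append(data)

sock.close()                         # now release the file descriptor
print(b''.join(chunks))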
0 | 0 | I have read lots about python threading and the various means to 'talk' across thread boundaries. My case seems a little different, so I would like to get advice on the best option:
Instead of having many identical worker threads waiting for items in a shared queue, I have a handful of mostly autonomous, non-daemonic threads with unique identifiers going about their business. These threads do not block and normally do not care about each other. They sleep most of the time and wake up periodically. Occasionally, based on certain conditions, one thread needs to 'tell' another thread to do something specific - an action - meaningful to the receiving thread. There are many different combinations of actions and recipients, so using Events for every combination seems unwieldy. The queue object seems to be the recommended way to achieve this. However, if I have one shared queue and post an item addressed to just one recipient thread, then every other thread needs to monitor the queue, pull every item, check whether it is addressed to it, and put it back in the queue if it was addressed to another thread. That seems like a lot of getting and putting items from the queue for nothing. Alternatively, I could employ a 'router' thread: one shared-by-all queue plus one queue for every 'normal' thread, shared with the router thread. Normal threads only ever put items in the shared queue; the router pulls every item, inspects it and puts it on the addressee's queue. Still, a lot of putting and getting items from queues....
Are there any other ways to achieve what I need to do ? It seems a pub-sub class is the right approach, but there is no such thread-safe module in standard python, at least to my knowledge.
Many thanks for your suggestions. | false | 35,195,348 | 0 | 0 | 0 | 0 | Thanks for the response. After some thoughts, I have decided to use the approach of many queues and a router-thread (hub-and-spoke). Every 'normal' thread has its private queue to the router, enabling separate send and receive queues or 'channels'. The router's queue is shared by all threads (as a property) and used by 'normal' threads as a send-only-channel, ie they only post items to this queue, and only the router listens to it, ie pulls items. Additionally, each 'normal' thread uses its own queue as a 'receive-only-channel' on which it listens and which is shared only with the router. Threads register themselves with the router on the router queue/channel, the router maintains a list of registered threads including their queues, so it can send an item to a specific thread after its registration.
This means that peer to peer communication is not possible, all communication is sent via the router.
There are several reasons I did it this way:
1. There is no logic in the thread for checking if an item is addressed to 'me', making the code simpler and no constant pulling, checking and re-putting of items on one shared queue. Threads only listen on their queue, when a message arrives the thread can be sure that the message is addressed to it, including the router itself.
2. The router can act as a message bus, do vocabulary translation and has the possibility to address messages to external programs or hosts.
3. Threads don't need to know anything about other threads capabilities, ie they just speak the language of the router. In a peer-to-peer world, all peers must be able to understand each other, and since my threads are of many different classes, I would have to teach each class all other classes' vocabulary.
Hope this helps someone some day when faced with a similar challenge. | 1 | 1,923 | 0 | 3 | 2016-02-04T07:51:00.000 | python,multithreading | Recommended way to send messages between threads in python? | 1 | 1 | 2 | 35,222,210 | 0 |
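A compact sketch of that hub-and-spoke layout with standard-library queues; the thread names and message format are illustrative, and registration is done up front rather than via a register message to keep the example short:

import queue
import threading

router_q = queue.Queue()                                  # shared send-only channel to the router
inboxes = {name: queue.Queue() for name in ('a', 'b')}    # per-thread receive-only channels

def router():
    while True:
        msg = router_q.get()
        inboxes[msg['to']].put(msg)       # deliver to the addressee only

def worker(name, peer):
    router_q.put({'from': name, 'to': peer, 'action': 'do-something'})
    print(name, 'received', inboxes[name].get())

threading.Thread(target=router, daemon=True).start()
for n, p in (('a', 'b'), ('b', 'a')):
    threading.Thread(target=worker, args=(n, p)).start()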
0 | 0 | Situation:
A Pyro4 server gives a Pyro4 client a Pyro4 proxy.
I want to detect whether the client is still indeed using this proxy, so that the server can give the proxy to other clients.
My idea at the moment is to have the server periodically ping the client. To do this, the client itself needs to host a Pyro daemon and give the server a Pyro4 proxy, so that the server can use this proxy to ping clients.
Is there a cleaner way to do this? | false | 35,226,451 | 0 | 1 | 0 | 0 | I'd let the client report back to the server as soon as it no longer needs the proxy. I.e. don't overcomplicate your server with dependencies/knowledge about the clients. | 0 | 225 | 0 | 0 | 2016-02-05T14:22:00.000 | python,python-2.7,rpc,pyro | How to check if Pyro4 client is still alive | 1 | 1 | 1 | 35,517,121 | 0 |