Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | We have two servers (client-facing, and back-end database) between which we would like to transfer PDFs. Here's the data flow:
User requests PDF from website.
Site sends request to client-server.
Client server requests PDF from back-end server (different IP).
Back-end server sends PDF to client server.
Client server sends PDF to website.
1-3 and 5 are all good, but #4 is the issue.
We're currently using Flask requests for our API calls and can transfer text and .csv easily, but binary files such as PDF are not working.
And no, I don't have any code, so take it easy on me. Just looking for a suggestion from someone who may have come across this issue. | true | 41,154,360 | 1.2 | 0 | 0 | 1 | As you said you have no code, that's fine, but I can only give a few suggestions.
I'm not sure how you're sending your files, but I'm assuming that you're using Python's open function.
Make sure you are reading the file as bytes (e.g. open('<pdf-file>','rb'))
Cut the file up into chunks and reassemble them into one file on the other end; this way the transfer doesn't freeze or get stuck.
Try smaller PDF files, if this works definitely try suggestion #2.
Use threads, you can multitask with them.
Have a download server; this can save memory and potentially save bandwidth. It also lets you skip sending the PDF back from Flask.
Don't use PDF files if you don't have to.
Use a library to do it for you.
Hope this helps! | 0 | 1,341 | 0 | 0 | 2016-12-15T00:10:00.000 | python,api,pdf | Transfer PDF files between servers in python | 1 | 2 | 2 | 41,154,455 | 0 |
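Suggestions 1 and 2 together amount to opening the file in binary mode and streaming it in chunks. A minimal sketch (the endpoint wiring is up to you):

```python
def read_chunks(path, chunk_size=64 * 1024):
    # Open in binary mode ('rb'): text mode would mangle the PDF bytes.
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk
```

With the requests library, a generator like this can be passed straight to requests.post(url, data=read_chunks(path)), which streams the body with chunked transfer encoding instead of loading the whole PDF into memory.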
0 | 0 | With Selenium Webdriver, I have to upload some files on a website but since the pop-up window for browsing the file location is handled by the operating system and not the browser, I cannot automate that part with Selenium.
So I want to know which framework or module I need to use to work with system windows of Windows OS. Can tkInter or wxPython be used for this?
My script will be used on Windows 7, 8 & 10. | false | 41,158,067 | 0 | 0 | 0 | 0 | You can call autoit3 framework from Python even to open the File Open dialog and fill in the values and press OK or do whatever with the windows. Autoit3 has a dll that can be loaded and called using ctypes. That's what I did in one or 2 projects.
If I understand your question correctly, wxPython or Tk won't help you. They can be used to make a windowed UI, not to control other programs. | 0 | 253 | 0 | 0 | 2016-12-15T06:50:00.000 | python,selenium,tkinter,wxpython,selenium-chromedriver | How do I handle the system windows with Python on Windows OS? | 1 | 1 | 2 | 41,159,150 | 0
1 | 0 | I have been searching for a way to get my past ride invoices. Is there any API which I can use to get my rides' info/invoices, as I don't want to click on every ride one by one? I just want to automate the boring stuff. | true | 41,163,658 | 1.2 | 0 | 0 | 1 | "Does uber have an API which let user download invoice pdf?" - No, that API does not exist. | 0 | 106 | 0 | 0 | 2016-12-15T11:59:00.000 | python,uber-api | Does uber have an API which let user download invoice pdf? | 1 | 1 | 1 | 41,212,414 | 0
0 | 0 | Is there a way to get the direct download URL using youtube-dl?
I tried it with youtube-dl -g https://www.youtube.com/watch?v=xxx
It returns a URL that looks correct at first sight, but it leads to a blank page that shows the video player. I want to extract the direct download URL like the example below.
Link to player: https://r4---sn-fpoq-cgpl.googlevideo.com/videoplayback?mime=video%2Fmp4&key=yt6&itag=22&lmt=1476010871066368&source=youtube&upn=4B17cM_dGEU&ei=cNdSWM7CKMjMigbzv62ADA&ip=151.45.98.20&requiressl=yes&initcwndbps=695000&ms=au&mt=1481824045&mv=m&sparams=dur%2Cei%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cratebypass%2Crequiressl%2Csource%2Cupn%2Cexpire&id=o-AHTIR887C2uesvqaEJtgUJhaFssm050soDMhiXfgLQ1f&pl=16&mm=31&mn=sn-fpoq-cgpl&ipbits=0&dur=226.649&ratebypass=yes&expire=1481845712&signature=34BB16F2B7F758CA44680A778F46AC49EBCA3BE3.B0452B32B62D4AA133BA2F59E78EFD66FEA6298D
Direct Link to file: https://r4---sn-cu-n1qs.googlevideo.com/videoplayback?id=o-AFc2eS8nuL2DLN608O3_QxaAQWNDCeIRl9oGTvRo-fKM&ip=81.140.223.31&sparams=dur,ei,id,initcwndbps,ip,ipbits,itag,lmt,mime,mm,mn,ms,mv,pcm2,pcm2cms,pl,ratebypass,requiressl,source,upn,expire&ei=qdVSWOP7BIK7WaTrkdgM&dur=226.649&pl=25&initcwndbps=1175000&source=youtube&ratebypass=yes&pcm2cms=yes&requiressl=yes&pcm2=yes&expire=1481845257&key=yt6&mime=video/mp4&ipbits=0&lmt=1476010871066368&itag=22&mv=m&mt=1481823436&ms=au&mn=sn-cu-n1qs&mm=31&upn=6EUZ1r48CCw&signature=9F514204B90A32936912E5134B58BD8200177AF1.5A5C8BAC42B32C62229D078F0B566890F7DA524B&&title=Bruno+Mars+-+24K+Magic+%5BOfficial+Video%5D
Is there a way to do that? | true | 41,171,238 | 1.2 | 0 | 0 | 2 | I found an answer myself. I don't know why, but to generate a download URL the only thing to do is add the title at the end of the URL, so adding &title=Bruno+Mars+-+24K+Magic+%5BOfficial+Video%5D at the end of the first URL solved my problem. | 0 | 3,296 | 0 | 1 | 2016-12-15T18:44:00.000 | php,python,download,youtube,youtube-dl | youtube-dl get direct download url | 1 | 1 | 1 | 41,175,121 | 0
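That fix can be scripted end to end: ask youtube-dl for the player URL with -g, then append a URL-encoded title parameter. A sketch (it assumes the youtube-dl binary is on PATH; whether the resulting link actually triggers a download still depends on YouTube's behavior):

```python
import subprocess
from urllib.parse import quote_plus

def to_download_url(player_url, title):
    # Appending a 'title' parameter turns the player URL into a download URL.
    return player_url + '&title=' + quote_plus(title)

def direct_download_url(video_url, title):
    # youtube-dl -g prints the raw player URL for the video.
    player_url = subprocess.check_output(
        ['youtube-dl', '-g', video_url]).decode().strip()
    return to_download_url(player_url, title)
```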
1 | 0 | I have a spider that I am using to crawl a site. I only need javascript for one piece of my item. So I scrape part of the site with scrapy then open the URL in selenium. While the URL is opening scrapy continues. How do I make scrapy to wait on my selenium logic to finish?
Thanks in advance. | false | 41,185,693 | 0 | 0 | 0 | 0 | You can use DOWNLOAD_DELAY = 0.25 in settings.py, so that the downloader waits before downloading consecutive pages from the same website.
Another way is to use time.sleep() to delay the spider until Selenium returns its response. | 0 | 505 | 0 | 0 | 2016-12-16T13:38:00.000 | python,selenium,scrapy,web-crawler | Scrapy and Selenium: Make scrapy wait for selenium? | 1 | 1 | 1 | 46,603,023 | 0
0 | 0 | So I am having a bit of a issue with the concepts behind Dataflow. Especially regarding the way the pipelines are supposed to be structured.
I am trying to consume an external API that delivers an index XML file with links to separate XML files. Once I have the contents of all the XML files I need to split those up into separate PCollections so additional PTransforms can be done.
It is hard to wrap my head around the fact that the first xml file needs to be downloaded and read, before the product XML's can be downloaded and read. As the documentation states that a pipeline starts with a Source and ends with a Sink.
So my questions are:
Is Dataflow even the right tool for this kind of task?
Is a custom Source meant to incorporate this whole process, or is it supposed to be done in separate steps/pipelines?
Is it ok to handle this in a pipeline and let another pipeline read the files?
How would a high-level overview of this process look like?
Things to note: I am using the Python SDK for this, but that probably isn't really relevant as this is more a architectural problem. | true | 41,212,272 | 1.2 | 0 | 0 | 4 | Yes, this can absolutely be done. Right now, it's a little klutzy at the beginning, but upcoming work on a new primitive called SplittableDoFn should make this pattern much easier in the future.
Start by using Create to make a dummy PCollection with a single element.
Process that PCollection with a DoFn that downloads the file, reads out the subfiles, and emits those.
[Optional] At this point, you'll likely want work to proceed in parallel. To allow the system to easily parallelize, you'll want to do a semantically unnecessary GroupByKey followed by a ParDo to 'undo' the grouping. This materializes these filenames into temporary storage, allowing the system to have different workers process each element.
Process each subfile by reading its contents and emit into PCollections. If you want different file contents to be processed differently, use Partition to sort them into different PCollections.
Do the relevant processing. | 0 | 614 | 1 | 0 | 2016-12-18T19:55:00.000 | python,etl,google-cloud-dataflow | Google Cloud Dataflow consume external source | 1 | 1 | 1 | 41,229,251 | 0 |
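The step-2 logic (download the index, read out the subfiles, emit their URLs) boils down to a plain generator function that a beam.FlatMap can wrap in the Python SDK. A sketch assuming, hypothetically, an index whose <file> elements carry an href attribute:

```python
import xml.etree.ElementTree as ET

def extract_subfile_urls(index_xml):
    # Parse the index document and emit the URL of every sub-file it lists.
    # The 'file'/'href' names are placeholders for the real index schema.
    root = ET.fromstring(index_xml)
    for node in root.iter('file'):
        href = node.get('href')
        if href:
            yield href
```

In the pipeline this would sit behind a beam.FlatMap(extract_subfile_urls) applied to the downloaded index contents, producing one element per sub-file URL for the later steps.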
0 | 0 | I'm working on a project mainly for a bit of fun.
I set up a twitter account and wrote a python script to write tweets.
Initially i hard-coded the twitter credentials for my app into my script (tweet.py)
Now i want to share the project so i have removed my app's credentials from tweet.py and added them to a config file. I have added the config file to .gitignore.
My question is, if someone forks my project, can they somehow checkout an old version of tweet.py which has the credentials? If so, what steps can i take to cover myself in this case? | false | 41,219,491 | 0.099668 | 1 | 0 | 1 | Yes, anyone can see the old files in version history in git-hub free version. If you want to make your project secure, you have to pay for private repository in github.
If you don't want to pay, follow what @Stijin suggested. | 0 | 36 | 0 | 0 | 2016-12-19T09:31:00.000 | python,git,twitter | Git security - private information in previous commits | 1 | 1 | 2 | 41,219,615 | 0
0 | 0 | I am a noob to web and mqtt programming, I am working on a python application that uses mqtt (via hivemq or rabbitmq broker) and also needs to implement http rest api for clients.
I realized the Python bottle framework makes it pretty easy to provide a simple HTTP server; however, both bottle and MQTT have their own event loops. How do I combine these two event loops? I want to have a single-threaded app to avoid complexity.
Paho provides a loop_start() which will kick off it's own background thread to run the MQTT network event loop.
Given there looks to be no way to run the bottle loop manually I would suggest calling loop_start() before run() and letting the app run on 2 separate threads as there is no way to combine them and you probably wouldn't want to anyway.
The only thing to be careful of will be if MQTT subscriptions update data that the REST service is sending out, but as long as you are not streaming large volumes of data that is unlikely to be an issue. | 1 | 544 | 0 | 1 | 2016-12-21T03:40:00.000 | python,rabbitmq,mqtt,bottle,hivemq | Using http and mqtt together in a single threaded python app | 1 | 1 | 1 | 41,259,154 | 0
0 | 0 | Telegram BOT API has functions to send audio files and documents ,But can it play from an online sound streaming URL? | false | 41,285,298 | 0 | 1 | 0 | 0 | It will just show preview of link and if it's an audio, an audio bar will be shown. so the answer is yes, but it will not start automatically and user should download and play it. | 0 | 3,630 | 0 | 3 | 2016-12-22T14:23:00.000 | telegram,telegram-bot,python-telegram-bot,php-telegram-bot | Can the Telegram bot API play sound from an online audio streaming URL? | 1 | 2 | 4 | 41,792,018 | 0 |
0 | 0 | Telegram BOT API has functions to send audio files and documents ,But can it play from an online sound streaming URL? | false | 41,285,298 | 0 | 1 | 0 | 0 | No, you can't with Telegram Bot APIs.
You must download the file and upload it on Telegram servers. | 0 | 3,630 | 0 | 3 | 2016-12-22T14:23:00.000 | telegram,telegram-bot,python-telegram-bot,php-telegram-bot | Can the Telegram bot API play sound from an online audio streaming URL? | 1 | 2 | 4 | 41,324,053 | 0 |
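The download-then-upload round trip described above can be sketched with the requests library against the Bot API's sendAudio method (the token, chat id, and stream URL are placeholders, and error handling is omitted):

```python
import requests

def api_url(token, method):
    # Telegram Bot API endpoints look like /bot<token>/<method>.
    return 'https://api.telegram.org/bot{}/{}'.format(token, method)

def send_stream_as_audio(token, chat_id, stream_url):
    # 1) Download the audio from the streaming URL ...
    audio_bytes = requests.get(stream_url).content
    # 2) ... then upload it to Telegram's servers via sendAudio.
    return requests.post(api_url(token, 'sendAudio'),
                         data={'chat_id': chat_id},
                         files={'audio': ('track.mp3', audio_bytes)})
```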
0 | 0 | When I'm opening google.com and doing a search in Chrome Selenium WebDriver, it redirects me to my local google domain, although the search string I'm using is "google.com ....." How can I remain on the "com" domain? | false | 41,296,179 | 0.379949 | 0 | 0 | 2 | Use this url - https://www.google.com/ncr. This will not redirect to your location specific site. | 0 | 202 | 0 | 0 | 2016-12-23T06:21:00.000 | python,google-chrome,selenium,google-search,browser-automation | How can I avoid being redirected to a local google domain in Selenium Chrome? | 1 | 1 | 1 | 41,296,329 | 0 |
0 | 0 | I know this question has been asked before but I've tried pretty much every solution listed on every question I can find, all to no avail.
Pip install lxml doesn't work, nor does easy_install lxml. I have downloaded and tried a handful of different versions of lxml:
lxml-3.6.4-cp27-cp27m-win32 (WHL file)
lxml-3.7.0-cp36-cp36m-win32 (WHL file)
lxml-lxml-lxml-3.7.0-0-g826ca60.tar (GZ file)
I have also downloaded, and extracted everything from, both libxml2 and libxslt. Now they are both sitting in their own unzipped folders.
When I run the installations from the command line, it appears to be working for a few seconds but eventually just fails. It either fails with exit status 2 or failed building wheel for lxml or could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?.....I think it's installed but I have no clue what an installed libxml2 should look like. I unzipped and extracted everything from the libxml2 download.
I've also tried all the following commands from other SO posts, and each has failed:
sudo apt-get install python-lxml
apt-get install python-dev libxml2 libxml2-dev libxslt-dev
pip install --upgrade lxml
pip3 install lxml
I have also looked up and attempted installing "prebuilt binaries" but those also don't seem to work.......
I don't want this post to just be me complaining that it wouldn't work so my question is: what is the simplest, most straightforward way to put lxml onto my computer so I can use it in Python? | false | 41,304,621 | 0.197375 | 0 | 0 | 1 | Install the missing dependency: sudo apt-get install zlib1g-dev | 0 | 1,539 | 0 | 0 | 2016-12-23T16:12:00.000 | python,pip,lxml | Can't install lxml module | 1 | 1 | 1 | 45,203,932 | 0
Every time I use a pip command, the command fails with the error: "ImportError: No module named 'urllib3'".
I do have urllib3 installed, and when I try to install urllib3 again I get the same error. What can I do?
I'm using windows 10.
I can't run "pip install virtualenv"; I get the same error with any pip command. | false | 41,305,432 | 0.099668 | 0 | 0 | 2 | It may be worth double-checking your PYTHONPATH environment variable in:
Control Panel\System and Security\System -> Advanced System Settings -> Environment Variables.
I had a rogue copy of Python that caused this exact error | 1 | 6,193 | 0 | 0 | 2016-12-23T17:16:00.000 | windows,python-2.7,python-3.x,pip,urllib3 | ImportError: No module named 'urllib3' | 1 | 2 | 4 | 42,188,924 | 0 |
Every time I use a pip command, the command fails with the error: "ImportError: No module named 'urllib3'".
I do have urllib3 installed, and when I try to install urllib3 again I get the same error. What can I do?
I'm using windows 10.
I can't run "pip install virtualenv"; I get the same error with any pip command. | false | 41,305,432 | 0 | 0 | 0 | 0 | To escape this error, try installing virtualenv through "pip install virtualenv" and create the virtual environment directory using "python3 -m venv myvenv", which will create a folder named myvenv. Activate it using "source myvenv/bin/activate". Now you have your virtual environment set up and can install whatever you want under the venv, which will not conflict with your base OS-installed programs. Try some googling to explore pip and virtualenv setup and use. Happy coding :)
0 | 0 | I am currently working on a test application involving a server and several client. The communication is achieved through the use of the TCP/IP protocol.
The server has several slots available. When a client connects, it is assigned to a slot. Is there a reliable way to identify if a disconnected client has reconnected?
I would like to reassign the disconnected client to its previous slot.
I do not really ask for code, but just clues that could help me to solve this problem.
Thanks for your answers.
Edit
Working with MAC addresses should do it, login/pass, or pass-phrases. | false | 41,310,818 | 0 | 0 | 0 | 0 | Working with MAC addresses should do it, login/pass, or pass-phrases. – Papipone | 0 | 55 | 0 | 0 | 2016-12-24T06:10:00.000 | python-3.x,sockets | Python3 knwowing if a disconnected client has reconnected | 1 | 1 | 1 | 54,252,278 | 0 |
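The slot bookkeeping behind that edit can be sketched independently of the socket code: key slots on whatever stable identifier the client presents (MAC address, login, or pass-phrase) and look the identifier up again on reconnect. The class and method names below are illustrative:

```python
class SlotTable:
    def __init__(self, slot_count):
        self.free = list(range(slot_count))
        self.by_client = {}   # client identifier -> slot number

    def assign(self, client_id):
        # A reconnecting client gets its previous slot back.
        if client_id in self.by_client:
            return self.by_client[client_id]
        slot = self.free.pop(0)
        self.by_client[client_id] = slot
        return slot

    def disconnect(self, client_id):
        # Keep the mapping so the same slot can be reclaimed on reconnect;
        # a real server might eventually expire stale entries.
        pass
```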
I'm using the Eclipse IDE to run Selenium with Python, but there is an issue: when I run over 1000 TCs in one go, only the first 1000 TCs have test results. If I separate these TCs into several parts, each with fewer than 1000 TCs, the test results are received completely. I think the issue is not in the code; how can I fix this? :( | false | 41,357,580 | 0 | 1 | 0 | 0 | Which unit test framework are you using, and why are you running from Eclipse? I mean, it's good for testing, but eventually you will have to integrate with Jenkins or other software, so can you try running from the command line and see what's happening?
By the way, what error are you getting? | 0 | 40 | 0 | 0 | 2016-12-28T07:43:00.000 | python,eclipse,selenium,automated-tests | Cannot get test result if running over 1000 Test cases in selenium python | 1 | 1 | 1 | 41,359,710 | 0
1 | 0 | I want to send bulk SMS on WhatsApp without creating broadcast list.
For that reason, I found pywhatsapp package in python but it requires WhatsApp client registration through yowsup-cli.
So I've run yowsup-cli registration -r sms -C 00 -p 000000000000 which resulted in the error below:
INFO:yowsup.common.http.warequest:{"status":"fail","reason":"old_version"}
status: fail reason: old_version
What did I do wrong and how can I resolve this? | false | 41,364,998 | 0 | 1 | 0 | 0 | The error you get clearly points out the nature of the challenge you are facing: it has to do with your version of yowsup-cli; it's an old version.
It means that your project requires a version of yowsup-cli higher than what you currently have so as to work effectively as require.
What you need to do so as to resolve it is: to update your yowsup-cli application to a more recent version. | 0 | 1,399 | 0 | 0 | 2016-12-28T15:25:00.000 | python,whatsapp | How to send bulk sms on whatsapp | 1 | 2 | 2 | 41,365,717 | 0 |
1 | 0 | I want to send bulk SMS on WhatsApp without creating broadcast list.
For that reason, I found pywhatsapp package in python but it requires WhatsApp client registration through yowsup-cli.
So I've run yowsup-cli registration -r sms -C 00 -p 000000000000 which resulted in the error below:
INFO:yowsup.common.http.warequest:{"status":"fail","reason":"old_version"}
status: fail reason: old_version
What did I do wrong and how can I resolve this? | false | 41,364,998 | 0 | 1 | 0 | 0 | The problem is with the http headers that are sent to whatsapp servers, these are found in env/env.py
The header names are provided manually; due to new updates, WhatsApp servers only serve or authenticate devices identified by their HTTP/HTTPS headers. In this case you need to update some constants in the above file (env/env.py) in your yowsup folder. | 0 | 1,399 | 0 | 0 | 2016-12-28T15:25:00.000 | python,whatsapp | How to send bulk sms on whatsapp | 1 | 2 | 2 | 41,424,293 | 0
0 | 0 | I was wondering: I'm interested in using the Python Yahoo Finance API on my website. I'm using iPage as my web host; how can I install APIs there? I just found out today how I can code the website using Python. | false | 41,365,358 | 0 | 1 | 0 | 0 | You will need to copy the scripts to your /cgi-bin/ directory.
You can find further reference in your iPage Control Panel, under "Additional Resources/CGI and Scripted Language Support". Then look for "Server Side Includes and CGI", and you will find the supported Python version and other relevant directory paths for your setup. | 0 | 604 | 0 | 1 | 2016-12-28T15:47:00.000 | python,api | Using Python APIs on ipage? | 1 | 1 | 1 | 41,819,998 | 0 |
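A minimal CGI script of the kind that would go into /cgi-bin/ might look like this (the shebang path and whether you need chmod 755 depend on the host's setup):

```python
#!/usr/bin/env python

def render():
    # A CGI response is headers, then a blank line, then the body.
    return ('Content-Type: text/html\n\n'
            '<html><body>Hello from Python CGI</body></html>')

if __name__ == '__main__':
    print(render())
```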
0 | 0 | What I want to achieve is :
tasks = [call(url) for url in urls]
call is an async method / coroutine in Python3.5 to perform GET requests , let's say aiohttp.
So basically all calls to call are async. Now I can run asyncio.wait(tasks) and later access the result in futures one by one.
BUT, what I want is this: let's assume there are only 2 URLs, then:
a, b = call(url1), call(url2)
Something like how you do it in Koa by yielding an array. Any help on how to do this, if it can be done? | true | 41,485,507 | 1.2 | 0 | 0 | 1 | var1, var2 = loop.run_until_complete(asyncio.gather(task1, task2))
According to the docs, gather retains the order of the sequence it was passed | 1 | 73 | 0 | 2 | 2017-01-05T12:46:00.000 | python,python-3.x,asynchronous,concurrency,python-asyncio | Set result of 2 or more Async HTTP calls into named variables | 1 | 1 | 1 | 41,502,152 | 0 |
I'm running a Python code solution (automation) on Linux.
As part of the test I'm calling different APIs (REST) and connecting to my SQL db.
I'm running the solution 24/7
The solution does:
Call api with wget
Every 1 min samples the db with query for 60 min max
Call api again with wget
Every 1 min samples the db for 10 mins max.
This scenario runs 24/7
The problem is that after 1 hr / 2 hr (inconsistently - it can happen after 45 mins, for instance) the solution exits with the error
Temporary failure in name resolution.
It can happen even after 2 perfect cycles as I described above.
After this failure I try to call with wget tens of times and it ends with the same error.
After some time it recovers by itself.
I want to mention that when it fails with wget on Linux,
I'm able to call the API via Postman on Windows with no problem.
The API calls are to our system (located in AWS) and I'm using the DNS name of our ELB.
What could be the problem behind this inconsistency?
Thanks | false | 41,500,455 | 0 | 0 | 0 | 0 | This is tricky without knowing which options you are calling wget with, and with no log output; but since it seems to be a DNS issue, I would explicitly pass --dns-servers=your.most.reliable.server to wget. If it persists, I would also pass --append-output=logfile and examine the logfile for further clues. | 0 | 5,012 | 1 | 1 | 2017-01-06T06:49:00.000 | python,linux,api,wget | Temporary failure in name resolution -wget in linux | 1 | 2 | 2 | 41,500,665 | 0
I'm running a Python code solution (automation) on Linux.
As part of the test I'm calling different APIs (REST) and connecting to my SQL db.
I'm running the solution 24/7
The solution does:
Call api with wget
Every 1 min samples the db with query for 60 min max
Call api again with wget
Every 1 min samples the db for 10 mins max.
This scenario runs 24/7
The problem is that after 1 hr / 2 hr (inconsistently - it can happen after 45 mins, for instance) the solution exits with the error
Temporary failure in name resolution.
It can happen even after 2 perfect cycles as I described above.
After this failure I try to call with wget tens of times and it ends with the same error.
After some time it recovers by itself.
I want to mention that when it fails with wget on Linux,
I'm able to call the API via Postman on Windows with no problem.
The API calls are to our system (located in AWS) and I'm using the DNS name of our ELB.
What could be the problem behind this inconsistency?
Thanks | false | 41,500,455 | 0 | 0 | 0 | 0 | You can ignore the failure:
wget http://host/download 2>/dev/null | 0 | 5,012 | 1 | 1 | 2017-01-06T06:49:00.000 | python,linux,api,wget | Temporary failure in name resolution -wget in linux | 1 | 2 | 2 | 62,781,058 | 0
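Since the failures are transient and the job runs 24/7, one defensive option is a retry wrapper around the wget call. The function names and timings below are illustrative, not part of either answer:

```python
import subprocess
import time

def retry(fn, attempts=5, delay=30):
    # Re-run fn until it succeeds or the attempts are exhausted.
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

def call_api(url):
    # wget -qO- prints the response body to stdout.
    return retry(lambda: subprocess.check_output(['wget', '-qO-', url]))
```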
1 | 0 | I would like to offload some code to AWS Lambda that grabs a part of a screenshot of a URL and stores that in S3. It uses chromium-browser which in turn needs to run in xvfb on Ubuntu. I believe I can just download the Linux 64-bit version of chromium-browser and zip that up with my app. I'm not sure if I can do that with xvfb. Currently I use apt-get install xvfb, but I don't think you can do this in AWS Lambda?
Is there any way to use or install xvfb on AWS Lambda? | false | 41,509,856 | 0.099668 | 0 | 0 | 1 | No, this breaks the lambda paradigm of having a fully built container ready to go.
Also, anything you'd do with xvfb is probably going to be slow. As a general rule lambdas should execute in under a second, otherwise you should just have a server.
I would recommend creating a docker container and making an auto-scaling group. | 0 | 2,074 | 0 | 2 | 2017-01-06T16:25:00.000 | python,amazon-web-services,aws-lambda | Can I use xvfb with AWS Lambda? | 1 | 1 | 2 | 41,510,034 | 0 |
0 | 0 | My goal is to have remote control of a device on a WLAN. This device has software that enables me to configure this wireless network (IP, mask, gateway, dns). I can successfully connect this device, and my computer to a common network. Since both machines share the same network, I made the assumption that I would be able to open up a socket between them. Knowing the IP and port of the device that I am attempting to control remotely I used the following code, only to receive a timeout:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.xxx.xxx', XXXX))
(I am using python 2.7 on mac OS 10.11.6)
The network that I am connected to is on a different subnet that the IP that I assigned to my device. I also tried this having an IP on the same subnet as my network. There could be a number of things keeping me from opening a socket. That's not really what I'm after. The heart of my question is whether or not I can use python's 'socket' module to connect to a device wirelessly. | false | 41,516,828 | 0.197375 | 0 | 0 | 1 | Yes you can.
So you get a timeout when you try to connect to a wireless device. There are several steps you can take in order to troubleshoot this.
Make sure your device has a program running that is listening to the port you want to connect to. Identify if the device can answer ICMP packets in general and can be pinged in particular. Try to ping the device. If ping succeeds, it means that basic connectivity is established and the problem is somewhere higher in the OSI stack.
- I can ping the device - great, it means that the problem is somewhere in TCP or Application Layer of the TCP/IP stack. Make sure the computer, the device, and intermediate networking equipment allow for TCP connections to the particular host and port. Then proceed to your application and the device software. Add some code to the question, post the stack trace you get or ask another one on SO.
- I can't ping the device - great. There's no connectivity between the devices and you need to identify the reason.
I) Draw a network diagram. How many intermediate network devices are placed in between the computer and the device? What are they, routers, switches? (Just in case, home grade wifi modem is a router.) Get an idea of how IP datagrams should travel across the net.
II) You said that the device can be used to configure an IP network. At least for troubleshooting purposes I would ignore this option and rely on a static IP or your router's DHCP server. Using an existing DHCP will ensure there's no IP misconfigurations.
III) Review routing tables of all the devices you have. Do they have an appropriate default gateway? Does a router knows how to pass the packets to the device. You're probably in trouble if the computer and the device are in the same subnet but attached to different network interfaces. Split the network in two subnets if needed and set up static routes between them on the router.
You can also use wireshark to see if data you send leaves the computer or is dropped right there by some nasty firewall.
There are a lot of caveats in getting a LAN working. You may want to ask questions on networking.stackexchange if these simple steps don't help you or if you have major trouble following them. Or just leave a comment here; I'd be happy to help. | 0 | 2,085 | 0 | 0 | 2017-01-07T01:20:00.000 | python,sockets,wireless,ethernet | can I use python's 'socket' module to connect to a wireless ethernet host? | 1 | 1 | 1 | 41,569,946 | 0
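The "is anything listening" check from the list above can itself be scripted with the same socket module the question uses:

```python
import socket

def port_open(host, port, timeout=3.0):
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```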
0 | 0 | Ok, so I've looked around on how to do this and haven't really found an answer that showed me examples that I could work from.
What I'm trying to do is have a script that can do things like:
-Log into website
-Fill out forms or boxes etc.
Something simple that I thought of that might help me would be, for example, a script that would let me log into one of those text message websites like TextNow, then fill out a text message and send it to myself.
If anyone knows a good place that explains how to do something like this, or if anyone would be kind enough to give some guidance of their own then that would be greatly appreciated. | true | 41,528,141 | 1.2 | 1 | 0 | 1 | So after some good answers and further research, I have found that selenium is the thing that best suits my needs. It works not only with python, but supports other languages as well. If anyone else is looking for something that I had been when I asked the my question, a quick Google search for "selenium" should give them all the information they need about the tool that I found best for what I needed. | 0 | 58 | 0 | 0 | 2017-01-08T00:26:00.000 | python,html,python-3.x,web | How to have python interact automatically with a web site | 1 | 1 | 1 | 44,302,397 | 0 |
0 | 0 | When trying to use the BMCS Python SDK, I get an SSL/TLS exception. Why?
Exception:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)> | true | 41,528,570 | 1.2 | 0 | 0 | 0 | Bare Metal Cloud Services requires TLS 1.2 connections. Your version of OpenSSL is probably too old and does not support TLS 1.2. Please upgrade your version of OpenSSL and try again. | 0 | 177 | 0 | 0 | 2017-01-08T01:40:00.000 | oracle-cloud-infrastructure,oci-python-sdk | Bare Metal Cloud - Python SDK SSL/TLS exception | 1 | 1 | 1 | 41,528,571 | 0 |
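A quick way to see which OpenSSL build your Python links against (and therefore whether TLS 1.2 is available) is the standard ssl module:

```python
import ssl

# The version string looks something like 'OpenSSL 1.0.2k  26 Jan 2017'.
# TLS 1.2 generally requires OpenSSL 1.0.1 or newer.
print(ssl.OPENSSL_VERSION)
print(ssl.OPENSSL_VERSION_INFO)   # the same information as a tuple
```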
0 | 0 | I'm visiting an unknown and possibly malicious website. Lots of them. Python's requests do not run javascript. Can I get infected? Should I consider using a virtual machine? | true | 41,533,444 | 1.2 | 0 | 0 | 2 | No, merely downloading HTTP data won't install a virus.
A virus needs to be activated too, and requests doesn't do anything with the downloaded data for that to happen. Normally, a virus uses bugs in the browser (or more commonly, a plugin in the browser) to trigger code execution, or by tricking the user into executing the downloaded file. For example, bugs in the Flash player executing a Flash file could be used to run arbitrary code, or the user is tricked into believing they downloaded a document but it is really an executable program. | 1 | 1,043 | 0 | 4 | 2017-01-08T13:55:00.000 | python,security,python-requests,virus | Can I get a virus by visiting an unknown website using python's requests package? | 1 | 1 | 1 | 41,533,461 | 0 |
0 | 0 | I have an interesting problem where we need to document all the URLs that our massive Python project calls. It's not feasible to manually go through the code because it's too large and changes often.
Ideally, what I'd like is a piece of script that, given a Python project folder, can go through all the files in it, find where the requests or urllib modules make calls, and list the accompanying URL. | false | 41,545,979 | 0.379949 | 0 | 0 | 2 | I think what you could do instead is to wrap the requests or urllib modules with a wrapper that logs the URLs your app is connecting to and then just calls the real urllib or requests module functions. | 0 | 42 | 0 | 0 | 2017-01-09T10:32:00.000 | python,web-scraping | List all urls called by requests/urllib in python project | 1 | 1 | 1 | 41,546,162 | 0
I install chromedriver through my package.json file and it gets installed in my node_modules folder. Then I add it to the PATH of executables; when running through the terminal, tests pass.
When running the same command in PyCharm, it says that it cannot find the executable:
WebDriverException: Message: 'chromedriver' executable needs to be in PATH.
I'm guessing that I have to set it up in a specific way in PyCharm.
Thanks | false | 41,559,294 | 0.049958 | 0 | 0 | 1 | You can add a custom PATH environment variable that includes the chromedriver directory to the PyCharm run/debug configuration's environment variables. | 1 | 8,419 | 0 | 1 | 2017-01-10T00:25:00.000 | python,selenium,pycharm,selenium-chromedriver | Pycharm not finding executable for chromedriver for selenium | 1 | 1 | 4 | 56,002,173 | 0
1 | 0 | I have looked through many links to find out how I can define my own proprietary header in web.py. Can you help me, please? I need to define my own HTTP header, like "X-UploadedFile", and then use it with web.input()
You can add headers, to be sent to your client using web.header('My-Header', 'header-value').
You can read headers sent by your client using: web.ctx.env.get('MY_HEADER') (Note all-caps, and use of underline rather than dash). | 0 | 253 | 0 | 1 | 2017-01-10T10:08:00.000 | python,web.py | Define own header in webpy | 1 | 1 | 1 | 41,726,066 | 0 |
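web.py's web.ctx.env is the WSGI environ, and per the WSGI spec (PEP 3333) a client header such as X-UploadedFile normally arrives under the key HTTP_X_UPLOADEDFILE (all caps, underscores, HTTP_ prefix). A stdlib sketch of that name mangling (plain WSGI rather than web.py, to stay self-contained):

```python
def app(environ, start_response):
    # Per PEP 3333, the client header "X-UploadedFile: ..." shows up
    # in the environ as HTTP_X_UPLOADEDFILE (all caps, underscores).
    uploaded = environ.get("HTTP_X_UPLOADEDFILE", "")
    # Response headers go out exactly as written here.
    start_response("200 OK", [("X-Seen-Header", uploaded or "none")])
    return [uploaded.encode()]
```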
0 | 0 | I have been working on developing a Rally API using Python with the help of links pointed to by Rally help (Pyral). My code connects well with the Rally server and pulls the specific user story I want, but not the columns I am interested in. I am aiming to pull [full] specific reports with fields such as project, tags, etc. under the 'Reports' tab. I tried to find out how I can do it but didn't get any direction. Also, the specific user stories I am able to pull include some weird fields like c_name, c_offer and the like. I would really appreciate it if someone could help me through this. For example, to connect to a specific project/workspace in Rally, we have the following code, where it asks for the details in the manner below:
rally = Rally(server='', apikey='',workspace='',project='')
Is there any way to specify what report/columns I want?
Thanks in advance | false | 41,573,669 | 0.197375 | 0 | 0 | 1 | Most of the reports on the Reports tab are rendered by a separate Analytics 1 service outside of the standard WSAPI you've been communicating with. Some of that data is available in WSAPI -IterationCumulativeFlowData, ReleaseCumulativeFlowData. What data specifically are you looking for? | 0 | 346 | 0 | 2 | 2017-01-10T16:31:00.000 | python-2.7,rally,code-rally,pyral | How to connect Rally API (python) to the specific reports under 'Report' section | 1 | 1 | 1 | 41,579,286 | 0 |
1 | 0 | I have recorded some Selenium Scripts using the Selenium IDE Firefox add-on.
I'd like to add these to the unit test cases for my Django project. Is it possible to somehow turn these into a Python unit test case? | true | 41,603,576 | 1.2 | 1 | 0 | 1 | If you are recording scripts in Python formatting, those are already converted to unit test cases. Save each script and run them in batch mode. | 0 | 81 | 0 | 0 | 2017-01-12T01:15:00.000 | python,unit-testing,selenium,selenium-webdriver,selenium-ide | Using Selenium Scripts in a Python unit test | 1 | 1 | 1 | 41,607,809 | 0
0 | 0 | EDIT Removed BOTO from question title as it's not needed.
Is there a way to find the security groups of an EC2 instance using Python and possible Boto?
I can only find docs about creating or removing security groups, but I want to trace which security groups have been added to my current EC2 instance. | false | 41,620,292 | 0.099668 | 1 | 0 | 1 | You can check it from that instance and execute the command below:
curl http://169.254.169.254/latest/meta-data/security-groups
or from aws-cli also
aws ec2 describe-security-groups | 0 | 842 | 0 | 1 | 2017-01-12T18:21:00.000 | python,amazon-web-services,amazon-ec2 | How do I list Security Groups of current Instance in AWS EC2? | 1 | 1 | 2 | 41,620,629 | 0 |
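For the Python/Boto route the question asked about, a hedged boto3 sketch might look like the following; the instance id would come from somewhere like the metadata endpoint above, and the boto3 import is kept lazy so the parsing helper stands alone:

```python
def extract_group_names(resp):
    """Pull the GroupName values out of a describe_instances response dict."""
    return [g["GroupName"]
            for r in resp["Reservations"]
            for i in r["Instances"]
            for g in i["SecurityGroups"]]

def security_groups_of(instance_id, region="us-east-1"):
    import boto3  # lazy import so the helper above has no dependencies
    ec2 = boto3.client("ec2", region_name=region)
    return extract_group_names(ec2.describe_instances(InstanceIds=[instance_id]))
```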
0 | 0 | I read the following on python-requests website:
Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set stream to False or read the content property of the Response object.
But as I use the object returned by req.json() and doesn't use req thereafter. I wonder when is the connection released? I don't really know how to check that for sure too.
Many thanks | true | 41,634,436 | 1.2 | 0 | 0 | 0 | You could answer your question quite simply by reading the source code. But anyway: response.json() does read the response's content, obviously - it's just a convenient shortcut for json.loads(response.content). | 0 | 59 | 0 | 0 | 2017-01-13T12:14:00.000 | python,json,python-requests | In requests-python, when is connection released when using req_json = req.json()? | 1 | 1 | 1 | 41,634,715 | 0
0 | 0 | I have a small Cassandra cluster hosted on AWS that I want to connect to using the python drivers. Unfortunately I get "Keyspace does not exist" when trying to connect to it from one specific pc. The strange thing is that keyspace exists and I can connect to itfrom other pcs. And I can find that keyspace on that server in cqlsh. How do I fix this error? I've looked into the cassandra version, 3.7.1 which should work fine with my updated python driver. The error is reliably repeatable on that pc. And I can reliably connect to that keyspace on other pcs. | false | 41,642,612 | 0.197375 | 0 | 0 | 2 | Check if the query from your python driver is using upper case letters for the keyspace name - change it to lower case | 0 | 2,129 | 0 | 1 | 2017-01-13T20:03:00.000 | python,amazon-web-services,cassandra | Cassandra cluster returns incorrect error "Keyspace does not exist" when connecting from one specific pc | 1 | 1 | 2 | 43,731,632 | 0 |
0 | 0 | I wrote a python script to download some files from an s3 bucket. The script works just fine on one machine, but breaks on another.
Here is the exception I get: botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden.
I am pretty sure it's related to some system configurations, or something related to the registry, but don't know what exactly. Both machines are running Windows 7 and python 3.5.
Any suggestions? | false | 41,646,514 | 1 | 0 | 1 | 8 | The issue was actually caused by the system time being incorrect. I fixed the system time and the problem went away. | 0 | 8,191 | 0 | 6 | 2017-01-14T03:29:00.000 | python,windows,amazon-web-services,amazon-s3,boto3 | Trying to access a s3 bucket using boto3, but getting 403 | 1 | 1 | 2 | 41,682,857 | 0
0 | 0 | I am using the telepot python library, I know that you can send a message when you have someone's UserID(Which is a number).
I want to know if it is possible to send a message to someone without having their UserID, but only with their username (the one which starts with '@'). Also, is there a way to convert a username to a UserID? | false | 41,664,810 | 1 | 1 | 0 | 73 | Post one message from the User to the Bot.
Open https://api.telegram.org/bot<Bot_token>/getUpdates page.
Find this message and navigate to the result->message->chat->id key.
Use this ID as the [chat_id] parameter to send personal messages to the User. | 0 | 131,355 | 0 | 39 | 2017-01-15T18:38:00.000 | visual-studio-code,python,telegram,telegram-bot,python-telegram-bot | How can I send a message to someone with my telegram bot using their Username | 1 | 2 | 5 | 50,736,131 | 0 |
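Gluing those steps together in code: a sketch (names are hypothetical) that remembers the chat id whenever a user messages the bot, then sends by username through a telepot-style bot object:

```python
known_users = {}  # "@username" -> numeric chat id, filled when users /start the bot

def remember_user(msg):
    """Record the chat id from an incoming update's 'from' block."""
    user = msg["from"]
    if "username" in user:
        known_users["@" + user["username"]] = user["id"]

def message_by_username(bot, username, text):
    chat_id = known_users[username]   # KeyError if they never messaged the bot
    bot.sendMessage(chat_id, text)    # telepot-style send call
```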
0 | 0 | I am using the telepot python library, I know that you can send a message when you have someone's UserID(Which is a number).
I want to know if it is possible to send a message to someone without having their UserID, but only with their username (the one which starts with '@'). Also, is there a way to convert a username to a UserID? | false | 41,664,810 | 1 | 1 | 0 | 14 | It is only possible to send messages to users who have already used /start on your bot. When they start your bot, you can find update.message.from.user_id straight from the message they sent /start with, and you can find update.message.from.username using the same method.
In order to send a message to "@Username", you will need them to start your bot, and then store the username with the user_id. Then, you can input the username to find the correct user_id each time you want to send them a message. | 0 | 131,355 | 0 | 39 | 2017-01-15T18:38:00.000 | visual-studio-code,python,telegram,telegram-bot,python-telegram-bot | How can I send a message to someone with my telegram bot using their Username | 1 | 2 | 5 | 42,990,824 | 0 |
0 | 0 | I have been breaking my head for the past 2 weeks, and I still can't figure it out.
I'm trying to build a server-client based streaming player in Python (IronPython for the WPF GUI) that streams video files. My problem occurs when the client requests to seek to a part that it has not loaded yet. When I try to send it just the middle of the .mp4 file, it can't seem to play it.
Now I know such a thing exists, because every online player has it, and it uses the HTTP 206 Partial Content request, where the client just requests the byte range that it desires and the server sends it.
My question is: how is the client able to play the video with a gap in bytes in its .mp4 file — no, rather: how can it start watching from the middle of the file? When I try it, the player just won't open the file.
And more importantly: how can I implement this on my Server-Client program to enable free seeking?
I really tried to look for a simple explanation for this all over the internet...
Please explain it thoroughly and in simple terms for a novice such as me; I would highly appreciate it.
Thanks in advance. | true | 41,666,809 | 1.2 | 0 | 0 | 2 | Before playing an MP4 file the client (e.g. browser) needs to read the header part of the file. An MP4 is broken into 'Atoms' and the Moov atom is the header or index atom for the file.
For MP4 files that will be streamed, a common optimisation is to move this Moov atom to the front of the file.
This allows the client to get the moov at the start, and it will then have the information it needs to allow you to jump to the offset you want in your case.
If you don't have the moov atom at the start the client needs to either download the whole file, or if it is a bit more sophisticated, jump around the file with range requests until it finds it. | 0 | 1,969 | 0 | 1 | 2017-01-15T22:01:00.000 | python,video,video-streaming,httprequest,buffering | How does HTTP 206 Partial Content Request works | 1 | 1 | 1 | 41,672,032 | 0 |
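The range mechanism itself is simple to drive from Python; a stdlib sketch (the URL is a placeholder, and the server must support ranges to answer 206 rather than 200):

```python
import urllib.request

def build_range_request(url, start, end):
    """Request only bytes start..end (inclusive), per the HTTP Range header."""
    return urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})

def fetch_range(url, start, end):
    with urllib.request.urlopen(build_range_request(url, start, end)) as resp:
        # 206 Partial Content means the server honoured the range;
        # a 200 here would mean it sent the whole file instead.
        if resp.status != 206:
            raise RuntimeError("server ignored the Range header")
        return resp.read()
```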
0 | 0 | I'm using Python to develop an SDN.
I also wrote virtual network functions, such as DHCP, NAT, Firewall, and QoS.
But I want to get a computer's hostname from an IP like 192.168.2.XXX.
I tried to use ARP, but it can only find the IP and MAC addresses in packets.
So how should I get the hostname from a specific IP?
Should I try this in DHCP or NAT?
Thanks a lot !! | false | 41,671,972 | 0 | 0 | 0 | 0 | Try socket.gethostbyaddr() from the module socket | 0 | 1,035 | 0 | 1 | 2017-01-16T08:15:00.000 | python,hostname,nat,dhcp,sdn | How to get hostname from IP? | 1 | 1 | 1 | 41,672,160 | 0 |
0 | 0 | I have a graph created in Zabbix. I want to update this graph to include items from other hosts. For that I am calling the graph.update() Zabbix API using a Python script. The method is updating the graph item instead of adding/appending to the existing graph item list. Does anyone have an idea about this?
graph.update(graphid=graph_id,gitems=[{"itemid" :"10735", "color":"26265b"}])
where graph_id is and id of existing graph.
Thanks in advance!! | false | 41,694,723 | 0 | 0 | 0 | 0 | Get the existing graph items with graph.get first, then update the graph and pass all the existing items (include gitemid for these items) with your new items added. | 0 | 601 | 0 | 1 | 2017-01-17T10:30:00.000 | python,api,graph,zabbix | add graph items to existing graph in zabbix using API's | 1 | 1 | 1 | 41,695,124 | 0 |
1 | 0 | I am trying to make a REST call to a Java API using Python.
The Java API needs JSON input with literals like {a:null, b:true, c:false}.
While building the JSON from Python, it is not letting me do so, because Python wants null, true and false to be inside double quotes, like "null", "true", "false".
What is the solution? | false | 41,756,756 | 0 | 0 | 0 | 0 | While passing JSON to the Java API from Python, replace null with None, true with True and false with False in your Python dict; the JSON encoder writes them out as the lowercase literals. It will work. | 1 | 52 | 0 | 0 | 2017-01-20T05:33:00.000 | java,python,json | How to pass java literals from python dictionary | 1 | 2 | 2 | 41,764,528 | 0
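That mapping is exactly what Python's json module performs: build the dict with Python literals and json.dumps emits the lowercase JSON ones.

```python
import json

payload = {"a": None, "b": True, "c": False}  # Python-side literals
body = json.dumps(payload)                    # what gets POSTed to the Java API
print(body)                                   # {"a": null, "b": true, "c": false}
```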
1 | 0 | I am trying to make a REST call to a Java API using Python.
The Java API needs JSON input with literals like {a:null, b:true, c:false}.
While building the JSON from Python, it is not letting me do so, because Python wants null, true and false to be inside double quotes, like "null", "true", "false".
What is the solution? | false | 41,756,756 | 0 | 0 | 0 | 0 | The JSON syntax expects keys (and string values) to be quoted.
That means the problem comes from the Java JSON API.
Which API do you use? | 1 | 52 | 0 | 0 | 2017-01-20T05:33:00.000 | java,python,json | How to pass java literals from python dictionary | 1 | 2 | 2 | 41,756,812 | 0
0 | 0 | I've been trying to use the python websocket-client module to receive and store continuous updates from an exchange. Generally, the script will run smoothly for a day or so before raising the following error: websocket._exceptions.WebSocketConnectionClosedException: Connection is already closed.
I've looked at the websocket-client source code and apparently the error is being raised in line 92 by the code if not bytes_:. Furthermore, the WebSocketConnectionClosedException is supposed to be raised "If remote host closed the connection or some network error happened".
Can anybody tell me why this is happening, and what I could do to stop or handle it. | false | 41,785,893 | 0.197375 | 0 | 0 | 1 | Most likely, the remote host closed the connection. You cannot stop it. You can handle it by re-connecting.
People running web servers will implement automatic cleanup to get rid of potentially stale connections. Closing a connection that's been open for 24 hours sounds like a sensible approach. And there's no harm done, because if the client is still interested, it can re-establish the connection. That's also useful to re-authenticate the client, if authentication is required.
On second thought, it might be a network disconnect as well. Some DSL providers used to disconnect and re-assign a new IP every 24 hours, to prevent users from running permanent services on temporarily assigned IP addresses. Don't know if they still do that. | 0 | 7,326 | 0 | 13 | 2017-01-21T23:59:00.000 | python,websocket | "Connection is already closed." error with python WebSocket client | 1 | 1 | 1 | 56,409,252 | 0 |
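Since the close can't be prevented, the usual fix is a reconnect loop with backoff; a sketch using the websocket-client package the question already uses (the import is kept inside the function, and the backoff helper is a plain generator):

```python
import time

def backoff_delays(base=1, cap=60):
    """Reconnect delays that double each attempt: 1, 2, 4, ... capped at `cap`."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

def stream_forever(url, on_message):
    import websocket  # third-party `websocket-client`
    for delay in backoff_delays():
        try:
            ws = websocket.WebSocketApp(url, on_message=on_message)
            ws.run_forever()  # returns (or raises) when the connection drops
        except websocket.WebSocketConnectionClosedException:
            pass              # remote host closed it; fall through and retry
        time.sleep(delay)
```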
0 | 0 | Case:
There is a large zip file in an S3 bucket which contains a large number of images. Is there a way without downloading the whole file to read the metadata or something to know how many files are inside the zip file?
When the file is local, in python i can just open it as a zipfile() and then I call the namelist() method which returns a list of all the files inside, and I can count that. However not sure how to do this when the file resides in S3 without having to download it. Also if this is possible with Lambda would be best. | false | 41,789,176 | -0.039979 | 0 | 0 | -1 | As of now, you cannot get such information without downloading the zip file. You can store the required information as the metadata for a zip file when uploading to s3.
As you have mentioned in your question, using the python functions we are able to get the file list without extracting. You can use the same approach to get the file counts and add as metadata to a particular file and then upload it to S3.
Hope this helps, Thanks | 0 | 3,417 | 0 | 7 | 2017-01-22T09:10:00.000 | python,amazon-web-services,amazon-s3,boto | How to count files inside zip in AWS S3 without downloading it? | 1 | 1 | 5 | 41,790,354 | 0 |
0 | 0 | I have been having this error when trying to make web requests to various hosts. After debugging a bit I have found the solution is updating the requests[security] through pip. | true | 41,832,838 | 1.2 | 0 | 0 | 20 | Run
sudo python3 -m pip install "requests[security]"
or
sudo python -m pip install "requests[security]"
to fix this issue. | 0 | 30,212 | 0 | 11 | 2017-01-24T16:06:00.000 | python,python-3.x,pip,python-requests | Python Error 104, connection reset by peer | 1 | 1 | 2 | 41,832,839 | 0 |
0 | 0 | I have some employee data in which there are 3 different roles. Let's say CEO, Manager and Developer.
CEO can access the whole graph, managers can only access data of some people (their team) and developers can not access employee data.
How should I assign subgraph access to user roles and implement this using Python?
There are good solutions and comprehensive libraries and documentations but only in Java! | true | 41,850,411 | 1.2 | 0 | 1 | 1 | At the moment it is not possible to write procedures for custom roles to implement subgraph access control using Python. It is only possible in Java.
A workaround might be to indirectly implement it using Python by adding properties to nodes and relationships that store the security levels for those nodes and relationships. Checking the security level of a user, it might be possible to use a Python visualization that checks the properties to only display nodes and relationships that are in agreement with the user's security level. | 0 | 304 | 0 | 2 | 2017-01-25T11:25:00.000 | python,neo4j,authorization,graph-databases,py2neo | Authorization (subgraph access control) in Neo4j with python driver | 1 | 1 | 2 | 42,622,083 | 0
1 | 0 | I'm using selenium on python 3.5 with chrome webdriver on a ububtu vps, and when I run a very basic script (navigate to site, enter login fields, click), memory usage goes up by ~400mb,and cpu usage goes up to 100%. Are there any things I can do to lower this, or if not, are there any alternatives?
I'm testing out selenium in python but I plan to do a project with it in java, where memory usage is a critical factor for me, so the same question applies for java as well. | false | 41,918,828 | 0.066568 | 0 | 0 | 1 | Don't forget driver.close() in your code; if you don't close your driver, you will end up with a lot of Chrome instances. | 0 | 7,993 | 0 | 3 | 2017-01-29T08:02:00.000 | java,python-3.x,selenium,memory-management | Selenium using too much memory | 1 | 1 | 3 | 44,546,508 | 0
1 | 0 | I'm trying to know if this is possible at all. So far it doesn't look that great. Let's imagine I wanted to list all my current Google Authenticator passwords somewhere. That list would update once there's a new set. Is this possible at all?
I remember back when Blizzard made their authenticator. You would basically have to enter the recovery key/password from their app into a program, which could then show your authenticator on the screen and on your phone or physical device (yeah they sold those). I imagine they used TOTP just like Google Authenticator does.
So my real question is: I have my x amount of Google Authenticator passwords, which refreshes every 30 seconds. Can I pull these out and show them in another program? Java? Python? Anything? I assume "reverse engineering the algorithm" and brute forcing the keys (like grab 100 keys and work out the next key) would be impossible, as these are server-client based.. right? | false | 41,941,537 | 0 | 0 | 0 | 0 | So I dug a little deeper. This, however, requires I disable and remove the current 2FA from my account.
Go disable/remove current 2FA
Go enable it again, but remember to grab the secret (it's listed somewhere in the request or on the page) and save it somewhere
Find any secret -> One time password "generator"
Now I have the secrets synced on my PC and on my phone. Pretty neat. Requires a lot of work, as I need to disable all my authenticators, but it does work actually. | 0 | 75 | 0 | 0 | 2017-01-30T17:08:00.000 | javascript,java,python,google-authenticator,authenticator | Google Authenticator passwords duplicated somewhere else? | 1 | 1 | 1 | 41,942,437 | 0 |
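For context, Google Authenticator implements standard RFC 6238 TOTP, so once you have the secret from step 2 any implementation yields the same codes. A stdlib sketch (it reproduces the RFC's SHA-1 test vector):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: the same algorithm Google Authenticator runs on the shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the same base32 secret loaded on the phone and in this function, both produce identical 30-second codes.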
0 | 0 | Today, I tried to paste "ip[tab]port" into an interpreter, and the result is "ipport".
Example: Copy 111.222.3.44 80 (using spaces, here, in lieu of tab) from another source, e.g. Notepad, and paste it into the interactive shell. Unfortunately, when I try this, the [tab] doesn't
paste, and the result is: 111.222.3.4480
I would like to be able to paste the IP & Port with the [tab] so that they are properly separated when pasted.
Python 3.6, Windows OS.
Does anyone know a way to do this? | false | 41,942,799 | 0 | 0 | 0 | 0 | This is a frustrating issue. The workaround I settled on was to paste into a text editor, then used sed to convert tabs into \t characters. Then copy and paste that into the python interactive shell.
For example:
Copy and paste 111.222.3.44[tab]80 into a text file that preserves the tabs, and save that file as temp.
Run a sed command to convert tabs into \t.
sed 's/\t/\\t/g' temp
Copy and paste the result into the Python interactive shell:
111.222.3.44\t80 | 1 | 603 | 0 | 4 | 2017-01-30T18:19:00.000 | python,python-3.x,copy,paste | How to paste Tab character into Python interactive shell | 1 | 1 | 2 | 70,075,958 | 0 |
0 | 0 | Is there a way to automate checking Google Page Speed scores? | false | 41,988,762 | 0 | 1 | 0 | 0 | Anybody using this guide (as of April 2022) will need to update to the following:
https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={YOUR_SITE_URL}/&filter_third_party_resources=true&locale=en_US&screenshot=false&strategy=desktop&key={YOUR_API_KEY}
The difference is the "/v2/" needs to be replaced with "/v5/" | 0 | 2,435 | 0 | 1 | 2017-02-01T20:08:00.000 | python,selenium,automated-tests | How to automate Google PageSpeed Insights tests using Python | 1 | 1 | 2 | 71,752,003 | 0 |
0 | 0 | Is there a Facebook API call to retrieve all or a subset of the current live videos with the relative metadata, such as location, user who is streaming, time at stream?
The implementation could be in Python. | true | 41,992,016 | 1.2 | 0 | 0 | 1 | No, there is no such API at this time. | 0 | 355 | 0 | 0 | 2017-02-01T23:51:00.000 | python,facebook-live-api | How to access videos in Facebook livemap programmatically? | 1 | 1 | 1 | 42,508,346 | 0 |
1 | 0 | So I am currently writing a script that will allow me to wait on a website that has a queue page before I can access its contents.
Essentially, the queue page is where they let people in randomly. In order to increase my chance of getting in faster, I am writing a multi-threaded script and having each thread wait in line.
The first thing that came to my mind is: would session.get() work in this case?
If I send a session GET request every 10 seconds, would I hold my position in the queue? Or would I end up at the end?
Some info about the website: they randomly let people in. I am not sure whether refreshing the page resets your chance or not. But the best thing would be to leave the page open and let it do its thing.
I could use PhantomJS, but I would rather not have over 100 headless browsers open, slowing down my program and computer. | false | 42,003,456 | 0 | 0 | 0 | 0 | You don't need to keep re-sending the request; as long as you keep the Python application running, you should be good. | 0 | 194 | 0 | 1 | 2017-02-02T13:27:00.000 | python,python-2.7,session,request,phantomjs | Does Python requests session keep page active? | 1 | 1 | 1 | 42,003,480 | 0
1 | 0 | how do I upload an image (from the web) using Bigcommerce's Python API?
I've got this so far:
custom = api.Products.create(name='Test', type='physical', price=8.33, categories=[85], availability='available', weight=0)
Thank you! I've tried almost everything! | false | 42,003,461 | -0.379949 | 0 | 0 | -2 | This will create the product on the BigCommerce website. You create the image after creating the product, by entering the following line. The image_file tag should be a fully qualified URL pointing to an image that is accessible to the BigCommerce website, being found either on another website or on your own webserver.
api.ProductImages.create(parentid=custom.id, image_file='http://www.evenmore.co.uk/images/emgrab_s.jpg', description='My image description') | 0 | 438 | 0 | 3 | 2017-02-02T13:27:00.000 | python,e-commerce,bigcommerce | Bigcommerce Python API, how do I create a product with an image? | 1 | 1 | 1 | 48,835,159 | 0 |
1 | 0 | I made a simple scraper that accesses an album, and scrapes lyrics for each song from azlyrics.com.
After about an hour of working, the website crashed, with an error:
Chrome:
www.azlyrics.com didn’t send any data. ERR_EMPTY_RESPONSE
Tor, firefox, waterfox:
The connection was reset The connection to the server was reset while the page was loading.
It's the same for all devices on my home network. If I use mobile data to access it via my phone it works fine.
I tried fixing it with ipconfig /release /renew, but it didn't work.
I'm at a loss for what else I could do or why it even happened. Any help is greatly appreciated. | false | 42,006,758 | 0.664037 | 0 | 0 | 4 | Apparently your IP was banned by the website for suspicious activity. There are couple ways around that:
talk to website owners. This is the most straightforward and nicest way
change your IP, e.g. by connecting though a pool of public proxies or Tor. This is a little bit dirty and it is not so robust, e.g. you can be banned by user-agent or some other properties of your scraper. | 0 | 337 | 0 | 2 | 2017-02-02T15:57:00.000 | python,web-scraping | Website error after scraping | 1 | 1 | 1 | 42,006,894 | 0 |
0 | 0 | I am trying to upload a file in Python, and I want to upload the file in resumable mode, i.e. when the internet connection resumes, the file upload resumes from the previous stage.
Is there any specific protocol that supports resumable file upload?
Thanks in advance | false | 42,019,279 | 0 | 0 | 0 | 0 | Excellent answer by GuySoft - it helped me a lot. I have had to slightly modify it, as I never (so far) encountered the three exceptions his script is catching, but I experienced a lot of ConnectionResetError and socket.timeout errors on FTP uploads, so I added those. I also noticed that if I added a timeout of 60 seconds at FTP login, the number of ConnectionResetErrors dropped significantly (but not altogether). The upload often got stuck at 100% in ftp.storbinary until socket.timeout, then retried 49 times and quit. I fixed that by comparing totalSize and rest_pos and exiting when they were equal.
So I have a working solution now, but I will try to figure out what is causing the socket timeouts.
An interesting thing is that when I used FileZilla, and even a PHP script, the file uploads to the same FTP server worked without a glitch. | 0 | 2,044 | 0 | 3 | 2017-02-03T07:51:00.000 | python,python-2.7,resume-upload | Resumable file upload in python | 1 | 1 | 2 | 68,074,868 | 0
0 | 1 | I am using a networkx weighted graph in order to model a transportation network. I am attempting to find the shortest path in terms of the sum of weighted edges. I have used Dijkstra path in order to find this path. My problem occurs when there is a tie in terms of weighted edges. When this occurs I would always like to choose from the set of paths that tied, the path that has the least number of edges. Dijkstra path does not seem to be doing this.
Is there a way to ensure that I can choose the path with the least number of edges from a set of paths that are tied in terms of sum of weighted edges? | false | 42,029,159 | 0 | 0 | 0 | 0 | Instead of using floating points for weights, use tuples (weight, number_of_edges) with pairwise addition. The lowest weight path using these new weights will have the lowest weight, and in the case of a tie, be the shortest path.
To define these weights I would make them a subclass of tuple with __add__ redefined. Then you should be able to use your existing code. | 0 | 711 | 0 | 1 | 2017-02-03T16:51:00.000 | python,algorithm,graph | Python: Shortest Weighted Path and Least Number of Edges | 1 | 1 | 1 | 42,029,561 | 0 |
0 | 0 | I want to make a file and upload that on a Synology NAS. I am using Python. It doesn't support FTP but it is just a network drive.
I know the question is really short, I just don't know what more to tell. | true | 42,033,430 | 1.2 | 0 | 0 | 1 | The point of network drives is that they are used like local drives. So make it accessible to your operating system (mount on Unix/Linux/MacOS, share on Windows...) and copy the file to it. Alternatively, you can use a network protocol, such as webdav, sftp, whatever is enabled. python supports them all (sometimes with some support from the OS) | 0 | 1,321 | 0 | 2 | 2017-02-03T21:40:00.000 | python,python-3.x,upload,nas,synology | How to upload file on Synology NAS? | 1 | 1 | 1 | 42,033,609 | 0 |
1 | 0 | I'm getting started out creating a website where users can store and get (on user request) private information they store on the server. Since the information is private, I would also like to provide 256 bit encryption. So, how should I go about it? Should I code the back end server stuff in node.js or Python, since I'm comfortable with both languages? How do I go about providing a secure server to the user? And if in the future, I would like to expand my service to mobile apps for Android and iOS, what would be the process?
Please try explaining in detail since that would be a great help :) | true | 42,036,307 | 1.2 | 1 | 0 | 1 | You don't need to create your own encrypted communication protocol. Just serve all traffic over https.
If you also wish to encrypt the data before storing it on a database you can encrypt it on arrival to the server.
Check out Express.js for the server, Passport.js for authentication and search for 256-bit encryption on npm. There are quite a few implementations. | 0 | 30 | 0 | 1 | 2017-02-04T03:58:00.000 | python,node.js,web-applications,server | Build a Server to Receive and Send User's Private Information | 1 | 1 | 1 | 42,036,454 | 0 |
0 | 0 | I have a requirement where I need to store users' IP, device information, user agent, etc. for one URL on my site. How do I go about this?
This data will be used later as stats (which devices hit it more, which locations, etc.).
I can see that Google analytics helps in tracking for entire site.
How do I enable it to track only for one specific url on my site and track all information mentioned above? | true | 42,036,376 | 1.2 | 0 | 0 | 1 | If you add your tracking code only on the one web page you wish to track, then you should be able to accomplish your goal. Just to clarify, if you have two web pages, trackme.html and donottrackme.html, you would place the Google Analytics tracking code only on trackme.html. IP, device information, user agent, etc. should be visible within your dashboard. | 0 | 227 | 0 | 0 | 2017-02-04T04:10:00.000 | python-2.7,flask,google-analytics | Track users using Google analytics for one url on website | 1 | 1 | 1 | 42,036,453 | 0 |
0 | 0 | I am new to windows and this is the first time I am running a Python program on windows.
I am running a crawler program that uses selenium and firefox webdriver.
My program runs successfully on mac/ubuntu, but on windows
webdriver.Firefox()
opens a new geckodriver window (a cmd-like window) and just hangs there; nothing happens after that. The program doesn't move forward.
Windows 7
geckodriver v0.13 | false | 42,037,554 | 0.197375 | 0 | 0 | 1 | Your problem is most likely the compatibility between Firefox and your GeckoDriver. Try using the latest Firefox and geckodriver. If you have a problem with Firefox, try reinstalling it and disable automatic updating. | 0 | 944 | 0 | 0 | 2017-02-04T07:03:00.000 | python,selenium | Windows: Selenium webdriver.Firefox hangs | 1 | 1 | 1 | 42,040,656 | 0
1 | 0 | Is there any kind of "repl + extra features" (like showing docs, module autoreload, etc.), like iPython, but for Node.js?
And I mean something that runs locally & offline. This is a must. And preferably to work both in terminal mode and have an optional nicer GUI on top (like iPython + iPythonQT/Jupyter-qtconsole).
The standard Node.js repl is usable, but it has horrible usability (clicking the up-arrow cycles through the repl history by line instead of by multi-line command, as you would expect any sane repl to work for interactively experimenting with things like class statements), and is very bare-bones. Every time I switch from iPython to it, it's painful. A browser's repl like Chrome's, which you can run for node too by starting a node-inspector debug session, is more usable... but also too cumbersome. | false | 42,039,868 | 1 | 0 | 0 | 9 | I've been looking for "ipython for node" for years and here's how I would answer your question:
No. | 1 | 7,289 | 0 | 31 | 2017-02-04T11:33:00.000 | node.js,ipython,read-eval-print-loop,ijavascript | Is there a REPL like iPython for Nodejs? | 1 | 1 | 4 | 57,401,854 | 0 |
0 | 0 | I am trying to keep the chrome browser open after selenium finishes executing my test script. I want to re-use the same window for my second script to run. | false | 42,044,315 | -0.066568 | 0 | 0 | -1 | This should be as simple as not calling driver.quit() at the end of your test case. You should be left with the chrome window in an opened state. | 0 | 9,560 | 0 | 3 | 2017-02-04T18:58:00.000 | python,selenium,webdriver,selenium-chromedriver,ui-automation | How to keep chrome browser window open to be re-used after selenium script finishes on python | 1 | 1 | 3 | 42,044,535 | 0 |
1 | 0 | I am scraping data from peoplefinders.com, a website which is not accessible from my home country, so I am basically using a VPN client.
I log in to this website with a session POST, and through the same session I get items from different pages of the same website. The problem is that I do the scraping in a for loop with GET requests, but for some reason I receive a 400 error response after several iterations. The error occurs after scraping 4-5 pages on average.
Is it due to the fact that I am using a VPN connection?
Don't all requests from the same session contain the same cookies, and hence allow me to stay logged in while scraping different pages of the same website?
Thank You | false | 42,079,848 | 0 | 0 | 0 | 0 | HTTP 400 is returned if the request is malformed.
You should inspect the request being made when you get the error. Perhaps it is not properly encoded.
VPN should not cause an HTTP 400. | 0 | 397 | 0 | 1 | 2017-02-07T00:44:00.000 | python,web-scraping,session-cookies,vpn | Does using a vpn interrupts python sessions requests which are using the same cookies over and over? | 1 | 1 | 1 | 42,079,994 | 0 |
0 | 0 | I would like to know how to set up communication between services. I am using an API Gateway for the outside of the system to communicate with the services within. Is it necessary for a service to call another service through the API Gateway, or can it call the other service directly?
Thank You | false | 42,105,805 | 0 | 0 | 0 | 0 | An API gateway is not needed for internal service-to-service communication.
But you need a service registry or some kind of dynamic load-balancing mechanism to reach the services. | 1 | 3,274 | 0 | 0 | 2017-02-08T06:03:00.000 | python,api,microservices | Microservices Communication Design | 1 | 1 | 2 | 42,114,560 | 0 |
1 | 0 | I am developing a web app which depends on data from one or more third party websites. The websites do not provide any kind of authentication API, and so I am using unofficial APIs to retrieve the data from the third party sites.
I plan to ask users for their credentials to the third party websites. I understand this requires users to trust me and my tool, and I intend to respect that trust by storing the credentials as safely as possible as well as make clear the risks of sharing their credentials.
I know there are popular tools that address this problem today. Mint.com, for example, requires users' credentials to their financial accounts so that it may periodically retrieve transaction information. LinkedIn asks for users' e-mail credentials so that it can harvest their contacts.
What would be a safe design to store users' credentials? In particular, I am writing a Django application and will likely build on top of a PostgreSQL backend, but I am open to other ideas.
For what it's worth, the data being accessed from these third party sites is nowhere near the level of financial accounts, e-mail accounts, or social networking profiles/accounts. That said, I intend to treat this access with the utmost respect, and that is why I am asking for assistance here first. | true | 42,169,854 | 1.2 | 0 | 0 | 2 | There’s no such thing as a safe design when it comes to storing passwords/secrets. There’s only, how much security overhead trade-off you are willing to live with. Here is what I would consider the minimum that you should do:
HTTPS-only (all passwords should be encrypted in transit)
If possible keep passwords encrypted in memory when working with them except when you need to access them to access the service.
Encryption in the data store. All passwords should be strongly encrypted in the data store.
[Optional, but strongly recommended] Customer keying; the customer should hold the key to unlock their data, not you. This will mean that your communications with the third party services can only happen when the customer is interacting with your application. The key should expire after a set amount of time. This protects you from the rogue DBA or your DB being compromised.
And this is the hard one, auditing. All accesses of any of the customer's information should be logged and the customer should be able to view the log to verify / review the activity. Some go so far as to have this logging enabled at the database level as well so all row access at the DB level are logged. | 0 | 1,085 | 0 | 3 | 2017-02-10T22:45:00.000 | python,django,postgresql,security,encryption | How to safely store users' credentials to third party websites when no authentication API exists? | 1 | 1 | 2 | 42,170,097 | 0 |
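The auditing point above can be sketched with a stdlib-only decorator. The names here (AUDIT_LOG, audited, fetch_credentials) are hypothetical, not from Django or any framework:

```python
import time
from functools import wraps

AUDIT_LOG = []  # stand-in for an append-only table or log file

def audited(action):
    """Decorator that records every access to stored credentials."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_id, *args, **kwargs):
            AUDIT_LOG.append({
                "ts": time.time(),   # when the access happened
                "user": user_id,     # whose data was touched
                "action": action,    # what was done with it
            })
            return func(user_id, *args, **kwargs)
        return wrapper
    return decorator

@audited("read-credentials")
def fetch_credentials(user_id):
    # placeholder for the real decrypt-and-return lookup
    return {"user_id": user_id, "token": "<encrypted-at-rest>"}
```

The customer-facing review view would then simply filter AUDIT_LOG by user.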
0 | 0 | Is there any way to delete a message sent by anyone other than the bot itself? The documentation seems to indicate that it is possible:
Your own messages could be deleted without any proper permissions. However to delete other people’s messages, you need the proper permissions to do so.
But I can't find a way to target the message to do so in an on_message event trigger. Am I missing something, or is it just not possible? | false | 42,182,243 | 0 | 1 | 0 | 0 | If you're trying to delete the last sent message, e.g. if a user is calling a command and you want to remove their message before sending the response:
Use "await ctx.message.delete()" at the top of your command; it will delete the message that invoked the command. | 0 | 87,622 | 0 | 10 | 2017-02-11T23:00:00.000 | python,discord | Deleting User Messages in Discord.py | 1 | 1 | 5 | 69,640,480 | 0 |
0 | 0 | Can anyone tell me the difference between iPerf and iPerf3? While using it with a client-server Python script, what are the dependencies? And what are the alternatives to iPerf? | false | 42,218,537 | 0 | 1 | 0 | 0 | iperf vs iperf3, from Wikipedia:
A rewrite of iperf from scratch, with the goal of a smaller, simpler code base and a library version of the functionality that can be used in other programs, called iperf3, was started in 2009. The first iperf3 release was made in January 2014. The website states: "iperf3 is not backwards compatible with iperf2.x".
Alternatives are Netcat, Bandwidth Test Controller (BWCTL), ps-performance toolkit, iXChariot, jperf, Lanbench, NetIO-GUI, Netstress. | 0 | 2,323 | 0 | 0 | 2017-02-14T05:28:00.000 | python-2.7 | Iperf3 commands with python script | 1 | 1 | 2 | 42,813,821 | 0 |
1 | 0 | There is a <ul> tag in a webpage, and many <li> tags in the <ul> tag. The <li> tags are loaded by AJAX automatically while the mouse wheel scrolls down continuously.
The loading of <li> tags works well if I use the mouse wheel.
I want to use Selenium to get the loaded info in the <li> tags, but the JavaScript of:
document.getElementById(/the id of ul tag/).scrollTop=200;
does not work, as the new <li> elements are not loaded by AJAX, neither in the Chrome console nor via Selenium's execute_script.
So, is there a Selenium API to behave like a mouse-wheel scroll down? Or is there any other way to solve this problem? | false | 42,239,544 | 0.099668 | 0 | 0 | 1 | I'd look at the AJAX load event listener (the code that loads more <li>s). You need to trigger whatever it listens for. (aka: does it watch for something entering the viewport, or something's y-offset, or a MouseEvent, or a scroll()?)
Then you need to trigger that kind of event on the element it listens to. | 0 | 418 | 0 | 2 | 2017-02-15T02:11:00.000 | python,selenium,selenium-chromedriver | How can python + selenium + chromedriver use mouse wheel? | 1 | 2 | 2 | 42,261,282 | 0 |
1 | 0 | There is a <ul> tag in a webpage, and many <li> tags in the <ul> tag. The <li> tags are loaded by AJAX automatically while the mouse wheel scrolls down continuously.
The loading of <li> tags works well if I use the mouse wheel.
I want to use Selenium to get the loaded info in the <li> tags, but the JavaScript of:
document.getElementById(/the id of ul tag/).scrollTop=200;
does not work, as the new <li> elements are not loaded by AJAX, neither in the Chrome console nor via Selenium's execute_script.
So, is there a Selenium API to behave like a mouse-wheel scroll down? Or is there any other way to solve this problem? | true | 42,239,544 | 1.2 | 0 | 0 | 0 | For now, there is no reasonable solution, so I am closing this question. | 0 | 418 | 0 | 2 | 2017-02-15T02:11:00.000 | python,selenium,selenium-chromedriver | How can python + selenium + chromedriver use mouse wheel? | 1 | 2 | 2 | 42,455,875 | 0 |
0 | 0 | I have started working on a Python chat, using sockets.
I am now having a problem with connecting many clients to the server, because if they connect to the same port they won't be able to communicate live, because each client would wait in line until the port is free. Now my idea was to choose (on the server side) how many clients I want first, then open that range of ports using a simple for function and threads. Now my problem is that on my client side I am using try, when the "try" point is connecting to the port. At first I thought that if somebody had already connected to some port it would throw an error, so the client would just jump to the next port, but I forgot about that line thing. Any ideas? | true | 42,243,255 | 1.2 | 0 | 0 | 0 | Never mind, I figured it out. My mistake was that I opened a new socket in every thread, while I should have opened it once in the main() func and then done the accept in the thread. Thank you all | 0 | 65 | 0 | 0 | 2017-02-15T07:37:00.000 | python,python-2.7,sockets | Trying to create Python chat | 1 | 1 | 1 | 42,267,485 | 0 |
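The fix described in the answer (one listening socket, with the accept loop and per-client handling in threads) can be sketched like this; echo stands in for the real chat logic:

```python
import socket
import threading

def handle_client(conn):
    """Serve one client; echo stands in for the real chat logic."""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:        # client disconnected
                break
            conn.sendall(data)

def start_server(host="127.0.0.1", port=0):
    """Create ONE listening socket, then accept clients in a background thread."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))   # port 0 = let the OS pick a free port
    server.listen()

    def accept_loop():
        while True:
            try:
                conn, _addr = server.accept()
            except OSError:     # listening socket was closed; stop accepting
                return
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return server, server.getsockname()[1]
```

All clients connect to this single port; there is no need for a range of ports or one port per client.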
0 | 0 | I have python code that sends data to socket (a rather large file). Should I divide it into 1kb chunks, or would just conn.sendall(file.read()) be acceptable? | true | 42,258,274 | 1.2 | 0 | 0 | 4 | It will make little difference to the sending operation. (I assume you are using a TCP socket for the purposes of this discussion.)
When you attempt to send 1K, the kernel will take that 1K, copy it into kernel TCP buffers, and return success (and probably begin sending to the peer at the same time). At which point, you will send another 1K and the same thing happens. Eventually if the file is large enough, and the network can't send it fast enough, or the receiver can't drain it fast enough, the kernel buffer space used by your data will reach some internal limit and your process will be blocked until the receiver drains enough data. (This limit can often be pretty high with TCP -- depending on the OSes, you may be able to send a megabyte or two without ever hitting it.)
If you try to send in one shot, pretty much the same thing will happen: data will be transferred from your buffer into kernel buffers until/unless some limit is reached. At that point, your process will be blocked until data is drained by the receiver (and so forth).
However, with the first mechanism, you can send a file of any size without using undue amounts of memory -- your in-memory buffer (not including the kernel TCP buffers) only needs to be 1K long. With the sendall approach, file.read() will read the entire file into your program's memory. If you attempt that with a truly giant file (say 40G or something), that might take more memory than you have, even including swap space.
So, as a general purpose mechanism, I would definitely favor the first approach. For modern architectures, I would use a larger buffer size than 1K though. The exact number probably isn't too critical; but you could choose something that will fit several disk blocks at once, say, 256K. | 1 | 1,254 | 0 | 3 | 2017-02-15T19:20:00.000 | python-3.x,sockets | Should I send data in chunks, or send it all at once? | 1 | 1 | 1 | 42,260,182 | 0 |
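The chunked approach from this answer can be sketched as follows, using the suggested 256K buffer; only the sender-side buffer stays small, while the kernel still buffers as described:

```python
import socket

CHUNK_SIZE = 256 * 1024  # the answer's suggested buffer size (256K)

def send_file(sock, path, chunk_size=CHUNK_SIZE):
    """Stream a file over a connected socket without reading it all into memory."""
    sent = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:        # EOF
                break
            sock.sendall(chunk)  # blocks only when kernel buffers are full
            sent += len(chunk)
    return sent
```

This keeps the program's own memory footprint at one chunk regardless of file size.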
0 | 0 | I am looking to simulate a TCP server, where I would want to reject connections with different error codes in an ICMP message.
Currently, the issue is that even before the connection reaches handle_accept(), the SYN,ACK has already been sent by the server, so I can't reject the connection with ICMP errors!
Has anybody ever tried this? Is there any other way to do it?
Thanks in Advance! | true | 42,265,676 | 1.2 | 0 | 0 | 0 | There is no way to do this at the level of the TCP socket interface available in Python, since the OS kernel does the connection setup already before the application returns from accept. You would need to handle this outside of the application with firewall rules, or use raw sockets or a user-space network stack, where you are not restricted to how connections are handled in the kernel and what the socket interface offers. | 0 | 125 | 0 | 0 | 2017-02-16T05:28:00.000 | python,sockets,tcp | Reject TCP SYN with ICMP error messages in python | 1 | 1 | 1 | 42,266,135 | 0 |
1 | 0 | I need to scrape a URL which has checkboxes in it. I want to click some of the checkboxes and scrape, and then scrape again with some other checkboxes clicked. For instance:
I want to click New and then scrape,
and then I want to scrape the same URL with Used and Very Good clicked.
Is there a way to do this without making more than the one request that is done for getting the URL?
I guess the HTML changes when you click one of the boxes, since the listing will change when you refine the search. Any thoughts? Any suggestions?
Best,
Can | true | 42,291,191 | 1.2 | 0 | 0 | 0 | You are wrong.
Scrapy cannot manipulate real browser-like behavior.
From the image you linked, I can see you are scraping Amazon, so open that link in a browser and click on a checkbox; you will notice that the URL in the browser also changes according to the new filter set.
Then put that URL in your Scrapy code and do your scraping.
IF YOU WANT TO MANIPULATE REAL BROWSER-LIKE BEHAVIOR, use Python Selenium, PhantomJS, or CasperJS. | 0 | 1,305 | 0 | 3 | 2017-02-17T06:49:00.000 | python,xpath,web-scraping,scrapy,web-crawler | Scrapy - How to put a check into checkboxes in a url then scrape | 1 | 1 | 2 | 42,297,186 | 0 |
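Since the checkbox filters just change the URL's query string, you can build each filtered URL directly and feed them to Scrapy as separate start URLs. This stdlib sketch uses a hypothetical `condition` parameter — copy the real parameter name from your browser's address bar:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_filters(url, **params):
    """Return the URL with extra query parameters appended."""
    scheme, netloc, path, query, frag = urlsplit(url)
    extra = urlencode(params)
    query = f"{query}&{extra}" if query else extra
    return urlunsplit((scheme, netloc, path, query, frag))

# One request per filter combination, no checkbox clicking needed:
url_new = with_filters("https://example.com/listing", condition="new")
url_used = with_filters("https://example.com/listing", condition="used")
```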
0 | 0 | I created a service account for the Google Drive API two months ago and was using it to upload files on a weekly basis to a shared folder. For a couple of days I have been getting the below error while trying to upload files using this API:
"The user has exceeded their Drive storage quota"
I tried to upload into another folder but still got the same issue. I am not sure if I am doing anything wrong here.
Thanks,Teja | false | 42,300,271 | 0.197375 | 0 | 0 | 2 | From your question it sounds like you are using a Service Account to proxy to a standard account. The first thing to do is to establish which account is out of quota, ie. is it the Service Account or is it the standard account? You can use the About.get method to see the used and available quota for each account. If it's the Service Account, it might be because the uploaded files are still owned by the Service Account. You might need to change their permission so they become owned by the standard account. The answer that @nicolas linked to is very helpful.
If you are using a Service Account as a proxy, consider not doing this because it's a bit hacky. Instead you should consider uploading directly to the standard account using a saved Refresh Token. There are pros and cons of each approach. | 0 | 4,699 | 0 | 0 | 2017-02-17T14:23:00.000 | python,google-drive-api | Google Drive API service Account "The user has exceeded their Drive storage quota" python | 1 | 1 | 2 | 42,303,792 | 0 |
0 | 0 | I am working on a chat-box. I am using IBM Watson Conversation to make a conversation bot. My query is:
Suppose the user is talking to the bot in some specific node, and suddenly the user asks a random question, like "what is the weather?". My bot should be able to connect to a few Internet websites, search the content and come up with a relevant reply, and after that, when the user asks to get back to the previous topic, the bot should be able to resume from where it left off.
Summary: How do I code in Python to make the bot jump to some intent and then get back to the previous intent? Thanks! | true | 42,344,095 | 1.2 | 0 | 0 | 1 | Search for a "request response". This is a way to redirect the conversation / dialog flow to your app, and then forward it back to Watson.
Hope it helps. | 0 | 1,028 | 0 | 1 | 2017-02-20T12:03:00.000 | python,ibm-cloud,ibm-watson,chatbot,watson-conversation | IBM Watson Conversation - Python: Make chat bot jump to some intent & get back to previous intent | 1 | 1 | 1 | 42,350,581 | 0 |
0 | 0 | I want to get the sum of weights (total cost/distance encountered) of a given path in a networkx multigraph.
It's like the current shortest_path_length() function but I plan to use it on the paths returned by the all_simple_paths() function. Is there a way to do that?
I can't just iterate over all the nodes in the path because, since it's a multigraph, I will need the key for that given path to be able to know which edge is used. Thank you. | false | 42,365,761 | -0.099668 | 0 | 0 | -1 | I got it. I created a subgraph from every output path from all_simple_paths() and just obtained the sum over an attribute by using the size() function. | 0 | 1,902 | 0 | 1 | 2017-02-21T11:14:00.000 | python,networkx,shortest-path | Calculate sum of weights in NetworkX multigraph given path | 1 | 1 | 2 | 42,365,979 | 0 |
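If you keep the edge keys alongside the path, summing the weights is straightforward. Here is a stdlib-only sketch over a nested-dict structure shaped like a MultiGraph adjacency (this is not networkx code):

```python
# multigraph[u][v] maps edge-key -> attribute dict, mirroring MultiGraph adjacency
multigraph = {
    "a": {"b": {0: {"weight": 2.0}, 1: {"weight": 5.0}}},
    "b": {"c": {0: {"weight": 1.5}}},
}

def path_weight(graph, path_with_keys, attr="weight"):
    """Sum an edge attribute along a path given as (u, v, key) triples."""
    return sum(graph[u][v][k][attr] for u, v, k in path_with_keys)

# Picking key 0 or 1 selects between the two parallel a->b edges:
cheap = path_weight(multigraph, [("a", "b", 0), ("b", "c", 0)])
```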
0 | 0 | So every day, I need to log in to a couple of different hosts via ssh and run some maintenance commands there in order for the QA team to be able to test my features.
I want to use a python script to automate such boring tasks. It would be something like:
ssh host1
deploy stuff
logout from host1
ssh host2
restart stuff
logout from host2
ssh host3
check health on stuff
logout from host3
...
It's killing my productivity, and I would like to know if there is something nice, ergonomic and easy to implement that can handle and run commands on ssh sessions programmatically and output a report for me.
Of course I will write the code; I just wanted some suggestions that are not bash scripts (because those are not meant to be read by humans). | false | 42,375,396 | -0.066568 | 1 | 0 | -1 | If there are too many of these manual steps, then I would look into server configuration management tools like Ansible.
I have done this kind of automation using:
Ansible
Python Fabric
Rake | 0 | 9,599 | 1 | 4 | 2017-02-21T18:40:00.000 | python,linux,ssh | Automate ssh commands with python | 1 | 1 | 3 | 42,375,591 | 0 |
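A stdlib-only sketch of the workflow in the question, shelling out to the system ssh client via subprocess (assumes key-based auth is already set up; the hosts and commands are the question's placeholders). Fabric or paramiko would give you richer control:

```python
import subprocess

# host -> list of maintenance commands, mirroring the question's workflow
TASKS = {
    "host1": ["deploy stuff"],
    "host2": ["restart stuff"],
    "host3": ["check health on stuff"],
}

def build_ssh_command(host, command):
    """argv for running one remote command; BatchMode avoids password prompts."""
    return ["ssh", "-o", "BatchMode=yes", host, command]

def run_all(tasks, runner=subprocess.run):
    """Run every command on every host and collect (host, command, exit code)."""
    report = []
    for host, commands in tasks.items():
        for command in commands:
            result = runner(build_ssh_command(host, command),
                            capture_output=True, text=True)
            report.append((host, command, result.returncode))
    return report
```

Calling run_all(TASKS) attempts the real SSH connections and returns (host, command, exit-code) triples you can print as a report.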
0 | 0 | So, I'm trying to use the ebaysdk-python module to connect to eBay and get a list of orders. After struggling a little with the connection, I've finally found the ebay.yaml syntax. I have then configured the user and password, but I'm receiving this Error 16112.
So, this is my question: is there a way to connect to eBay without interactivity? I mean, without the need to give permission to get the token and such (OAuth)? | true | 42,380,443 | 1.2 | 0 | 0 | 0 | I've finally found the way to do this: I created a user token using the auth'n'auth method. This user token has almost a year of validity, so it can be used for my purpose. Now, there is another question around that. | 0 | 69 | 0 | 0 | 2017-02-22T00:18:00.000 | python,ebay-sdk | Error 16112 - How to connect to Ebay without interactivity? | 1 | 1 | 1 | 42,380,791 | 0 |
0 | 0 | I tried to test getting a toast message on an Android device with Appium 1.6.3, but it is disappointing for me; the rate of correctly getting the toast is very low. Can anyone help me? | false | 42,384,577 | 0 | 1 | 0 | 0 | 1. It depends upon how dynamically the data is coming.
2. If you want to get toast data while swiping, then it becomes hard to get accurate data. | 0 | 509 | 0 | 0 | 2017-02-22T06:52:00.000 | python,selenium-webdriver,automated-tests,ui-automation,python-appium | How to improve get toast correct rate on Appium 1.6.3 with uiautomator2? | 1 | 1 | 3 | 42,408,979 | 0 |
0 | 0 | I am using ZMQ to facilitate communications between one server and multiple clients. Is there a method to have the clients automatically find the ZMQ server if they are on the same internal network? My goal would be to have the client be able to automatically detect the IP and Port it should connect to. | false | 42,395,369 | 0 | 0 | 0 | 0 | It's not possible to do this in any sort of scalable way without some sort of broker or manager that will manage your communications system.
The way that would work is that you have your broker on a known IP:port, and as your server and clients spin up, they connect to the broker, and the broker then tells your endpoints how to communicate to each other.
There are some circumstances where such a communication pattern could make sense, generally when the server and clients are controlled by different entities, and maybe even different from the entity controlling the broker. In your case, it sounds like a dramatic amount of over-engineering. The only other way to do what you're looking for that I'm aware of is to just start brute forcing the network to find open IP:port combinations that respond the way you are looking for. Ick.
I suggest you just define the IP:port you want to use, probably through some method of static configuration you can change manually as necessary, or that can act as sort of a flat-file broker that both ends of the communication can access. | 0 | 130 | 0 | 0 | 2017-02-22T15:14:00.000 | python,server,client,zeromq,pyzmq | Python automatically find server with ZMQ | 1 | 1 | 1 | 42,517,557 | 0 |
0 | 0 | I have about 170 GB of data. I have to analyze it using Hadoop 2.7.3. There are 14 workers. I have to find the total for each unique MIME type of document, e.g. the total number of documents that are of type text/html. When I run the MapReduce job (written in Python), Hadoop returns many output files instead of the single one that I am expecting. I think this is due to the many workers that process some data separately and give output. I want to get a single output. Where is the problem? How can I restrict Hadoop to give a single output (by combining all the small output files)? | false | 42,406,596 | 0.066568 | 0 | 0 | 1 | Your job is generating 1 file per mapper; you have to force a reducer phase using 1 reducer to do this, and you can accomplish this by emitting the same key in all the mappers. | 0 | 750 | 1 | 1 | 2017-02-23T03:42:00.000 | python,hadoop,mapreduce | How to combine hadoop mappers output to get single result | 1 | 2 | 3 | 42,406,958 | 0 |
0 | 0 | I have about 170 GB data. I have to analyze it using hadoop 2.7.3. There are 14 workers. I have to find total of unique MIME type of each document e.g. total number of documents that are text/html type. When I run mapreduce job(written in python), Hadoop returns many output files instead of single one that I am expecting. I think this is due to many workers that process some data seprately and give output. I want to get single output. Where is the problem. How I can restrict hadoop to give single output (by combining all small output files). | false | 42,406,596 | 0.066568 | 0 | 0 | 1 | Make your mapper to emit for each document processed - (doc-mime-type, 1) then count up all such pairs at reduce phase. In essence, it is a standard word count exercise except your mappers emit 1s for each doc's mime-type.
Regarding number of reducers to set: Alex's way of merging reducers' results is preferable as allows to utilize all your worker nodes at reduce stage. However, if job to be run on 1-2 nodes then just one reducer should work fine. | 0 | 750 | 1 | 1 | 2017-02-23T03:42:00.000 | python,hadoop,mapreduce | How to combine hadoop mappers output to get single result | 1 | 2 | 3 | 42,414,979 | 0 |
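The (mime-type, 1) scheme from this answer maps directly onto Hadoop Streaming; here the two sides are sketched as pure functions, with the stdin/stdout plumbing that Streaming expects omitted, and the MIME type assumed to be the second tab-separated field:

```python
def mapper(line):
    """Emit (mime_type, 1) per record; assumes the MIME type is the second
    tab-separated field (adjust for your real input format)."""
    fields = line.rstrip("\n").split("\t")
    if len(fields) > 1:
        yield (fields[1], 1)

def reducer(pairs):
    """Count occurrences per MIME type."""
    counts = {}
    for mime, n in pairs:
        counts[mime] = counts.get(mime, 0) + n
    return counts

lines = ["doc1\ttext/html", "doc2\ttext/html", "doc3\tapplication/pdf"]
totals = reducer(p for line in lines for p in mapper(line))
```

With one reducer configured, all (mime-type, 1) pairs reach the same reducer, giving a single output file.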
0 | 0 | I have many Telegram channels; 24/7 they send messages in the format
"buy usdjpy sl 145.2 tp 167.4"
"eurusd sell sl 145.2 tp 167.4"
"eurusd sl 145.2 tp 167.4 SELL"
or these words in some order
My idea is to create an app that checks every channel's message and redirects it to my channel if it is in the above format.
Does the Telegram API allow it? | false | 42,430,232 | 0.024995 | 1 | 0 | 1 | I got the solution to this problem.
Here is a bot which automatically forwards messages from one channel to another without the forward tag.
Moreover the copying speed is legit!
@copythatbot
This is the golden tool everyone is looking for. | 0 | 36,182 | 0 | 3 | 2017-02-24T03:29:00.000 | telegram,telegram-bot,python-telegram-bot | How can I redirect messages from telegram channels that are in certain format?[telegram bot] | 1 | 3 | 8 | 56,536,495 | 0 |
0 | 0 | I have many Telegram channels; 24/7 they send messages in the format
"buy usdjpy sl 145.2 tp 167.4"
"eurusd sell sl 145.2 tp 167.4"
"eurusd sl 145.2 tp 167.4 SELL"
or these words in some order
My idea is to create an app that checks every channel's message and redirects it to my channel if it is in the above format.
Does the Telegram API allow it? | false | 42,430,232 | 0.099668 | 1 | 0 | 4 | You cannot scrape from a Telegram channel with a bot unless the bot is an administrator in the channel, which only the owner can add.
Once that is done, you can easily redirect posts to your channel by listening for channel_post updates. | 0 | 36,182 | 0 | 3 | 2017-02-24T03:29:00.000 | telegram,telegram-bot,python-telegram-bot | How can I redirect messages from telegram channels that are in certain format?[telegram bot] | 1 | 3 | 8 | 42,467,337 | 0 |
0 | 0 | I have many Telegram channels; 24/7 they send messages in the format
"buy usdjpy sl 145.2 tp 167.4"
"eurusd sell sl 145.2 tp 167.4"
"eurusd sl 145.2 tp 167.4 SELL"
or these words in some order
My idea is to create an app that checks every channel's message and redirects it to my channel if it is in the above format.
Does the Telegram API allow it? | false | 42,430,232 | 0.049958 | 1 | 0 | 2 | This is very easy to do with the full Telegram API.
First, on your mobile phone, subscribe to all the channels of interest.
Next, you develop a simple Telegram client that receives all the updates from these channels.
Next, you build some parsers that can understand the channel messages and filter out what you are interested in.
Finally, you send the filtered content (re-formatted) to your own channel.
that's all that is required. | 0 | 36,182 | 0 | 3 | 2017-02-24T03:29:00.000 | telegram,telegram-bot,python-telegram-bot | How can I redirect messages from telegram channels that are in certain format?[telegram bot] | 1 | 3 | 8 | 42,441,369 | 0 |
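The "parsers that understand the channel messages" step can be sketched with one regex that matches all three formats from the question (direction, sl and tp in any order):

```python
import re

SIGNAL = re.compile(
    r"(?=.*\b(buy|sell)\b)"          # must contain a direction
    r"(?=.*\bsl\s+\d+(\.\d+)?)"      # ... a stop-loss value
    r"(?=.*\btp\s+\d+(\.\d+)?)",     # ... and a take-profit value
    re.IGNORECASE,
)

def is_signal(text):
    """True if the message looks like a trading signal in the question's format."""
    return bool(SIGNAL.search(text))
```

Messages where is_signal returns True would then be re-sent to your own channel; everything else is dropped.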
1 | 0 | How can I persist an API object across different Celery tasks? I have one API object per user with an authenticated session (python requests) to make API calls. A user_id, csrftoken, etc. is sent with each request.
I need to schedule different tasks in Celery to perform API requests without re-authenticating for each task.
How can I do this? | true | 42,438,998 | 1.2 | 0 | 0 | 1 | You can put this data into the database/memcache and fetch it by user id as a key.
If this data is stateless, it's fine. Concurrent processes take the authentication parameters, construct the request and send it.
If it changes state (unique incrementing request id, changing token, etc.) after each request (or in some requests), you need to implement a singleton manager to provide the correct credentials per request. All tasks should request credentials from this manager. It can also limit the rate, for example.
If you would like to pass this object to the task as a parameter, then you need to serialize it. Just make sure it is serializable. | 0 | 152 | 0 | 1 | 2017-02-24T12:44:00.000 | python,django,python-requests,celery | How can I persist an authenticated API object across different Celery tasks? | 1 | 1 | 1 | 42,439,583 | 0 |
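The "manager that provides credentials" idea can be sketched with a small TTL store; the in-process dict stands in for the database/memcache the answer mentions, and the clock is injectable so expiry is testable:

```python
import time

class CredentialStore:
    """Hands out per-user credentials and expires them after `ttl` seconds."""

    def __init__(self, ttl=3600, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock          # injectable for testing
        self._store = {}            # user_id -> (expires_at, credentials)

    def put(self, user_id, credentials):
        self._store[user_id] = (self.clock() + self.ttl, credentials)

    def get(self, user_id):
        entry = self._store.get(user_id)
        if entry is None:
            return None
        expires_at, credentials = entry
        if self.clock() >= expires_at:   # expired: force re-authentication
            del self._store[user_id]
            return None
        return credentials
```

Each Celery task would call get() first and only re-authenticate (then put()) on a None result.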
0 | 0 | I have a Raspberry Pi, which is set up as an audio streaming server. I have used WebSockets and Python as the programming language. The client can listen to the live audio stream by connecting to the server hosted on the Raspberry Pi. The system works well in a localhost environment. Now, I want to access the server from the internet, and by searching I got to know about STUN. I tried to use pystun but I couldn't get the proper port for NAT punching. So can anyone help me to implement STUN?
Note: server is listening at localhost:8000 | false | 42,453,445 | 0 | 1 | 0 | 0 | NAT punching is used for peer-to-peer (P2P) communication and your audio streaming server seems to be a client-server implementation.
How and if this is going to work heavily depends on your NAT device (which kind of NAT is implemented). Chances are high that your NAT device has short timeouts and you need to punch holes for every client connection (from your raspberry pi).
As you stated you're using WebSockets and these are always TCP, pystun isn't going to work because pystun only supports UDP.
I'd suggest to create a port forwarding in your NAT device, tunnel your traffic using a P2P VPN or host your audio streaming server on a different network. | 0 | 3,652 | 0 | 0 | 2017-02-25T07:52:00.000 | python,websocket,nat,stun | How to implement stun with python | 1 | 1 | 2 | 64,177,395 | 0 |
0 | 0 | Hey, I am building a simple API server to handle some functionality for a Chrome extension. But I need the users of my extension/add-on to be logged in, and for this I want the Python API server to accept HTTPS requests only. How would I go about verifying the certificate for my server from the Chrome extension? Sorry for this broad-ish question, I am very new to web-based programming. | false | 42,491,349 | 0.197375 | 0 | 0 | 1 | You shouldn't have to do anything special. Any HTTPS requests made by the Chrome extension will go through the same certificate verification as would any other request made in the browser. | 0 | 221 | 0 | 0 | 2017-02-27T16:54:00.000 | python,ssl,google-chrome-extension | Python Server: Chrome Extension SSL certificate | 1 | 1 | 1 | 42,491,557 | 0 |
0 | 0 | I would like to know how traffic flows in SoftLayer between the servers, how to detect unusual traffic in the course of traffic flow, and how to detect ports that are prone to unusual/malicious traffic. Can we retrieve this information using any SoftLayer Python APIs? | true | 42,506,643 | 1.2 | 0 | 0 | 0 | That is not possible using SoftLayer's API; even using SoftLayer's control portal, that information is not available.
regards | 0 | 61 | 0 | 0 | 2017-02-28T10:47:00.000 | python-2.7,ibm-cloud-infrastructure | how to get unusual traffic or traffic information in SoftLayer using python API's | 1 | 1 | 1 | 42,534,904 | 0 |
0 | 0 | Does any of you know how to find / click the YT Like button in Python using Selenium, since it doesn't have a real id, etc.?
Thanks for the answers | true | 42,515,611 | 1.2 | 0 | 0 | 0 | You can select it with a CSS selector like this:
if you want to like:
#watch8-sentiment-actions > span > span:nth-child(1) > button
If you want to cancel the like:
#watch8-sentiment-actions > span > span:nth-child(2) > button | 0 | 378 | 0 | 0 | 2017-02-28T17:54:00.000 | python,selenium,button,youtube | Python - Selenium: Find / Click YT-Like Button | 1 | 1 | 1 | 42,515,710 | 0 |
0 | 0 | I am trying to install WebDriver, and in order to open Firefox I need geckodriver to be installed and on the correct path.
Firstly, the download link for geckodriver only gives you a file that is not an executable. So is there a way to make it an executable?
Secondly, I have tried to change my path variables in Command Prompt, but of course it didn't work. I then changed the user variable, not the system path variables, because there is no Path entry under system variables. There is a Path entry under user variables, so I edited that to point to where the file is located.
I have extracted the geckodriver rar file and have received a file with no extension. I don't know how you can have a file with no extension, but they did it. The icon is like a blank sheet of paper with a fold at the top left.
If anyone has a solution for this, including maybe another package like WebDriver that will allow me to open a browser and then refresh the page after a given amount of time, please share; that is all I want to do. | true | 42,524,114 | 1.2 | 0 | 0 | 7 | For one, make sure you are downloading the one for your OS. Windows is at the bottom of the list; it will say win32. Download that file (or the 64-bit one, it doesn't matter).
After that you are going to want to extract the file. If you get an error that says there is no file in the WinRAR archive, this may be because your WinRAR settings are set to not extract any files that have the extension .exe. If you go to WinRAR options, then settings, then security, you can delete this entry (it will say *.exe), and after you delete that you can extract the file. After that is done, search for how to update the path so that geckodriver can be accessed. Then you will most likely need to restart. | 0 | 66,774 | 0 | 8 | 2017-03-01T05:46:00.000 | python,webdriver,geckodriver | how to install geckodriver on a windows system | 1 | 2 | 5 | 42,542,815 | 0
0 | 0 | I am trying to install WebDriver, and in order to open Firefox I need geckodriver to be installed and on the correct path.
Firstly, the download link for geckodriver only gives you a file that is not an executable. So is there a way to make it an executable?
Secondly, I have tried to change my path variables in Command Prompt, but of course it didn't work. I then changed the user variable, not the system path variables, because there is no Path entry under system variables. There is a Path entry under user variables, so I edited that to point to where the file is located.
I have extracted the geckodriver rar file and have received a file with no extension. I don't know how you can have a file with no extension, but they did it. The icon is like a blank sheet of paper with a fold at the top left.
If anyone has a solution for this, including maybe another package like WebDriver that will allow me to open a browser and then refresh the page after a given amount of time, please share; that is all I want to do. | false | 42,524,114 | 0 | 0 | 0 | 0 | I've wrestled with the same question for the last hour.
Make sure you have the latest version of Firefox installed. I had Firefox 36, which, when checking for updates, said it was the latest version. Mozilla's website had version 54 as the latest. So download Firefox from the website and reinstall.
Make sure you have the latest geckodriver downloaded.
If you're getting the path error, use the code below to figure out which path Python is looking at. Add geckodriver.exe to that working directory.
import os
os.getcwd() | 0 | 66,774 | 0 | 8 | 2017-03-01T05:46:00.000 | python,webdriver,geckodriver | how to install geckodriver on a windows system | 1 | 2 | 5 | 46,927,125 | 0 |
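To go one step further than os.getcwd(), you can also check whether geckodriver is actually reachable from the PATH that Python sees. A small sketch using only the standard library (the executable name is an assumption; on Windows the file is geckodriver.exe, which shutil.which will still match):

```python
import os
import shutil


def find_driver(name="geckodriver"):
    """Return the full path of `name` if it can be found, else None."""
    # shutil.which searches the PATH and, on Windows, also respects
    # PATHEXT, so "geckodriver" will match "geckodriver.exe" there.
    found = shutil.which(name)
    if found is None:
        # Fall back to the current working directory, which is another
        # common place to drop the driver executable.
        candidate = os.path.join(os.getcwd(), name)
        if os.path.isfile(candidate):
            return candidate
    return found


print(find_driver() or "geckodriver not found - check your PATH")
```

If this prints the "not found" message, fix the Path entry (or drop geckodriver.exe into the working directory) before creating the Firefox driver.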
1 | 0 | requests.exceptions.Timeout VS
requests.models.Response.status_code = 504 [gateway timeout]
What is the actual difference between the two, given that both deal with a timeout having occurred?
Let us say service S1 makes a call to S2.
In s1:
request.post( url=s2,..., timeout=60 )
When will requests.exceptions.Timeout be raised, and in what scenario is a 504 received?
Can retries be made for all of those exceptions? I believe the answer to the above question might give a lead on this.
Thanks in advance. | false | 42,558,294 | 0 | 0 | 0 | 0 | The gateway timeout means the connected server had some sort of timeout after receiving your request (i.e. you did make a connection). However, the requests timeout exception means your script timed out before receiving a response, either while connecting or while waiting on the server's reply (i.e. no response ever arrived). | 0 | 1,354 | 0 | 1 | 2017-03-02T14:38:00.000 | python,python-2.7,request,timeoutexception,http-status-code-504 | python: requests.exceptions.Timeout vs requests.models.Response.status_code 504 ( gateway timeout ) | 1 | 1 | 1 | 42,558,358 | 0
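To make the distinction concrete: a 504 arrives as an ordinary response object carrying a status code, while requests.exceptions.Timeout is raised and there is no response at all, so retry logic has to handle two separate paths. A rough, library-free sketch of that decision (should_retry is a hypothetical helper, not part of requests):

```python
RETRYABLE_STATUSES = {502, 503, 504}  # transient gateway/availability errors


def should_retry(status_code=None, timed_out=False):
    """Decide whether a request is worth retrying.

    `timed_out` models requests.exceptions.Timeout being raised
    (no response was received); `status_code` models a response
    that did come back, e.g. a 504 from an upstream gateway.
    """
    if timed_out:
        # Timeout exception path: no response arrived; retrying is
        # generally safe for idempotent requests.
        return True
    # 5xx gateway errors are usually transient and retryable;
    # 4xx client errors are not.
    return status_code in RETRYABLE_STATUSES


print(should_retry(timed_out=True))   # Timeout exception path
print(should_retry(status_code=504))  # 504 response path
print(should_retry(status_code=404))  # client error, no retry
```

In practice you would wrap request.post(...) in try/except requests.exceptions.Timeout for the first path and check response.status_code for the second; both can then feed the same retry loop.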
0 | 0 | I am trying to get more information from experienced people doing web scraping in general, as I am getting into web scraping using Python libraries. At the same time, I noticed some people use plain Bash, with commands such as wget, curl, sed, grep, and awk for web scraping.
These commands seem to be much cleaner in scripting than using Python libraries for web scraping.
What are your takes on this? Do you see any advantage of using Python libraries over Bash that I am not getting? Or even using Python together with Bash to accomplish web scraping? | false | 42,563,683 | 0 | 0 | 0 | 0 | With Python you can also scrape sites rendered with JavaScript, using Selenium and a headless browser like PhantomJS. Maybe this is possible with Bash scripting too, but the more complex your code gets, the bigger the advantage of Python's clarity, IMHO. | 0 | 707 | 0 | 0 | 2017-03-02T18:56:00.000 | python,bash,curl,sed,web-scraping | Using Bash scripting for web scraping over python libraries? | 1 | 1 | 2 | 42,564,318 | 0
I've read the API documentation and there seems to be no way to get a user's email. | false | 42,573,079 | 0 | 1 | 0 | 0 | If you can see a page which contains the data you need with your own eyes, you can use web scraping to gather it. | 0 | 67 | 0 | 0 | 2017-03-03T07:29:00.000 | php,python | Is there any way to extract a list of users and their email ids from wattpad? | 1 | 1 | 1 | 42,573,390 | 0
I am coding a program (server-client) in Python 2.7 that exchanges data through sockets. I use SSL to secure the connection. But here is the thing: I want to make the client and the server executables with PyInstaller, and I want the SSL certificate and the key to be "hidden" somewhere inside the Python code... so I can have only ONE file, and not several. I tried to load the certificate through a variable that contained the certificate, but apparently the certificate needs to be loaded through a file. What options do I have? | false | 42,607,360 | 0 | 0 | 0 | 0 | Apparently there isn't any answer to this question... so I figured out something else. I saved the certificate in a variable in my Python code :P and then, before connecting to the server, the client saves the certificate to a temp file and at the end deletes it. | 1 | 228 | 0 | 1 | 2017-03-05T10:42:00.000 | python-2.7,ssl,ssl-certificate,embed | Embed SSL certificates | 1 | 1 | 1 | 42,621,585 | 0
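A minimal sketch of the temp-file approach described in the answer above, using only the standard library (the PEM content here is a placeholder string, not a real certificate; in the actual program the variable would hold the full PEM text):

```python
import os
import tempfile

# Placeholder for the certificate text stored inside the Python code.
CERT_PEM = "-----BEGIN CERTIFICATE-----\nplaceholder\n-----END CERTIFICATE-----\n"


def write_temp_cert(pem_text):
    """Write the embedded certificate to a temp file and return its path."""
    # delete=False so the file survives closing and its path can be
    # passed to an SSL API that expects a filename.
    with tempfile.NamedTemporaryFile(mode="w", suffix=".pem",
                                     delete=False) as handle:
        handle.write(pem_text)
        return handle.name


cert_path = write_temp_cert(CERT_PEM)
# ... connect here, passing cert_path as the certfile argument ...
os.remove(cert_path)  # clean up once the connection is established
```

Note that writing a private key to a world-readable temp directory has security implications, so in a real program you would also restrict the file's permissions.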
0 | 0 | Can you click on an element while the page is not fully loaded but the element is already loaded/visible? If yes, then how? If no, is there any other solution? | false | 42,608,100 | 0 | 0 | 0 | 0 | Technically speaking, you could set an explicit wait targeting the "presence_of_element_located" or "visibility_of_element_located" condition. However, keep in mind that the action fired by the click on the element could be bound in many ways, and some of them could take place after the DOM is ready (when the complete DOM is loaded although not yet completely rendered).
Consider these scenarios:
The element has an "onClick" attribute which fires a JavaScript function: in this case, the action could take place before the complete load (but only if it does not concern elements that are not rendered yet)
The element is an anchor with an "href" attribute with a plain URL inside it: in this case I think it should be pretty safe to click before the complete load
The element has an action bound through JavaScript at some point: in this case you should check the JS code to make sure the element already has the action bound when you want to click it. | 0 | 122 | 0 | 0 | 2017-03-05T11:58:00.000 | python,python-2.7,selenium,click,element | Python 2.7 Selenium, Click on button while the page isn't fully load | 1 | 1 | 1 | 42,665,717 | 0
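The explicit waits mentioned in the answer above boil down to polling a condition until a timeout expires. A rough, Selenium-free sketch of that polling pattern (poll_until is a hypothetical helper illustrating the idea, not Selenium's actual implementation):

```python
import time


def poll_until(condition, timeout=10.0, interval=0.5):
    """Repeatedly evaluate `condition` until it is truthy or time runs out.

    Returns True if the condition became truthy within `timeout` seconds,
    False otherwise - roughly what an explicit wait does, except Selenium's
    WebDriverWait raises TimeoutException instead of returning False.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False


# Example: a flag standing in for "element is present in the DOM".
element_loaded = [False]
element_loaded[0] = True  # simulate the element appearing
print(poll_until(lambda: element_loaded[0], timeout=1.0, interval=0.1))
```

In real Selenium code you would use WebDriverWait(driver, 10).until(...) with an expected condition rather than rolling your own loop.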
0 | 0 | I have developed a Lambda function which hits an API URL and gets the data in JSON format, so I need to use modules/libraries like requests, which are not available in the AWS online editor using Python 2.7.
So I need to upload the code as a ZIP file. How can we deploy the Lambda function, step by step, from a local Windows machine to the AWS console? What are the requirements? | false | 42,628,638 | 0.197375 | 0 | 0 | 1 | You could use CodeBuild, which will build your code in the AWS Linux environment. Then it won't matter whether your local environment is Windows or Linux.
CodeBuild will put the artifacts directly on S3; from there you can upload them directly to Lambda. | 1 | 614 | 0 | 1 | 2017-03-06T14:52:00.000 | python,amazon-web-services,aws-lambda | AWS lambda function deployment | 1 | 1 | 1 | 43,783,169 | 0
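The "upload a ZIP" step itself can also be scripted locally with the standard library. A minimal sketch that packages a handler file into a deployment archive (the file names and handler source are assumptions; a real package must also bundle third-party dependencies such as requests at the archive root):

```python
import os
import tempfile
import zipfile

# Hypothetical handler source - in a real project this file already exists.
HANDLER_SOURCE = (
    "def lambda_handler(event, context):\n"
    "    return {'statusCode': 200}\n"
)


def build_package(workdir):
    """Write the handler and zip it the way Lambda expects (files at root)."""
    handler_path = os.path.join(workdir, "lambda_function.py")
    with open(handler_path, "w") as src:
        src.write(HANDLER_SOURCE)

    zip_path = os.path.join(workdir, "deployment.zip")
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as archive:
        # arcname keeps the file at the archive root, which Lambda requires.
        archive.write(handler_path, arcname="lambda_function.py")
    return zip_path


workdir = tempfile.mkdtemp()
print(build_package(workdir))
```

The resulting ZIP can then be uploaded in the Lambda console, or with the AWS CLI via aws lambda update-function-code --zip-file fileb://deployment.zip.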
1 | 0 | What does the mytubeid attribute (like <iframe src="/portal/corporateEventsCalendarIframe.html" mytubeid="mytube1" width="820" height="1600" frameborder="0"/>) do in an iframe?
Note that the iframe does not have an id or anything like that in it!
How can it be referenced in code? I am using Python + Selenium + Scrapy to build a web scraping tool. | true | 42,653,958 | 1.2 | 0 | 0 | 0 | If you want to find this iframe using Selenium, try something like
driver.find_element_by_xpath('//iframe[@mytubeid="mytube1"]').
For a more explicit answer, please provide some code and the site URL. | 0 | 159 | 0 | 0 | 2017-03-07T16:57:00.000 | python,html,selenium,iframe,scrapy | Referencing mytubeid in iframe using Python Selenium | 1 | 1 | 2 | 42,658,773 | 0