Dataset schema. Each record below is a pipe-delimited row in the following column order; the Question and Answer fields span multiple lines.

| # | Column | dtype | Values |
|---|---|---|---|
| 1 | Web Development | int64 | 0 to 1 |
| 2 | Data Science and Machine Learning | int64 | 0 to 1 |
| 3 | Question | string | lengths 28 to 6.1k |
| 4 | is_accepted | bool | 2 classes |
| 5 | Q_Id | int64 | 337 to 51.9M |
| 6 | Score | float64 | -1 to 1.2 |
| 7 | Other | int64 | 0 to 1 |
| 8 | Database and SQL | int64 | 0 to 1 |
| 9 | Users Score | int64 | -8 to 412 |
| 10 | Answer | string | lengths 14 to 7k |
| 11 | Python Basics and Environment | int64 | 0 to 1 |
| 12 | ViewCount | int64 | 13 to 1.34M |
| 13 | System Administration and DevOps | int64 | 0 to 1 |
| 14 | Q_Score | int64 | 0 to 1.53k |
| 15 | CreationDate | string | lengths 23 to 23 |
| 16 | Tags | string | lengths 6 to 90 |
| 17 | Title | string | lengths 15 to 149 |
| 18 | Networking and APIs | int64 | 1 to 1 |
| 19 | Available Count | int64 | 1 to 12 |
| 20 | AnswerCount | int64 | 1 to 28 |
| 21 | A_Id | int64 | 635 to 72.5M |
| 22 | GUI and Desktop Applications | int64 | 0 to 1 |
0 | 0 | This program listens to a Redis queue. If there is data in Redis, the workers start doing their jobs. All these jobs have to run simultaneously, which is why each worker listens to one particular Redis queue.
My question is: is it common to run more than 20 workers listening to Redis?
python /usr/src/worker1.py
python /usr/src/worker2.py
python /usr/src/worker3.py
python /usr/src/worker4.py
python /usr/src/worker5.py
....
....
python /usr/src/worker6.py | false | 46,808,342 | 0 | 0 | 0 | 0 | Having multiple worker processes (and by "multiple" I mean hundreds or more), possibly running on different machines, fetching jobs from a job queue is indeed a common pattern nowadays. There are even whole packages/frameworks devoted to such workflows, for example Celery.
What is less common is trying to write the whole task queue system from scratch in a seemingly ad hoc way instead of using a dedicated task queue system like Celery, ZeroMQ or something similar. | 0 | 834 | 1 | 0 | 2017-10-18T10:44:00.000 | python,redis,queue,worker | Is it common to run 20 python workers which uses Redis as Queue ? | 1 | 2 | 2 | 46,808,475 | 0
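For illustration, a minimal sketch of one such worker loop with redis-py; the queue name, host and process() handler are placeholders, not from the question:

```python
import redis  # assumes the redis-py package is installed

r = redis.Redis(host='localhost', port=6379)

def process(payload):
    pass  # hypothetical job handler

while True:
    # BLPOP blocks until a job arrives; it returns (queue_name, value) as bytes.
    _queue, payload = r.blpop('jobs')
    process(payload)
```

Running 20 copies of this is fine; a framework like Celery mainly adds retries, result storage and monitoring on top of the same idea.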
0 | 0 | I am writing a small chat app in Python using bash commands. I'm using nc to accomplish this, but I want to be able to prepend a username to the user's message. How do I go about doing this without breaking the connection?
The command I'm using to connect is just
nc -l -p 1234 -q 0
and the desired outcome is that when the person sends something it would look like: <User1> Hello
Thank you! | false | 46,814,272 | 0 | 0 | 0 | 0 | It's hard to understand the context and the application's structure, but I would assume that putting the connection in a separate thread would help, so that the connection is always open and the messages can be processed in the preferred fashion. | 0 | 232 | 1 | 0 | 2017-10-18T15:48:00.000 | python,bash,sockets,networking,scripting | How to print text while netcat is listening at a port? | 1 | 1 | 2 | 46,846,940 | 0
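Independent of the threading suggestion above, one concrete option (a sketch, not from the answer) is to replace the listening nc with a tiny Python server that tags each incoming line; the username is a placeholder, since nc itself cannot rewrite the stream:

```python
import socket

peer_name = 'User1'  # hypothetical; in a real app each side would announce its name first

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('', 1234))   # same port the nc command used
srv.listen(1)
conn, _addr = srv.accept()

f = conn.makefile('r')  # read the stream line by line without closing the connection
for line in f:
    print('<%s> %s' % (peer_name, line), end='')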
0 | 0 | I'm trying to build a TCP communication where the server sends a message every x seconds through a socket, and should stop sending those messages on a certain condition: when the client hasn't sent any message for 5 seconds.
To be more detailed, the client also sends constant messages, which are all ignored by the server on the same socket as above, and it can stop sending them at any unknown time. The messages are, for simplicity, used as keep-alive messages to inform the server that the communication is still relevant.
The problem is that if I want to send repeated messages from the server, I cannot allow it to "get busy" and start receiving messages instead; thus I cannot detect when a new message arrives from the other side and act accordingly.
The problem is independent of the programming language, but to be more specific I'm using Python, and I cannot access the code of the client.
Is there any option for receiving and sending messages on a single socket simultaneously?
Thanks! | true | 46,833,561 | 1.2 | 0 | 0 | 0 | Option 1
Use two threads, one will write to the socket and the second will read from it.
This works since sockets are full-duplex (allow bi-directional simultaneous access).
Option 2
Use a single thread that manages all keep-alives using select.epoll. This way one thread can handle multiple clients. Remember, though, that if this isn't the only thread that uses the sockets, you might need to handle thread safety on your own. | 0 | 53 | 0 | 1 | 2017-10-19T15:30:00.000 | python,tcp,send | Detecting when a tcp client is not active for more than 5 seconds | 1 | 1 | 2 | 46,833,696 | 0
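A minimal sketch of option 1 (names and timings are assumptions, not from the question): one thread pushes the periodic messages while a watchdog thread treats 5 seconds of client silence as a dead connection.

```python
import socket
import threading
import time

def sender(conn, stop):
    # Push a message every 2 seconds until the watchdog signals a stop.
    while not stop.is_set():
        try:
            conn.sendall(b'ping\n')
        except OSError:
            break  # socket was closed underneath us
        time.sleep(2)

def watchdog(conn, stop):
    conn.settimeout(5)  # 5 s without any client message = client is gone
    try:
        while conn.recv(1024):
            pass  # any received data counts as a keep-alive
    except socket.timeout:
        pass
    stop.set()
    conn.close()

def handle(conn):
    stop = threading.Event()
    threading.Thread(target=sender, args=(conn, stop)).start()
    threading.Thread(target=watchdog, args=(conn, stop)).start()
```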
0 | 0 | I'm trying to build a Python application that runs on a specific port, so that when I connect to that port the Python application is run.
I'm guessing I have to do that with the socket library, but I'm not very sure about that. | false | 46,840,384 | 0.197375 | 0 | 0 | 1 | On Linux you can do this with xinetd. You edit /etc/services to give a name to your port, then add a line to /etc/xinetd.conf to run your server when someone connects to that service. The TCP connection will be provided to the Python script as its standard input and output. | 0 | 46 | 0 | 0 | 2017-10-19T23:40:00.000 | python | Python server with library socket | 1 | 1 | 1 | 46,840,632 | 0
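Alternatively, the socket-library route the question guesses at also works: a long-running listener that serves each incoming connection itself. A sketch (port and reply are placeholders):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('0.0.0.0', 1234))  # the port your application should answer on
srv.listen(5)

while True:
    conn, addr = srv.accept()  # blocks until someone connects
    conn.sendall(b'hello from the app\n')
    conn.close()
```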
1 | 0 | One more problem I have: I asked a similar question earlier and tried that method, but I'm not able to use that method for this problem, so please help me. It's this element:
The HTML code is - Filters
So basically, the question is that there is one button, a kind of toggle button, and I want to click on that button to select a device like Desktop, Tablet & Mobile. All checkboxes are already selected by default; now I have to uncheck or deselect a device. To do this, first I have to click on that toggle button, but when I click on the toggle button, its id (gwt-uid-598) has the 598 part changed on every refresh. Can you please help me: which method should I follow in this case?
I am using the Python code below.
# Click on device Filters
elem = driver.find_element_by_xpath('//*[@id="gwt-uid-598"]/div/div/span')
elem.click()
Thanks in advance. | true | 46,844,771 | 1.2 | 0 | 0 | 0 | Good question.
Try to use another selector, for example a CSS class, or use the XPath method contains().
Example: //div[contains(text(), "checkbox")]
I can help you further if you can provide the source code of the page or of the needed element. | 0 | 400 | 0 | 1 | 2017-10-20T08:03:00.000 | python,css,selenium,selenium-webdriver,ui-automation | id of xpath is getting changed every time in selenium python 2.7 chrome | 1 | 1 | 1 | 46,849,694 | 0
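Applied to the element from the question, a sketch that assumes the "gwt-uid-" prefix stays stable while only the number changes:

```python
# Match on the stable "gwt-uid-" prefix instead of the full dynamic id.
elem = driver.find_element_by_xpath('//*[contains(@id, "gwt-uid-")]/div/div/span')
elem.click()
```

If several elements share the prefix, narrow the expression with another attribute, or use find_elements_by_xpath and pick by index.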
1 | 0 | Now that I have created a website, I wish users could connect to it within the local network. However, my website can currently only be accessed from localhost (my PC); the others are not able to connect to my website when typing my IP address, for example xxx.xxx.xxx.xxx:8000, into their browser. I launch the service on my localhost using # python manage.py runserver. May I know if there is a way/command to allow the others to connect to my website?
Note: I have tried # python manage.py runserver 0.0.0.0:8000 as well, which allows all incoming connections, but it didn't work. | true | 46,909,905 | 1.2 | 0 | 0 | 8 | In settings.py write
ALLOWED_HOSTS = ['*'] and run python manage.py runserver 0.0.0.0:8000
Note: you can use any port instead of 8000 | 0 | 3,829 | 0 | 3 | 2017-10-24T11:54:00.000 | python,django,server,web | How to allow others to connect to my Django website? | 1 | 1 | 2 | 46,910,029 | 0 |
0 | 0 | I've already built a Python script that scrapes some data from a website that requires a login. My question is: how can I transform this script into an API? For example, I send the API a username, password and the data required, and it returns the data needed. | true | 46,917,721 | 1.2 | 0 | 0 | 0 | A web API is nothing but an HTTP layer over your custom logic so that requests can be served the HTTP way (GET, PUT, POST, DELETE).
Now, the question is, how?
The easiest way is to use already available packages called "web frameworks" which python has in abundance.
The easiest one to implement would probably be Flask.
For a more robust application, you can use Django as well. | 0 | 342 | 0 | 0 | 2017-10-24T18:36:00.000 | python,api,web-scraping | Web Scraping Api with Python | 1 | 1 | 1 | 46,917,799 | 0
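A minimal Flask sketch of that idea; the /scrape route, the payload shape and the run_scraper() wrapper are assumptions (run_scraper stands in for the existing script refactored into a function):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_scraper(username, password):
    return {'result': 'scraped data'}  # hypothetical: call your existing scraping code here

@app.route('/scrape', methods=['POST'])
def scrape():
    creds = request.get_json()  # expects {"username": "...", "password": "..."}
    return jsonify(run_scraper(creds['username'], creds['password']))

if __name__ == '__main__':
    app.run()
```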
0 | 0 | Okay, I can send audio with some URL in inline mode. But how can I send local audio from a directory? The Telegram Bot API returns me this:
A request to the Telegram API was unsuccessful. The server returned
HTTP 400 Bad Request. Response body:
[b'{"ok":false,"error_code":400,"description":"Bad Request:
CONTENT_URL_INVALID"}'] | false | 46,940,780 | 0 | 1 | 0 | 0 | InlineQueryResultAudio only accepts links while InlineQueryResultCachedAudio only accepts file_id. What you can do is post the files to your own server or upload it elsewhere to use the former one, or use SendAudio to get the file_id of it and use the latter one. | 0 | 505 | 0 | 0 | 2017-10-25T19:37:00.000 | python,telegram-bot,python-telegram-bot | Telegram Bot API InlineQueryResultAudio | 1 | 1 | 1 | 46,947,479 | 0 |
0 | 0 | I'm trying to create a (very minimalist) web server with Python using the socket module.
I have a problem with, I think, web browser caching.
Explanation:
When I load a page, the first time it works. It will work 2-3 more times at the beginning, then it will load only once for every two requests made by my browser (I use Firefox). I press F5, it works; I press F5 again, it loads nothing indefinitely; I press F5 again and it works.
I looked at my Python console, and it seems that Firefox doesn't send any request when the loading of the page fails.
When I press Ctrl + F5, it ALWAYS works: Firefox sends a request each time and my web server sends it the page.
I tried adding HTTP headers to prevent caching (Cache-Control, Pragma, Expires), but it still only works one time in two. I tested with Internet Explorer, and it works better, but it sometimes fails (out of 4-5 requests, it will fail only once).
So, my question is:
Why do Firefox and IE sometimes not send a request and still seem to wait for something? What is the web server supposed to do?
Thanks. | true | 46,949,681 | 1.2 | 0 | 0 | 0 | This is not about caching. You don’t close the sockets after sending a response nor do you tell the browser that you don’t support multiple requests. So the browser will assume it can request again with the same connection.
Close the connection and if you claim to support HTTP 1.1 add an appropriate Connection header.
Also don’t bind/close the server socket after each request. There’s no need and it will just make things not work. Bind on startup, close on shutdown.
The code might also fail on non-ASCII data, since you send the length of the string and not of the actual bytes you send. These may be different. | 0 | 70 | 0 | 0 | 2017-10-26T08:47:00.000 | python,http | Python/HTTP - How does web browsers cache work? | 1 | 1 | 1 | 46,949,933 | 0
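A sketch of what the fixed response path could look like; html and conn stand in for the page string and the accepted socket from the question's server, and the header details are assumptions:

```python
body = html.encode('utf-8')  # work with bytes, not str

headers = (
    'HTTP/1.1 200 OK\r\n'
    'Content-Type: text/html; charset=utf-8\r\n'
    'Content-Length: %d\r\n'  # length of the *encoded* body
    'Connection: close\r\n'
    '\r\n' % len(body)
)
conn.sendall(headers.encode('ascii') + body)
conn.close()  # close the per-request socket; keep the listening socket bound for the server's lifetime
```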
1 | 0 | I want to iterate the XPath //span[@class="postNum" and contains(text(), 1)] over the range 1 to 10 and store the results in a variable. I want this to be done on HTML and not XML.
pseudo code:
for e in range(1,11):
xpathvar[e]='//span[@class="postNum and contains(text(),e)]'
how to implement this so that xpathvar[1] will contain the first XPath with e=1. I cannot do this because the element on the RHS is a string. | true | 47,011,332 | 1.2 | 0 | 0 | 0 | I solved it myself. What I wanted was to build each XPath by string manipulation, substituting the digit coming from the for loop into the expression, and then traverse all the XPaths by iteration.
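A sketch of that substitution with lxml; page_source is a placeholder for the HTML text you fetched, and note the class value needs its own quotes inside the expression:

```python
from lxml import html

page_source = open('page.html').read()  # placeholder: the HTML you scraped
tree = html.fromstring(page_source)

xpathvar = {}
for e in range(1, 11):
    expr = '//span[@class="postNum" and contains(text(), "%d")]' % e
    xpathvar[e] = tree.xpath(expr)  # list of matching elements for this digit
```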
0 | 0 | Fetch failed: Unable to find remote helper for 'https'
When I tried to fetch from my GitHub repository in PyCharm, the above is the message I ended up getting. I was wondering how I could fix this. | true | 47,023,092 | 1.2 | 0 | 0 | 1 | This may seem like an overkill way of handling the problem, but I fixed it myself by re-installing Git on my machine. That actually seems to be the fix for this. Another thing you could do is try Git Bash (the Git for Windows app) in the future. | 0 | 161 | 0 | 2 | 2017-10-30T19:46:00.000 | python,windows,github,pycharm | Can't seem to fetch from GitHub repository in PyCharm | 1 | 1 | 1 | 47,023,220 | 0
0 | 0 | I am creating a Python script to configure router settings remotely, but I recently stumbled on a problem: how to log out or close the session after the job is done?
From searching I found that Basic Authentication doesn't have a logout option. How can I solve this in a Python script? | false | 47,044,392 | 0 | 0 | 0 | 0 | With python requests you can open your session, do your job, then log out with:
r = requests.get('logouturl', params={...})
The logout action is just an HTTP GET request. | 0 | 395 | 0 | 0 | 2017-10-31T20:54:00.000 | python,python-requests,basic-authentication | Basic-Auth session using python script | 1 | 2 | 2 | 47,045,020 | 0
0 | 0 | I am creating a Python script to configure router settings remotely, but I recently stumbled on a problem: how to log out or close the session after the job is done?
From searching I found that Basic Authentication doesn't have a logout option. How can I solve this in a Python script? | true | 47,044,392 | 1.2 | 0 | 0 | 0 | Basic auth doesn't have a concept of a logout, but your router's page should have some implementation. If not, perhaps it has a timeout and you just leave it.
Since you're using the requests module, it may be difficult to do an actual logout if there is no endpoint or parameter for it. I think the best one can do at that point is log in again but with invalid credentials. Studying the structure of the router's pages and the parameters that appear in the URLs could give you more options.
If you want to go a different route and use something like a headless web browser, you could actually click a logout button if it exists. Something like Selenium can do this. | 0 | 395 | 0 | 0 | 2017-10-31T20:54:00.000 | python,python-requests,basic-authentication | Basic-Auth session using python script | 1 | 2 | 2 | 47,044,849 | 0
1 | 0 | Every AWS Account has an email associated to it.
How can I change that email address for that account using boto3? | false | 47,063,752 | 0.379949 | 1 | 0 | 4 | It is not possible to change an account's email address (Root) programmatically. You must log in to the console using Root credentials and update the email address. | 0 | 494 | 0 | 0 | 2017-11-01T21:13:00.000 | python,amazon-web-services,email,boto3,amazon-iam | How can I change the AWS Account Email with boto3? | 1 | 2 | 2 | 47,064,014 | 0 |
1 | 0 | Every AWS Account has an email associated to it.
How can I change that email address for that account using boto3? | false | 47,063,752 | 0 | 1 | 0 | 0 | No. As of Oct 2019, you can't update account information (including the email) using boto or any other AWS-provided SDK. | 0 | 494 | 0 | 0 | 2017-11-01T21:13:00.000 | python,amazon-web-services,email,boto3,amazon-iam | How can I change the AWS Account Email with boto3? | 1 | 2 | 2 | 58,530,022 | 0
0 | 0 | I'm trying to write a Telegram bot, and I need help here:
bot.deleteMessage(chat_id=chatId, message_id=mId)
This code returns the following error: 400 Bad Request: message can't be deleted
The bot has all the rights needed for deleting messages. | false | 47,064,078 | 0 | 1 | 0 | 0 | A bot can delete messages:
1. in groups:
only its own messages if it is not an admin, otherwise also messages from other users;
2. in private:
only its own messages;
and in both cases only if the message is not older than 48h.
Probably, since you said in the comments that the messages aren't older than 48h, you may be doing it wrong because of the first two points. | 0 | 2,259 | 0 | 1 | 2017-11-01T21:36:00.000 | python,telegram,telegram-bot,python-telegram-bot | Telegram Bot deleteMessage function returns 400 Bad Request Error | 1 | 1 | 2 | 47,100,798 | 0
1 | 0 | When scraping a website using bs4, the response object shows "access denied" and Forbidden. How do I solve this? | false | 47,088,034 | 0 | 0 | 0 | 0 | Make sure that you have added the required headers, such as 'User-Agent', before firing the GET request. In most cases, if 'User-Agent' is not provided, you'll end up with a 403 response. | 0 | 194 | 0 | 0 | 2017-11-03T03:26:00.000 | python-2.7,beautifulsoup | 403 Forbidden or access denied for some website why? | 1 | 1 | 1 | 47,088,273 | 0
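A minimal sketch of that fix; the URL and the exact User-Agent string are placeholders (any real browser UA works):

```python
import requests
from bs4 import BeautifulSoup

url = 'https://example.com/page'  # placeholder
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

resp = requests.get(url, headers=headers)
soup = BeautifulSoup(resp.text, 'html.parser')
```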
1 | 0 | I have a Django view that takes in a JSON object, and from that object I am able to get a URI. The URI contains an XML object. What I want to do is get the data from the XML object, but I am not sure how to do this. I'm using Django REST framework, which I am fairly inexperienced with, but I do not know the URI until I search the JSON object in the view. I have tried parsing it in the template but ran into CORS issues, among others. Any ideas on how this could be done in the view? My main issue is not so much parsing the XML but how to get around the CORS issue, which I have no experience in dealing with. | false | 47,102,090 | 0 | 0 | 0 | 0 | You don't really do this in the view part of Django.
What you should do is take the JSON, find the URI, get the content of the URI through urllib, requests, etc., get the relevant content from the response, add a new field to the JSON and then pass it to your view. | 0 | 68 | 0 | 0 | 2017-11-03T18:10:00.000 | python,django,xml,django-rest-framework | Getting data from a uri django | 1 | 1 | 1 | 47,103,539 | 0
0 | 0 | When I run sudo pip3 install google-api-python-client, it installs successfully, but when I try to do import google.oauth2, it doesn't find it. It just says ImportError: No module named 'google' | true | 47,112,803 | 1.2 | 0 | 0 | 0 | pip3 was using the wrong Python, so I had to specify exactly which Python version I had to use; in my case, python3.5 | 1 | 37 | 0 | 0 | 2017-11-04T16:20:00.000 | python-3.x,google-oauth,youtube-data-api | google youtube api installation problems | 1 | 1 | 1 | 47,530,288 | 0
1 | 0 | Every time I make an external request (including to google.com) I get this response:
HTTPConnectionPool(host='EXTERNALHOSTSITE', port=8080): Max retries exceeded with url: EXTERNALHOSTPARAMS (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x105d8d6d0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',)) | true | 47,116,246 | 1.2 | 0 | 0 | 2 | It does seem that your server cannot resolve the hostname into an IP; this is probably not a Django or Python problem but a server network setup issue.
Try to reach the same URL with ping, wget or curl, or troubleshoot DNS with nslookup. | 0 | 138 | 0 | 0 | 2017-11-04T22:28:00.000 | python,django,python-2.7,python-requests | Django can't make external connections with requests or urllib2 on development server | 1 | 1 | 1 | 47,116,517 | 0
0 | 0 | I've been looking around for a solution to this and I'm completely stuck.
The icecast/shoutcast libs all seem to be Python 2.7 only, which is an issue as I'm using 3.6.
Any ideas for where to start with broadcasting and authenticating would be very useful. I'm looking to stream mp3 files.
TIA. | false | 47,121,053 | 0 | 1 | 0 | 0 | Use liquidsoap to generate your audio stream(s) and output them to shoutcast and/or icecast2 servers. I currently have liquidsoap, shoutcast, icecast2 and apache2 all running on the same Ubuntu 18.04 server. liquidsoap generates the audio stream and outputs it to both shoutcast and icecast2. Listeners can use their browser to access either the shoutcast stream at port 8000 or the icecast2 stream at port 8010. It works very well 24 x 7.
You can have multiple streams and liquidsoap has many features including playlists and time-based (clock) actions. See the liquidsoap documentation for examples to create audio streams from your mp3 or other format audio files. Best of all liquidsoap is free. | 0 | 2,107 | 0 | 1 | 2017-11-05T11:31:00.000 | python,python-3.x,streaming,audio-streaming,shoutcast | Python broadcast to shoutcast (DNAS) or icecast | 1 | 1 | 2 | 61,761,330 | 0 |
1 | 0 | Is it possible to use Scrapy to generate a sitemap of a website including the URL of each page and its level/depth (the number of links I need to follow from the home page to get there)? The format of the sitemap doesn't have to be XML, it's just about the information. Furthermore I'd like to save the complete HTML source of the crawled pages for further analysis instead of scraping only certain elements from it.
Could somebody experienced in using Scrapy tell me whether this is a possible/reasonable scenario for Scrapy and give me some hints on how to find instructions? So far I could only find far more complex scenarios but no approach for this seemingly simple problem.
Addon for experienced webcrawlers: Given it is possible, do you think Scrapy is even the right tool for this? Or would it be easier to write my own crawler with a library like requests etc.? | false | 47,160,587 | -0.197375 | 0 | 0 | -1 | Yes, it's possible to do what you're trying with Scrapy's LinkExtractor library. This will help you document the URLs for all of the pages on your site.
Once this is done, you can iterate through the URLs and the source (HTML) for each page using the urllib Python library.
Then you can use RegEx to find whatever patterns you're looking for within the HTML for each page in order to perform your analysis. | 0 | 1,378 | 0 | 5 | 2017-11-07T14:39:00.000 | python,scrapy,scrapy-spider | Sitemap creation with Scrapy | 1 | 1 | 1 | 61,992,037 | 0 |
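Not from the answer above, but since the question asks whether Scrapy fits: a sketch of a spider that records URL, depth and the full HTML (Scrapy 1.4+ for response.follow; the domain is a placeholder). The depth value comes from the DepthMiddleware that Scrapy enables by default.

```python
import scrapy
from scrapy.linkextractors import LinkExtractor

class SiteMapSpider(scrapy.Spider):
    name = 'sitemap'
    start_urls = ['https://example.com/']  # placeholder home page

    def parse(self, response):
        yield {
            'url': response.url,
            'depth': response.meta.get('depth', 0),  # set by the default DepthMiddleware
            'html': response.text,                   # full page source for later analysis
        }
        for link in LinkExtractor(allow_domains=['example.com']).extract_links(response):
            yield response.follow(link.url, callback=self.parse)
```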
0 | 0 | We are running a Python Selenium script in Windows Chrome, and are now faced with an issue which we cannot resolve.
The script (medium sized) runs to completion only say 1 in 3 times.
Other times, it freezes in the middle, maybe after 10 steps, or 15 steps- after that there is no response whatsoever.
The only clue we got was an error printed out (usually after 10 minutes of waiting): access denied.
After this hang, the only option is to
Kill the browser or
Kill the process
We tried --disable-extensions, and having a user-dir etc., but to no avail.
There is an anti-virus (Symantec) running on the machine, which cannot be disabled (enterprise-level security settings).
Has anyone faced this issue?
Is there any solution? Please let me know. | true | 47,194,189 | 1.2 | 0 | 0 | 0 | Found that the issue was due to the AV. Stopped the antivirus for some time and it is running properly now. | 0 | 95 | 0 | 0 | 2017-11-09T04:58:00.000 | python,windows,google-chrome,selenium,freeze | Chrome gets hung randomly - Python Selenium Windows | 1 | 1 | 1 | 47,257,116 | 0
1 | 0 | Can I keep my spot instance in use by modifying my max bid price programmatically (Python boto) as soon as the bid price increases, so as to stop it from terminating itself, and then manually terminate it once I am done with my work? I know I can use the latest spot block to use the spot instance for up to 6 hours, but it reduces the profit margin. So I wanted to know if I can modify my bid pricing on the go based on the current demand.
Thanks. | true | 47,199,421 | 1.2 | 0 | 0 | 5 | No way! You cannot change your max price once the instance is running. In order to change the price of your bid, you must cancel it and place another bid. | 0 | 2,432 | 0 | 3 | 2017-11-09T10:27:00.000 | python,amazon-web-services,amazon-ec2,boto3 | Changing max bid price of AWS Spot instance | 1 | 2 | 2 | 47,202,059 | 0
1 | 0 | Can I keep my spot instance in use by modifying my max bid price programmatically (Python boto) as soon as the bid price increases, so as to stop it from terminating itself, and then manually terminate it once I am done with my work? I know I can use the latest spot block to use the spot instance for up to 6 hours, but it reduces the profit margin. So I wanted to know if I can modify my bid pricing on the go based on the current demand.
Thanks. | false | 47,199,421 | 0.099668 | 0 | 0 | 1 | No. It is not possible to change the bid price on an existing spot request. You will need to create a new spot request with the new bid price. However, any EC2 instances allocated with the first request will always be tied to that first request.
If your work cannot handle an EC2 instance terminating prematurely, then spot instances are not right for your work and you should use OnDemand or Reserved instances. | 0 | 2,432 | 0 | 3 | 2017-11-09T10:27:00.000 | python,amazon-web-services,amazon-ec2,boto3 | Changing max bid price of AWS Spot instance | 1 | 2 | 2 | 47,204,123 | 0 |
1 | 0 | Currently I'm in the process of tagging every S3 bucket I have using boto3. Unlike for a resource such as Lambda, where tagging only adds extra tags while keeping the old ones, s3.put_bucket_tagging overwrites any previous tags. Is there a way to only add tags, rather than overwrite them?
Secondly, I have created a method to take the current tags, add the new tags on, and then overwrite the tags with those values, so I don't lose any tags. But some of these S3 buckets are created by CloudFormation and thus are prefixed with aws: which gives me the error Your TagKey cannot be prefixed with aws: when I try to take the old tags and re-put them with the new tags.
A fix for either of these to give me the ability to automate tagging of every s3 bucket would be the best solution. | false | 47,206,395 | 0.379949 | 0 | 0 | 2 | You are out of luck. If the S3 bucket was created by a CFT, then
you cannot add new tags, or
you add new tags and lose the tags created by the CFT (and then your delete-stack will fail unless you exclude that S3 resource from deletion).
You can try updating the stack with new tags, as suggested by @jarmod. | 0 | 1,424 | 0 | 2 | 2017-11-09T16:03:00.000 | python,amazon-web-services,amazon-s3,boto3 | Using boto3 to add extra tags to S3 buckets | 1 | 1 | 1 | 47,210,265 | 0
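For non-CloudFormation buckets, a sketch of the merge-then-put approach the question describes. Note that because put_bucket_tagging replaces the whole tag set and rejects aws:* keys, this version necessarily drops any aws:* tags (the second trade-off in the answer); the error handling and names are assumptions:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def add_tags(bucket, new_tags):
    try:
        current = s3.get_bucket_tagging(Bucket=bucket)['TagSet']
    except ClientError:  # e.g. NoSuchTagSet when the bucket has no tags yet
        current = []
    merged = {t['Key']: t['Value'] for t in current}
    merged.update(new_tags)
    # put_bucket_tagging replaces everything and rejects system-owned aws:* keys.
    tag_set = [{'Key': k, 'Value': v} for k, v in merged.items()
               if not k.startswith('aws:')]
    s3.put_bucket_tagging(Bucket=bucket, Tagging={'TagSet': tag_set})
```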
0 | 0 | I wrote a Python script for mass commenting on a certain website. It works perfectly on my computer, but when my friend tried running it on his, a captcha would appear on the login page, whereas on my machine no captcha appears. I tried resetting the caches and cookies but still no captcha. I tried resetting the browser settings but still no luck, and on the other system the captcha always appears. If you could list the reasons why this is happening, that would be great. Thanks | true | 47,215,370 | 1.2 | 1 | 0 | 0 | His IP is probably flagged. Also recaptcha will automatically throw a captcha if an outdated or odd user agent is detected. | 0 | 40 | 0 | 0 | 2017-11-10T03:47:00.000 | python,python-2.7,python-3.x,recaptcha,captcha | Captcha not Appearing or Malfunctioning on one system and working on the other | 1 | 1 | 1 | 47,215,397 | 0
0 | 0 | The Python client throws "call dropped by load balancing policy grpc" if the remote server restarts, and the connection is never recovered afterwards.
The problem is hard to reproduce consistently, but we confirmed that if a remote server restarts, the Python client has a chance of starting to send error messages like this.
Other grpc clients, like the Java one, are working fine. I searched online, and it seems related to the load balancing policy; the suggestion is to change from 'roundrobin' to 'pick first'. But I cannot find where to add this argument in the Python client. | false | 47,217,743 | 0.197375 | 0 | 0 | 1 | revert fly tiger resolve this issue | 0 | 267 | 0 | 0 | 2017-11-10T07:31:00.000 | python,grpc | python client throw error "call dropped by load balancing policy grpc" when remote server restart | 1 | 1 | 1 | 48,276,627 | 0
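To the question's last point: in the Python client, the load-balancing policy can be supplied as a channel option. A sketch (the target address is a placeholder, and this assumes a grpcio version that honors the grpc.lb_policy_name channel argument):

```python
import grpc

channel = grpc.insecure_channel(
    'server.example.com:50051',                       # placeholder target
    options=[('grpc.lb_policy_name', 'pick_first')],  # override the LB policy
)
```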
0 | 0 | I have the following question: is there any possibility to increase the speed of sending a single API request? I want to send a request and obtain JSON as a result. When I use requests.get(url), the time is about 150 ms.
I would like to make this time lower. My aim is to speed up a single request. Is there any possibility to do this? | false | 47,270,175 | 0 | 0 | 0 | 0 | GLOBAL ANSWER (read these methods; besides these, you can try a lot of modules that can send/receive your data to your target server):
Check your internet connection speed; pause/stop/disable all of your downloading processes.
Disable your firewall/antivirus/OS default defender or any kind of security program that scans your internet protocols and suspends your network I/O for a while.
Check your VPN/proxy or any kind of tunneling program (because they can inspect and suspend your connections and make your internet connection slower).
Check your default DNS IPs; choose the fastest DNS server in the world, like Google's 8.8.8.8, or others...
Clean your virtual RAM buffer and check your OS disk I/O speed (close your heavy programs, such as converter programs...).
Change your programming language!!! Python is good, but it is not the fastest language for sending/receiving data over the web. You could even try low-level programming (for your requests, Python goes through your OS's internal services; this bridge can make your connection slower, and you will not have this bridge if you use OS-native languages, like the .NET languages on Windows).
You can compress your data to make your request faster.
In the end, it depends on your target server's response and your OS! Linux has a different network I/O handler from Windows!
Good luck ... | 0 | 79 | 0 | 2 | 2017-11-13T17:30:00.000 | python,api,networking,request | Python, api requests | 1 | 1 | 2 | 47,270,840 | 0
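Not part of the answer above, but one concrete client-side lever worth mentioning: if the 150 ms is dominated by connection setup, reusing a keep-alive connection via requests.Session skips the TCP/TLS handshake on subsequent calls. A minimal sketch (URLs are placeholders):

```python
import requests

session = requests.Session()  # keeps the underlying TCP/TLS connection alive between calls

session.get('https://api.example.com/warmup')       # first call pays the handshake cost
resp = session.get('https://api.example.com/data')  # later calls reuse the open connection
print(resp.json())
```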
0 | 0 | I wanted to run a discord bot from my Raspberry Pi so I could have it always running, so I transferred over the bot file. BTW, this bot was made in Python. I get an error saying no module named discord. This is because I don't have discord installed. Whenever I try to use pip3 install discord I get a message saying that it was successful, but it was installed under Python 3.4. I need it to be installed under Python 3.5 so that my bot's code will run properly. If I try to use python3 -m pip install discord I get the error /usr/local/bin/python3: No module named pip. When I run pip -V I get 3.4. I want to make the version 3.5 instead of 3.4, but even after running the get-pip.py file I am still on pip 3.4. Any help? | true | 47,278,382 | 1.2 | 1 | 0 | 1 | I had a similar problem on a different machine. What I did to have the Python 3.6 interpreter as the default one for the python3 command was this:
First, edit your .bashrc file to include the line export PATH=/path/to/python/bin:$PATH (in this case, I will be using /home/pi/python). Then, download Python 3.6 using wget https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tgz. Unarchive it using tar -zxvf Python-3.6.3.tgz, and cd into the directory. Then, configure it by doing ./configure --prefix=$HOME/python (or the path you used in .bashrc), build it using make, and install it with sudo make install. Afterwards, reboot the Raspberry Pi, and you should now be able to use Python 3.6 with the python3 command | 0 | 1,062 | 0 | 0 | 2017-11-14T05:36:00.000 | python,pip,discord.py | Pip Not Working On Raspberry Pi For Discord Bot | 1 | 1 | 1 | 47,361,577 | 0
0 | 0 | What I have
Printer internal IP
ZPL code on printer
parameters to plug into ZPL code
Is there a way to define a label and send it to the printer via Python? I would need to specify which label type to use on the printer since it can have multiple .zpl label codes stored on it.
Are there dedicated libraries? Otherwise, what are some basic socket functions to get me started? | true | 47,313,447 | 1.2 | 0 | 0 | 2 | OK I am not a python expert here but the general process is:
Open a TCP connection to port 9100
Write ZPL to your connection
Close your connection
You will want to look at the ^DF and ^XF commands in the ZPL programming guide to make sure you are using the templates right, but it is a pretty simple process.
If you are concerned about whether or not the printer is ready to print you could look at the ~hs command to get the current status.
In the end, there is a C# and Java SDK available for the printer which has helper functions to push variables stored in Maps to a template, but the JNI calls are probably more involved than just opening a TCP connection... | 0 | 2,231 | 0 | 0 | 2017-11-15T17:12:00.000 | python,python-2.7,zebra-printers,zpl | Python send label to Zebra IP printer | 1 | 1 | 1 | 47,314,727 | 0
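A minimal sketch of steps 1-3; the printer IP and the exact ZPL are placeholders, and the ^XF/^FN fields must match how the stored format was defined with ^DF:

```python
import socket

PRINTER = ('192.168.1.50', 9100)  # printer's internal IP, raw-print port

# Recall a stored format and fill its variable field; adjust to your stored labels.
zpl = '^XA^XFE:MYLABEL.ZPL^FS^FN1^FDHello^FS^XZ'

s = socket.create_connection(PRINTER, timeout=10)
s.sendall(zpl.encode('ascii'))
s.close()
```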
1 | 0 | I wish to monitor the CSS of a few websites (these websites aren't my own) for changes and receive a notification of some sort when they do. If you could please share any experience you've had with this to point me in the right direction for how to code it I'd appreciate it greatly.
I'd like this script/app to notify a Slack group on change, which I assume will require a webhook.
Not asking for code, just any advice about particular APIs and other tools that may be of benefit. | false | 47,319,351 | 0 | 0 | 0 | 0 | One possible solution:
Crawl the website for .css files; save change dates and/or file sizes.
After each crawl, compare the information, and if changes are detected use the Slack API to notify. I haven't worked with Slack, so for this part of the solution maybe someone else can give advice. | 0 | 197 | 0 | 0 | 2017-11-15T23:58:00.000 | python,css,web,automation,bots | How to monitor changes to a particular website CSS? | 1 | 2 | 3 | 47,319,389 | 0
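A sketch of that loop using a content hash instead of dates/sizes, plus a Slack incoming webhook for the notification; both URLs and the polling interval are placeholders:

```python
import hashlib
import time

import requests

CSS_URL = 'https://example.com/static/site.css'           # stylesheet to watch
WEBHOOK = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # Slack incoming-webhook URL

last = None
while True:
    digest = hashlib.sha256(requests.get(CSS_URL).content).hexdigest()
    if last is not None and digest != last:
        requests.post(WEBHOOK, json={'text': 'CSS changed: %s' % CSS_URL})
    last = digest
    time.sleep(3600)  # poll hourly
```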
1 | 0 | I wish to monitor the CSS of a few websites (these websites aren't my own) for changes and receive a notification of some sort when they do. If you could please share any experience you've had with this to point me in the right direction for how to code it I'd appreciate it greatly.
I'd like this script/app to notify a Slack group on change, which I assume will require a webhook.
Not asking for code, just any advice about particular APIs and other tools that may be of benefit. | false | 47,319,351 | 0 | 0 | 0 | 0 | I would suggest using Github in your workflow. That gives you a good idea of changes and ways to revert back to older versions. | 0 | 197 | 0 | 0 | 2017-11-15T23:58:00.000 | python,css,web,automation,bots | How to monitor changes to a particular website CSS? | 1 | 2 | 3 | 47,319,375 | 0 |
1 | 0 | When I delete through the Console all files from a "folder" in a bucket, that folder is gone too since there is no such thing as directories - the whole path after the bucket is the key.
However, when I programmatically move these files (copy & delete) through the REST API, the folder remains, empty. I must therefore write additional logic to check for these and remove them explicitly.
Isn't that a bug in the REST API handling? I had expected the same behavior regardless of the method used. | false | 47,321,613 | 0.53705 | 0 | 0 | 3 | Turns out that you can safely remove all objects ending with / if you don't need them once empty. The "contents" will not be deleted.
If you are using the Google Console, you must create a folder before uploading to it. That folder is therefore an explicit object that will remain even if empty. Apparently the same happens when uploading with tools like Cyberduck.
But if you upload the file using the REST API with its full path, i.e. bucket/folder/file, the folder is visually implicit but it is not created as such. So when removing the file, there is no folder left behind, since it wasn't there in the first place.
Since the expected behavior for my use case is to auto-remove empty folders, I just have a pre-processing routine that deletes all blobs ending with / | 0 | 1,252 | 0 | 3 | 2017-11-16T04:27:00.000 | python,rest,google-cloud-storage | Empty 'folder' not removed in GCS | 1 | 1 | 1 | 47,358,658 | 0
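A sketch of such a pre-processing routine with the google-cloud-storage client; the bucket name is a placeholder:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-bucket')  # placeholder bucket name

# Delete the zero-byte "folder" placeholder objects left behind after a move.
for blob in bucket.list_blobs():
    if blob.name.endswith('/') and blob.size == 0:
        blob.delete()
```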
0 | 0 | Using Python 2.7.12, Selenium 3.4.3, Chrome Version 62.0.3202.94 (Official Build) (32-bit).
While trying to ensure the presence of an element as follows, no exception is raised when x is not present:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as WDW
from selenium.webdriver.support import expected_conditions as EC
try:
WDW(driver, 20).until(EC.presence_of_element_located((By.XPATH, x)))
except Exception as e:
print(e)
2nd option:
try:
we = driver.find_element(By.XPATH, x)
except Exception as e:
print(e)
Is there an issue with the syntax, or a logical issue? | true | 47,331,033 | 1.2 | 0 | 0 | 0 | implicitly_wait was set to 10 minutes, which caused this behaviour. Reducing the value gave the expected results. | 0 | 30 | 0 | 1 | 2017-11-16T13:33:00.000 | python-2.7,selenium-chromedriver | Python Selenium 3.4.3 do not raise timeouts, and goes to hung state | 1 | 1 | 1 | 47,506,732 | 0
0 | 0 | I would like to release the code for an application I made in Python to the public, just as a pastebin link. However, the app needs a client ID, secret and user agent to access the API (for reddit in this case).
How can I store the key strings in the Python code without giving everyone free rein over my API access? Maybe a hashing function or similar? | true | 47,337,762 | 1.2 | 1 | 0 | 4 | You probably don't want to provide the actual keys and other private information at all, hashed or otherwise. Besides being a security issue, it probably violates all sorts of agreements you implicitly made with the provider.
Anyone using the application should be able to get their own key in the same manner you did. You can (and should) provide instructions for how to do this.
Your application should expect the key information in a well documented format, usually a configuration file, that the user must provide once they have obtained the keys in question. You should document the exact format you expect to read the key in as well.
Your documentation could also include a dummy file with empty or obvious placeholder values for the user to fill in. It may be overkill, but I would recommend making sure that any placeholders you use are guaranteed not to be valid keys. This will go a long way to avoiding any accidents with the agreements you could violate otherwise. | 0 | 51 | 0 | 4 | 2017-11-16T19:37:00.000 | python,python-3.x,hash | How to release a Python application that uses API keys to the public? | 1 | 1 | 1 | 47,337,902 | 0 |
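For illustration, a minimal sketch of the config-file approach described above, assuming an INI file the user creates from your documented template; the section and key names are hypothetical:

```python
import configparser

config = configparser.ConfigParser()
config.read('config.ini')  # user-supplied; keep it out of version control

client_id = config['reddit']['client_id']          # hypothetical section/keys
client_secret = config['reddit']['client_secret']
user_agent = config['reddit']['user_agent']
```

Your released code can then ship with a config.ini.example containing obvious, invalid placeholders instead of real keys.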
0 | 0 | I'm creating a Reddit bot with PRAW, and I want to have the bot do things based on private messages sent to it. I searched the documentation for a bit, but couldn't find anything on reading private messages, only sending them. I want to get both the title and the content of the message. So how can I do this? | true | 47,340,391 | 1.2 | 1 | 0 | 2 | Didn't find it in the docs, but a friend who knows a bit of PRAW helped me out.
Use for message in r.inbox.messages() (where r is a Reddit instance) to get the messages.
Use message.body to get the content, and message.subject to get the title. | 0 | 527 | 0 | 1 | 2017-11-16T22:45:00.000 | python,praw | How to read private messages with PRAW? | 1 | 1 | 1 | 47,340,599 | 0
0 | 0 | I understand from previous questions that access to Google Translate API may have recently changed.
My goal is to translate individual tweets in a dataframe from X Language to English.
I tried to set up the Google Cloud Translate API, with no success.
I have set up the gcloud SDK and enabled billing, and it looks like the authentication is OK. But still no success.
Has anyone else had any recent experience with it using R and/or Python? | false | 47,378,315 | 0 | 1 | 0 | 0 | Just as an update, for anyone seeking to translate dataframes/variables
translateR doesn't seem to accept the Google API key.
However, everything works very smoothly using the "googleLanguageR" package. | 0 | 200 | 0 | 2 | 2017-11-19T15:03:00.000 | r,python-3.x,google-translate | Google Cloud Translate API : challenge setup (R and/or Python) | 1 | 1 | 1 | 47,410,428 | 0
0 | 0 | Python Telethon
I need to receive messages from the channel
Error:
>>> client.get_message_history(-1001143136828)
Traceback (most recent call last):
File "messages.py", line 23, in
total, messages, senders = client.get_message_history(-1001143136828)
File "/Users/kosyachniy/anaconda/lib/python3.5/site-packages/telethon/telegram_client.py", line 548, in get_message_history
add_mark=True
KeyError: 1143136828 | false | 47,380,618 | 0 | 1 | 0 | 0 | Error in library
Need to update: pip install telethon --upgrade | 0 | 2,377 | 0 | 2 | 2017-11-19T18:43:00.000 | python,api,telegram,telethon | Telethon How do I get channel messages? | 1 | 1 | 1 | 47,380,957 | 0 |
0 | 0 | I am trying to get a policy from the boto3 client, but there is no method to do so using the policy name. By wrapping the create_policy method in a try-except block I can check whether a policy exists or not. Is there any way to get a policy ARN by name using boto3, other than listing all policies and iterating over them? | false | 47,433,414 | 0.099668 | 0 | 0 | 1 | You will need to iterate over the policies to get policy names. I am not aware of a get-policy type API that uses policy names; they take only policy ARNs.
Is there a reason that you do not want to get a list of policies, other than to avoid downloading the list? | 0 | 2,854 | 0 | 4 | 2017-11-22T11:20:00.000 | python,amazon-web-services,boto3 | boto3 iam client: get policy by name | 1 | 1 | 2 | 47,442,387 | 0
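That iteration can at least be kept server-paginated. A sketch (Scope='Local' restricts the listing to customer-managed policies):

```python
import boto3

def get_policy_arn(name):
    iam = boto3.client('iam')
    paginator = iam.get_paginator('list_policies')
    for page in paginator.paginate(Scope='Local'):  # customer-managed only
        for policy in page['Policies']:
            if policy['PolicyName'] == name:
                return policy['Arn']
    return None  # no policy with that name
```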
1 | 0 | Does anybody have the solution yet? The script was working perfectly fine until yesterday, but suddenly it stopped working. Don't know why.. :(
I tried searching the answer all over Google, but didn't find yet.
Tried adding options.add_argument('--log-level=3') as well, but no luck.
Can someone help me out with this? | false | 47,439,668 | -0.099668 | 0 | 0 | -1 | What is going on here? I've had this issue before and my solution was to uninstall and reinstall chrome and it worked fine. But now, even uninstalling and reinstalling doesn't work and this 'Devtools listening...' line is popping up right away and my script doesn't work!
Everyone seems to have the same experience with this message. It's working fine for the longest time and then all of a sudden this message pops up and the script stops working? Chrome has to be caching something somewhere. | 0 | 1,868 | 0 | 0 | 2017-11-22T16:31:00.000 | python-3.x,google-chrome-devtools,selenium-chromedriver | Python selenium: DevTools listening on ws://127.0.0.1 keeps on popping up | 1 | 1 | 2 | 47,486,605 | 0 |
0 | 0 | For now, I'm trying to use flask-oauthlib together with flask-login.
flask-oauthlib provides a simple way to use oauth, but I want a login manager to automatically redirect all users who are not logged in to /login. In flask-login, I can use login_required to accomplish this. But how could I achieve this by flask-oauthlib?
Further, how do I manage the session when using flask-oauthlib? I understand how SSO works, but I'm confused about how I can know whether the token is expired or not. | false | 47,449,444 | 0 | 0 | 0 | 0 | So, I've spent some time thinking about this (as I want to do it for my own website), and I've come up with a theoretical solution.
From what I understand in my implementation of Google's OAuth API, OAuth is about sending the user on a link to the server that hosts the OAuth keys, and then returning back to the sender. So, my idea is that in the login form, you have buttons that act as links to the OAuth client.
I haven't tested this, but since no one else has replied to this I figure this will give you a nice little project to implement yourself and let us know if it works. | 0 | 504 | 0 | 1 | 2017-11-23T07:04:00.000 | python,flask,single-sign-on,flask-login,flask-oauthlib | How to use flask-oauthlib with flask-login together? | 1 | 1 | 1 | 47,532,413 | 0 |
0 | 0 | How can I easily copy a local file to a remote server using Python?
I don't want to map the drive to my local machine, and the Windows server requires a username and password.
The local machine is also a Windows machine.
Examples I've seen are for Linux and involve mapping the drive. Unfortunately that's not an option for me. | false | 47,458,024 | 0 | 0 | 0 | 0 | You can use the subprocess module or os.system to launch a command in a Windows shell. Then you can use PowerShell or cmd instructions. | 0 | 749 | 1 | 0 | 2017-11-23T14:41:00.000 | python,windows,server | Transfer file from local machine to remote windows server using python | 1 | 1 | 1 | 47,458,154 | 0
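One concrete shape this can take (not from the answer above, and all server names and credentials are placeholders): authenticate against the share with net use without assigning a drive letter, then copy over the UNC path.

```python
import shutil
import subprocess

# Authenticate to the share without mapping a drive letter.
subprocess.run(
    r'net use \\remote-server\share /user:DOMAIN\user P@ssw0rd',
    shell=True, check=True,
)

# Once authenticated, UNC paths work like local ones.
shutil.copy(r'C:\local\file.txt', r'\\remote-server\share\file.txt')
```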
0 | 0 | I've built a server listening on a specific port using Python (asyncore and sockets), and I was curious to know what can be done when too many people connect to my server at once.
The code itself cannot be changed, but will adding more processes work? Or is it a hardware matter, and should I focus on adding a load balancer in front and balancing the requests across multiple servers?
This questions is borderline StackOverflow (code/python) and ServerFault (server management). I decided to go with SO because of the code, but if you think ServerFault is better, let me know. | true | 47,473,750 | 1.2 | 0 | 0 | 2 | 1.
asyncore relies on the operating system for all connection handling; therefore, what you are asking is OS-dependent.
It has very little to do with Python. Using twisted instead of asyncore wouldn't solve your problem.
On Windows, for example, you can listen for only 5 connections coming in simultaneously.
So, the first requirement is: run it on a *nix platform.
The rest depends on how long your handlers take and on your bandwidth.
What you can do is combine asyncore and threading to speed-up waiting for next connection.
I.e. you can make Handlers that are running in separate threads. It will be a little messy but it is one of possible solutions.
When server accepts a connection, instead of creating new traditional handler (which would slow down checking for following connection - because asyncore waits until that handler does at least a little bit of its job), you create a handler that deals with read and write as non-blocking.
I.e. it starts a thread and does the job, then, when it has data ready, only then sends it upon following loop()'s check.
This way, you allow asyncore.loop() to check the server's socket more often.
3.
Or you can use two different socket_maps with two different asyncore.loop()s.
You use one map (dictionary), let say the default one - asyncore.socket_map to check the server, and use one asyncore.loop(), let say in main thread, only for server().
And you start the second asyncore.loop() in a thread using your custom dictionary for client handlers.
So, One loop is checking only server that accepts connections, and when it arrives, it creates a handler which goes in separate map for handlers, which is checked by another asyncore.loop() running in a thread.
This way, you do not mix the server connection checks and client handling. So, server is checked immediately after it accepts one connection. The other loop balances between clients.
If you are determined to go even faster, you can exploit the multiprocessor computers by having more maps for handlers.
For example, one per CPU and as many threads with asyncore.loop()s.
Note, sockets are IO operations using system calls and select() is one too, therefore GIL is released while asyncore.loop() is waiting for results. This means, that you will have total advantage of multithreading and each CPU will deal with its number of clients in literally parallel way.
What you would have to do is make the server distributing the load and starting threading loops upon connection arrivals.
Don't forget that asyncore.loop() ends when the map empties. So the loop() in a thread that manages clients must be started when a new connection is accepted, and restarted if at some point there are no more connections present.
4.
If you want to be able to run your server on multiple computers and use them as a cluster, then you install the process balancer in front.
I do not see a serious need for it if you wrote the asyncore server correctly and want to run it on a single computer only. | 0 | 78 | 0 | 1 | 2017-11-24T12:53:00.000 | python,sockets,asyncore | How to handle a burst of connection to a port? | 1 | 1 | 1 | 47,474,778 | 0
1 | 0 | I have JS running, essentially getting user entries from my HTML session storage and pushing them to a DB. I also need to use an HTTP request to pass a JSON object containing the entries to a Python file hosted somewhere else.
Does anyone have any idea of documentation I could look at, or perhaps how to get JSON objects from JS to Python?
My client does not want me to grab the variables directly from the DB. | true | 47,535,356 | 1.2 | 0 | 0 | 0 | You have to create some sort of communication channel between the javascript and python code. This could be anything, SOAP, HTTP, RPC, any number of and flavor of message queue.
If nothing like that is in place, it's quite the long way around. A complex application might warrant you doing this, think micro services communicating across some sort of service bus. It's a sound strategy and perhaps that's why your client is asking for it.
You already have Firebase, though! Firebase is a real-time database that already has many of the characteristics of a queue. The simplest and most idiomatic thing to do would be to let the python code be notified of changes by Firebase: Firebase as service bus is a nice strategy too! | 1 | 37 | 0 | 0 | 2017-11-28T15:36:00.000 | python,html,json,node.js,firebase-realtime-database | How do you pass variables using a HTTP post from a JS file/function to a separate python file/function hosted somewhere else? | 1 | 1 | 1 | 47,537,413 | 0 |
0 | 0 | Can you help me with a script for autoclicking and downloading a photo from a photo site (e.g. Flickr, Photobucket)? | false | 47,539,225 | 0.197375 | 0 | 0 | 1 | Use Selenium, find the element by name, then use the pynput lib to right-click and download.
I used something similar for fetching market data; it works pretty well. Refer to the official Selenium docs for specifics. | 0 | 38 | 0 | 0 | 2017-11-28T19:14:00.000 | python | Python script for autoclicking and downloading | 1 | 1 | 1 | 69,762,672 | 0
1 | 0 | I have an HTML web page with many download links in a table. I have isolated the path to my desired zips. They all contain an .xlsx file but sometimes other files.
Is there a way to avoid downloading the zips and directly accessing the files inside?
If I do need to download them, how can I track where the zips have been downloaded to? (So I can extract the .xlsx)
I am currently looking into zipfile and requests for solutions. zipfile.extract needs the path of the zip file, but I don't know exactly where the script will download to. requests gives a response object, but how do I prompt it to download? | false | 47,572,918 | 0.197375 | 0 | 0 | 1 | Is there a way to avoid downloading the zips and directly accessing the files inside?
Generally speaking: no. A web server serves files from a file system, not from inside a zip archive.
If I do need to download them, how can I track where the zips have been downloaded to? (So I can extract the .xlsx)
If not specified, the location is the current directory, the one the script has been launched in. | 0 | 35 | 0 | 0 | 2017-11-30T11:54:00.000 | python,python-3.x,python-requests,zip | unzipping an html link with python | 1 | 1 | 1 | 47,573,727 | 0 |
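To the second part of the question: with requests you control the destination yourself, and with an in-memory buffer the archive never has to touch disk at all. A sketch (zip_url is a placeholder for one of the scraped links):

```python
import io
import zipfile

import requests

zip_url = 'https://example.com/data/report.zip'  # placeholder link from the table

resp = requests.get(zip_url)
archive = zipfile.ZipFile(io.BytesIO(resp.content))  # the zip stays in memory

for name in archive.namelist():
    if name.endswith('.xlsx'):
        archive.extract(name, path='output')  # or archive.read(name) for raw bytes
```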
0 | 0 | I have a more open-ended, design-oriented question for you here. I have a background in Python but not in web or async programming. I am writing an app to save data collected from websockets 24/7, with the aim of minimising data loss.
My initial thoughts is to use Python 3.6 with asyncio, aiohttp and aiofiles.
I don't know whether to use one coroutine per websocket connection or one thread per websocket connection. Performance may not be as much of an issue as good connection error handling. | false | 47,611,139 | 0.197375 | 1 | 0 | 1 | To answer your actual question: threads and coroutines will be equally reliable, but coroutines are much easier to reason about, and much of the modern existing code you'll find to read or copy will use them.
If you want to benefit from multiple cores, it is much better to use multiprocessing than threads, to avoid the trickiness of the GIL. | 0 | 574 | 0 | 2 | 2017-12-02T18:29:00.000 | python,websocket,cloud,aiohttp,python-aiofiles | websocket data collection design | 1 | 1 | 1 | 47,617,568 | 0
0 | 0 | So I'm having an issue when I read in info from an XML file. The data it reads is supposed to be a list of numbers, but when I read it in, it comes in as a string. Is there a way to read XML data as a list, or a way to convert a string to a list in Python?
Eg I get the data from the xml as say [1.0, 2.0, 3.0, 4.0, 5.0] and if I check the type it says its a string, ie the whole thing is a string including the brackets and the comma.
Can't wrap my head around how to convert this back to a list of numbers | false | 47,614,316 | 0.197375 | 0 | 0 | 2 | For this, you would use the str.split() builtin method.
Set x = "[1.0, 2.0, 3.0, 4.0, 5.0]"
First of all, get rid of the square brackets around the string:
x = x[1:-1] (x is now "1.0, 2.0, 3.0, 4.0, 5.0", a long string)
Then, split the string to form a list of strings:
x = x.split(',') (x is now ["1.0", "2.0", "3.0", "4.0", "5.0"], a list of strings)
Then, convert all of these strings into floats (as I assume you want that):
x = [float(i) for i in x] (x is now [1.0, 2.0, 3.0, 4.0, 5.0], a list of floats) | 1 | 606 | 0 | 1 | 2017-12-03T01:13:00.000 | python,string,python-2.7,list,python-2.x | Convert string to list in python or read xml data as list | 1 | 1 | 2 | 47,614,365 | 0 |
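A one-liner alternative worth knowing (not from the answer above): since the string is already a valid Python literal, ast.literal_eval parses it safely in one step.

```python
import ast

x = ast.literal_eval("[1.0, 2.0, 3.0, 4.0, 5.0]")
print(x)  # [1.0, 2.0, 3.0, 4.0, 5.0], a real list of floats
```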
0 | 0 | I am writing an app listening to websocket connections and making infrequent REST requests. aiohttp seems like a natural choice for this, but I'm flexible. The app is simple but needs to be reliable (gigabytes of data to collect daily while minimising data loss).
What is the best way to handle connection loss with aiohttp? I notice that some other Python libraries have auto-reconnect options available. With aiohttp, I could always manually implement this with a loop (start over again as soon as the connection is lost), but I wouldn't know what the best practice is (is it acceptable to keep making reconnection attempts without delay in a loop?). | false | 47,618,328 | 0.197375 | 0 | 0 | 1 | aiohttp is a relatively low-level library; auto-reconnection should be built on top of it.
Websocket connection is a non-blocking operation in aiohttp.
Reliable websocket reconnection is not a trivial task. Maybe you need to know which data were received by the peer, or maybe not; it depends. In the first case you need some high-level protocol on top of plain websockets to send acknowledgements, etc. | 0 | 1,106 | 0 | 2 | 2017-12-03T12:26:00.000 | python-3.x,websocket,python-asyncio,aiohttp | autoreconnect for aiohttp websocket | 1 | 1 | 1 | 47,633,790 | 0
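A sketch of such a reconnect loop; the URL, the backoff delay and the handle() consumer are assumptions:

```python
import asyncio

import aiohttp

def handle(data):
    print(data)  # hypothetical consumer of incoming messages

async def listen(url):
    async with aiohttp.ClientSession() as session:
        while True:  # reconnect forever
            try:
                async with session.ws_connect(url) as ws:
                    async for msg in ws:
                        if msg.type == aiohttp.WSMsgType.TEXT:
                            handle(msg.data)
            except aiohttp.ClientError:
                pass  # connect failed or connection dropped
            await asyncio.sleep(1)  # brief backoff before retrying
```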
0 | 0 | I'm trying to test a web application using Selenium with Python. I've written a script to mimic a user: it logs in to the server, generates some reports, and so on. It is working fine.
Now, I need to see how much time the server is taking to process a specific request.
Is there a way to find that from the same python code?
Any alternate method is acceptable.
Note:
The server is in the same LAN
Also I don't have privileges to do anything at the server side. So anything I can do is from outside the server.
Any sort of help is appreciated. Thank you | false | 47,647,252 | 0 | 0 | 0 | 0 | Have you considered the W3C HTTP access log field "time-taken"? It reports, for every single request, the time taken, maximally in milliseconds (on some platforms the precision reported is more granular). In order for a web server, an application server with an HTTP access layer, or an enterprise service bus with an HTTP access layer (for SOAP and REST calls) to be fully W3C standards compliant, this log value must be available for inclusion in the HTTP access logs.
You will see every single granular request and the time required for processing, from the first byte of receipt at the server to the last byte sent, minus the final TCP ACK at the end. | 0 | 527 | 0 | 0 | 2017-12-05T06:20:00.000 | python,selenium,server,automated-tests,load-testing | Request processing time in python | 1 | 1 | 1 | 47,676,817 | 0
0 | 0 | I'm writing a Python program that uses Selenium to navigate to and enter information into search boxes on an advanced search page. This website uses Javascript, and the IDs and Names for each search box change slightly each time the website is loaded, but the Class Names remain consistent. Class names are frequently reused though, so my goal is to use find_elements_by_class_name(classname) and then index through that list.
One box, for example, has the class name x-form-text x-form-field x-form-num-field x-form-empty-field, but I can't use this because selenium considers it a compound class name and throws an error. If I use just a portion of it, such as x-form-text, it can't find the element. My hope is to either find a way to allow the spaces or, if that can't be done, find a way to search for all elements whose class name contains a section of text without spaces, such as x-form-text.
Any help or thoughts would be greatly appreciated!
Edit:
I tried this code:
quantminclass = 'x-form-text.x-form-field.x-form-num-field.x-form-empty-field'
quantmin = '25'
browser.find_elements_by_css_selector(quantminclass)[0].send_keys(quantmin)
But I got an error that the list index was out of range, implying that it can't find anything. I inspected the element and that is definitely its class name, so I'm not sure how to proceed. | false | 47,660,523 | 0.197375 | 0 | 0 | 3 | Try converting the class name to a CSS selector.
With a CSS selector, a class named x-form-text x-form-field x-form-num-field
turns into .x-form-text.x-form-field.x-form-num-field
So basically just replace each space with a dot, add a leading dot, and you're good to go. | 0 | 19,954 | 0 | 6 | 2017-12-05T18:38:00.000 | python,selenium | Selenium Python - Finding Elements by Class Name With Spaces | 1 | 1 | 3 | 47,660,624 | 0
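Applying that conversion to the snippet from the question's edit (note the leading dot, which is what the original attempt was missing):
quantminclass = '.x-form-text.x-form-field.x-form-num-field.x-form-empty-field'
quantmin = '25'
browser.find_elements_by_css_selector(quantminclass)[0].send_keys(quantmin)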
1 | 0 | I have a python program appending lines to a .csv file every few seconds. I also have a web app that was created using Django. My goal is to continuously read the data from the python program as it runs and display it on the website without having to refresh.
My proposed solution is to send the python output to a server using requests.put(), and then read from the server using AJAX.
1.) Is this the best solution, or is there a better way to connect the program and the site?
2.) If this is a good solution, what's the easiest way to get a server running to POST, PUT, and GET from? It can be local and will never expect heavy traffic.
Thank you! | false | 47,687,046 | 0 | 0 | 0 | 0 | Reading from csv is not good solution,message quene is suitable solution for this context。
python program can post a data message to django when adding lines
to csv.
django webapp can send data message to browser when getting
message by SSE or socket.io lib | 0 | 173 | 0 | 0 | 2017-12-07T03:18:00.000 | python,ajax,django,server | Sending data stream from python program to web app | 1 | 1 | 1 | 47,687,089 | 0 |
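If SSE is chosen, Django can stream events without extra infrastructure; a minimal sketch that tails the csv file directly (the file path and polling interval are assumptions, and a real deployment would consume the queue instead):
import time
from django.http import StreamingHttpResponse
def csv_events(request):
    def event_stream():
        with open("data.csv") as fh:          # hypothetical path to the csv being appended to
            fh.seek(0, 2)                     # start tailing at the end of the file
            while True:
                line = fh.readline()
                if line:
                    yield "data: %s\n\n" % line.strip()   # SSE message framing
                else:
                    time.sleep(1)
    return StreamingHttpResponse(event_stream(), content_type="text/event-stream")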
1 | 0 | The setup is using S3 as a storage, API gateway for the rest endpoint and Lambda (Python) for get/fetch of file in S3.
I'm using Boto3 in the Lambda function (Python) to check if the file exists in S3, and I was able to download it, but it is stored on the Lambda machine ("/tmp"). API Gateway can already trigger the Lambda function. Is there a way to have the download happen in the browser once the Lambda function is triggered?
Thanks! | true | 47,707,185 | 1.2 | 0 | 0 | 0 | Here is how we did it:
Check and Redirect:
API Gateway --> Lambda (return 302)
Deliver Content:
CloudFront --> S3
Check for S3 existence in Lambda and return a 302 to CloudFront. You can also return a signed URL from Lambda, with a validity period for accessing the URL, so the object is fetched from CloudFront.
Hope it helps. | 0 | 802 | 0 | 1 | 2017-12-08T03:17:00.000 | python-3.x,amazon-web-services,amazon-s3,aws-lambda,aws-api-gateway | Retrieve a file from browser using API gateway, AWS LAMBDA and S3 | 1 | 1 | 1 | 47,707,562 | 0 |
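A minimal Lambda sketch of the existence check plus redirect via a pre-signed URL (the bucket name, key source, and proxy-integration response shape are assumptions):
import boto3
from botocore.exceptions import ClientError
s3 = boto3.client("s3")
BUCKET = "my-bucket"                          # hypothetical bucket name
def lambda_handler(event, context):
    key = event["pathParameters"]["key"]      # assumes an API Gateway proxy integration
    try:
        s3.head_object(Bucket=BUCKET, Key=key)    # existence check
    except ClientError:
        return {"statusCode": 404, "body": "not found"}
    url = s3.generate_presigned_url("get_object",
                                    Params={"Bucket": BUCKET, "Key": key},
                                    ExpiresIn=300)
    return {"statusCode": 302, "headers": {"Location": url}, "body": ""}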
0 | 0 | I am wondering if there is a way to write into a .txt file on a web server using urllib or urllib3. I tried using a urllib3 POST but that doesn't do anything. Do either of these libraries have the ability to write into files, or do I have to use some other library? | true | 47,721,121 | 1.2 | 0 | 0 | 0 | It's not clear from your question, but I'm assuming that the Python code in question is not running on the web server. (Otherwise, it would be a matter of using the regular open() call.)
The answer is no: HTTP servers do not usually provide the ability to update files, and urllib does not support writing files over FTP / SCP. You will either need to run some sort of upload service on the server that exposes an API over HTTP (via a POST endpoint or otherwise), so that your requests cause the file to be updated on the server's filesystem, or use a protocol other than HTTP, such as FTP or SCP, with an appropriate Python library for that protocol such as ftplib. | 0 | 164 | 0 | 0 | 2017-12-08T19:40:00.000 | python,urllib,urllib3 | Writing into text file on web server using urllib/urllib3 | 1 | 1 | 1 | 47,721,219 | 0
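A minimal ftplib upload sketch, assuming an FTP server is reachable (host, credentials, and paths are placeholders):
from ftplib import FTP
ftp = FTP("ftp.example.com")                  # hypothetical host
ftp.login("user", "password")
with open("local.txt", "rb") as fh:
    ftp.storbinary("STOR remote.txt", fh)     # upload the local file as remote.txt
ftp.quit()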
0 | 0 | I am trying to use Allure report with Python3, although the libraries used for Python Pytest are not supported, from what I can see.
The documentation says that the Allure plugin for pytest supports the previous version of Allure only.
Is there a workaround to use pytest on python3, and get the Allure reports created? | true | 47,725,736 | 1.2 | 1 | 0 | 0 | Found the solution.
First of all, you need to remove ALL the old libraries and packages, since they may cause problems.
Then you need to install the Allure command line via Brew.
That will allow you to generate reports from the output of your tests.
Then what is left is to install the Python package, allure-pytest, via pip.
I did notice that some imports are different from the previous version of Allure; I'm not sure what the problem is though, since importing allure works fine in Python, but when I generate reports I get errors because an object of type bytearray is not JSON serializable.
I was working with the previous version of the Allure API on Python 2.7, so I am not sure what is wrong, but most likely it is user error. | 0 | 614 | 0 | 0 | 2017-12-09T05:48:00.000 | python,pytest,allure | Use Allure report V2 via pytest on Python3 | 1 | 1 | 1 | 47,764,766 | 0
0 | 0 | What is the best way to export/store items in a REST API?
I want to send scraped items to a REST API; where should I put my
requests.post(...)? Any examples? | true | 47,769,299 | 1.2 | 0 | 0 | 1 | Thanks to rubber duck debugging: probably a simple pipeline with a process_item() method. Earlier I was thinking only about Exporters and FeedStorage | 0 | 180 | 0 | 1 | 2017-12-12T09:34:00.000 | python,scrapy,scrapy-spider | Scrapy - best way to export/store items in REST API | 1 | 1 | 1 | 47,769,467 | 0
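A minimal sketch of such a pipeline (the endpoint URL is a placeholder, and the pipeline still has to be enabled via the ITEM_PIPELINES setting):
import requests
class RestApiPipeline(object):
    def process_item(self, item, spider):
        # POST each scraped item to the API as JSON
        requests.post("http://api.example.com/items", json=dict(item))
        return item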
0 | 0 | Note:
I can't use third party modules so bs4 and lxml are not an option.
I need to parse HTML with the
Python 3 std lib. I thought xml.minidom would be the way to go but it doesn't seem to be able to parse invalid XML/HTML without throwing an exception like syntax error.
Am I missing something within the xml module that can do what I'm looking for?
Am I missing something in the std lib? | false | 47,801,519 | -0.197375 | 0 | 0 | -2 | If you need to handle broken HTML/XML, I recommend you check out Beautiful Soup 4 | 0 | 78 | 0 | 0 | 2017-12-13T20:21:00.000 | python,html-parsing | Can xml.minidom parse broken XML | 1 | 1 | 2 | 47,801,694 | 0
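The answer recommends a third-party package even though the question rules them out; for illustration, a minimal sketch of how BeautifulSoup tolerates broken markup (the sample HTML is made up; note its 'html.parser' backend is the std-lib parser):
from bs4 import BeautifulSoup
broken = "<html><p>unclosed paragraph <b>broken bold"
soup = BeautifulSoup(broken, "html.parser")   # no exception despite the invalid markup
print(soup.find("b").get_text())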
0 | 0 | I am asking, generally, about checking whether all elements of a page have been loaded. Is there a way to check that?
In the concrete example there is a page, I click on some button, and then I have to wait until I click on the 'next' button. However, this 'Next' button is available, selectable and clickable ALL THE TIME. So how to check with selenium that 'state'(?) of a page?
As a reminder: this is a question about Selenium and not about the quality of the webpage in question... | false | 47,832,693 | 0 | 0 | 0 | 0 | Reliably determining whether a page has been fully loaded can be challenging. There is no way to know just like that whether all the elements have been loaded. You must define some "anchor" points in each page so that, as far as you are aware, if these elements have been loaded it is fair to assume the whole page has been loaded. Usually this involves a combination of tests. So for example, you can define that if the below combination of tests passes, the page is considered loaded:
JavaScript document.readyState === 'complete'
"Anchor" elements
All kinds of "spinners", if exist, disappeared. | 0 | 5,281 | 0 | 1 | 2017-12-15T12:44:00.000 | python,selenium,selenium-webdriver | Is there a way with python-selenium to wait until all elements of a page has loaded? | 1 | 1 | 3 | 47,832,860 | 0 |
1 | 0 | I'm implementing a webhooks provider and trying to solve some problems while minimizing the added complexity to my system:
Not blocking processing of the API call that triggered the event while calling all the hooks so the response to that call will not be delayed
Not making a flood of calls to my listeners if some client is quickly calling my APIs that trigger hooks (i.e. wait a couple seconds and throw away any earlier calls if duplicates come in later)
My environment is Python (Chalice) and AWS Lambda. Ideal solution will be easy to integrate and cheap. | false | 47,837,082 | 0.197375 | 0 | 0 | 1 | I would use SQS / SNS depending on exact architecture design. Maybe Apache Kafka, if you need to store events longer...
So upcoming event would be placed on SQS, and then other lambda would be used to do processing. Problem is that time of processing is limited to 5 min. Also delivering can't be parallel.
Other option is to have one input queue, and one output queue per receiver. So the lambda function, which process input, just spreads it through other queues. And then other lambdas are responsible for delivering. That way has other obvious problems.
Finally. Your lambda, while processing input, can generate messages on outgoing queue, instrumenting what message should be delivered to which users. Then you can have one lambda triggered on each message from outgoing queue. And there you can have small loop delivering messages. Note that in case of problems you need to send back what was not delivered.
Good point is that SQS has something like dead letter queue, so that problematic messages would not stay there forever. | 0 | 55 | 0 | 0 | 2017-12-15T17:22:00.000 | python,aws-lambda,webhooks | Design for implementing web hooks (including not blocking and ignoring superseding repeat events) | 1 | 1 | 1 | 47,854,491 | 0 |
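A minimal sketch of the enqueue step, so the API call that triggers the event can return quickly (the queue URL and payload shape are assumptions):
import json
import boto3
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/webhook-events"  # hypothetical
def enqueue_event(event):
    # hand the event off and return immediately; a worker Lambda consumes the queue
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))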
0 | 0 | I am trying to create a simple chat application using sockets (UDP) and I would like to make it automatically allow itself through the firewall, like every other application does. Is there a simple way to do this? | false | 47,858,129 | 0.197375 | 0 | 0 | 1 | The whole idea of a firewall is that it decides who gets through and who doesn't. So in principle, this is not possible, and that's a good thing!
Most firewalls, however, are configured to allow e.g. web traffic (port 80) to pass. So you have to find out what ports your firewall has open, and use these. | 0 | 1,029 | 0 | 1 | 2017-12-17T18:21:00.000 | python,windows,firewall | How do i make a python program allow itself through firewall? | 1 | 1 | 1 | 47,858,149 | 0 |
0 | 0 | I'm using Python to get the text of tweets from Twitter using tweepy. Is it possible to get an ID and password from the user, pass them to the Twitter API, access the tweets, and get JSON data?
I read "User timelines belonging to protected users may only be requested when the authenticated user either “owns” the timeline or is an approved follower of the owner." but I'm not sure whether it means the programmer must have access to the protected account, or whether the API can access the protected account when given the ID and password. | false | 47,861,097 | 0.379949 | 1 | 0 | 2 | The user's credentials are what determine permissions. With OAuth, a user gives your app permission to act on their behalf. | 0 | 1,041 | 0 | 1 | 2017-12-18T01:34:00.000 | python,api,twitter | Is it able to view protected accounts using twitter api? | 1 | 1 | 1 | 47,863,424 | 0
0 | 0 | For some unknown reason, my browser opens test pages of my remote server very slowly. So I am wondering if I can reconnect to the browser after quitting the script: if I don't execute webdriver.quit(), this will leave the browser open. It is probably some kind of hook or webdriver handle.
I have looked up the selenium API doc but didn't find any function.
I'm using Chrome 62, x64, Windows 7, Selenium 3.8.0.
I'd be very grateful for an answer, whether the question can be solved or not. | false | 47,861,813 | -0.066568 | 0 | 0 | -1 | Without getting into why you think that leaving a browser window open will solve the slowness problem: you don't really need a handle to do that. Just keep running the tests without closing the session, in other words without calling driver.quit(), as you mentioned yourself. The question here, though, is whether you are using a test framework that comes with its own runner, like Cucumber?
In any case, you must have some "setup" and "cleanup" code. So what you need to do is to ensure during the "cleanup" phase that the browser is back to its initial state. That means:
Blank page is displayed
Cookies are erased for the session | 0 | 23,577 | 0 | 17 | 2017-12-18T03:34:00.000 | python-3.x,selenium,session,selenium-webdriver,webdriver | How can I reconnect to the browser opened by webdriver with selenium? | 1 | 1 | 3 | 47,862,340 | 0 |
0 | 0 | What modules/ways are available in Python to copy a file from a Windows computer to a Linux server?
I tried using the ftplib API to connect to the Windows server but I am unable to; I get the error socket.error: [Errno 111] Connection refused.
What other modules can I use to connect to a Windows computer to copy or list the files under a directory? | false | 47,866,365 | 0 | 0 | 0 | 0 | If you have access to the Linux server, and the file is generated on Windows automatically, you can do the following:
Generate an SSH key on your Windows machine
Add it to authorized_keys on the Linux machine
Install a simple console scp tool on Windows
Write a simple cmd script to copy the file with the help of scp, something like:
scp c:\path\to\file.txt [email protected]:/home/user/file.txt
Run this script automatically every time the file is generated on the Windows host. | 0 | 2,296 | 1 | 0 | 2017-12-18T10:23:00.000 | python,ftplib | Copy File from Windows Host to Linux [Python] | 1 | 1 | 1 | 47,869,378 | 0
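If a pure-Python route is preferred over an external scp binary, roughly the same transfer can be done with the third-party paramiko library (host, user, key path, and file paths are placeholders):
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("linux-server.example.com", username="user",
            key_filename=r"c:\Users\me\.ssh\id_rsa")
sftp = ssh.open_sftp()
sftp.put(r"c:\path\to\file.txt", "/home/user/file.txt")   # upload over SFTP
sftp.close()
ssh.close()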
0 | 0 | I have clicked on a button that will update a span. So now I want to wait for that span to change from a 1 to a 2. It sounds like I'm trying to do exactly what text_to_be_present_in_element does, except without a locator. I already have the WebElement I need. Is it possible to use these expected_conditions functions with an actual WebElement? | false | 47,894,980 | 0.197375 | 0 | 0 | 1 | Sounds like you could just get the value of the span and have Python check once a second (or at whatever interval) to see if it is different from before. | 0 | 108 | 0 | 0 | 2017-12-19T20:51:00.000 | python,selenium,wait | Is it possible to wait for text to change to a specific value in Selenium (Python) with a WebElement? | 1 | 1 | 1 | 47,895,032 | 0
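In fact WebDriverWait accepts any callable, not only the built-in expected_conditions, so the polling the answer describes can run against the WebElement already in hand; a minimal sketch (the locator is hypothetical, and beware the element can go stale if the page re-renders it):
from selenium.webdriver.support.ui import WebDriverWait
span = driver.find_element_by_id("counter")   # the WebElement you already have
WebDriverWait(driver, 10).until(lambda d: span.text == "2")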
1 | 0 | I recently implemented the connection with Facebook and Google on my local server, and everything worked.
But when I tried to do it in production, the connection with Google returned: "Your credentials aren't allowed". (Facebook works.)
I don't know why, because I'm pretty sure that my application has been confirmed by Google.
Do you have any ideas?
Thanks in advance! | true | 47,929,909 | 1.2 | 0 | 0 | 1 | My bad, I used the wrong key in settings.py | 0 | 1,548 | 0 | 0 | 2017-12-21T17:25:00.000 | python,django,google-api,django-socialauth,social-authentication | "Your credentials aren't allowed" - Google+ API - Production | 1 | 1 | 1 | 47,943,497 | 0
0 | 0 | I want to create a telegram bot which would immediately forward messages from another bot that regularly posts some info. I've tried to find some ready-made templates, but alas... I would be grateful for any useful info. Thanks! | false | 47,952,659 | 0.197375 | 1 | 0 | 1 | I don't think this is possible. As far as I know, bots can't communicate with other bots.
But you can add bots to groups | 0 | 497 | 0 | 0 | 2017-12-23T12:52:00.000 | python,telegram-bot | Is it possible to create a telegram bot which forwards messages from another bot to telegram group or channel? | 1 | 1 | 1 | 47,952,757 | 0 |
0 | 0 | I am learning Python and am currently scraping Reddit. Somehow Reddit has figured out that I am a bot (which my software actually is), but how do they know that? And how do we trick them into thinking that we are normal users?
I found a practical solution for that, but I am asking for a bit more in-depth theoretical understanding. | true | 47,956,527 | 1.2 | 0 | 0 | 18 | There's a large array of techniques that service providers use to detect and combat bots and scrapers. At the core of all of them is building heuristics and statistical models that can identify non-human-like behavior. Things such as:
Total number of requests from a certain IP per specific time frame, for example, anything more than 50 requests per second, or 500 per minute, or 5000 per day may seem suspicious or even malicious. Counting number of requests per IP per unit of time is a very common, and arguably effective, technique.
Regularity of incoming requests rate, for example, a sustained flow of 10 requests per second may seem like a robot programmed to make a request, wait a little, make the next request, and so on.
HTTP Headers. Browsers send predictable User-Agent headers with each request that helps the server identify their vendor, version, and other information. In combination with other headers, a server might be able to figure out that requests are coming from an unknown or otherwise exploitative source.
A stateful combination of authentication tokens, cookies, encryption keys, and other ephemeral pieces of information that require subsequent requests to be formed and submitted in a special manner. For example, the server may send down a certain key (via cookies, headers, in the response body, etc) and expect that your browser include or otherwise use that key for the subsequent request it makes to the server. If too many requests fail to satisfy that condition, it's a telltale sign they might be coming from a bot.
Mouse and keyboard tracking techniques: if the server knows that a certain API can only be called when the user clicks a certain button, they can write front-end code to ensure that the proper mouse-activity is detected (i.e. the user did actually click on the button) before the API request is made.
And many many more techniques. Imagine you are the person trying to detect and block bot activity. What approaches would you take to ensure that requests are coming from human users? How would you define human behavior as opposed to bot behavior, and what metrics can you use to discern the two?
There's a question of practicality as well: some approaches are more costly and difficult to implement. Then the question will be: to what extent (how reliably) would you need to detect and block bot activity? Are you combatting bots trying to hack into user accounts? Or do you simply need to prevent them (perhaps in a best-effort manner) from scraping some data from otherwise publicly visible web pages? What would you do in case of false-negative and false-positive detections? These questions inform the complexity and ingenuity of the approach you might take to identify and block bot activity. | 0 | 6,861 | 0 | 2 | 2017-12-23T22:28:00.000 | python,web-scraping,web,bots | How do websites detect bots? | 1 | 1 | 1 | 47,956,652 | 0 |
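To make the first heuristic concrete, here is a toy per-IP sliding-window counter of the kind a server might keep (the window and limit values are arbitrary assumptions):
import time
from collections import defaultdict, deque
WINDOW_SECONDS = 60       # assumed window
LIMIT = 500               # assumed per-window limit
hits = defaultdict(deque)
def is_suspicious(ip):
    now = time.time()
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:  # drop requests that left the window
        q.popleft()
    return len(q) > LIMIT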
0 | 0 | My bot is an admin in a channel and I want to read the channel's recent actions (like who joined the channel, etc.) using python-telegram-bot. How can I achieve this? | true | 47,962,314 | 1.2 | 1 | 0 | 1 | There is no method for a bot to get recent actions, and bots won't get notified when users join a channel.
If you want to know whether a user is in your channel, there is the getChatMember method. | 0 | 967 | 0 | 0 | 2017-12-24T16:34:00.000 | python,telegram,telegram-bot,python-telegram-bot | Read channel recent actions with telegram bot | 1 | 1 | 1 | 47,962,389 | 0
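A minimal python-telegram-bot sketch of that check (assumes bot is an initialized Bot instance; the chat and user IDs are placeholders):
member = bot.get_chat_member(chat_id="@my_channel", user_id=123456789)
if member.status in ("creator", "administrator", "member"):
    print("user is in the channel")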
0 | 0 | I would like to implement a transparent IMAPS (SSL/TLS) proxy from zero using python (pySocks and imaplib).
The user and the proxy are in the same network and the mail server is outside (example: gmail servers). All the traffic on port 993 is redirected to the proxy. The user should retrieve his emails using his favorite email application (example: thunderbird). The proxy should receive the commands and transmit it to the user/server and should be able to read the content of the retrieved emails.
However, as the traffic is encrypted, I don't know how to get the account and the password of the user (without using a database) OR how to read the content of the emails without knowing the account and the password of the user.
After a few days of looking for a solution, I still don't have any leads. Maybe it is not possible? If you have any leads, I would be happy to read about them.
Thank you. | true | 47,988,332 | 1.2 | 1 | 0 | 0 | You must implement your proxy as a Man-In-The-Middle attack. That means that there are two different SSL/TLS encrypted communication channels: one between the client and the proxy, one between the proxy and the server. That means that either:
the client explicitly sets the proxy as its mail server (if only a few servers are to be used, with one name/address per actual server)
the proxy has a certificate for the real mail server that will be trusted by the client. A common way is to use a dummy CA: the proxy has a private certificate trusted by the client that can be used to sign certificates for any domain, like antivirus software does.
Once this is set up, the proxy just has to pass all commands and responses through, and process the received mails on the fly.
I acknowledge that this is not a full answer, but a full answer would be far beyond the scope of SO, and I hope this provides a path to one. Feel free to ask more precise questions here if you later get stuck in the actual implementation. | 0 | 221 | 0 | 0 | 2017-12-27T07:48:00.000 | python,imap | Transparent IMAPs proxy | 1 | 1 | 1 | 47,988,655 | 0
0 | 0 | I am using discord.py to create my bot and I was wondering how to create roles/permissions specific to the bot?
What that means is when the bot enters the server for the first time, it has predefined permissions and role set in place so the admin of the server doesn't need to set a role and permissions for the bot.
I have been trying to look up a reference implementation of this, but no luck. If someone can point me to an example of how to set a simple permission/role for a bot, that would be great! | true | 47,999,716 | 1.2 | 1 | 0 | 1 | You want to do this through OAuth2, using the URL parameters of the bot invite link.
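discord.py can build that invite URL for you; a small sketch (the client ID and the chosen permission set are placeholders):
import discord
perms = discord.Permissions(send_messages=True, manage_messages=True)  # assumed permission set
print(discord.utils.oauth_url("YOUR_CLIENT_ID", permissions=perms))
# servers that authorize the bot through this URL grant it these permissions on join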
0 | 0 | I recently made a music bot in my discord server but the problem I have is that I have to turn on the bot manually etc. and it goes offline when I turn off my computer. How can I make the bot stay online at all times, even when I'm offline? | false | 48,014,892 | 0.099668 | 1 | 0 | 1 | Use a Raspberry Pi with an internet connection. Install all necessary components and then run the script in its terminal. Then, you can switch off the monitor and the bot will continue running. I recommend that solution to run the bot 24/7. If you don't have a Raspberry Pi, buy one (they're quite cheap), or buy a server to run the code on. | 0 | 4,654 | 0 | 0 | 2017-12-28T21:59:00.000 | python,discord.py | How do I make my discord bot stay online | 1 | 2 | 2 | 51,890,116 | 0 |
0 | 0 | I recently made a music bot in my discord server but the problem I have is that I have to turn on the bot manually etc. and it goes offline when I turn off my computer. How can I make the bot stay online at all times, even when I'm offline? | false | 48,014,892 | 0 | 1 | 0 | 0 | You need to run the Python script on a server, e.g. an AWS Linux instance.
To do that, you would need to install Python on the server, place the script in the home directory, and run it inside a screen session. | 0 | 4,654 | 0 | 0 | 2017-12-28T21:59:00.000 | python,discord.py | How do I make my discord bot stay online | 1 | 2 | 2 | 48,034,102 | 0
0 | 0 | I am looking for a way to automatically check contacts in Dynamics 365 CRM and sync them to mailcontacts in Exchange online distribution lists.
Does anyone have suggestions on where to start with this?
I was thinking of trying to use python scripts with the exchangelib library to connect to the CRM via API, check the CRM contacts, then connect to Exchange online with API to update the mailcontacts in specific distribution lists if needed.
Does this sound plausible?
Are their more efficient ways of accomplishing this?
Any suggestions are greatly appreciated, thanks. | false | 48,025,594 | 0 | 1 | 0 | 0 | How about using MS Flow? I have done a similar thing before, using MS Flow to transfer emails from CRM to an online email processing service and retrieve emails back into CRM. | 0 | 76 | 0 | 0 | 2017-12-29T16:36:00.000 | python,api,dynamics-crm,exchange-server,exchangelib | Syncing Dynamics CRM contacts to Exchange mailcontacts distribution lists | 1 | 1 | 1 | 48,050,166 | 0
0 | 0 | The Slack API gives you two options: use the app as a bot, or as the logged-in user itself. I want to create an app that works as a user and runs channel commands. How can I do this with discord.py? | false | 48,043,363 | 0 | 1 | 0 | 0 | As Wright has commented, this is against Discord's ToS. If you get caught using a selfbot, your account will be banned. However, if you still REALLY want to do this, what you need to do is add/replace bot.run('email', 'password') at the bottom of your bot, or client.run('email', 'password'), depending on how your bot is programmed. | 0 | 8,029 | 0 | 2 | 2017-12-31T14:43:00.000 | python,discord,discord.py | How to login as user rather bot in discord server and run commands? | 1 | 1 | 2 | 48,051,694 | 0
0 | 0 | I am trying to access the internet with Google Chrome but every time I use webbrowser.open(url) it opens IE.
So I checked to make sure I have Chrome as my default, which I do, and I tried using the get() function to link the actual Chrome application but it gives me this error instead:
File "C:\Users\xxx\AppData\Local\Programs\Python\Python36\lib\webbrowser.py", line 51, in get raise Error("could not locate runnable browser") webbrowser.Error: could not locate runnable browser
I also tried to open other browsers but it gives the same error. It also reads IE as my default and only runnable browser.
What could be happening? Is there an alternative?
Using Python 3.6. | false | 48,056,052 | 0 | 0 | 0 | 0 | use "http://" in url
that works for me | 0 | 24,477 | 0 | 11 | 2018-01-02T05:39:00.000 | python,python-3.x,selenium | webbrowser.get — could not locate runnable browser | 1 | 1 | 3 | 69,779,987 | 0 |
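A small sketch of both that fix and the explicit-path fallback (the Chrome path is an assumption for a typical Windows install):
import webbrowser
webbrowser.open("http://stackoverflow.com")   # include the scheme
chrome = webbrowser.get("C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s")
chrome.open("http://stackoverflow.com")       # '%s' marks where the URL is substituted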
1 | 0 | I have written a websocket server in Tornado and I used websocket_ping_interval=60 to detect, after 60 seconds, which connections are really closed. But after 60 seconds the server disconnects the link even when it is still alive. I think this happens because the server sends a ping packet every 60 seconds and the client doesn't respond to it. I want the client side (which is written with the websocket Python module) to respond to the server whenever the server sends a ping request.
I have the same problem with websocket clients in browsers. Any idea how to solve it? | true | 48,062,744 | 1.2 | 0 | 0 | 1 | Tornado's websocket implementation handles pings automatically (and so do most other implementations). You shouldn't have to do anything.
Tornado's ping timeout defaults to 3 times the ping interval, so if you're getting cut off after 60 seconds instead of 180 seconds, something else is doing it. Some proxies have a 60-second timeout for idle connections, so if you're going through one of those you may need a shorter ping interval.
If that's not it, you'll need to provide more details, ideally a reproducible test setup with client and server code. | 0 | 720 | 1 | 1 | 2018-01-02T14:28:00.000 | python,websocket,tornado | how to response to the server ping in tornado websocket | 1 | 1 | 1 | 48,070,341 | 0 |
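A sketch of where those knobs live in a Tornado app (the handler is a trivial echo; websocket_ping_timeout defaults to three times the interval if unset):
import tornado.web
import tornado.websocket
class EchoHandler(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        self.write_message(message)        # trivial echo, just for the sketch
app = tornado.web.Application(
    [(r"/ws", EchoHandler)],
    websocket_ping_interval=30,            # seconds between server-initiated pings
    websocket_ping_timeout=90,             # drop the connection if no pong by then
)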
0 | 0 | In Python, a TCP connect returns success even though the connect request is still queued at the server end. Is there any way for the client to know whether accept() has happened, or whether the request is still queued at the server? | false | 48,089,780 | 0 | 0 | 0 | 0 | The problem is not related to Python but is caused by the underlying socket machinery, which does its best to hide low-level network events from the program. The best I can imagine would be to try a higher-level protocol handshake (send a hello string and set a timeout for receiving the answer), but it could not distinguish between the following problems:
the connection is queued on the peer and still not accepted
the connection has been accepted, but for some other reason the server could not process it in the allocated time
(only if the timeout is very short) congestion on the machines (including the sender) and network added a delay greater than the timeout
My advice is simply that you should not even worry about such low-level details. As problems can arise server-side after the connection has been accepted, you will have to deal with possible higher-level protocol errors, timeouts, or connection loss anyway. Put simply, there is no difference between a timeout after the connection has been accepted and a timeout waiting for the connection to be accepted. | 0 | 1,179 | 0 | 1 | 2018-01-04T06:25:00.000 | python,sockets,tcp,tcpclient | How to know the status of tcp connect in python? | 1 | 1 | 2 | 48,090,538 | 0
1 | 0 | I've been trying to implement some HTML code that accesses weather data in historical CSV files online, and perform maths on the data once I selectively extract it.
In the past, I've programmed in Python and had no problems doing this by using pycurl.Curl(). HTML is a complete nightmare in comparison: XMLHttpRequest() does technically work, but web browsers automatically block access to all foreign URLs (because of the Same-Origin Policy). Not good.
Any ideas and alternative approaches would be very helpful! | false | 48,179,772 | 0 | 0 | 0 | 0 | I'm pushing against the boundaries set forth in your question... If you simply want to do it in JavaScript, but outside the context of a browser, have you considered working in Node? | 0 | 238 | 0 | 1 | 2018-01-10T03:15:00.000 | javascript,python,html,csv,remote-access | HTML - Access data from remote online CSV files | 1 | 1 | 2 | 48,180,149 | 0
0 | 0 | Hi, I am trying to connect to the Outlook server to send email with smtplib in Python.
Trying this code smtplib.SMTP('smtp-mail.outlook.com') does not print out anything and also does not return an error.
Can somebody tell me what might be the problem and how to fix it?
Thank you a lot. | false | 48,204,680 | 0 | 1 | 0 | 0 | Thank you for all your answers; the problem was actually caused by some restriction in my work network. I have talked with them and the problem is solved. | 0 | 1,765 | 0 | 0 | 2018-01-11T10:20:00.000 | python,smtplib | Python SMTPLIB does not connect | 1 | 1 | 3 | 51,780,455 | 0
0 | 0 | On my Discord server, I have a #selfies channel where people share photos and chat about them. Every now and then, I would like to somehow prune all messages that do not contain files/images. I have tried checking the documentation, but I could not see any way of doing this. Is it not possible? | false | 48,255,732 | 0.197375 | 1 | 0 | 3 | You can iterate through every message and do:
if not message.attachments:
...
message.attachments returns a list, and you can check whether it is empty using if not. | 0 | 10,898 | 0 | 5 | 2018-01-15T00:59:00.000 | python,python-3.x,discord,discord.py | Is there a way to check if message.content on Discord contains a file? | 1 | 1 | 3 | 48,287,783 | 0
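For the pruning itself, a sketch assuming the discord.py rewrite (1.x) API, where channel.purge accepts a check callable (on the older 0.16 API the equivalent is client.purge_from):
async def prune_non_files(channel):
    def has_no_attachment(message):
        return not message.attachments     # True for messages without files/images, which purge deletes
    await channel.purge(limit=1000, check=has_no_attachment)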
0 | 0 | Situation: The file to be downloaded is a large file (>100MB). It takes quite some time, especially with slow internet connection.
Problem: However, I just need the file header (the first 512 bytes), which will decide if the whole file needs to be downloaded or not.
Question: Is there a way to do download only the first 512 bytes of a file?
Additional information: Currently the download is done using urllib.urlretrieve in Python2.7 | true | 48,257,994 | 1.2 | 0 | 0 | 2 | I think curl and head would work better than a Python solution here:
curl https://my.website.com/file.txt | head -c 512 > header.txt
EDIT: Also, if you absolutely must have it in a Python script, you can use subprocess to perform the curl piped to head command execution
EDIT 2: For a fully Python solution: The urlopen function (urllib2.urlopen in Python 2, and urllib.request.urlopen in Python 3) returns a file-like stream that you can use the read function on, which allows you to specify a number of bytes. For example, urllib2.urlopen(my_url).read(512) will return the first 512 bytes of my_url | 0 | 1,451 | 0 | 6 | 2018-01-15T06:34:00.000 | python,python-2.7,download,urllib,urlretrieve | How to Download only the first x bytes of data Python | 1 | 1 | 2 | 48,258,054 | 0 |
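One caveat: urlopen(...).read(512) still issues a normal GET, so the server may push more than 512 bytes into the socket buffers before the connection is dropped. If the server supports HTTP Range requests, you can ask for exactly the first 512 bytes; a Python 2.7 sketch (assumes the server honors Range and replies 206 Partial Content):
import urllib2
req = urllib2.Request("https://my.website.com/file.bin",      # hypothetical URL
                      headers={"Range": "bytes=0-511"})
header_bytes = urllib2.urlopen(req).read()
print len(header_bytes)    # 512 if the server honored the Range header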
1 | 0 | I am creating an app which only needs to access the Uber API with one account, mine. Is it possible to connect to the API with my account credentials? And if not, how else can I programmatically order rides through my account? | true | 48,272,916 | 1.2 | 0 | 0 | -1 | The Uber API does not support OAuth2 with a username and password (at least not from Python). | 0 | 219 | 0 | 0 | 2018-01-16T01:30:00.000 | python,oauth-2.0,uber-api | How to get oauth2 authorization with developer password for Uber API? | 1 | 1 | 2 | 48,273,068 | 0
0 | 0 | Edited Question:
I guess I worded my previous question improperly, I actually want to get away from "unit tests" and create automated, modular system tests that build off of each other to test the application as whole. Many parts are dependent upon the previous pages and subsequent pages cannot be reached without first performing the necessary steps on the previous pages.
For example (and I am sorry I cannot give the actual code), I want to sign into an app, then insert some data, then show that the data was sent successfully. It is more involved than that, however, I would like to make the web driver portion, 'Module 1.x'. Then the sign in portion, 'Module 2.x'. The data portion, 'Module 3.x'. Finally, success portion, 'Module 4.x'. I was hoping to achieve this so that I could eventually say, "ok, for this test, I need it to be a bit more complicated so let's do, IE (ie. Module 1.4), sign in (ie. Module 2.1), add a name (ie Module 3.1), add an address (ie. Module 3.2), add a phone number (ie Module 3.3), then check for success (ie Module 4.1). So, I need all of these strung together. (This is extremely simplified and just an example of what I need to occur. Even in the case of the unit tests, I am unable to simply skip to a page to check that the elements are present without completing the required prerequisite information.) The issue that I am running into with the lengthy tests that I have created is that each one requires multiple edits when something is changed and then multiplied by the number of drivers, in this case Chrome, IE, Edge and Firefox (a factor of 4). Maybe my approach is totally wrong but this is new ground for me, so any advice is much appreciated. Thank you again for your help!
Previous Question:
I have found many answers for creating unit tests, however, I am unable to find any advice on how to make said tests sequential.
I really want to make modular tests that can be reused when the same action is being performed repeatedly. I have tried various ways to achieve this but I have been unsuccessful. Currently I have several lengthy tests that reuse much of the same code in each test, but I have to adjust each one individually with any new changes.
So, I really would like to have .py files that only contain a few lines of code for the specific task that I am trying to complete, while re-using the same browser instance that is already open and on the page where the previous portion of the test left off. Hoping to achieve this by 'calling' the smaller/modular test files.
Any help and/or examples are greatly appreciated. Thank you for your time and assistance with this issue.
Respectfully,
Billiamaire | false | 48,309,776 | 0 | 1 | 0 | 0 | You don't really want your tests to be sequential. That breaks one of the core rules of unit tests where they should be able to be run in any order.
You haven't posted any code so it's hard to know what to suggest, but if you aren't using the page object model, I would suggest that you start. There are a lot of resources on the web for this, but the basics are that you create a single class per page or widget. That class holds all the code and locators that pertain to that page. This will help with the modular aspect of what you are seeking because in your script you just instantiate the page object and then consume its API. The details of interacting with the page, the logic, etc. all live in the page object and are exposed via the API it provides.
Changes/updates are easy. If the login page changes, you edit the page object for the login page and you're done. If the page objects are properly implemented and the changes to the page aren't severe, many times you won't need to change the scripts at all.
A simple example would be the login page. In the login class for that page, you would have a login() method that takes username and password. The login() method would handle entering the username and password into the appropriate fields and clicking the sign in button, etc. | 0 | 150 | 0 | 0 | 2018-01-17T20:50:00.000 | python,selenium,automated-tests,modular | Is it possible to make sequential tests in Python-Selenium tests? | 1 | 1 | 1 | 48,311,473 | 0 |
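A minimal sketch of the login page object described above (the locators and the follow-on page object are assumptions):
class LoginPage(object):
    def __init__(self, driver):
        self.driver = driver
    def login(self, username, password):
        # locators below are hypothetical; they live in one place per page
        self.driver.find_element_by_id("username").send_keys(username)
        self.driver.find_element_by_id("password").send_keys(password)
        self.driver.find_element_by_id("sign-in").click()
        return HomePage(self.driver)       # hypothetical page object for the next page
# in a test script: home = LoginPage(driver).login("user", "secret")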
1 | 0 | I am able to get historical data for one stock per request. But I need to get historical data for multiple stocks in a single request from Google Finance using Python.
Any help will be highly appreciated!
Thanks | false | 48,335,000 | 0 | 0 | 0 | 0 | You can't request time-series data for multiple stocks from that source at once. Instead, put your request into a loop and fetch the time series stock by stock. | 0 | 799 | 0 | 0 | 2018-01-19T06:00:00.000 | python,google-finance,google-finance-api | Get historical data for multiple stocks in single request using python from Google finance | 1 | 1 | 1 | 48,366,381 | 0
0 | 0 | I am new to Python and looking to build RESTful web services using Python. Due to some dependencies, I cannot use any other scripting language.
Can anyone suggest whether Python has an API-only kind of framework, or any other lightweight framework for REST APIs?
Thanks,
Pooja | false | 48,339,205 | 0.462117 | 0 | 0 | 5 | As of Sep 2018, also have a look at Quart and APIstar.
As of Mar 2019, add FastAPI; it looks very promising.
Note:
Bottle is lighter (and faster) than Flask, but with fewer bells and whistles.
Falcon is fast!
FastAPI is fast as well!
Also, whereas Bottle/Flask are more general frameworks (they have templating features, for instance, which are not API related), frameworks such as Falcon or FastAPI are really designed to serve as API frameworks. | 0 | 10,417 | 0 | 10 | 2018-01-19T10:40:00.000 | python-3.x,rest,django-rest-framework | Which python framework will be suitable to rest api only | 1 | 1 | 2 | 52,226,988 | 0
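To give a feel for the API-first style, a minimal FastAPI sketch (run it with an ASGI server, e.g. uvicorn main:app):
from fastapi import FastAPI
app = FastAPI()
@app.get("/items/{item_id}")
def read_item(item_id: int):
    # the path parameter is parsed and validated from the type hint
    return {"item_id": item_id}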
0 | 0 | So I've been making a small game in Python named Cows and Bulls. For those who don't know, it's very simple: one player generates a number and the other tries to guess it. If the guess has a digit in the correct position, it gives you a cow. If it has a digit that occurs in the number but in the wrong position, it gives you a bull. Until the cow count reaches 4 (it is a 4-digit number) the game keeps going, giving hints until the number is guessed.
I've actually successfully created the player part of the program. Now I have moved on to creating an AI: I generate a number, and the PC tries to guess that number.
My problem is the conditions that help the PC find this number. Right now I have the basic ones: if the PC's guess finds no bulls and no cows, it discards all those digits for the next guesses; if it finds all bulls, it tries every combination of those 4 digits; and of course the normal winning conditions.
The PC takes a long time to guess it, though. There aren't enough conditions to speed up the process of guessing the number.
So I was wondering if anyone can give me some tips on what conditions I can put into my program to help it guess the right number faster? I've been thinking about it but have been struggling. I can't seem to find a good condition that considerably reduces the time the PC takes to guess.
Anyway, thanks in advance! | true | 48,357,141 | 1.2 | 0 | 0 | 1 | I would use the process of elimination. Start off with a set of all 4-digit numbers from 1000 to 9999.
Then, say you give the computer a cow for the digit 3 in the third position, so the computer knows the number is of the form _ _ 3 _. Remove all numbers that are not of that form from the set.
If you give the computer a bull, say for the number 4. Remove all 4 digit numbers that don't have a 4 in them somewhere.
For the computers next turn, just pick a random number from the set of numbers that it now knows are still potential values.
Also, if a guess yields no bulls and no cows, you can remove all numbers that include any of that guess's digits.
Then repeat.
You'll whittle down the potential numbers pretty quickly. Then the computer will either guess the correct number or there will only be one left.
I hope this helps :) | 0 | 593 | 0 | 0 | 2018-01-20T14:12:00.000 | python | Cows and Bulls PC Guessing | 1 | 1 | 1 | 48,357,241 | 0 |
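A compact elimination sketch following that idea, using the question's convention (cows = right digit, right position; bulls = right digit, wrong position); the sample secret is made up:
def score(secret, guess):
    cows = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(d), guess.count(d)) for d in set(guess))
    return cows, common - cows             # (cows, bulls)
candidates = [str(n) for n in range(1000, 10000)]   # the answer's starting set
secret = '4271'                            # hypothetical hidden number
guess, tries = candidates[0], 1
while True:
    result = score(secret, guess)
    if result[0] == 4:
        break
    # keep only candidates consistent with every hint received so far
    candidates = [c for c in candidates if c != guess and score(c, guess) == result]
    guess, tries = candidates[0], tries + 1
print(guess, 'found in', tries, 'tries')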
0 | 0 | I am creating a web application which scrapes data from some other websites based on what the user searches.
I am planning to host this application on hosting websites like Hostgator or Namecheap.
Currently, the application contains a total of 2 pages. One is index.html and another is tool.py.
index.html takes an input via a form and posts it to tool.py.
tool.py is responsible for web scraping. I have 2 questions regarding this:
1) Let's say 2 users come to my website and search simultaneously. Which IP will the scraped websites see? Will it be the user's own IP, or the script's IP (i.e. where tool.py is located; in this case, let's suppose, the Namecheap server's IP)?
2) If hundreds of users search simultaneously, how will the tool.py script react? Is there a better way to prevent excessive load on the single script? Maybe distributing it and picking scripts randomly (e.g. tool1.py, tool2.py, tool3.py, etc.)? | false | 48,364,941 | 0 | 0 | 0 | 0 | OK, to answer your questions in order:
As @GalAbra mentions above, it depends on the design of the tool. From the sound of it, though, since index.html posts the data to tool.py, the IP of the server where tool.py is located will be the one that requests the pages.
The ideal way would be to have a queuing system built into the tool. You could have the client add their request to the queue (possibly in a database), then have tool.py monitor the queue for new entries and make the requests as they arrive, possibly using threading when there are multiple new requests in the queue, depending on how much activity you think this tool will see.
Hope this helps | 0 | 41 | 0 | 0 | 2018-01-21T08:19:00.000 | python,web-scraping | Which IP address will go to the destination website? | 1 | 1 | 1 | 48,365,456 | 0 |
1 | 0 | Is it possible to delete an item from DynamoDB using the Python Boto3 library by specifying a secondary index value? I won't know the primary key in advance, so is it possible to skip the step of querying the index to retrieve the primary key, and just add a condition to the delete request that includes the secondary index value? | false | 48,384,337 | 0.379949 | 0 | 0 | 2 | No, as of now it's not possible.
You have to specify the primary key to delete an item, although you can optionally pass a ConditionExpression to prevent it from being deleted if some condition is not met. That is all the flexibility the API provides. | 0 | 1,939 | 0 | 1 | 2018-01-22T14:53:00.000 | python,boto,boto3 | Is it possible to delete a DynamoDB item based on secondary index with Python Boto3 lib? | 1 | 1 | 1 | 48,384,477 | 0
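So the query-then-delete step cannot be skipped; a boto3 sketch of that two-step pattern (the table name, index name, and attribute names are assumptions):
import boto3
from boto3.dynamodb.conditions import Key
table = boto3.resource("dynamodb").Table("my-table")          # hypothetical table
resp = table.query(IndexName="email-index",                   # hypothetical GSI
                   KeyConditionExpression=Key("email").eq("a@example.com"))
for item in resp["Items"]:
    table.delete_item(Key={"id": item["id"]})                 # primary key attribute assumed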
0 | 0 | I have added my bot to a group chat. Now, for a few commands, I need to give access only to the group admin, so is it possible to identify whether the message sender is an admin of the group?
I am using the python-telegram-bot library | false | 48,387,330 | -0.066568 | 1 | 0 | -1 | No. You need to hardcode user IDs in your source and check whether the sender's user ID is in your admin-IDs array. | 0 | 6,220 | 0 | 3 | 2018-01-22T17:41:00.000 | telegram-bot,python-telegram-bot | Telegram Bot: Can we identify if message is from group admin? | 1 | 1 | 3 | 48,407,514 | 0
0 | 0 | My telegram bot needs to send a message to all the users at the same time. However, Telegram claims a max of 30 calls/sec, so it gets really slow. I am sure that there are telegram bots which send over 30 calls/sec. Is there a paid plan for this? | true | 48,397,015 | 1.2 | 1 | 0 | 2 | Telegram doesn't provide a paid plan at this time.
For sending massive amounts of messages, it is better to use a channel and ask users to join.
If you really want to send via PM, you can send 1,800 messages per minute; I think this limit is enough for most use cases. | 0 | 1,442 | 0 | 1 | 2018-01-23T08:18:00.000 | telegram,telegram-bot,python-telegram-bot,php-telegram-bot,telegram-webhook | API call limitation on my telegram bot | 1 | 1 | 1 | 48,397,565 | 0
0 | 0 | I've made a Selenium test using Python 3 and the selenium library.
I've also used Tkinter to make a GUI to put some input on (account, password..).
I've managed to hide the console window for python by saving to the .pyw extension; and when I make an executable with my code, the console doesn't show up even if it's saved with .py extension.
However, every time chromedriver starts, it also opens a console window, and when the driver exits, this window does not close.
So in a loop, I'm left with many webdriver console windows.
Is there a workaround to prevent the driver from launching a console every time it runs? | true | 48,414,156 | 1.2 | 0 | 0 | 0 | driver.close() and driver.quit() are two different methods for closing the browser session in Selenium WebDriver.
driver.close() - It closes the the browser window on which the focus is set.
driver.quit() – It basically calls driver.dispose method which in turn closes all the browser windows and ends the WebDriver session gracefully.
You should use driver.quit whenever you want to end the program. It will close all opened browser windows and terminate the WebDriver session. If you do not use driver.quit at the end of the program, the WebDriver session will not close properly and files will not be cleared from memory. This may result in memory leaks. | 0 | 617 | 0 | 0 | 2018-01-24T03:01:00.000 | python,selenium,webdriver,selenium-chromedriver | Python3, Selenium, Chromedriver console window | 1 | 1 | 2 | 48,414,334 | 1
1 | 0 | My task is to extract full songs from radio streaming using python 2.7.
I have managed to record radio streaming, but I can't find a good way to detect if the audio that I record is music, ads, or just talking.
I tried detecting by threshold, but it wasn't good because there is not enough silence between the talking or the ads and the songs.
If someone knows a good solution for me I would love to hear about it.
import pydub
streamAudio = pydub.AudioSegment.from_mp3("justRadioStream.mp3")
listMp3 = pydub.silence.detect_silence(streamAudio, min_silence_len=400, silence_thresh=-38)
print listMp3
I tried playing with min_silence_len and silence_thresh, but there is not enough silent time between songs and ads or talking (or the voices are too loud) to detect the boundaries properly.
thanks a lot! | false | 48,417,561 | 0 | 0 | 0 | 0 | This is not the kind of problem that will be solved in a couple of lines of Python. The problem is under-specified - there's no guarantee that there will even be silence between songs, ads and announcers on any given radio stream, as they try to make it harder to usefully record full songs from their streams for piracy purposes.
To do this robustly, it's likely that you'll need to apply AI / deep learning techniques to distinguish music from ads and announcements. Even then it's tricky, as some music will have regular talking in it, some songs are short, and some ads are long and contain music. | 0 | 203 | 0 | 0 | 2018-01-24T08:08:00.000 | python | python extract songs from radio streaming | 1 | 1 | 1 | 48,418,721 | 0 |
0 | 0 | I am currently running spark-submit jobs on an AWS EMR cluster. I started running into Python package issues where a module is not found during imports.
One obvious solution would be to go into each individual node and install my dependencies. I would like to avoid this if possible. Another solution I can do is write a bootstrap script and create a new cluster.
The last solution that seems to work: I can also pip install my dependencies, zip them, and pass them to the spark-submit job via --py-files. Though that may become cumbersome as my requirements grow.
Any other suggestions or easy fixes I may be overlooking? | false | 48,433,297 | 0 | 0 | 0 | 0 | Bootstrap is the solution. Write a shell script that pip installs all your required packages and put it in the bootstrap option. It will be executed on all nodes when you create the cluster. Just keep in mind that if the bootstrap takes too long (an hour or so), it will fail. | 0 | 724 | 0 | 4 | 2018-01-24T23:21:00.000 | python,amazon-web-services,apache-spark,pyspark,emr | Module not found in AWS EMR slave nodes | 1 | 1 | 1 | 50,046,220 | 0
1 | 0 | I have a Flask application running on AWS Application Load Balancer, but can't get web sockets to work. After reading several posts and configuring Load Balancers, Target Groups, stickiness on EC2, I came to the conclusion that it might be that ALB is not staring the application correctly.
Flask-SocketIO says to use socketio.run(application, host='0.0.0.0', port=port) to start up the web server, as it encapsulates application.run(). But after further reading I found that EC2 already calls application.run() without the need to do so explicitly in the startup script, and therefore it might just be bypassing my socketio.run() and not starting my web server.
Could this be the case? How can I verify it and make sure socketio is started properly? | false | 48,459,764 | 0 | 0 | 0 | 0 | To access the application via Load Balancer you have to make sure first that your target in Target Group is healthy. The health status is displayed in AWS Web console on your target group instance details on Targets tab.
If there are no targets in your Target Group, add one by pressing Edit button and selecting your EC2 instance from the list. Don't forget to use the appropriate port. Also make sure health checks are configured correctly (path, port...). You can find them on Health Checks tab of your target group details page.
If all of the above is OK and you have a healthy target in the Target Group, but the ELB doesn't serve your application, I'd recommend you SSH to your EC2 instance with the Flask app and check that it is running correctly. | 0 | 648 | 0 | 1 | 2018-01-26T10:41:00.000 | python,amazon-ec2,websocket,amazon-elb | flask-socketio run on Application Load Balancer | 1 | 1 | 1 | 48,469,075 | 0
1 | 0 | I know this is sort of counter to the purpose of headless automation, but...
I've got an automation test running using Selenium and Chromedriver in headless mode. I'd prefer to keep it running headless, but occasionally, it runs into an error that really needs to be looked at and interacted with. Is it possible to render and interact with a headless session? Maybe by duplicating the headless browser in a non-headless one? I can connect through remote-debugging, but the Dev Tools doesn't seem to allow me to view the rendered page or interact with anything.
I am able to take screenshots, which sort of helps. But I'm really looking for the ability to interact; there are some drag-and-drop elements that aren't working well with Selenium and occasionally cause issues. | false | 48,467,961 | 0.132549 | 0 | 0 | 2 | What you are asking for is currently not possible. Further, such a "feature" would have nothing to do with Selenium, but with the vendor of the browser. You can search their bug tracker to see whether such a feature has already been requested.
The only currently available option is to run a full GUI browser during debugging / development of your tests. | 0 | 3,716 | 0 | 4 | 2018-01-26T19:06:00.000 | python,selenium,headless,browser-automation | Possible to open/display/render a headless Selenium session? | 1 | 1 | 3 | 48,468,760 | 0
0 | 0 | I am using AWS Lex for constructing chatbots. I have a scenario where I need an initial welcome message, without user input, so that I can give the user direction in my chatbot. | false | 48,476,742 | 0 | 1 | 0 | 0 | If you are using your own website or an app for integrating the chatbot, then you can send some unique welcome text from that website/app when it loads for the first time (i.e. in an on-load method) to Amazon Lex. In Amazon Lex you can create a welcome intent and use the exact same text as its utterance.
This way, when the website/app loads, it will send the text to Amazon Lex, and Lex can fire the welcome intent and reply to it. | 0 | 4,953 | 0 | 2 | 2018-01-27T14:24:00.000 | python,amazon-web-services,amazon-lex | How to get welcome messages in AWS Lex (lambda in Python)? | 1 | 2 | 2 | 48,495,195 | 0
0 | 0 | I am using AWS Lex for constructing chatbots. I have a scenario where I need an initial welcome message, without user input, so that I can give the user direction in my chatbot. | false | 48,476,742 | 0.462117 | 1 | 0 | 5 | You need to handle that scenario with an API call that starts a context with your user.
You can follow these steps:
You need to create an Intent called AutoWelcomeMessage.
Create a Slot type with only one value, i.e. HelloMe.
Create an utterance HelloMessage.
Create a Slot as follows: Required; name: answer; Slot Type: HelloMe; Prompt: 'AutoWelcomePrompt'.
Pick an AWS Lambda function for your fulfillment; it will send a response to your user, e.g.:
Hello user, may I help? (Here the user will enter another Intent and your Bot will respond).
Now, to start a conversation with your user, just call your Lex bot via the API and send an utterance that triggers the AutoWelcomeMessage intent; that call starts a context with your Lex bot, and the fulfillment will execute your Lambda. | 0 | 4,953 | 0 | 2 | 2018-01-27T14:24:00.000 | python,amazon-web-services,amazon-lex | How to get welcome messages in AWS Lex (lambda in Python)? | 1 | 2 | 2 | 48,477,745 | 0
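A sketch of that API call from the client side using boto3 (bot name, alias, and user ID are placeholders; the input text matches the HelloMessage utterance defined above):
import boto3
lex = boto3.client("lex-runtime")
resp = lex.post_text(botName="MyBot", botAlias="prod",
                     userId="user-123", inputText="HelloMessage")
print(resp["message"])     # the welcome reply produced by the fulfillment Lambda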
0 | 0 | TL;DR : Is passing auth data to a boto3 script in a csv file named as an argument (and not checked in) less secure than a plaintext shared credentials file (the default answer in docs) for any reason?
I want to write a boto3 script intended to run from my laptop that uses an IAM key. The main accepted way to initialize your session is to include the API key, the secret, the region, and (if applicable) your session key in a shared credentials file identified by AWS_SHARED_CREDENTIALS_FILE, or to have the key and secret be environment variables themselves (AWS_ACCESS_KEY_ID, etc.) What I would like to do is load these values in a dictionary auth from a csv or similar file, and then use the keys and values of this dictionary to initialize my boto3.Session. This is easy to do; but, because a utility to load auth data from csv is so obvious and because so few modules provide this utility, I assume there is some security problem with it that I don't know.
Is there a reason the shared credentials file is safer than a csv file with the auth data passed as an argument to the boto3 script? I understand that running this from an EC2 instance with a role assignment is best, but I'm looking for a way to test libraries locally before adding them to a run through role security. | false | 48,477,832 | 0.379949 | 1 | 0 | 2 | There is nothing special or secure about a csv file. Its security risks are the same as the credentials file's, since both are plain-text files. If you are worried about security and prefer a file option, one alternative I can think of:
Encrypt the credentials and store them as binary data in a file
In your Boto3 script, read the file, decrypt the data and supply the credentials to Boto3
You can use a simple symmetric key to encrypt the creds. | 0 | 931 | 0 | 1 | 2018-01-27T16:23:00.000 | python,amazon-web-services,security,boto3 | AWS BOTO3 : Handling API keys | 1 | 1 | 1 | 48,478,740 | 0
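A sketch of the decrypt-and-use step with the third-party cryptography package (the file names, JSON layout, and keeping the key in a separate file are all assumptions):
import json
import boto3
from cryptography.fernet import Fernet
fernet = Fernet(open("fernet.key", "rb").read())     # key stored apart from the creds
creds = json.loads(fernet.decrypt(open("creds.enc", "rb").read()))
session = boto3.Session(aws_access_key_id=creds["key_id"],
                        aws_secret_access_key=creds["secret"],
                        region_name=creds["region"])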
0 | 0 | Is there any way to handle messages deleted by a user in a one-to-one chat, or in groups that the bot is a member of?
There is a method for edited-message updates but not for deleted messages. | false | 48,484,272 | 0 | 1 | 0 | 0 | Pyrogram has DeletedMessagesHandler / @Client.on_deleted_messages(). If used as a userbot, it handles deletions in all chats, groups, and channels. I failed to filter by chat. Maybe it will work in a bot. | 0 | 1,993 | 0 | 8 | 2018-01-28T07:49:00.000 | telegram-bot,python-telegram-bot | handle deleted message by user in telegram bot | 1 | 2 | 2 | 72,498,718 | 0