Dataset column schema:

| Column | Type | Value range |
|---|---|---|
| Web Development | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| Question | stringlengths | 28 to 6.1k |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 337 to 51.9M |
| Score | float64 | -1 to 1.2 |
| Other | int64 | 0 to 1 |
| Database and SQL | int64 | 0 to 1 |
| Users Score | int64 | -8 to 412 |
| Answer | stringlengths | 14 to 7k |
| Python Basics and Environment | int64 | 0 to 1 |
| ViewCount | int64 | 13 to 1.34M |
| System Administration and DevOps | int64 | 0 to 1 |
| Q_Score | int64 | 0 to 1.53k |
| CreationDate | stringlengths | 23 to 23 |
| Tags | stringlengths | 6 to 90 |
| Title | stringlengths | 15 to 149 |
| Networking and APIs | int64 | 1 to 1 |
| Available Count | int64 | 1 to 12 |
| AnswerCount | int64 | 1 to 28 |
| A_Id | int64 | 635 to 72.5M |
| GUI and Desktop Applications | int64 | 0 to 1 |
0
0
I am using the python libraries from the Assistant SDK for speech recognition via gRPC. I have the speech recognized and returned as a string calling the method resp.result.spoken_request_text from \googlesamples\assistant\__main__.py and I have the answer as an audio stream from the assistant API with the method resp.audio_out.audio_data also from \googlesamples\assistant\__main__.py I would like to know if it is possible to have the answer from the service as a string as well (hoping it is available in the service definition or that it could be included), and how I could access/request the answer as string. Thanks in advance.
false
44,123,641
0.462117
0
0
5
Currently (Assistant SDK Developer Preview 1), there is no direct way to do this. You can probably feed the audio stream into a Speech-to-Text system, but that really starts getting silly. Speaking to the engineers on this subject while at Google I/O, they indicated that there are some technical complications on their end to doing this, but they understand the use cases. They need to see questions like this to know that people want the feature. Hopefully it will make it into an upcoming Developer Preview.
1
1,474
0
3
2017-05-22T23:41:00.000
python,google-assistant-sdk,google-assist-api
How to receive answer from Google Assistant as a String, not as an audio stream
1
1
2
44,134,868
0
0
0
Is there a way to read GDX (GAMS Data eXchange) files without needing GAMS? There is a way to read them via GAMS and its API for Python, but I don't have GAMS on my PC. Thanks
false
44,160,902
0
0
0
0
I don't know of any way, but: to use the API, the free/demo version is enough! If you cannot install it on your machine due to missing privileges, etc., you can also copy the installation from another machine. You might then need to tell the API implementation where to find your GAMS "installation".
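To make that concrete, here is a hedged sketch of reading a GDX file through the GAMS Python API, assuming a (demo or copied) GAMS installation whose path you pass explicitly; the installation path and file name are placeholders:

```python
# Minimal sketch: read symbols out of a GDX file via the GAMS Python API.
from gams import GamsWorkspace

# Point the workspace at the (copied) installation directory.
ws = GamsWorkspace(system_directory="/path/to/gams")
db = ws.add_database_from_gdx("data.gdx")  # placeholder file name

for symbol in db:
    # Each entry is a GAMS symbol (set, parameter, variable, ...).
    print(symbol.name)
```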
1
527
0
4
2017-05-24T14:11:00.000
python,file
How to read GDX file with Python, without GAMS
1
1
2
47,921,223
0
0
0
I'd like to poll servers/network devices via SNMP/ICMP/SSH and serve up content about each device in a web browser to a user. Things like uptime, CPU usage, etc. What's a good way to do this? Would I store the data in a db and then present it the framework of my choosing?
false
44,168,482
0
0
0
0
I suggest you use monitoring tools for that. If you need alerting, use a tool like Nagios or Zabbix. If you don't need alerting, you can set up a tool like Munin or Cacti.
0
61
0
0
2017-05-24T21:07:00.000
python,html,apache,nginx
How do I serve dynamic content in a dashboard?
1
1
1
44,175,812
0
1
0
I am trying to spawn a mapreduce job using the mrjob library from AWS Lambda. The job takes longer than the 5 minute Lambda time limit, so I want to execute a remote job. Using the paramiko package, I ssh'd onto the server and ran a nohup command to spawn a background job, but this still waits until the end of the job. Is there any way to do this with Lambda?
true
44,184,942
1.2
1
0
0
Instead of using SSH I've solved this before by pushing a message onto a SQS queue that a process on the server monitors. This is much simpler to implement and avoids needing to keep credentials in the lambda function or place it in a VPC.
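For illustration, a minimal sketch of that pattern with boto3 (the queue URL and message fields are invented placeholders, not from the answer):

```python
# Lambda handler that enqueues a job instead of SSH-ing into the server.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mrjob-jobs"  # placeholder

def lambda_handler(event, context):
    # Push the job description; a daemon on the server polls this queue
    # and launches the long-running mrjob outside Lambda's time limit.
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({"action": "run_mrjob", "params": event}))
    return {"status": "queued"}
```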
0
441
0
0
2017-05-25T16:02:00.000
python,amazon-web-services,aws-lambda,paramiko,mrjob
How to execute a job on a server on Lambda without waiting for the response?
1
1
2
44,213,779
0
0
0
I am trying to build a node app which calls a python script (that takes a lot of time to run). The user essentially chooses parameters and then clicks run, which triggers an event in socket.on('python-event'), and this runs the python script. I am using socket.io to send real-time data to the user about the status of the python program, using the stdout stream I get from python. But the problem I am facing is that if the user clicks the run button twice, the event handler is triggered twice and runs 2 instances of the python script, which corrupts stdout. How can I ensure only one event trigger happens at a time, and if a new event trigger happens, it should kill the previous instance and its stdout stream and then run a new instance of the python script using the updated parameters? I tried using socket.once() but it only allows the event to trigger once per connection.
false
44,187,592
0
0
0
0
I would use a job queue for this kind of task: store each job's info in a queue, so you can cancel it and get its status. You can use a node module like kue.
0
80
0
0
2017-05-25T18:36:00.000
python,node.js,sockets,express,socket.io
Make sure only one instance of event-handler is triggered at a time in socket.io
1
1
1
44,192,695
0
1
0
How can I catch a request that is forbidden by robots.txt in scrapy? Usually this seems to get ignored automatically, i.e. nothing in the output, so I really can't tell what happens to those urls. Ideally if crawling a url leads to this forbidden by robots.txt error, I'd like to output a record like {'url': url, 'status': 'forbidden by robots.txt'}. How can I do that? New to scrapy. Appreciate any help.
false
44,187,774
0.379949
0
0
2
Go to settings.py in the project folder and change ROBOTSTXT_OBEY = True to ROBOTSTXT_OBEY = False.
0
1,197
0
1
2017-05-25T18:47:00.000
python,scrapy
How to catch forbidden by robots.txt?
1
1
1
45,843,257
0
0
0
I want to make a comparison list between different libraries for TCP packet capturing, replaying and monitoring. I have come across several libraries and cannot decide which I should start working with. I want to compare the pros and cons of these libraries so that it's easier for me to decide. The libraries I want to compare are: pypcap, pcap_ylg, pcapy, scapy3k, pcap, pylibpcap. I tried to search online and read the documentation but could not find useful information.
false
44,200,097
0
0
0
0
I guess you are trying to work with Python. Here's a rundown of the libraries:
- pypcap: not maintained; the last update was in 2015
- pcap_ylg: even older, updated 4 years ago
- pcapy: recently updated; quite a low-level library, hard to use for beginners
- pcap/pylibpcap: again, very old
The best choice seems to be scapy, which has tons of options and is easy to use. There are currently two versions of scapy:
- the original scapy: in active development, supporting all Python versions, and a high-level library for any pcap APIs
- scapy3k: a scapy fork which used to support Python 3, but has not received scapy's new features/updates since 2015
0
516
0
0
2017-05-26T11:12:00.000
python-3.x,networking,tcp,scapy,pcap
Comparison between packet capturing libraries in Python 3
1
1
1
44,315,276
0
1
0
When the content-type of the server is 'Content-Type:text/html', requests.get() returns improperly encoded data. However, if we have the content type explicitly as 'Content-Type:text/html; charset=utf-8', it returns properly encoded data. Also, when we use urllib.urlopen(), it returns properly encoded data. Has anyone noticed this before? Why does requests.get() behave like this?
false
44,203,397
1
0
0
20
The default assumed content encoding for text/html is ISO-8859-1 aka Latin-1 :( See RFC-2854. UTF-8 was too young to become the default, it was born in 1993, about the same time as HTML and HTTP. Use .content to access the byte stream, or .text to access the decoded Unicode stream. If the HTTP server does not care about the correct encoding, the value of .text may be off.
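A small sketch of the behavior being described, using the requests attributes mentioned above (the URL is a placeholder):

```python
import requests

r = requests.get("http://example.com/")  # server sends bare "text/html"
print(r.encoding)             # likely ISO-8859-1, per the RFC-2854 default
print(r.apparent_encoding)    # requests' guess from the bytes themselves

raw_bytes = r.content         # the untouched byte stream, always safe
r.encoding = "utf-8"          # override when you know the real charset
text = r.text                 # now decoded as UTF-8
```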
0
113,907
0
56
2017-05-26T13:54:00.000
python,utf-8,python-requests
python requests.get() returns improperly decoded text instead of UTF-8?
1
1
4
44,203,633
0
1
0
I'm using Selenium in Python for a project and I was wondering if there was a way to record or stream audio that is being played in the browser. Essentially I want to use selenium to get the audio and pass it to my application in Python so that I can process it. Is there a way to do this in Selenium, or using another Python package? Ideally I would want to be able to get any audio playing in the browser window regardless of the source.
false
44,221,542
0
0
0
0
There is no way to do it with just selenium, but it would not be so hard to implement it in python; just search for a python module or external program which suits your purpose.
0
1,354
0
5
2017-05-27T20:57:00.000
javascript,python,html,selenium,audio
Selenium, streaming audio from browser
1
1
1
44,233,382
0
1
0
I want to upload video into S3 using AWS lambda function. This video is not available in my local computer. I have 'download URL'. I don't want to download it into my local computer and upload it into S3. I am looking for a solution to directly put this video file into S3 using lambda function. If I use buffer or streaming, I will consume lots of memory. Is there a better efficient solution for this? I really appreciate your help.
true
44,233,777
1.2
0
0
2
You could certainly write an AWS Lambda function that would:
- download the file from the URL and store it in /tmp
- upload it to Amazon S3 using the AWS S3 SDK
It would be easiest to download the complete file rather than attempting to stream it in 'bits'. However, please note that there is a limit of 500MB of disk space available for storing data. If your download is larger than 500MB, you'll need to do some creative programming to download portions of it and then upload it as a multi-part upload. As for how to download it, use whatever library you prefer for downloading a web file.
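As a rough sketch of that flow (the bucket name, object key, and the event field carrying the URL are placeholders):

```python
# Lambda: download a file to /tmp, then push it to S3.
import urllib.request
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    url = event["download_url"]                   # assumed event field
    local_path = "/tmp/video.mp4"                 # Lambda's writable scratch space
    urllib.request.urlretrieve(url, local_path)   # fetch the whole file first
    s3.upload_file(local_path, "my-bucket", "videos/video.mp4")
    return {"status": "uploaded"}
```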
0
7,999
0
7
2017-05-29T02:09:00.000
python,amazon-web-services,amazon-s3,aws-lambda
Use AWS lambda to upload video into S3 with download URL
1
1
2
44,234,884
0
0
0
Is there any way to automatically log on through the Bloomberg API to fetch data? From my testing with Python, the API can pull data when I am logged into Bloomberg through the terminal, and for some time after I log off. The next morning I try to run my code and get an error, but the problem goes away as soon as I log on to the terminal. Thanks.
false
44,251,610
0.379949
0
0
2
Yes, you need to log in to be able to run the API. Once you log off, you can still pull data as long as you don't log onto a different computer. That's by design. So there is no other solution than either manually logging onto your terminal in the morning or making sure that you don't use Bloomberg Anywhere on another PC.
0
1,291
0
2
2017-05-29T23:59:00.000
python,api,bloomberg
Automatically log onto Bloomberg through API Python
1
1
1
44,255,713
0
1
0
I'm working on a project and I just wanted to know what route (If I'm not going down the correct route) I should go down in order to successfully complete this project. Ferry Time: This application uses boat coordinates and ETA algorithms to determine when a boat will be approaching a certain destination. Now, I've worked up a prototype that works, but not the way I want it to. In order for my ETA's to show up on my website accurately, I have a python script that runs every minute to webscrape these coordinates from a specific site, do an algorithm, and spit out an ETA. The ETA is then sent to my database, where I display it on my website using PHP and SQL. The ETA's only update as long as this script is running (I literally run the script on Eclipse and just leave it there) Now my question: Is there a way I can avoid running the script? Almost like an API. Or to not use a database at all? Thanks !
true
44,270,585
1.2
1
0
1
If the result of your algorithm only depends on the LAST scrape and not a history of several scrapes, then you could just scrape "on demand" and deploy your algo as an AWS lambda function.
0
71
0
0
2017-05-30T19:38:00.000
php,python,api,server
Project using Python Webscraping
1
2
2
44,271,429
0
1
0
I'm working on a project and I just wanted to know what route (If I'm not going down the correct route) I should go down in order to successfully complete this project. Ferry Time: This application uses boat coordinates and ETA algorithms to determine when a boat will be approaching a certain destination. Now, I've worked up a prototype that works, but not the way I want it to. In order for my ETA's to show up on my website accurately, I have a python script that runs every minute to webscrape these coordinates from a specific site, do an algorithm, and spit out an ETA. The ETA is then sent to my database, where I display it on my website using PHP and SQL. The ETA's only update as long as this script is running (I literally run the script on Eclipse and just leave it there) Now my question: Is there a way I can avoid running the script? Almost like an API. Or to not use a database at all? Thanks !
false
44,270,585
0
1
0
0
Even if they had an API you'd still need to run something against it to get the results. If you don't want to leave your IDE open you could use cron to call your script.
0
71
0
0
2017-05-30T19:38:00.000
php,python,api,server
Project using Python Webscraping
1
2
2
44,270,817
0
0
0
I am using the websocket-client Python Package. This works fine. But when I send messages to the server the client prints this message to stdout. Is there a possibility to disable this?
true
44,271,059
1.2
0
0
7
websocket.enableTrace(False) solved this.
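In context, the call looks like this (the URL is a placeholder):

```python
import websocket

websocket.enableTrace(False)  # silence the frame dump on stdout
ws = websocket.create_connection("ws://echo.example.com/")
ws.send("hello")
ws.close()
```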
0
1,702
0
4
2017-05-30T20:10:00.000
python-2.7,logging,websocket
Websocket-client Python disable output
1
1
1
44,287,918
0
0
0
Need some help in Twilio. Twilio Gather verb supports input="speech" which recognizes speech as mentioned in their site. Does the Twilio python package support this speech recognition yet? Couldn't find any help documentation regarding this.
false
44,274,141
0
0
0
0
Twilio developer evangelist here. It seems that the latest Python package only implemented partial support for input="speech" and the related TwiML attributes which makes it unusable at the moment. I am raising a bug internally and will get this fixed as soon as possible. I will notify you here when it is ready.
0
220
0
0
2017-05-31T01:10:00.000
python,twilio
Twilio Speech Recognition support in twilio python package
1
1
1
44,371,828
0
0
0
I am testing Trello API via Postman and I have a problem adding a comment to a card. I'm submitting a PUT request to https://api.trello.com/1/cards/card id/actions/commentCard/comments?&key=my_key&token=my_token&text=comment, but I receive an error: invalid value for idAction Could someone point out what I'm doing wrong?
false
44,288,336
0
0
0
0
Try a POST to https://api.trello.com/1/cards/cardID/actions/comments?text=yourComment&key=apiKey&token=trelloToken, where you must replace the placeholders cardID, yourComment, apiKey and trelloToken with your own values.
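The same call expressed with the Python requests library, since the question is tagged python (the values are placeholders to fill in):

```python
import requests

card_id = "cardID"  # placeholder
resp = requests.post(
    "https://api.trello.com/1/cards/{}/actions/comments".format(card_id),
    params={"text": "yourComment", "key": "apiKey", "token": "trelloToken"},
)
print(resp.status_code, resp.text)
```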
0
1,702
0
0
2017-05-31T15:05:00.000
python,trello
How to add a comment to Trello card via API calls?
1
1
3
58,631,278
0
0
0
I'm using a Python client that makes requests to a push server and has the option to use certificates with the OpenSSL lib for Python (pyOpenSSL), but this client asks me for the private key (it can be in the same file or at a different path; I already checked with another server where I have both cert and key). But for the "real" server they are using a self-signed cert and they don't have any private key in the same file or another one, as far as they told me; they just shared this cert file with me. Is there any way to use this kind of cert with OpenSSL in Python? Thanks.
false
44,293,780
0
1
0
0
In order to use a client certificate, you must have a private key. You will either sign some quantity with your private key or engage in encrypted key agreement in order to mutually agree on a key. Either way, the private key is required. It's possible though that rather than using client certificates they use something like username/password and what they have given you is not a certificate for you to use, but the certificate you should expect from them.
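For reference, this is how a client certificate and its private key are normally passed to the Python requests library (the URL and file paths are placeholders); a pyOpenSSL setup needs the same two pieces:

```python
import requests

# certificate and key concatenated in one PEM file:
r = requests.get("https://push.example.com/", cert="client.pem")

# or certificate and key in separate files:
r = requests.get("https://push.example.com/", cert=("client.crt", "client.key"))
```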
0
95
0
0
2017-05-31T20:05:00.000
python,ssl,pyopenssl
OpenSSL ask for private key
1
1
1
44,293,925
0
1
0
I have a menu.py which contains all my menus. I would like to use the menu.py file and in case someone from another team needs to add additional sub-menus they can add them on their own file, then import menu.py as well. for example: i have 2 sub-menus under /models/menu.py: system_sub_menu = [ ... .... ... .... ] file_sub_menu = [ ... ... ... ... ] Can i separate them into 2 files? Thanks Yaron
false
44,302,025
0
0
0
0
You have two options. First, you can put the two sets of items in two different files in the /models folder. Model files are executed in alphabetical order, so you can put together the final response.menu object in the second of the two files (any variables defined in the first model file will be available globally in the second file, without any need to import). Alternatively, you can put one of the sub-menus in a module (in the /modules folder) and simply import it where needed.
0
72
0
0
2017-06-01T08:12:00.000
python,linux,python-2.7,web2py
web2py - is it possible to import content of menu.py from another file?
1
1
1
44,313,012
0
1
0
This is my first post so I did some research before asking this question, but it was all in vain. I'm writing a python script for an Android application and I need to use the basic click() command in order to get deeper. Android 6.0.1 (Xiaomi Redmi Note 3 Pro), SDK installed for Android 6.0, python 3.6.1, Appium 1.0.2 + PyCharm. The element is located with no problems, but click() doesn't work; nothing happens. Part of my script: driver.find_element_by_id('com.socialnmobile.dictapps.notepad.color.note:id/main_btn1').click() I tried to use .tap() instead, but it says "AttributeError: 'WebElement' object has no attribute 'tap'". I would be very grateful for your help, because I'm stuck on it for good.
false
44,325,823
0
0
0
0
Try this: driver.find_element_by_id('main_btn1').click() Find the ID mentioned under the resource id in case you are using an Appium version less than 1.0.2. You are passing the whole package id com.socialnmobile.dictapps.notepad.color.note:id/main_btn1, which Appium won't detect because that is certainly not the element id. In case this doesn't work, please let me know the contents you see in the inspector.
0
305
0
0
2017-06-02T09:46:00.000
android,python,appium
Python+Appium+Android 6.0.1 - 'Click()' doesn't work
1
2
2
44,330,828
0
1
0
This is my first post so I did some research before asking this question, but it was all in vain. I'm writing a python script for an Android application and I need to use the basic click() command in order to get deeper. Android 6.0.1 (Xiaomi Redmi Note 3 Pro), SDK installed for Android 6.0, python 3.6.1, Appium 1.0.2 + PyCharm. The element is located with no problems, but click() doesn't work; nothing happens. Part of my script: driver.find_element_by_id('com.socialnmobile.dictapps.notepad.color.note:id/main_btn1').click() I tried to use .tap() instead, but it says "AttributeError: 'WebElement' object has no attribute 'tap'". I would be very grateful for your help, because I'm stuck on it for good.
false
44,325,823
0.099668
0
0
1
OK, after a long fight I came up with the solution. My smartphone, a Xiaomi Redmi Note 3 Pro, has, apart from the standard USB Debugging option in settings, another USB Debugging (security) option. It has to be enabled as well, because the second option protects the smartphone from remote moves. Regards.
0
305
0
0
2017-06-02T09:46:00.000
android,python,appium
Python+Appium+Android 6.0.1 - 'Click()' doesn't work
1
2
2
44,336,428
0
1
0
So, I created a simple site written in Python using Web.py. It's now live and running. I can access it through my computer that runs the server by typing one of this: http://0.0.0.0:8080/ or http://localhost:8080/. However, I can only access it through that computer that runs the Python Web.py server. I tried accessing it with another computer on the same network, but it gives me an error. Help?
false
44,328,855
0
0
0
0
You need to check your firewall. Try to disable it and it will work fine.
0
947
0
0
2017-06-02T12:21:00.000
python,python-2.7,network-programming,web.py
How to access Python web.py server in LAN?
1
1
2
50,786,355
0
1
0
I read the Flask docs; they said whenever you need to access the GET variables in the URL, you can just import the request object in your current python file. My question here is: if two users are hitting the same Flask app with the same URL and GET variables, how does Flask differentiate the request objects? Can someone tell me what is under the hood?
false
44,337,390
0
0
0
0
Just wanted to highlight one more fact about the request object. As per the documentation, it is a kind of proxy to objects that are local to a specific context. Imagine the context being the handling thread. A request comes in and the web server decides to spawn a new thread (or something else; the underlying object is capable of dealing with concurrency systems other than threads). When Flask starts its internal request handling, it figures out that the current thread is the active context and binds the current application and the WSGI environment to that context (thread). It does that in an intelligent way so that one application can invoke another application without breaking.
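A tiny example of what that buys you in practice: two users hitting this route concurrently each see their own request object, because the import resolves through the context-local proxy:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/greet")
def greet():
    # `request` is a proxy bound to the current handling context,
    # so concurrent calls never see each other's GET variables.
    name = request.args.get("name", "world")
    return "hello " + name
```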
0
553
0
0
2017-06-02T21:02:00.000
python,flask
Flask/Python: from flask import request
1
1
2
48,550,508
0
1
0
I was wondering if it is possible to get all the URL's from the history page of a person, perhaps with python (using selenium, or any other tool)?
true
44,354,833
1.2
0
0
2
No, that is not possible, for two reasons:
1. Python is a server-side language that is not executed in the user's browser.
2. Even if you did use a client-side language that runs in the user's browser (i.e. JavaScript), security restrictions prevent this.
Think of the implications if this were possible: would you want any company whose website you are accessing to be able to spy on you like that?
0
181
0
0
2017-06-04T13:55:00.000
python,web
Getting the user's browsing history's URLs
1
1
2
44,354,888
0
0
0
TL;DR: If I spawn 10 web requests, each on its own thread, with a CPU that has a 4 thread limit, is this okay or inefficient? Threads are IO bound so they sit idle while awaiting the server response (I believe). How does the CPU deal with more than 4 threads returning simultaneously? I've got a script that currently starts a new thread for every file I need to download (each located at a unique URL) through an http.client.HTTPSConnection. At max, I may need to spawn 730 threads. I have done this, since the threads are all IO bound work (downloading and saving to file), but I am not sure if they are executing in parallel or if the CPU is only executing a set at a time. Total run time for file sizes ranging from 20MB to 110MB was roughly 15 minutes. My CPU is quad-core with no Hyper-Threading. This means that it should only support 4 threads simultaneously at any given time. Since the work is IO bound and not CPU bound, am I still limited to only 4 simultaneous threads? I suppose what is confusing is that I am not sure what sequence of events takes place if, say, I send out just 1 request on 10 threads; what happens if they all return at the same time? Or how does the CPU choose which 4 to finish before moving on to the next available thread? And after all of this, if the CPU is only handling 4 threads at a time, I would imagine it is still smart to spawn as many IO threads as I need (since they will sit idle while waiting for the server response), right?
true
44,359,336
1.2
0
0
1
You can have significantly more than 4 IO-bound threads on a quad-core CPU. However, you do want to have some maximum. Even IO bound processes use the CPU some of the time. For example, when a packet is received, that packet needs to be handled to update TCP state. If you are reading from a socket and writing to a file, some CPU is required to actually copy the characters from the socket buffer to the file buffer under most circumstances. If you use TLS, CPU is typically required to decrypt and encrypt data. So even threads that are mostly doing IO use the CPU some. Eventually the small fraction of time that you are using the CPU will add up and consume the available CPU resources. Also, note that in Python, because of the global interpreter lock, you can only have one thread using the CPU to run python code at a time. So, the GIL would not typically be held while doing something like waiting for an outgoing connection. During that time other threads could be run. However, for some fraction of the time while reading and writing from a socket or file, the GIL will be held. It's likely with most common work loads that the performance of your application will reach a maximum when the fraction of time your threads need a CPU reaches one full CPU rather than four full CPUs. You may find that using asyncio or some other event-driven architecture provides better performance. When true this is typically because the event-driven model is better at reducing cross-thread contention for resources. In response to your question edit, I would not expect 10 threads to be a problem
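One common way to cap the thread count while still spawning far more than four IO-bound workers, as a sketch (the URL list and pool size are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def download(url):
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

urls = ["https://example.com/file%d" % i for i in range(730)]  # placeholder URLs
# Many more workers than CPU cores is fine for IO-bound work;
# tune max_workers until added threads stop helping.
with ThreadPoolExecutor(max_workers=50) as pool:
    sizes = list(pool.map(download, urls))
```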
0
269
0
2
2017-06-04T22:17:00.000
python,multithreading,cpu
Am I overspawning IO bound web scraping threads?
1
1
1
44,359,362
0
1
0
So I just bought a VPS server from Vultr. Then I went on ServerPilot, and installed it on my server. Now I can access, via SFTP, all the files on my server. But how can I access these files from my web-browser via Internet? I mean, when I type in the IP address of my Vultr Server, I land on the ServerPilot page "your app xxx is set up". Alright, but how can I access the other files I uploaded now? Thanks
false
44,376,270
0.197375
0
0
1
You can connect to your server/ServerPilot app via SSH/SFTP. FileZilla and Codeanywhere are options that allow you to do this.
0
110
0
0
2017-06-05T19:26:00.000
python,html,hosting,vps
Beginner VPS Vultr/ServerPilot -> How to change the homepage & access the files I uploaded?
1
1
1
46,166,962
0
0
0
I recently updated from Python 3.5 to Python 3.6 and am trying to use packages that I had previously downloaded, but they are not working for the updated version of Python. When I try to use pip, I use the command "pip install selenium" and get the message "Requirement already satisfied: selenium in /Users/Jeff/anaconda/lib/python3.5/site-packages" How do I add packages to the new version of Python?
false
44,396,972
0.197375
0
0
1
First, make sure that your packages do have compatibility with the version of Python you're looking to use. Next, run pip freeze > requirements.txt in the base directory of your Python project. This puts everything in a readable file to re-install from. If you know of any packages that require a certain version that you'll want to re-install, put package==x.x.x (where package is the package name and x.x.x is the version number) in the list of packages to make sure it downloads the correct version. Run pip uninstall -r requirements.txt -y to uninstall all packages. Afterwards, run pip install -r requirements.txt. This allows you to keep packages at the correct version for the ones you assign a version number in requirements.txt, while upgrading all others.
1
86
0
0
2017-06-06T18:10:00.000
python,python-3.x,pip
Install packages using pip for updated versions of python
1
1
1
44,397,097
0
0
0
I have a quick question in regards to the Netmiko module (based on the Paramiko module). I have a for loop that iterates through a bunch of Cisco boxes (around 40)...and once they complete I get the following each time a SSH connection establishes: SSH connection established to ip address:22 Interactive SSH session established This isn't in my print statements or anything, it's obviously hard coded within ConnectionHandler (which I use to make the SSH connections). This output really makes my output muddled and full of 40 extra lines I do not need. Is there any way I can get these removed from the output? Regards,
true
44,399,531
1.2
1
0
0
Found this answer out through reddit. Add the key 'verbose': False to your network device dictionary. More information on the Netmiko Standard Tutorial page
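A sketch of where that key goes (the device details are placeholders):

```python
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "ip": "10.0.0.1",
    "username": "admin",
    "password": "secret",
    "verbose": False,  # suppresses the "SSH connection established..." lines
}
conn = ConnectHandler(**device)
print(conn.send_command("show version"))
conn.disconnect()
```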
0
384
0
0
2017-06-06T20:45:00.000
python-3.x,ssh
Remove Netmiko Automatic Output
1
1
1
44,401,977
0
0
0
How do I send certificate authentication in a python POST request? For example, I used the following in a GET request: requests.get(url, params=params, timeout=60, cert=certs), where certs is the path to the certificate, and it worked fine. But requests.post(url_post, data=params, cert=certs, timeout=60) is not working; the error is an SSL authentication error.
false
44,452,143
0.197375
0
0
1
To send a certificate, you need the certificate which contains the public key, like server.crt. If you have this crt file then you can send it as r = requests.get('https://server.com', verify='server.crt'), or if you don't have that file then you can get it using the get_server_certificate method: cert = ssl.get_server_certificate(('server.com', 443), ssl_version=3). Then you can write it into a file and send it.
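Putting the answer's two fragments together (the host, endpoint and payload are placeholders):

```python
import ssl
import requests

params = {"key": "value"}  # placeholder payload

# fetch the server's certificate once and save it...
pem = ssl.get_server_certificate(("server.com", 443))
with open("server.crt", "w") as f:
    f.write(pem)

# ...then verify the POST against that saved certificate
r = requests.post("https://server.com/endpoint", data=params,
                  timeout=60, verify="server.crt")
```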
0
6,412
0
3
2017-06-09T07:45:00.000
python,python-requests
python post request with certificate
1
1
1
45,435,412
0
0
0
I'm working on a unit tests for an API client class. There is a class variable self.session that is supposed to hold the session. In my setup method for my test I create a new instance of the client class and then call its authenticate method. However when the tests themselves go to send requests using this object they all return 401 forbidden errors. If I move the authenticate call (but not the creation of the class) into the tests and out of setup everything works great, but I understand that that defeats the purpose of setup().
false
44,464,654
0
1
0
0
An example of the code you're talking about (with proprietary stuff removed, of course), might help clarify. The variable, self.session, is on the test class itself, rather than the instance? That sounds as if it might end up leaking state between your tests. Attaching it to the instance might help. Beyond that, I generally think it makes sense to move as much out of setUp methods as possible. Authentication is part of the important part of your test, and it should probably be done alongside all the other logic.
0
27
0
0
2017-06-09T18:35:00.000
python-3.x,python-requests,python-unittest
Lifetime of Request Sessions in Unit Testing
1
1
1
44,464,795
0
0
0
I am trying to import urllib.request on python, but when I try to do so, i get the following error: ImportError: No module named setuptools. I tried doing 'sudo apt-get install -y python-setuptools' , but after doing so too I am getting the same error. I am using PyCharm and my Python version is Python 2.7.12+.
false
44,473,829
0
0
0
0
Unless there is any specific requirement to use urllib. I suggest you use python-requests. Both do the same.
1
439
0
0
2017-06-10T13:28:00.000
python,pycharm
ImportError: No module named setuptools
1
1
1
44,473,989
0
0
0
I am using ET.parse(path) to parse a xml file and read from it. Does ET.parse auto closes the xml file after opening ? Is it a safe way to access the file for reading ?
false
44,481,930
-0.099668
0
0
-1
It is a standard way to read an XML file. When you give ET.parse() a path, it opens the file, reads the whole tree into memory, and closes the file itself, so you don't have to open or close anything manually.
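If you prefer the file's lifetime to be explicit anyway, ET.parse() also accepts an already-open file object, so you can manage it yourself (the path is a placeholder):

```python
import xml.etree.ElementTree as ET

with open("data.xml", "rb") as f:   # we own the handle; parse() won't close it
    tree = ET.parse(f)
root = tree.getroot()
```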
1
1,137
0
0
2017-06-11T08:41:00.000
python,python-2.7,elementtree,raspberry-pi3
Does ET.parse automatically open and close a XML file?
1
1
2
44,481,957
0
1
0
I'm using python Tornado to perform asynchronous requests to crawl certain websites and one of the things I want to know is if a URL results in a redirect or what it's inital status code is (301, 302, 200, etc.). However, right now I can't figure out a way to find that information out with a Tornado response. I know a requests response object has a history attribute which records the redirect history, is there something similar for Tornado?
false
44,508,737
0
0
0
0
Tornado's HTTP clients do not currently provide this information. Instead, you can pass follow_redirects=False and handle and record redirects yourself.
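A sketch of that manual approach (synchronous client shown for brevity; follow_redirects is an HTTPRequest option, and raise_error=False keeps the 3xx response from raising):

```python
from tornado import httpclient

client = httpclient.HTTPClient()
response = client.fetch("http://example.com/", follow_redirects=False,
                        raise_error=False)
print(response.code)                      # e.g. 301 or 302
print(response.headers.get("Location"))   # record where it redirects to
client.close()
```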
0
65
1
0
2017-06-12T21:08:00.000
python,tornado
Is there a way to get the redirect history from a Tornado response?
1
1
1
44,511,338
0
0
0
I'm new to Python and I'm trying to make a program that involves me logging into Gmail and iCloud, using Selenium. I've completed the Gmail part, so I know I'm not completely off track, but I can't seem to surmount the error that occurs when I try to locate the login/password fields on the iCloud website. I keep getting: NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="appleId"]"} I've attempted to use WebDriverWait, and I've done waits for like 30 seconds just to see if timing was the issue, but I keep getting the same error even if I try to locate the login/password fields using Xpath, ID name, CSS selector, etc.
false
44,553,860
0.379949
0
0
2
It's within an iframe so you need to have Selenium switch to it. driver.switch_to.frame('auth-frame') Once you do that you should be able to locate it by id or xpath.
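An expanded sketch of that flow, with an explicit wait around the frame switch (the timeouts are illustrative; the frame name and field id come from the discussion above):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.icloud.com/")

# wait until the login iframe exists, then switch into it
WebDriverWait(driver, 30).until(
    EC.frame_to_be_available_and_switch_to_it("auth-frame"))

apple_id = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.ID, "appleId")))
apple_id.send_keys("user@example.com")
```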
0
192
0
1
2017-06-14T20:08:00.000
python,html,selenium,xpath
iCloud Website - NoSuchElementException: Unable to Locate Element
1
1
1
44,554,005
0
0
0
I'm trying to load an XML file into Google BigQuery; can anyone please help me solve this? I know we can load JSON, CSV and AVRO files into BigQuery. I need a suggestion: is there any way I can load an XML file into BigQuery?
false
44,588,770
0.197375
0
1
1
The easiest option is probably to convert your XML file either to CSV or to JSON and then load it. Without knowing the size and shape of your data it's hard to make a recommendation, but you can find a variety of converters if you search online for them.
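As one possible shape of that conversion, here is a hedged sketch that flattens a simple XML file into newline-delimited JSON, which BigQuery can load; the element and file names are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

tree = ET.parse("input.xml")
with open("output.json", "w") as out:
    for record in tree.getroot().findall("record"):   # hypothetical element name
        row = {child.tag: child.text for child in record}
        out.write(json.dumps(row) + "\n")             # one JSON object per line
```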
0
834
0
0
2017-06-16T12:00:00.000
xml,python-2.7,google-bigquery
how to load xml file into big query
1
1
1
44,590,333
0
0
0
I am currently trying to figure out a way to know who invited a user. From the official docs, I would think that the member class would have an attribute showing who invited them, but it doesn't. I have a very faint idea of a possible method to get the user who invited and that would be to get all invites in the server then get the number of uses, when someone joins the server, it checks to see the invite that has gone up a use. But I don't know if this is the most efficient method or at least the used method.
false
44,594,309
0.066568
1
0
1
In Discord, you're never going to 100% sure who invited the user. Using Invite, you know who created the invite. Using on_member_join, you know who joined. So, yes, you could have to check invites and see which invite got revoked. However, you will never know for sure who invited since anyone can paste the same invite link anywhere.
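A hedged sketch of the use-count bookkeeping that approach implies, written against a recent discord.py API (method and attribute names have shifted across discord.py versions, so treat this as illustrative):

```python
import discord

client = discord.Client(intents=discord.Intents.all())
invites_before = {}  # guild id -> {invite code: uses}

@client.event
async def on_ready():
    for guild in client.guilds:
        invites_before[guild.id] = {i.code: i.uses for i in await guild.invites()}

@client.event
async def on_member_join(member):
    before = invites_before.get(member.guild.id, {})
    current = await member.guild.invites()
    for invite in current:
        if invite.uses > before.get(invite.code, 0):
            print(member, "probably joined via", invite.code, "from", invite.inviter)
    invites_before[member.guild.id] = {i.code: i.uses for i in current}
```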
0
13,355
0
3
2017-06-16T16:50:00.000
python,discord,discord.py
Discord.py show who invited a user
1
2
3
44,830,944
0
0
0
I am currently trying to figure out a way to know who invited a user. From the official docs, I would think that the member class would have an attribute showing who invited them, but it doesn't. I have a very faint idea of a possible method to get the user who invited and that would be to get all invites in the server then get the number of uses, when someone joins the server, it checks to see the invite that has gone up a use. But I don't know if this is the most efficient method or at least the used method.
false
44,594,309
0.066568
1
0
1
Watching the number of uses an invite has had, or for when they run out of uses and are revoked, is the only way to see how a user was invited to the server.
0
13,355
0
3
2017-06-16T16:50:00.000
python,discord,discord.py
Discord.py show who invited a user
1
2
3
45,571,128
0
0
0
How do I sniff packets that are only outbound packets? I tried to sniff only by destination port but it didn't succeed at all.
false
44,618,843
0
0
0
0
Maybe you can get your device MAC address and filter any packets with that address as source address.
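That idea as a scapy sketch (the interface name is a placeholder):

```python
from scapy.all import sniff, get_if_hwaddr

my_mac = get_if_hwaddr("eth0")  # this machine's MAC on the chosen interface

# keep only frames whose source MAC is ours, i.e. outbound traffic
sniff(lfilter=lambda pkt: pkt.src == my_mac,
      prn=lambda pkt: pkt.summary())
```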
0
736
0
1
2017-06-18T19:41:00.000
python-3.x,scapy,packet-sniffers,sniffing
Python(scapy): how to sniff packets that are only outbound packets
1
1
2
44,703,883
0
0
0
I'm writing multiple add-ons for a personal build of KODI. What I am trying to achieve is:
- Service (Add-on A) will authenticate the box using its MAC address.
- Service (Add-on A) will save a token (T1).
- Service (Add-on B) will use T1 and load movies if (T1 != None).
BUT xbmcplugin.setSetting("token") AND xbmcplugin.getSetting("token") save the value in the context of the add-on where they were called. HOW do I achieve saving global values in KODI with Python?
true
44,622,280
1.2
0
0
1
You can use window properties for that. Window 10000 is one of the main windows, so it will always exist. Set it with xbmcgui.Window(10000).setProperty('myProperty', 'myValue') and read it with xbmcgui.Window(10000).getProperty('myProperty').
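Spelled out as a runnable fragment (the property name and value are examples):

```python
import xbmcgui

home = xbmcgui.Window(10000)                 # window 10000 always exists
home.setProperty('token', 'my-token-value')  # written by Add-on A
token = home.getProperty('token')            # read back by Add-on B
```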
0
570
0
0
2017-06-19T04:29:00.000
python,xbmc,kodi
Save settings value accessible globally in KODI
1
1
1
44,646,296
0
0
0
I am trying to build a twitter chat bot which is interactive and replies according to incoming messages from users. Webhook documentation is unclear on how do I receive incoming message notifications. I'm using python.
true
44,632,982
1.2
1
0
0
Answering my own question. A webhook isn't needed; after searching for long hours in the Twitter documentation, I made a well-working DM bot. It uses the Twitter Stream API and the StreamListener class from tweepy: whenever a DM is received, I send a request to the REST API, which sends a DM to the mentioned recipient.
0
662
0
0
2017-06-19T14:15:00.000
python,api,twitter,twitter-oauth,chatbot
Does twitter support webhooks for chatbots or should i use Stream API?
1
1
2
44,717,595
0
0
0
When I execute the below command, it usually asks for user input. How can I automate the user interaction in a python script? os.system("openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095")
false
44,667,086
0
0
0
0
For that specific command, you should not need any automation tool to feed input to your script. Piping a file to it should allow it to execute without user interaction (like Coldspeed said in his comment). Most command line interfaces allow parametrized execution and most parameters you can either build into your script or read them from a config file somewhere. For those command line tools that require "real" user interaction (i.e. you can't pipe the input, parametrize it or somehow build it into the command itself), I used the pexpect module with great success.
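For the command in the question, a hedged pexpect sketch; the exact prompt strings depend on your OpenSSL configuration and may need adjusting:

```python
import pexpect

child = pexpect.spawn(
    "openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095")
child.expect("Country Name")
child.sendline("US")
child.expect("State or Province Name")
child.sendline("NJ")
# ...answer the remaining prompts the same way, then wait for the end...
child.expect(pexpect.EOF)
```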
0
322
0
0
2017-06-21T04:52:00.000
python,user-interface,automation
How can we automate the user interaction in python script
1
1
2
44,670,539
0
1
0
I want to create a web app for my company that functions like a tool suite for non-technical employees to use. I'm using Python, Selenium WebDriver, BeautifulSoup, Scrapy, Django. Is it possible and is this the right approach?
false
44,668,401
0.197375
0
0
1
"The idea is to create a variety of methods that can be run by clicking a button and/or inserting content in inputs, and having the tests / functions run and returning the specified results." Maybe Flask can help you here. It has a nice functionality to route urls to specific functions or methods in your code. It provides a convenient way to bind actions to code. You can search the web for how you can leverage Flask to cater to your specific needs, but here I just wanted to convey that it would help.
0
322
0
0
2017-06-21T06:32:00.000
python,django,selenium,beautifulsoup,scrapy
Selenium / Web crawling / Web scraping App in Python
1
1
1
44,668,515
0
0
0
I am running some code with selenium using python, and I figured out that I need to dynamically change the UserAgent after I already created the webdriver. Any advice if it is possible and how this could be done? Just to highlight - I want to change it on the fly, after almost each GET or POST request I send
true
44,678,133
1.2
0
0
0
I would go with creating a new driver and copy all the necessary attributes from the old driver except the user agent.
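A sketch of that recreate-and-copy approach; the user-agent strings are placeholders, and only cookies are carried over here (copy whatever other state you need):

```python
from selenium import webdriver

def make_driver(user_agent):
    opts = webdriver.ChromeOptions()
    opts.add_argument("--user-agent=" + user_agent)
    # older Selenium releases spell this keyword chrome_options
    return webdriver.Chrome(options=opts)

driver = make_driver("agent-one")
driver.get("https://example.com/")
cookies = driver.get_cookies()       # state worth carrying over
driver.quit()

driver = make_driver("agent-two")
driver.get("https://example.com/")   # must be on the domain before adding cookies
for c in cookies:
    driver.add_cookie(c)
```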
0
1,149
0
0
2017-06-21T13:53:00.000
python,selenium,selenium-chromedriver,user-agent
Python selenium with chrome webdriver - change user agent
1
1
1
44,678,533
0
0
0
I am trying to make a filter for packets that contain HTTP data, yet I don't have a clue on how to do so. I.E. Is there a way to filter packets using Scapy that are only HTTP?
false
44,679,656
0.066568
0
0
1
Yes, you can. You can filter by TCP port 80 (checking each packet or using BPF) and then check the TCP payload to ensure there is an HTTP header.
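One way to express that with scapy, as a sketch (it only checks for the most common request methods and the response prefix):

```python
from scapy.all import sniff, TCP, Raw

HTTP_PREFIXES = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"HTTP/")

def is_http(pkt):
    return (pkt.haslayer(TCP) and pkt.haslayer(Raw)
            and pkt[Raw].load.startswith(HTTP_PREFIXES))

sniff(filter="tcp port 80",          # BPF pre-filter on the port
      lfilter=is_http,               # then confirm an HTTP header in the payload
      prn=lambda p: p.summary())
```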
0
3,860
0
2
2017-06-21T14:58:00.000
python,http,scapy
Using Scapy to filter HTTP packets
1
1
3
44,703,791
0
1
0
I'm looking for ways to scrape all tables on a certain website. The tables are formatted exactly the same in all subpages. The problem is, the urls of those subpages are in this way: url1 = 'http.../Tom', url2 = 'http.../Mary', url3 = 'http.../Jason', such that I cannot set a loop by altering the url incrementally. Are there any possible ways to solve this by pandas?
false
44,703,138
0
0
0
0
Another idea would be to use BeautifulSoup library first and get all the table elements from a webpage and then apply pd.read_html()
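A sketch combining the two libraries for the named subpages (the base URL and the name list are placeholders; in practice you would harvest the links from an index page first):

```python
import pandas as pd
import requests
from bs4 import BeautifulSoup

base = "http://example.com/"
names = ["Tom", "Mary", "Jason"]   # placeholder subpage names

frames = []
for name in names:
    html = requests.get(base + name).text
    soup = BeautifulSoup(html, "html.parser")
    for table in soup.find_all("table"):
        frames.extend(pd.read_html(str(table)))   # each table -> DataFrame(s)

all_data = pd.concat(frames, ignore_index=True)
```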
0
424
0
0
2017-06-22T15:05:00.000
python,pandas,beautifulsoup
Is it possible to use pandas to scrape html tables over multiple web pages?
1
1
2
44,703,767
0
0
0
I have a webhook that handles any sms messages sent to my Twilio number. However, this webhook only works if there is text in the message (there will be a body in the GET request). Is it possible to parse a message if it is a location message? e.g. if I send my current location to my Twilio number and it redirects this message as a GET request to the webhook, could I possibly retrieve that location? This is what my webhook receives if I send my current location on an iPhone: at=info method=GET path="/sms/?ToCountry=US&MediaContentType0=text/x-vcard&ToState=NJ&SmsMessageSid=MMde62b3369705a8f65f18abe5b7387c2b&NumMedia=1&ToCity=NEWARK&FromZip=07920&SmsSid=MMde62b3369705a8f65f18abe5b7387c2b&FromState=NJ&SmsStatus=received&FromCity=SOMERVILLE&Body=&FromCountry=US&To=%2B18627019482&ToZip=07102&NumSegments=1&MessageSid=MMde62b3369705a8f65f18abe5b7387c2b&AccountSid=ACe72df68a68db79d9a4ac6248df6e981e&From=%2B19083925806&MediaUrl0=https://api.twilio.com/2010-04-01/Accounts/ACe72df68a68db79d9a4ac6248df6e981e/Messages/MMde62b3369705a8f65f18abe5b7387c2b/Media/MEcd56717ce17f3a320b06c4ee11df2243&ApiVersion=2010-04-01" For comparison, here's a normal text message: at=info method=GET path="/sms/?ToCountry=US&ToState=NJ&SmsMessageSid=SM4767dabb915fae749c7d5b59d6f655a2&NumMedia=0&ToCity=NEWARK&FromZip=07920&SmsSid=SM4767dabb915fae749c7d5b59d6f655a2&FromState=NJ&SmsStatus=received&FromCity=SOMERVILLE&Body=Denver+E+union&FromCountry=US&To=%2B18627019482&ToZip=07102&NumSegments=1&MessageSid=SM4767dabb915fae749c7d5b59d6f655a2&AccountSid=ACe72df68a68db79d9a4ac6248df6e981e&From=%2B19083925806&ApiVersion=2010-04-01" In the normal sms message, I can parse out the Body=Denver+E+union to get the message, but I'm not sure you could do anything with the content of the location message. If I can't get the location, what are some other easy ways I could send a parseable location?
false
44,728,566
0.197375
1
0
2
I solved a similar problem by creating a basic webpage which uses the HTML5 geolocation function to get lat/lng of the phone. It then submits coordinates to a php script via AJAX. My server geocodes the employees location, calculates travelling time to next job and sends the customer an SMS with ETA information using the Twilio API. You could bypass Twilio altogether and get your server to make the request direct to your webhook, or even via the AJAX call if it's all on the same domain. All depends what you are trying to achieve I guess.
0
525
0
0
2017-06-23T19:19:00.000
python,iphone,location,twilio
iPhone "Send My Current Location" to Twilio
1
1
2
44,731,799
0
1
0
Just wondering if anyone could explain why I navigate to a webpage using Chrome and the request headers include Accept, Accept-Encoding, Accept-Language, Connection, Cookie, Host, Referer, Upgrade-Insecure-Request, and User-Agent but when I make a request via Python and print request.headers it only returns Connection, Accept-Encoding, Accept, and User-Agent even if I set the User-Agent to the same one I see in Chrome. Also I'm wondering if it's possible to return those request headers I see in Chrome rather than those I see in Python. Thank you.
false
44,742,686
0
0
0
0
You're using two different libraries (Chrome's internal HTTP library and requests). It's very rare for two unrelated libraries to send the same set of headers, especially when one is from a browser. You could manually set those headers in requests, but I'm not sure what you're trying to do.
0
498
0
0
2017-06-25T02:35:00.000
python,python-2.7,google-chrome,python-requests,request-headers
Python request.headers differs from what I see in Chrome
1
1
1
44,744,069
0
0
0
I'm running automated tests with Selenium and Python on a MacBook with two monitors. The big issue is that the test windows keep appearing wherever I am working. E.g. the tests start on monitor A while I am googling or reporting bugs on monitor B; when a test tears down and sets up again, it opens on monitor B. It's very frustrating and keeps me from working while the tests are running. I am looking for a way to command the tests to stay in one place, or on one monitor.
false
44,783,953
0.197375
1
0
2
Run the tests in a virtual machine. They will appear in the window you've logged into the VM with, which you can put anywhere you like or minimize/iconify and get on with your work. (the actual solution I used at work was to hire a junior test engineer to run and expand our Selenium tests, but that doesn't always apply)
0
43
0
2
2017-06-27T15:15:00.000
python,macos,selenium,automated-tests
How to command automated tests to stay at one place of the monitor
1
1
2
44,784,204
0
0
0
Are there any TFTP libraries in Python to allow the PUT transfer of binary file to an IP address. Ideally I would rather use a built in library if this is not possible then calling the cmd via python would be acceptable. Usually if TFTP is installed in windows the command in command prompt would be: tftp -i xxx.xxx.xxx.xxx put example_filename.bin One thing to note is that python is 32bit and running on a 64bit machine. I've been unable to run tftp using subprocess.
false
44,796,047
0.53705
0
0
3
You can use TFTPy TFTPy is a pure Python implementation of the Trivial FTP protocol. TFTPy is a TFTP library for the Python programming language. It includes client and server classes, with sample implementations. Hooks are included for easy inclusion in a UI for populating progress indicators. It supports RFCs 1350, 2347, 2348 and the tsize option from RFC 2349.
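A minimal upload sketch with tftpy (the host and file names are placeholders):

```python
import tftpy

client = tftpy.TftpClient("192.168.1.10", 69)
# first argument is the remote filename, second the local source file
client.upload("example_filename.bin", "example_filename.bin")
```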
0
2,164
1
2
2017-06-28T07:22:00.000
python,tftp
Running TFTP client/library in python
1
1
1
44,796,119
0
0
0
I want to choose any given node and build a subgraph for that node based on what distance degree I want. I need to input a node's ID, and specify the degree ( 1 = only direct friends, 2 = direct friends, and friends of friends, 3 = direct friends, friends of friends, friends of friends of friends and so on...) and generate a subgraph for that node's social network to that degree. Does anyone know a good way to do this in networkx?
false
44,810,388
0.197375
0
0
1
I found the feature I was looking for right after I posted this. It is the networkx ego_graph function: networkx.github.io/documentation/networkx-1.10/reference/
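Usage looks like this:

```python
import networkx as nx

G = nx.karate_club_graph()           # any graph; this sample ships with networkx
sub = nx.ego_graph(G, 0, radius=2)   # node 0 plus friends and friends-of-friends
print(sub.number_of_nodes())
```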
0
169
0
1
2017-06-28T18:50:00.000
python,social-networking,networkx
Generating NetworkX subgraphs based on distance
1
1
1
44,811,313
0
0
0
I have a python program that handles sensor data. Most of the functionality takes place locally and requires no network connection. The one thing I want it to do with a client is send a continuous stream of data IF a client is connected, but everything else should run regardless of whether there is a connected client or not. The only setups I've managed before all required a client to connect before anything else could happen. How do I set this up so my program doesn't depend on first having a client connected before it does anything?
true
44,813,676
1.2
0
0
0
Try to connect, and if it fails after n retries, consider you don't have a connection. There are many strategies to setup retries, so search about it to figure what is best for you. Then if you can't connect, just keep going on after the failure... That's about it.
0
41
0
0
2017-06-28T22:43:00.000
python,networking,dependencies
Python run certain functions with or without a client connection
1
1
1
44,813,734
0
1
0
I have a python script that continuously generates JSON objects { time: "timestamp", behaviour: 'random behaviour'} and displays them on stdout. Now I want to build a web app and modify the script so that I can read continuous data on my web page. I don't know where to start. In detail: I want the web UI to have an on/off button; clicking on starts reading the behaviour and clicking off stops it. I am very new to programming, so please suggest topics and approaches I need to study/look at to make this app. Thanks.
false
44,829,186
0
0
0
0
Here's a simple way I'd go about doing it:
1. Create a backend service that exposes the python script's output via HTTP. You can use any familiar backend programming language (python, java, .net, etc.) to host a simple HTTP web server and expose the data.
2. Create a simple html page that makes an ajax call to your backend service, pulls the data, and shows it on the page.
3. Later add your buttons, and make the page nicer and responsive on other devices.
Topics to read depend on what you are missing above. My best guess based on your question is that you are not familiar with front-end programming, so read up on html, js, css and ajax.
0
38
0
0
2017-06-29T15:21:00.000
python,rest,reactjs,web-applications,websocket
Web App to read data continously from another app
1
1
1
44,830,106
0
1
0
I have built a MITM with python and scapy.I want to make the "victim" device be redirected to a specific page each time it tried to access a website. Any suggestions on how to do it? *Keep in mind that all the traffic from the device already passes through my machine before being routed.
true
44,851,959
1.2
1
0
0
You can directly answer HTTP requests to pages different to that specific webpage with HTTP redirections (e.g. HTTP 302). Moreover, you should only route packets going to the desired webpage and block the rest (you can do so with a firewall such as iptables).
0
141
0
0
2017-06-30T17:24:00.000
python,scapy
Python : Redirecting device with MITM
1
1
1
44,997,621
0
1
0
I need to use selenium for a scraping job with a heap of JavaScript-generated webpages. I can open several instances of the webdriver at a time and pass the websites to the instances using a queue. It can be done in multiple ways, though. I've experimented with both the threading module and the Pool and Process approaches from the multiprocessing module. All work and will do the job quite fast. This leaves me wondering: which module is generally preferred in a situation like this?
false
44,860,570
0.197375
0
0
1
The main factor in CPython for choosing between Threads and Processes is your type of workload. If you have an I/O bound workload, where most of your application's time is spent waiting for data to come in or go out, then your best choice is Threads. If, instead, your application spends most of its time using the CPU, then Processes are your tool of choice. This is due to the fact that, in CPython (the most commonly used interpreter), only one Thread at a time can make use of the CPU cores. For more information regarding this limitation, just read about the Global Interpreter Lock (GIL). There is another advantage of Processes which is usually overlooked: Processes allow you to achieve a greater degree of isolation. This means that if you have some unstable code (in your case it could be the scraping logic) which might hang or crash badly, encapsulating it in a separate Process allows your service to detect the anomaly and recover (kill the Process and restart it).
0
632
0
0
2017-07-01T11:49:00.000
multithreading,python-3.x,selenium-webdriver,multiprocessing
Threading or multiprocessing for webscraping with selenium
1
1
1
44,864,054
0
0
0
I couldn't find any API to configure my Codeship project :( I have multiple repositories and would like to do the following steps programmatically:
- List the repositories of my team
- Create a new repo pipeline (each microservice has a separate repo)
- Edit the pipeline step scripts for a newly created project (and for multiple projects "at once")
- Customize a project's environment variables
- Delete an existing project
Is there any way to do it? How do I authenticate? Even if I have to do it by recording network curls, is there a better way to authenticate than pasting an existing cookie copied from my own browsing? OAuth, user-password as header, etc.? I'm trying to write a python bot to do it, but will take any example code available!
false
44,869,606
0
1
0
0
Just noticed your question here on SO and figured I'd answer it here as well as your support ticket (so others can see the general answer). We're actively working on a brand new API that will allow access to both Basic and Pro projects. The target for general availability is currently at the beginning of Aug '17. We'll be having a closed beta before then (where you're on the list)
0
30
0
0
2017-07-02T10:05:00.000
python,web-scraping,codeship
configuring codeship via code
1
1
1
45,087,672
0
1
0
I can import BeautifulSoup using Python 2.7 but not when I try using Python 3.6, even though BeautifulSoup claims to work on both? Sorry this is my first question so apologies if it's trivial or if I haven't used the proper conventions. from BeautifulSoup import * Traceback (most recent call last): File "", line 1, in File "/Users/tobiecusson/Desktop/Python/Ch3-PythonForWebData/AssignmentWeek4/BeautifulSoup.py", line 448 raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__.__name__, attr) ^ SyntaxError: invalid syntax from BS4 import BeautifulSoup Traceback (most recent call last): File "", line 1, in File "/Users/tobiecusson/Desktop/Python/Ch3-PythonForWebData/AssignmentWeek4/BS4.py", line 448 raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__.__name__, attr) ^ SyntaxError: invalid syntax
false
44,870,168
0
0
0
0
Just noticed - you're importing from BeautifulSoup. Try from bs4 import *
0
289
0
0
2017-07-02T11:11:00.000
beautifulsoup,python-3.6
Python 3.6 Beautiful Soup - Attribute Error
1
1
1
44,872,494
0
0
0
I am trying to get selenium to use chromedriver on Mac. I have downloaded the Mac version of chromedriver and added it to the same folder as my python file. I am then using: driver = webdriver.Chrome(), however it doesn't seem to be opening. This works fine in Windows but is just not working on Mac. Anyone got any ideas? Thanks
false
44,870,294
1
0
0
6
You might need to install it with : brew cask install chromedriver or brew install chromedriver and then do which chromedriver You will get the relevant path.
0
15,055
0
2
2017-07-02T11:26:00.000
python,macos,selenium,webdriver,selenium-chromedriver
Selenium chromedriver in relative path for mac and python
1
1
2
53,889,254
0
0
0
I want to use Selenium on Python but I have an alert message: driver-webdriver.Chrome("D:\Selenium\Chrome\chromedriver.exe") NameError: name 'driver' is not defined I have installed the Chrome Driver, what else must I do ?
false
44,944,636
0.066568
0
0
1
chromedriver.exe must be on the Python path; probably Python now expects the driver to exist at "D:\Selenium\Chrome\chromedriver.exe" but it does not. You could try adding the chromedriver.exe path to the Windows environment PATH variable, or add the path to os.path in Python, or add the driver to the folder of the Python script.
0
20,826
0
0
2017-07-06T09:14:00.000
python,selenium,selenium-webdriver,selenium-chromedriver
"Driver is not defined" Python/Selenium
1
1
3
44,944,883
0
1
0
I have a web app built with Django; the front end is built with React. I am trying to test BDD with behave and selenium. I run the tests with the Chrome web driver and with the phantomjs one, but the tests only pass using Chrome. I captured a screenshot when they run on PhantomJS and saw that the page is not fully rendered. Please give some suggestions about this issue. Do I need to do further configuration to test with phantomjs? Thank you.
false
44,967,663
0
0
0
0
You have 3 options:
1. Implicit wait: the driver will wait up to x seconds every time it searches for an element, so if your element appears after 4 seconds you will get it as soon as it appears, provided the implicit wait is set greater than 4 seconds. Use driver.implicitly_wait(x) after creating the instance.
2. Explicit wait: the driver polls for a specific condition (for example, the presence of an element) for up to x seconds and returns as soon as the condition is met, so an element that appears after 4 seconds is returned with no extra delay. Use WebDriverWait(driver, x).until(...) for this.
3. time.sleep: you can put your program to sleep and wait for the page to fully load after submitting an action. Use time.sleep(x) after submitting a form, clicking a button or loading a page.
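The three options as runnable sketches (the URL, locator and timeouts are examples):

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.PhantomJS()
driver.get("http://localhost:8000/")          # placeholder URL

driver.implicitly_wait(10)                    # 1) implicit wait, set once

element = WebDriverWait(driver, 10).until(    # 2) explicit wait on a condition
    EC.presence_of_element_located((By.ID, "content")))  # placeholder locator

time.sleep(5)                                 # 3) blunt fixed sleep
```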
0
105
0
0
2017-07-07T09:45:00.000
python,selenium,phantomjs,bdd
BDD - Test pass on chrome but not on Phantomjs
1
3
3
44,968,416
0
1
0
I have a web app built by Django, front-end is built by React. I tried to test bdd with behave and selenium. I run a test with Chrome web driver and phantomjs one but the tests only passed using chrome. I captured a screenshot when it runs on phantom and saw that the page is not fully rendered. Please give some suggestions about this issue. Do I need do further configuration to test with phantomjs. Thank you.
false
44,967,663
0
0
0
0
Try adding an explicit wait on a locator that lives in the part of the page that is not rendered on PhantomJS (e.g. wait for the presence of that element before asserting anything).
0
105
0
0
2017-07-07T09:45:00.000
python,selenium,phantomjs,bdd
BDD - Test pass on chrome but not on Phantomjs
1
3
3
44,968,229
0
1
0
I have a web app built by Django, front-end is built by React. I tried to test bdd with behave and selenium. I run a test with Chrome web driver and phantomjs one but the tests only passed using chrome. I captured a screenshot when it runs on phantom and saw that the page is not fully rendered. Please give some suggestions about this issue. Do I need do further configuration to test with phantomjs. Thank you.
false
44,967,663
0.066568
0
0
1
This is a common problem with PhantomJS (the page not being fully rendered), and often isn't something that can be remedied with explicit/implicit waits. Add a long (5 second) sleep to your code and take another screenshot. If the page is fully rendered, follow @Alex Lucaci's instructions for adding (ideally) explicit waits. If the page still isn't fully rendered, PhantomJS just won't work for you in this case. Personally, I would advise against using PhantomJS at all, as it is problematic in a myriad of ways, but also because why would you test on a browser that literally no one uses as their actual browser?
0
105
0
0
2017-07-07T09:45:00.000
python,selenium,phantomjs,bdd
BDD - Test pass on chrome but not on Phantomjs
1
3
3
44,972,125
0
1
0
how to add trigger s3 bucket to lambda function with boto3, then I want attach that lambda function to dynamically created s3 buckets using programmatically(boto3)
true
44,982,302
1.2
0
0
1
Three steps I followed: 1) connected to AWS Lambda with boto3 and used the add_permission API so S3 is allowed to invoke the function; 2) also called get_policy to verify the permission took; 3) connected to S3 with the boto3 resource and configured the BucketNotification API, putting a LambdaFunctionConfigurations entry.
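A condensed sketch of those three steps (the bucket name, function ARN and statement id are placeholders):

    import boto3

    lambda_client = boto3.client('lambda')
    s3 = boto3.resource('s3')
    bucket = 'my-new-bucket'  # placeholder
    fn_arn = 'arn:aws:lambda:us-east-1:123456789012:function:my-fn'  # placeholder

    # 1) allow S3 to invoke the function
    lambda_client.add_permission(
        FunctionName=fn_arn,
        StatementId='s3-invoke',
        Action='lambda:InvokeFunction',
        Principal='s3.amazonaws.com',
        SourceArn='arn:aws:s3:::' + bucket,
    )

    # 2) confirm the policy took
    print(lambda_client.get_policy(FunctionName=fn_arn)['Policy'])

    # 3) wire the bucket notification to the function
    s3.BucketNotification(bucket).put(NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': fn_arn,
            'Events': ['s3:ObjectCreated:*'],
        }]
    })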
0
3,218
0
2
2017-07-08T03:50:00.000
python-2.7,amazon-web-services,amazon-s3,aws-lambda,aws-sdk
how to add the trigger s3 bucket to lambda function dynamically(python boto3 API)
1
1
3
45,005,925
0
1
0
I have a large application and I am using Headless Chrome, Selenium and Python to test each module. I want to go through each module and get all the JS console errors produced while inside that specific module. However, since each module is inside a different test case and each case executes in a separate session, the script first has to login on every test. The login process itself produces a number of errors that show up in the console. When testing each module I don't want the unrelated login errors to appear in the log. Basically, clear anything that is in the logs right now -> go to the module and do something -> get logs that have been added to the console. Is this not possible? I tried doing driver.execute_script("console.clear()") but the messages in the console were not removed and the login-related messages were still showing after doing something and printing the logs.
false
44,991,009
0.099668
0
0
1
This thread is a few years old, but in case anyone else finds themselves here trying to solve a similar problem: I also tried using driver.execute_script('console.clear()') to clear the console log between my login process and the page I wanted to check to no avail. It turns out that calling driver.get_log('browser') returns the browser log and also clears it. After navigating through pages for which you want to ignore the console logs, you can clear them with something like _ = driver.get_log('browser')
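So the flow becomes, roughly (a sketch; the helper functions are hypothetical stand-ins for your own steps):

    driver.get(login_url)
    do_login(driver)                     # hypothetical login helper

    _ = driver.get_log('browser')        # read-and-discard: clears login noise

    driver.get(module_url)
    exercise_module(driver)              # hypothetical module interaction

    errors = driver.get_log('browser')   # only entries logged after the clear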
0
3,566
0
2
2017-07-08T21:44:00.000
javascript,python,google-chrome,selenium
Clear Chrome browser logs in Selenium/Python
1
1
2
70,308,976
0
1
0
I am a beginner trying to find a way to stream audio on a local server. I have a Python script that creates some binary data from a robot's microphone, and I want to send this data to be displayed on a local Go server I created. I read somewhere that web sockets could be a solution. But what's the simplest way to upload the audio buffers from the Python script? And how would I retrieve this raw binary data so that it can be streamed from the web app? Many many thanks.
false
45,014,245
0
0
0
0
There is no single "Best" way. If the protocol has to go over ports 80/443 on the open internet, you could use web-sockets. You could also POST base64 encoded chunks of data from python back to your server. If the robot and server are on the same network, you could send UDP packets from the robot to your server. (Usually a missing packet or two on audio is not a problem). Even if you have a web based Go server, you can still fire off a go routine to listen on UDP for incoming packets. If you could be more specific, maybe I or someone else could give a better answer?
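For the same-network case, a minimal sketch of the UDP option on the Python side (the host/port and the chunk source are placeholders):

    import socket

    GO_SERVER = ('192.168.1.50', 9999)  # placeholder host/port of the Go server
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    for chunk in read_microphone_chunks():  # hypothetical audio-bytes generator
        sock.sendto(chunk, GO_SERVER)       # keep datagrams comfortably small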
0
268
0
0
2017-07-10T14:08:00.000
python,go,stream,buffer,transfer
Transfer audio buffers from a Python script to a Go server?
1
1
1
45,017,089
0
1
0
I built a program to go on a website and click a link which automatically downloads a file. It works when I run it on my mac (Chrome), but when I use the exact same code on AWS, nothing gets downloaded. If it helps at all, I tried to expedite the process and found the raw link, I could download that via wget but not through python (on any computer).
false
45,022,643
0
0
0
0
There are many things that could be happening; you should check your logs and add them to the question. One thing I am sure of is that you aren't using any virtual display, and you don't have a display on AWS, so you should google for "python running selenium headless".
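One common fix is headless Chrome; a sketch (assumes a Chrome/chromedriver version recent enough to support --headless, otherwise use xvfb/pyvirtualdisplay):

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument('--headless')
    options.add_argument('--no-sandbox')  # often needed on bare servers
    driver = webdriver.Chrome(chrome_options=options)
    driver.get('https://www.example.com')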
0
39
0
1
2017-07-10T22:32:00.000
python,google-chrome,amazon-web-services,selenium,selenium-chromedriver
Link doesn't work on AWS with Selenium
1
1
1
45,022,691
0
0
0
I just recently bought an Acer Chromebook 11 and I would like to do some Python programming on it. How do I run Python from a USB stick on an Acer Chromebook 11? (Also, I don't have access to wifi at the place I want to use it.)
false
45,037,029
0
0
0
0
On some Chromebooks, like the one I'm using now, there is a Linux (Beta) option in the settings menu. Alternatively, you can use repl.it, although be aware that the code runs server-side there, so playing sound will not work and a geocoder will report the server's IP address (in New York) rather than yours.
1
455
0
0
2017-07-11T14:13:00.000
python,google-chrome-os
Programming with Python using Chrome OS
1
1
2
58,956,274
0
0
0
So I've decided to learn Python and after getting a handle on the basic syntax of the language, decided to write a "practice" program that utilizes various modules. I have a basic curses interface made already, but before I get too far I want to make sure that I can redirect standard input and output over a network connection. In effect, I want to be able to "serve" this curses application over a TCP/IP connection. Is this possible and if so, how can I redirect the input and output of curses over a network socket?
true
45,062,817
1.2
0
0
2
This probably won't work well. curses has to know what sort of terminal (or terminal emulator, these days) it's talking to, in order to choose the appropriate control characters for working with it. If you simply redirect stdin/stdout, it's going to have no way of knowing what's at the other end of the connection. The normal way of doing something like this is to leave the program's stdin/stdout alone, and just run it over a remote login. The remote access software (telnet, ssh, or whatever) will take care of identifying the remote terminal type, and letting the program know about it via environment variables.
0
779
1
2
2017-07-12T16:09:00.000
python,sockets,curses
Python3 - curses input/output over a network socket?
1
1
1
45,063,103
0
0
0
I want to know how to send custom header (or metadata) using Python gRPC. I looked into documents and I couldn't find anything.
false
45,071,567
0
1
0
0
If your metadata has one key/value pair, you can pass a one-element list (e.g. [(key, value)]). If it has multiple pairs, you can pass either a list (e.g. [(key1, value1), (key2, value2)]) or a tuple (e.g. ((key1, value1), (key2, value2))).
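In practice the metadata goes in as a keyword argument on the stub call; a sketch (the stub, method and keys are placeholders):

    # `stub` is a generated gRPC stub; names below are placeholders
    metadata = [('authorization', 'Bearer my-token'), ('client-id', '42')]
    response = stub.MyMethod(request, metadata=metadata)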
0
12,988
0
9
2017-07-13T04:43:00.000
python,grpc
How to send custom header (metadata) with Python gRPC?
1
1
3
70,484,074
0
0
0
Using Qpid for python, I am using the Container to connect to ActiveMq with the connector URL as: username:password@hostname:5672/topicName. In the web console i can see that for AMQP the connection is up . But instead of subscribing to existing topic it create a new queue with that name . Can someone help me in the format which has to be given to subscribe for a topic. Or if i am missing something please point me in right direction. Thank You.
false
45,087,840
0
0
0
0
Found out what the issue was: in the on_start method we have to use event.container.create_receiver(), and the address has to be in the format topic://<topicName>.
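A sketch using Qpid Proton's Python reactor API (the host, credentials and topic name are placeholders):

    from proton.handlers import MessagingHandler
    from proton.reactor import Container

    class TopicReceiver(MessagingHandler):
        def on_start(self, event):
            conn = event.container.connect('amqp://username:password@hostname:5672')
            # the topic:// prefix tells ActiveMQ this is a topic, not a queue
            event.container.create_receiver(conn, 'topic://topicName')

        def on_message(self, event):
            print(event.message.body)

    Container(TopicReceiver()).run()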
0
278
0
0
2017-07-13T17:53:00.000
python,activemq,qpid,jms-topic
Qpid for python , Not able to subscribe for a topic
1
2
2
45,198,167
0
0
0
Using Qpid for python, I am using the Container to connect to ActiveMq with the connector URL as: username:password@hostname:5672/topicName. In the web console i can see that for AMQP the connection is up . But instead of subscribing to existing topic it create a new queue with that name . Can someone help me in the format which has to be given to subscribe for a topic. Or if i am missing something please point me in right direction. Thank You.
false
45,087,840
0
0
0
0
I'm not entirely sure of the Qpid for python URI syntax but from the ActiveMQ side a destination is addressed directly by using a destination prefix. For a topic the prefix is topic:// and for queue it is queue:// unsurprisingly. In the absence of a prefix the broker defaults the address in question to a Queue type as that is generally the preference. So to fix your issue you need to construct a URI that uses the correct prefix which in your case would be something using topic://some-name and that should get you the results you expect.
0
278
0
0
2017-07-13T17:53:00.000
python,activemq,qpid,jms-topic
Qpid for python , Not able to subscribe for a topic
1
2
2
45,104,859
0
0
0
Attempting to count how many servers/guilds my bot is in. I've checked a few forums, and it seems that to do it I need to use len(). I tried making it work with the following command: Guilds = len([s] for s in self.servers) When doing that, I get the following error: "TypeError: object of type 'generator' has no len()" I'm not sure what I'm doing wrong. Could someone help me?
false
45,089,263
0.197375
0
0
1
You are passing a generator expression to len(), and generators have no length. You can fix it with len([s for s in self.servers]), or more simply len(list(self.servers)). EDIT: A generator is an object that does not hold its elements in memory, though you can still loop over them. Since it never builds a list, there is nothing to ask the length of, so len() fails.
0
706
0
0
2017-07-13T19:15:00.000
python,bots,discord
Counting Discord Bot's Guilds (In Python)?
1
1
1
45,089,303
0
0
0
I'm writing this application where the user can perform a web search to obtain some information from a particular website. Everything works well except when I'm connected to the Internet via Proxy (it's a corporate proxy). The thing is, it works sometimes. By sometimes I mean that if it stops working, all I have to do is to use any web browser (Chrome, IE, etc.) to surf the internet and then python's requests start working as before. The error I get is: OSError('Tunnel connection failed: 407 Proxy Authentication Required',) My guess is that some sort of credentials are validated and the proxy tunnel is up again. I tried with the proxies handlers but it remains the same. My doubts are: How do I know if the proxy need authentication, and if so, how to do it without hardcoding the username and password since this application will be used by others? Is there a way to use the Windows default proxy configuration so it will work like the browsers do? What do you think that happens when I surf the internet and then the python requests start working again? I tried with requests and urllib.request Any help is appreciated. Thank you!
false
45,156,592
0
0
0
0
Check whether there is a proxy setting in Chrome: on Windows, Chrome uses the system-wide proxy configuration, so whatever host, port and credentials the browser picks up are what your Python code needs to supply explicitly.
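A sketch of supplying explicit proxy credentials to requests (host, port, user and password are placeholders; ideally prompt for credentials rather than hardcoding them):

    import requests

    proxy = 'http://user:password@proxy.corp.example.com:8080'  # placeholder
    proxies = {'http': proxy, 'https': proxy}
    r = requests.get('https://www.example.com', proxies=proxies)
    print(r.status_code)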
0
1,617
0
0
2017-07-18T02:39:00.000
python,proxy,python-requests,urllib
Python URL Request under corporate proxy
1
1
1
45,156,637
0
0
0
I am trying to figure out a way to get a users access key age through an aws lambda function using Python 3.6 and Boto 3. My issue is that I can't seem to find the right api call to use if any exists for this purpose. The two closest that I can seem to find are list_access_keys which I can use to find the creation date of the key. And get_access_key_last_used which can give me the day the key was last used. However neither or others I can seem to find give simply the access key age like is shown in the AWS IAM console users view. Does a way exist to get simply the Access key age?
false
45,156,934
0.099668
0
0
2
Upon further testing, I've come up with the following, which runs in Lambda. This function in python3.6 will email users if their IAM keys are 90 days or older. Pre-requisites: all IAM users have an email tag with a proper email address as the value (example: IAM user tag key: email, IAM user tag value: [email protected]), and every email used needs to be confirmed in SES.

    import boto3, os, time, datetime, sys, json
    from datetime import date
    from botocore.exceptions import ClientError

    iam = boto3.client('iam')
    email_list = []

    def lambda_handler(event, context):
        print("All IAM user emails that have AccessKeys 90 days or older")
        for userlist in iam.list_users()['Users']:
            userKeys = iam.list_access_keys(UserName=userlist['UserName'])
            for keyValue in userKeys['AccessKeyMetadata']:
                if keyValue['Status'] == 'Active':
                    currentdate = date.today()
                    active_days = currentdate - keyValue['CreateDate'].date()
                    if active_days >= datetime.timedelta(days=90):
                        userTags = iam.list_user_tags(UserName=keyValue['UserName'])
                        email_tag = list(filter(lambda tag: tag['Key'] == 'email', userTags['Tags']))
                        if len(email_tag) == 1:
                            email = email_tag[0]['Value']
                            email_list.append(email)
                            print(email)
        email_unique = list(set(email_list))
        print(email_unique)

        RECIPIENTS = email_unique
        SENDER = "AWS SECURITY "  # the sender address itself was elided in the original
        AWS_REGION = os.environ['region']
        SUBJECT = "IAM Access Key Rotation"
        BODY_TEXT = ("Your IAM Access Key need to be rotated in AWS Account: 123456789 as it is 3 months or older.\r\n"
                     "Log into AWS and go to your IAM user to fix: https://console.aws.amazon.com/iam/home?#security_credential")
        BODY_HTML = """AWS Security: IAM Access Key Rotation:
        Your IAM Access Key need to be rotated in AWS Account: 123456789 as it is 3 months or older.
        Log into AWS and go to your https://console.aws.amazon.com/iam/home?#security_credential
        to create a new set of keys. Ensure to disable / remove your previous key pair."""
        CHARSET = "UTF-8"

        client = boto3.client('ses', region_name=AWS_REGION)
        try:
            response = client.send_email(
                Destination={'ToAddresses': RECIPIENTS},
                Message={
                    'Body': {
                        'Html': {'Charset': CHARSET, 'Data': BODY_HTML},
                        'Text': {'Charset': CHARSET, 'Data': BODY_TEXT},
                    },
                    'Subject': {'Charset': CHARSET, 'Data': SUBJECT},
                },
                Source=SENDER,
            )
        except ClientError as e:
            print(e.response['Error']['Message'])
        else:
            print("Email sent! Message ID:")
            print(response['MessageId'])
0
5,998
0
2
2017-07-18T03:23:00.000
python-3.x,amazon-web-services,boto3,amazon-iam
Getting access key age AWS Boto3
1
1
4
57,577,221
0
1
0
I have existing REST APIs, written using Django Rest Framework and now due to some client requirements I have to expose some of them as SOAP web services. I want to know how to go about writing a wrapper in python so that I can expose some of my REST APIs as SOAP web services. OR should I make SOAP web services separately and reuse code ? I know this is an odd situation but any help would be greatly appreciated.
false
45,279,148
0
0
0
0
Let's discuss both approaches and their pros and cons.

Separate SOAP service:
- Reusing the same code: if you are sure the code changes will not impact the two code flows, it is good to go.
- Extension of features: if you are sure that extending one side will not impact other parts, again good to go.
- Scalability: if the new APIs are part of the same application and you are sure it will scale under more load, again a good option.
- Extensibility: if you are sure that adding more APIs in the future will not create a mess of code, again good to go for.

SOAP wrapper using Python (my favourite, and the way I suggest to go):
- Separation of concerns: with this you can make sure that whatever code you write is separate from the main logic, and you can easily plug new things in and out.
- The answer to all of the questions above is YES in this case.

Your call; comments and criticism are most welcome.
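To illustrate the wrapper route, a rough sketch with the spyne library (the service, method and URL are made up; it simply proxies an existing DRF endpoint over SOAP 1.1):

    # pip install spyne requests
    import requests
    from spyne import Application, ServiceBase, Unicode, rpc
    from spyne.protocol.soap import Soap11
    from spyne.server.wsgi import WsgiApplication

    class OrderService(ServiceBase):
        @rpc(Unicode, _returns=Unicode)
        def get_order(ctx, order_id):
            # Delegate to the existing REST API, return its payload over SOAP
            resp = requests.get('http://localhost:8000/api/orders/%s/' % order_id)
            return resp.text

    app = Application([OrderService], tns='myproject.soap',
                      in_protocol=Soap11(validator='lxml'),
                      out_protocol=Soap11())
    wsgi_app = WsgiApplication(app)  # serve with any WSGI server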
0
3,292
0
15
2017-07-24T11:13:00.000
django,python-3.x,rest,soap,django-rest-framework
Write a wrapper to expose existing REST APIs as SOAP web services?
1
1
2
45,458,723
0
0
0
I'm building a web scraper that makes multiple requests concurrently. I'm currently using the multiprocessing module to do so, but since it's running on a Digital Ocean droplet, I'm running into processor/memory bottlenecks. Since this is a web scraper and most of the time spent on the script is likely waiting for the network, isn't it more efficient to use threading instead in order to reduce resource usage? Does threading detect a blocking network call and release locks? Is it feasible to intertwine multiprocessing and multithreading?
true
45,314,387
1.2
0
0
2
Since the multiprocessing module was developed to be largely compatible with the threading model that pre-dates it, you should hopefully not find it too difficult to move to threaded operations in a single process. Any blocking calls (I/O, mostly) will cause the calling thread to be suspended (become non-runnable) and other threads will therefore get chance to use the CPU. While it's possible to use multi-threading in multiple processes, it isn't usual to do so.
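A compact sketch of the threaded version using only the standard library plus requests (the URL list is a placeholder):

    import requests
    from concurrent.futures import ThreadPoolExecutor

    def fetch(url):
        # The GET blocks on the network; while it does, the thread releases
        # the GIL and other threads get to run
        return url, requests.get(url, timeout=10).status_code

    urls = ['https://example.com/page/%d' % i for i in range(100)]  # placeholder
    with ThreadPoolExecutor(max_workers=20) as pool:
        for url, status in pool.map(fetch, urls):
            print(url, status)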
1
384
0
1
2017-07-25T22:13:00.000
python,multithreading,web-scraping,multiprocessing
Multiprocess vs multithreading for network operations
1
1
1
45,314,501
0
0
0
I just started using vk api in python and I am looking for a way to get more than 200 videos (possibly by using multiple api calls) for a specific query. To be more specific, each api call to video.search returns the number of videos that the search yields (the same number can be seen when searching from the website). Is there a way to get let's say the next videos in that list? thanks! :-)
false
45,349,838
0
0
0
0
The answer is to use the 'offset' parameter; I was not aware of it when I asked the question. I was able to get a maximum of about 4K videos, but it takes more than 20 API calls, because some of the results are repeated: if on the first call I got 200 videos, then on the second call with offset 200 I get a list of 200 videos, but some of them already appeared in the previous list. In addition, you don't always get 200 videos; sometimes you will get fewer (and repetitions are still possible).
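A sketch of paging with offset against the raw VK endpoint (the token is a placeholder; de-duplicate by video id, since pages overlap):

    import requests

    seen, videos = set(), []
    for offset in range(0, 1000, 200):
        r = requests.get('https://api.vk.com/method/video.search', params={
            'q': 'my query', 'count': 200, 'offset': offset,
            'access_token': 'YOUR_TOKEN', 'v': '5.68',  # placeholders
        }).json()
        for item in r['response']['items']:
            key = (item['owner_id'], item['id'])
            if key not in seen:
                seen.add(key)
                videos.append(item)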
0
2,021
0
0
2017-07-27T11:42:00.000
python,vk
Get more than 200 results on vk vides.search api
1
1
2
46,219,457
0
0
0
What does the parameter of 1 mean in the listen(1) method of socket. I am using the socket module in python 2.7 and I have created a basic server that I want to connect to multiple clients (all on a local machine) and transmit data between them. I know there a simpler ways of doing this but I want practice for when the clients would not all be on the same machine and may need to retrieve something from the server first so could not bypass it. I was wondering if the 1 in listen referred to the amount of connections the server would make at a single time and if not what it did mean. I really want to understand in detail how parts of the process work so any help would be appreciated.
true
45,370,731
1.2
0
0
4
It defines the length of the backlog queue, which is the number of incoming connections that have been completed by the TCP/IP stack but not yet accepted by the application. It has nothing whatsoever to do with the number of concurrent connections that the server can handle.
0
6,274
0
1
2017-07-28T10:11:00.000
python-2.7,sockets,tcp,server,client
What does the parameter of 1 mean in `listen(1)` method of socket module in python?
1
1
1
45,371,518
0
1
0
I am fairly new to Python, but I was wondering if I could use Python and its modules to retrieve a href from page 1 and then the first paragraph on page 2. Q2: Also, how could I scrape the first 10 link hrefs with the same div class on page one, and then scrape the first 10 paragraphs, while looping?
false
45,373,514
0.066568
0
0
1
Yes, I believe you should be able to. Look up the requests and beautifulsoup4 Python modules.
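A rough sketch of the two-step scrape (the div class is a placeholder for whatever the page actually uses); for the first ten links, swap find for find_all(...)[:10] and loop:

    # pip install requests beautifulsoup4
    import requests
    from bs4 import BeautifulSoup

    page1 = BeautifulSoup(requests.get('https://example.com/page1').text,
                          'html.parser')
    div = page1.find('div', class_='article')       # placeholder class name
    href = div.find('a')['href']

    page2 = BeautifulSoup(requests.get(href).text, 'html.parser')
    print(page2.find('p').get_text())               # first paragraph of page 2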
0
460
0
0
2017-07-28T12:26:00.000
python,html,screen-scraping
Can Python get a Href link on page one, and then get a paragraph from page 2?
1
1
3
45,373,618
0
0
0
I'm playing around writing a simple multi-threaded web crawler. I see a lot of sources talk about web crawlers as obviously parallel because you can start crawling from different URLs, but I never see them discuss how web crawlers handle URLs that they've already seen before. It seems that some sort of global map would be essential to avoid re-crawling the same pages over and over, but how would the critical section be structured? How fine grained can the locks be to maximize performance? I just want to see a good example that's not too dense and not too simplistic.
false
45,405,321
0
0
0
0
For a crawler, don't use a ConcurrentHashMap; rather, use a database. The number of visited URLs will grow very fast, so it is not a good idea to store them in memory. Better to use a database: store the URL and the date it was last crawled, then just check whether the URL already exists in the DB or is eligible for refreshing. I use, for example, a Derby DB in embedded mode, and it works perfectly for my web crawler. I don't advise using an in-memory DB like H2, because with the number of crawled pages you will eventually get an OutOfMemoryException. You will rather rarely hit the case of crawling the same page more than once at the same time, so checking in the DB whether it was crawled recently is enough to not waste significant resources on "re-crawling the same pages over and over". I believe this is "a good solution that's not too dense and not too simplistic". Also, using a database with a "last visit date" for each URL, you can stop and continue the work whenever you want; with a ConcurrentHashMap you will lose all the results when the app exits. You can use the "last visit date" for a URL to determine whether it needs recrawling or not.
1
875
0
1
2017-07-30T22:26:00.000
java,python,multithreading,concurrency,web-crawler
Do concurrent web crawlers typically store visited URLs in a concurrent map, or use synchronization to avoid crawling the same pages twice?
1
2
4
45,405,432
0
0
0
I'm playing around writing a simple multi-threaded web crawler. I see a lot of sources talk about web crawlers as obviously parallel because you can start crawling from different URLs, but I never see them discuss how web crawlers handle URLs that they've already seen before. It seems that some sort of global map would be essential to avoid re-crawling the same pages over and over, but how would the critical section be structured? How fine grained can the locks be to maximize performance? I just want to see a good example that's not too dense and not too simplistic.
false
45,405,321
0.099668
0
0
2
Specific-domain use case: use in-memory storage. If it is a specific domain, say abc.com, then it is better to keep a visited-URL set or concurrent hash map in memory: the in-memory check of visited status is faster, and memory consumption is comparatively low. A DB has IO overhead, which is costly, and the visited-status check is very frequent, so it would hit your performance drastically. Depending on your use case, you can use memory or a DB. My use case was a specific domain where a visited URL will not be visited again, so I used a concurrent hash map.
1
875
0
1
2017-07-30T22:26:00.000
java,python,multithreading,concurrency,web-crawler
Do concurrent web crawlers typically store visited URLs in a concurrent map, or use synchronization to avoid crawling the same pages twice?
1
2
4
45,409,820
0
1
0
I can get html of a web site using lxml module if authentication is not required. However, when it required, how do I input 'User Name' and 'Password' using python?
true
45,406,471
1.2
0
0
0
It very much depends on the method of authentication used. If it's HTTP Basic Auth, then you should be able to pass those headers along with the request. If it's using a web page-based login, you'll need to automate that request and pass back the cookies or whatever session token is used with the next request.
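For the Basic Auth case with requests, for instance (the credentials are placeholders):

    import requests

    r = requests.get('https://example.com/protected', auth=('user', 'password'))
    html = r.text  # feed this to lxml as before, e.g. lxml.html.fromstring(html)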
0
29
0
0
2017-07-31T01:51:00.000
python,authentication,login,web-crawler,lxml
How to get html using python when the site requires authenticasion?
1
1
1
45,406,522
0
0
0
I'm trying to make game server emulator (with uses probably SSLv3 for communicating) And I'm trying to make SSL socket with SSLv3 support Here is the line with causes problem:context = SSL.Context(SSL.SSLv3_METHOD) Is resulting with this: ValueError: No such protocol Additionally i tryed to use SSL.SSLv23_METHOD - works but while client is trying to connect i'm getting this error: OpenSSL.SSL.Error: [('SSL routines', 'tls_process_client_hello', 'version too low')] As you can see I'm getting the 'version too low' error, that's why I'm trying to make the SSLv3 server. Is there any way to fix that?
false
45,408,669
0.197375
0
0
1
SSLv3 is considered insecure and should no longer be used. Because of this many current installation of OpenSSL come without support for SSLv3, i.e. it is not compiled into the library. In this case you get the error about unsupported method if you try to explicitly use it and you get a similar error if the SSL handshake fails because the peer tries to use this locally unsupported TLS version. Is there any way to fix that? Don't try to enforce use of SSLv3. Instead use the sane and secure default protocol setting.
0
1,451
0
1
2017-07-31T06:17:00.000
python,sockets,ssl
Python: SSL.Context(SSL.SSLv3_METHOD) = No such protocol
1
1
1
45,409,087
0
1
0
With python social auth, when a user is logged in when he clicks 'Login with Facebook' or similar. The request.user is not the newly logged in facebook user but the old logged in user. log in with email [email protected] log in with facebook email [email protected] Logged in user (request.user) is still [email protected] Is this intended behavior? Is there a way to fix this or should I not present log-in unless he's not logged out?
true
45,412,902
1.2
1
0
0
If there's a user already logged in to your app, then python-social-auth will associate the social account with that user; if it should be another user, the first must be logged out. Basically, python-social-auth creates a new user if nobody is logged in at the moment; otherwise it associates the social account with the current user.
0
49
0
0
2017-07-31T10:05:00.000
django,python-social-auth
python social auth, log in without logging out
1
1
1
45,424,674
0
0
0
I'm currently working on a stockdashboard for myself using the alpha vantage REST API. My problem is that I want to get many stockprices from a list of tickers that I have, without using many requests to get all the prices from all the stocks. And also limiting the information I get from each stock to just being the stockprice for each stock. How would I query the alpha vantage api to not overload their servers with requests?
true
45,464,559
1.2
0
0
-2
I soon found out that Alpha Vantage doesn't support this, so I created a scraper for another website to get all the prices with one request instead. It isn't really fast, but that doesn't matter much right now, since the result will be rendered with AJAX on a frontend framework; later it should be optimized.
0
1,394
0
0
2017-08-02T15:07:00.000
python,rest,alpha-vantage
Alpha vantage Multiple stockprices, few requests
1
1
1
45,515,124
0
0
0
I use read_edgelist function of networkx to read a graph's edges from a file(500Mb), G(nodes= 2.3M, edges=33M), it uses the whole memory of machine and seems nothing it does after not finding more memory to load whole graph. Is there any way to handle this problem like sparse graph solution or using other libraries?
false
45,464,760
0
0
0
0
NetworkX uses a sparse representation, and read_edgelist reads the file line by line (i.e. it does not load the whole file at once). So if NetworkX uses too much memory, that is actually what it takes to represent the whole graph in memory. A possible solution is to read the file yourself and discard as many edges as possible before feeding it to NetworkX.
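A sketch of that pre-filtering (the keep-set is a stand-in for whatever criterion you have):

    import networkx as nx

    keep = set()  # placeholder: node ids you actually care about
    G = nx.DiGraph()
    with open('edges.txt') as f:
        for line in f:
            u, v = line.split()[:2]
            if u in keep or v in keep:  # drop edges you will never need
                G.add_edge(u, v)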
1
128
0
0
2017-08-02T15:16:00.000
python,sparse-matrix,networkx
Is there any way to handle memory usage in read_edgelist of networkx in python?
1
1
1
45,505,004
0
0
0
I am using a package in Python, and when I try to access its methods with a simple print I come up with this error: ConnectionError: HTTPSConnectionPool(host='xxx.xxxxx.xxx', port=443): Max retries exceeded with url: /?gfe_rd=cr&ei=DeCCWZWAKajv8werhIGAAw (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)'),)) Is it a Python error or an OS-level error? Please help me with it. Thanks in advance.
true
45,479,246
1.2
0
0
2
I have a pretty good idea of what you are trying to do with your code. The error you are getting is from neither the Python module nor the OS: it says that the site's certificate could not be validated against the CA certificates the requests module uses when fetching data, so the module fails to complete the TLS handshake. It may be that you are not allowed to access that website, or that the website has its own custom security that blocks bots. Fortunately, you can create connection pools (proxies) or use HTTP adapters; the requests module also lets you control certificate verification, so you can try a few things to make it work.
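Two common workarounds on the requests side (the CA-bundle path is a placeholder; only disable verification if you accept the risk):

    import requests

    # Option 1: point requests at the CA bundle that signed the site's cert
    r = requests.get('https://example.com', verify='/path/to/ca-bundle.pem')

    # Option 2: skip verification entirely (insecure; last resort)
    r = requests.get('https://example.com', verify=False)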
0
3,035
0
1
2017-08-03T08:47:00.000
python,ssl,connection
ConnectionError: HTTPSConnectionPool SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)'),))
1
1
1
53,810,560
0
1
0
In my Django application, I call an external API, which returns XML. I would like to display this "minified" response as an indented multiline string on the page (a plus would be syntax highlighting). I tried to process the string in Python with toprettyxml() from xml.dom.minidom, and a few things with ElementTree, but it does not play along the Jinja2 rendering well (line breaks disappear and I only get a one line string, displayed inside <pre> tags). What's the recommended way to display such code excerpt? Should I use client-side rendering? Then, which library should I use? Django version: 1.11.2 Python 3.6.1
false
45,504,829
0
0
0
0
This isn't anything to do with Python or Jinja2, but just down to how browsers render text within HTML. If you want to preserve spacing and indentation, you need to wrap your content with <pre>...</pre> tags.
0
425
0
0
2017-08-04T10:45:00.000
python,xml,django,jinja2
How to display indented xml inside a django webpage?
1
1
1
45,504,945
0
1
0
I am not talking about software like Surf Online or HTTrack, or any other "save page" feature of browsers; I need to know how it actually happens in the background. I am interested in making my own program to do that. Also, is it possible to do in JavaScript? If yes, what are the libraries I should look into, or any other APIs that could be helpful? Please give me any kind of information about the topic; I couldn't find anything relevant in my research.
true
45,517,437
1.2
0
0
0
Download the entire HTML content of the web page.
For JS files, look for the <script> tags and download the js file using the src attribute, plus any inline scripts inside the tag.
For CSS, use the <link> tag to download the CSS file, and also look for any <style> tags for inline CSS styling.
For images, scan for <img> tags and download the image using the src attribute.
The same approach can be used for audio/video etc.
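A sketch of the asset-collection step (it only gathers the URLs; downloading them and rewriting the references for offline use is left out):

    # pip install requests beautifulsoup4
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    url = 'https://example.com'
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')

    assets = []
    assets += [urljoin(url, s['src']) for s in soup.find_all('script', src=True)]
    assets += [urljoin(url, l['href']) for l in soup.find_all('link', href=True)]
    assets += [urljoin(url, i['src']) for i in soup.find_all('img', src=True)]
    print(assets)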
0
188
0
1
2017-08-05T01:40:00.000
javascript,php,python,angularjs,web
How to get all files like: js,css,images from a website to save it for offline use?
1
1
1
45,517,697
0
0
0
I'm using a ROUTER-to-DEALER design and in-between I'm doing some processing, mainly unserialized my data. But sometimes I need to re-send a payload in backend so to a Worker destination. Re-send a payload is easy but I need to re-send it to a new worker ( a free one or at least not the same worker ). I noticed that *_multipart function holds three fields with the first one an address ( to a worker ? ). Q: Is there a way to find address of a free worker?
false
45,527,740
0
0
0
0
ZeroMQ provides the feature-rich tools; all the rest is the designer's job. Given the task description above, a possible approach is to create application-level logic, a distributed finite state automaton (dFSA) with a certain depth of process-state memory, where each task-phase processor reports, via a parallel ZeroMQ signalling/messaging infrastructure, whenever it achieves a recognised state ({enter|exit}-triggered state changes). The dFSA logic can thus, on the global scale, orchestrate and operate the requested "side"-steps, "branches" and/or "re-submits" and similar tricks. Each free worker simply always notifies the dFSA infrastructure of its (task-unit)-exit-triggered state change, so your dFSA infrastructure always knows, and never has to search ad hoc for, the address of a free worker: it explicitly and continuously keeps records of free-state dFSA nodes to pick from. Re-discovery, watchdogs, heartbeats and re-confirmation handshaking schemes are also possible within the dFSA signalling. That simple.
0
56
0
1
2017-08-06T01:10:00.000
python,zeromq,pyzmq
How to re-send a payload to a free worker in ZeroMQ?
1
1
2
45,527,870
0
0
0
I've been struggling with this for a while and I'm just trying to get the payload that returns after someone clicks a quick reply. Has anyone dealt with this in python for messenger?
true
45,538,613
1.2
0
0
0
While postbacks return a payload, all a quick reply does is allow the users to send text by clicking a button. You can handle a quick reply the same way you would handle text input.
0
352
0
0
2017-08-07T02:29:00.000
python,chatbot,facebook-messenger-bot,facebook-chatbot
Getting quick replies from response in messenger bot
1
1
1
45,578,431
0
0
0
I have a python file that is taking websocket data and constantly updating a giant list. It updates somewhere between 2 to 10 times a second. This file runs constantly. I want to be able to call that list from a different file so this file can process that data and do something else with it. Basically file 1 is a worker that keeps the current state in a list, I need to be able to get this state from file 2. I have 2 questions: Is there any way of doing this easily? I guess the most obvious answers are storing the list in a file or a DB, which leads me to my second question; Given that the list is updating somewhere between 2 and 10 times a second, which would be better? a file or a db? can these IO functions handle these types of update speeds?
true
45,540,633
1.2
0
0
1
A DB is the best bet for your use case. It gives you the flexibility to know which part of the data you have already processed, by keeping a status flag. You get persistent data (and you can also set up replication). You can scale easily if, in the future, your application pulls more and more data. 2-10 updates per second is a good fit for a write-heavy application backed by a DB, as you will gather tons of data in a short time.
1
25
0
0
2017-08-07T06:31:00.000
python,variables
return data from another python file that's constantly updating
1
1
1
45,540,781
0
1
0
I want to create a table in a browser that's been created with python. That part can be done by using DataTable of the bokeh library. The problem is that I want to extract data from the table when a user gives his/her input in the table itself. Any library of python I could use to do this? It would better if I could do this with bokeh though.
false
45,540,638
0
0
0
0
If your requirement is to create an application with Python where users access it via a browser and update some data in a table, use Django or any other web framework; basically, you are trying to build a web app! If you are looking for something else, please describe your requirement more thoroughly.
0
1,099
0
1
2017-08-07T06:31:00.000
python,html,bokeh
Extract user input to python from a table created in browser?
1
1
3
45,540,958
0
0
0
I'm creating a Facebook Messenger Bot thats powered by a python backend. I'm promoting my bot via FB ads and am trying to figure out if theres any possible way to use Pixel's Conversion tracking to improve metrics (pretty sure facebook's ad placement trys to optimize based on conversion results) Anyone know if this is possible? Everything I'm finding so far is javascript code that you need to put on your website, and I don't have or need a website for my bot. Thanks!
true
45,557,554
1.2
1
0
0
Think I figured it out: user clicks ad -> opens up a website to get the pixel conversion hit -> immediate redirect back to Messenger.
0
632
0
0
2017-08-08T00:33:00.000
python,facebook,facebook-graph-api,bots,messenger
Facebook Ad Pixel Conversion tracking for Messenger Bot
1
2
2
45,558,293
0
0
0
I'm creating a Facebook Messenger Bot thats powered by a python backend. I'm promoting my bot via FB ads and am trying to figure out if theres any possible way to use Pixel's Conversion tracking to improve metrics (pretty sure facebook's ad placement trys to optimize based on conversion results) Anyone know if this is possible? Everything I'm finding so far is javascript code that you need to put on your website, and I don't have or need a website for my bot. Thanks!
false
45,557,554
0
1
0
0
"User clicks ad -> opens up a website to get the pixel conversion hit -> immediate re-direct back to messenger" I don't see how this is helping, as the pixel conversion hit happens before any action happens within the chat. If you want to track a certain action within the chat, couldn't you redirect to a website and re-direct back to the chat from within the chat, say in the middle of the question flow?
0
632
0
0
2017-08-08T00:33:00.000
python,facebook,facebook-graph-api,bots,messenger
Facebook Ad Pixel Conversion tracking for Messenger Bot
1
2
2
45,763,816
0
0
0
I have to write a simple tcp forwarder in python which will run for ever. I'll get data in 1 min intervals. So do I have to run the socket.read() in while true? Is there any better way to avoid all those unnecessary cpu-cycles? And one more thing, the socket.read() is in a thread.
true
45,557,769
1.2
0
0
2
do I have to run the socket.read() in while true? Is there any better way to avoid all those unnecessary cpu-cycles? It's a blocking read(). So your process (thread) is essentially sleeping while awaiting the next network communication, rather than consuming CPU cycles.
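So the forwarder's read loop can simply be (a sketch; conn is the accepted client socket and upstream is the socket you forward to):

    while True:
        data = conn.recv(4096)   # blocks until data arrives; no busy CPU
        if not data:             # empty bytes means the peer closed
            break
        upstream.sendall(data)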
1
244
0
1
2017-08-08T01:06:00.000
python-3.x,network-programming,ubuntu-14.04,python-3.5
python3 socket reading, avoid while True or best practice
1
1
2
45,557,803
0
1
0
Imagine that I've written a Celery task and put the code on the server; however, when I want to send the task to the server, I need to reuse the code written before. So my question is: are there any methods to separate the code between server and client?
false
45,608,490
0
0
0
0
Try a web server like Flask that forwards requests to the Celery workers, or a server that reads from a queue (SQS, AMQP, ...) and does the same. Whichever solution you choose, you end up with 2 services: the Celery worker itself and the "server" that calls the Celery tasks. They both share the same code but are launched with different command lines. Alternatively, if the task code is small enough, you could just import the git repository in your code and call it from there.
0
222
1
1
2017-08-10T08:36:00.000
python,celery
how to seperate celery code into server and client side?
1
1
1
45,608,632
0
0
0
So I need a simple way to pull ten words from before and after a search term in a paragraph, and have it extract all of it into a sentence. example: paragraph = 'The domestic dog (Canis lupus familiaris or Canis familiaris) is a member of genus Canis (canines) that forms part of the wolf-like canids, and is the most widely abundant carnivore. The dog and the extant gray wolf are sister taxa, with modern wolves not closely related to the wolves that were first domesticated, which implies that the direct ancestor of the dog is extinct. The dog was the first domesticated species and has been selectively bred over millennia for various behaviors, sensory capabilities, and physical attributes.' input wolf output most widely abundant carnivore. The dog and the extant gray wolf are sister taxa, with modern wolves not closely related to
false
45,615,840
0
0
0
0
You can slice the word list after finding the position of the target word: split the text into words, locate the index of the search term, and take the ten words on each side. Have you tried to code anything so far?
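A minimal sketch of that idea (naive whitespace tokenisation; punctuation handling is deliberately crude):

    def words_around(text, term, n=10):
        words = text.split()
        for i, w in enumerate(words):
            if w.strip('.,();') == term:  # exact word match, punctuation stripped
                return ' '.join(words[max(0, i - n):i + n + 1])
        return None

    print(words_around(paragraph, 'wolf'))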
1
336
0
0
2017-08-10T13:57:00.000
python
How do I pull a number of words around a specific word in python?
1
1
3
45,615,917
0
0
0
socket.gethostbyname("vidzi.tv") giving '104.20.87.139' ping vidzi.tv gives '104.20.86.139' socket.gethostbyname("www.vidzi.tv") giving '104.20.87.139' ping www.vidzi.tv gives '104.20.86.139' Why socket.gethostbyname is giving wrong IP for this website? It is giving right IP for other websites?
true
45,617,095
1.2
0
0
2
I don't see any "wrong" IPs in your question. A DNS server is allowed to return multiple IP addresses for the same host. The client generally just picks one of them. A lot of servers use this as a part of their load balancing, as clients select any available server and since they generally would pick different ones the traffic gets split up evenly. Your ping command and your gethostbyname command are just selecting different available IPs, but neither is "wrong". You can see all the IPs that are returned for a given hostname with a tool like nslookup or dig.
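You can see the full set of returned addresses from Python as well, e.g.:

    import socket

    hostname, aliases, ips = socket.gethostbyname_ex('vidzi.tv')
    print(ips)  # e.g. ['104.20.86.139', '104.20.87.139']; both are valid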
0
1,420
0
0
2017-08-10T14:53:00.000
python,sockets,ip,gethostbyname
socket.gethostbyname giving wrong IP
1
1
1
45,618,059
0
0
0
I've been working with selenium in Python recently. I was curious if anyone has had experience with recording an instance of a headless browser? I tried finding a way to do this, but didn't find any solutions in Python - a code example would be excellent. Some tips would be helpful.
false
45,655,223
0
0
0
0
I don't think they have any built-in way to do this with any of the browsers. Your best bet would be to connect to the same instance of the browser (this is easier if you use the grid server) from another program and just take screenshots at short intervals.
0
454
0
0
2017-08-12T21:50:00.000
python,selenium
Recording video of headless selenium browser
1
1
1
45,656,035
0
0
0
This might be a naive question, but I've really tried searching multiple resources (multiprocessing and ipyparallel) and they seem to lack the appropriate information for my task. What I have is a large directed graph G with 9 million edges and 6 million nodes. My goal is, for a list of target nodes (50k), to extract from G the subgraph made of each target node and its direct neighbours (both in/out). I am currently using networkx to do this. I tried to use ipyparallel, but I could not find a tutorial on how to share an object (in my case, G) across processors for the subgraph function. Is there an easy way to parallelize this across CPU cores (there are 56 available, so I really want to make full use of them)? Thank you!
false
45,657,132
0
0
0
0
Try treating G as a database: instead of each sub-process holding its own copy, let all the sub-processes get the info they need from the one shared G and do their own work.
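One concrete way to get cheap read-only sharing on Linux is to rely on fork() copy-on-write: build G once, then create the pool, so every worker reads the parent's copy without pickling it. A sketch (this won't help on Windows, where there is no fork):

    import multiprocessing as mp
    import networkx as nx

    G = nx.read_edgelist('edges.txt', create_using=nx.DiGraph())  # built once

    def extract(node):
        # workers forked after G exists can read it without copying it up front
        nbrs = set(G.successors(node)) | set(G.predecessors(node)) | {node}
        return node, G.subgraph(nbrs)

    if __name__ == '__main__':
        targets = ['n1', 'n2', 'n3']  # placeholder: your 50k target nodes
        pool = mp.Pool(processes=56)
        results = pool.map(extract, targets)
        pool.close()
        pool.join()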
1
280
0
0
2017-08-13T04:40:00.000
python,parallel-processing,multiprocessing,networkx,ipython-parallel
Parallelizing subgraph tasks in Python
1
1
1
45,657,197
0
0
0
I am working on a web-based application where users will use their webcams for realtime video calls in the browser (kind of like WebEx). Is there any way the web application can identify whether any (tcp/udp/https) application/service is using the video device and consuming network bandwidth, so that I can show a message to the user: "Please close Skype or GTalk and then proceed with the video call". In short: how can a web application identify a Skype-like service that is holding the webcam, and alert the user to close that app first?
false
45,672,563
0
0
0
0
Your web application cannot know whether a video service like Skype is running on the same host as the browser, but it can simply run a speed test to check the currently available bandwidth between the browser and your server. For instance, you can write some AJAX code that starts downloading a big file (using XMLHttpRequest()) and aborts after 10 seconds (using XMLHttpRequest.abort()), monitoring the throughput with progress events. Then, if your app finds there is not enough bandwidth, it can ask the user to close network-hungry applications.
0
48
0
0
2017-08-14T10:30:00.000
php,python,web,network-programming
Way to identify tcp/udp/https service holding the webcam and port
1
1
1
45,712,038
0