Web Development
int64 0
1
| Data Science and Machine Learning
int64 0
1
| Question
stringlengths 28
6.1k
| is_accepted
bool 2
classes | Q_Id
int64 337
51.9M
| Score
float64 -1
1.2
| Other
int64 0
1
| Database and SQL
int64 0
1
| Users Score
int64 -8
412
| Answer
stringlengths 14
7k
| Python Basics and Environment
int64 0
1
| ViewCount
int64 13
1.34M
| System Administration and DevOps
int64 0
1
| Q_Score
int64 0
1.53k
| CreationDate
stringlengths 23
23
| Tags
stringlengths 6
90
| Title
stringlengths 15
149
| Networking and APIs
int64 1
1
| Available Count
int64 1
12
| AnswerCount
int64 1
28
| A_Id
int64 635
72.5M
| GUI and Desktop Applications
int64 0
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I am using python 2.7 and networkx.
I have a quite large network and I need to find all the paths (not only the shortest path) between an origin and destination. Since my network is large, I would like to speed up with some constraints, such as path length, cost, etc..
I am using networkx. I don't want to use all_simple_paths because with all_simple_paths, I have to filter all the paths later based on path length (number of nodes in it) or cost of the path (based on arc costs). Filtering all the paths is very expensive for the large network.
I would really appreciate any help. | false | 39,500,131 | 0 | 0 | 0 | 0 | It really depends what paths you are looking for.
To begin with, the shortest path gives you the lowest bound c_min on the length constraint. Then given a length constraint c>=c_min, for each node n, you know the shortest path P_s_n and distance c_n from start to this node. Choose those nodes that satisfy c_n <c. Then you can extend P_s_n arbitrarily by any path from n to goal, which will satisfy your length constraint. | 0 | 363 | 0 | 0 | 2016-09-14T21:46:00.000 | python-2.7,networkx | Find all paths between origin destination with path length constraint | 1 | 1 | 1 | 39,629,000 | 0 |
1 | 0 | I can't find any results when searching Google for this response.
I'm using the current Google Python API Client to make requests against the Gmail API. I can successfully insert a label, I can successfully retrieve a user's SendAs settings, but I cannot update, patch, or create a SendAS without receiving this error.
Here's a brief snippit of my code:
sendAsResource = {"sendAsEmail": "[email protected]",
"isDefault": True,
"replyToAddress": "[email protected]",
"displayName": "Test Sendas",
"isPrimary": False,
"treatAsAlias": False
}
self.service.users().settings().sendAs().create(userId = "me", body=sendAsResource).execute()
The response I get is:
<HttpError 400 when requesting https://www.googleapis.com/gmail/v1/users/me/settings/sendAs?alt=json returned "Custom display name disallowed">
I've tried userId="me" as well as the user i'm authenticated with, both result in this error. I am using a service account with domain wide delegation. Since adding a label works fine, I'm confused why this doesn't.
All pip modules are up to date as of this morning (google-api-python-client==1.5.3)
Edit: After hours of testing I decided to try on another user and this worked fine. There is something unique about my initial test account. | false | 39,517,707 | 0 | 0 | 0 | 0 | This was a bug in the Gmail API. It is fixed now. | 0 | 964 | 0 | 1 | 2016-09-15T18:11:00.000 | python,gmail-api,google-api-python-client | Setting the SendAs via python gmail api returns "Custom display name disallowed" | 1 | 1 | 1 | 39,777,352 | 0 |
0 | 0 | Since google trends require you to login, can I still use an IP rotator such as crawlera to download the csv files? If so, is there any example code with python (i.e python + crawlera to download files on google).
Thanks in advance. | false | 39,536,762 | 0 | 0 | 0 | 0 | No one is going to write code for you.
But I can leave some comments because I have been using Crawlera proxies for the past few months.
With crawlera you can scrape Google Trends with new IP each time, or even you can use a same IP each time(its called session management in crawlera).
You can send a header 'X-Crawlera-Session':'create' along with your request and Crawlera on their end will create a session, and in response, they will return 'X-Crawlera-Session': ['123123123'] ... And if you think that you are not blocked from Google,
You can send 'X-Crawlera-Session': '123123123' with each of your request so Crawlera will use same IP each time. | 0 | 1,606 | 0 | 1 | 2016-09-16T17:06:00.000 | python,proxy,web-crawler,google-trends | Is it possible to use a proxy rotator such as crawlera with google trends? | 1 | 1 | 3 | 41,170,050 | 0 |
0 | 0 | Exception in XML-RPC listener loop (java.net.SocketException: Socket closed).
When I run PyCharm from bash , I get this error..As result: I cant't use python-console in pycharm Anybody know how to fix it ?
OS: ubuntu 16.04 | false | 39,537,700 | -0.197375 | 0 | 0 | -1 | Hi I had the same problem as you. I solved the problem by making the line 127.0.0.1 localhost as the first line in /etc/hosts. The reason python console does not run is that python console tries to connect to localhost:pycharm-port, but localhost was resolved to the IPv6 addess of ::1, and the connection is refused. | 1 | 602 | 0 | 4 | 2016-09-16T18:10:00.000 | python,pycharm,xml-rpc | How to fix python console error in pycharm? | 1 | 1 | 1 | 44,059,972 | 0 |
0 | 0 | I am trying to make a video-streaming application, in which i'll be able to both stream my webcam and my desktop. Up until now I've done so with TCP communication in order to make sure everything works, and it does, but very slowly.
I know that usually in live streams like these you would use UDP, but I can't get it to work. I have created a basic UDP client and a server, and it works with sending shorts string, but when it comes to sending a whole image i can't find a solution to that. I have also looked it up online but found only posts about sending images through sockets in general, and they used TCP.
I'm using Python 2.7, pygame to show the images, PIL + VideoCapture to save them, and StringIO + base64 in order to send them as string. | false | 39,547,386 | 0 | 0 | 0 | 0 | My psychic powers tell me that are hitting the size limit for a UDP packet, which is just under 64KB. You will likely need to split your image bytes up into multiple packets when sending and have some logic to put them back together on the receiving end. You will likely need to roll out your own header format.
Not sure why you would need to base64 encode your image bytes, that just add 33% of network overhead for no reason.
While UDP has less network overhead than TCP, it generally relies on you, the developer, to come up with your own mechanisms for flow control, fragmentation handling, lost packets, etc... | 0 | 992 | 0 | 0 | 2016-09-17T13:29:00.000 | python,image,sockets,stream,udp | Sending an image through UDP communication | 1 | 1 | 1 | 39,556,790 | 0 |
1 | 0 | I want to post data from html to another html
I know how to post data html->python and python-> html
I have dictionary in the html (I get it from python - return render_to_response('page.html', locals())
how can I use with the dictionary in the second html file? | false | 39,571,659 | 0 | 0 | 0 | 0 | If you are not gonna use any sensitive data like password you can use localStorage or Url Hash . | 0 | 95 | 0 | 0 | 2016-09-19T11:05:00.000 | javascript,python,html | Post data from html to another html | 1 | 1 | 2 | 69,865,240 | 0 |
0 | 0 | I've try before sending message from single producer to 2 different consumer with DIFFERENT consumer group id. The result is both consumer able to read the complete message (both consumers getting the same message). But I would like to ask is it possible for these 2 consumers read different messages while setting them under a SAME consumer group name? | true | 39,611,124 | 1.2 | 1 | 0 | 0 | I found the answer already, just make sure the partition number is not equal to one while creating new topic. | 0 | 112 | 0 | 0 | 2016-09-21T08:21:00.000 | python,producer-consumer,kafka-python | Single producer to multi consumers (Same consumer group) | 1 | 1 | 1 | 39,652,029 | 0 |
1 | 0 | Is that a way to create an EC2 instance with tags(I mean, adding tag as parameter when creating instance)
I can't find this function in boto APIs. According to the document, we can only add tags after creating.
However, when creating on the browser, we can configure the tags when creating. So can we do the same thing in boto? (In our course we are required to tag our resource when creating, which is for bill monitor purpose, so adding tags after creating is not allowed.....) | false | 39,628,128 | 0 | 0 | 0 | 0 | At the time of writing, there is no way to do this in a single operation. | 0 | 66 | 0 | 0 | 2016-09-21T23:44:00.000 | python,amazon-web-services,amazon-ec2,boto | Is there a way to set a tag on an EC2 instance while creating it? | 1 | 1 | 1 | 39,649,819 | 0 |
0 | 0 | Other people have asked this question and there are some answers but they do not clarify one moment. Implicit wait will wait for a specified amount of time if element is not found right away and then will run an error after waiting for the specified amount of time. Does it mean that implicit wait checks for the element the very first second and then waits for the specified time and checks at the last second again?
I know that explicit wait polls the DOM every 500ms. What is the practical use of implicit wait if tests take longer with it? | false | 39,628,191 | 0 | 0 | 0 | 0 | In case of implicit wait driver waits till elements appears in DOM but at the same time it does not guarantee that elements are usable. Elements might not be enabled to be used ( like button click ) or elements might not have shape defined at that time.
We are not interested with all the elements on the page as far as we are using selenium. All element might not have shape even.But presence of all the element in DOM is important to have other element working correctly. So implicit wait.
When working with any element, we use explicit wait ( WebDriverwait ) or FluentWait. | 0 | 1,773 | 0 | 2 | 2016-09-21T23:54:00.000 | python,selenium,selenium-webdriver | Selenium Webdriver Python - implicit wait is not clear to me | 1 | 1 | 2 | 39,629,897 | 0 |
0 | 0 | SORRY FOR BAD ENGLISH
Why if I have two send()-s on the server, and two recv()-s on the client, sometimes the first recv() will get the content of the 2nd send() from the server, without taking just the content of the first one and let the other recv() to take the "due and proper" content of the other send()?
How can I get this work in an other way? | false | 39,687,586 | 0 | 0 | 0 | 0 | Most probably you use SOCK_STREAM type socket. This is a TCP socket and that means that you push data to one side and it gets from the other side in the same order and without missing chunks, but there are no delimiters. So send() just sends data and recv() receives all the data available to the current moment.
You can use SOCK_DGRAM and then UDP will be used. But in such case every send() will send a datagram and recv() will receive it. But you are not guaranteed that your datagrams will not be shuffled or lost, so you will have to deal with such problems yourself. There is also a limit on maximal datagram size.
Or you can stick to TCP connection but then you have to send delimiters yourself. | 0 | 66 | 0 | 1 | 2016-09-25T13:53:00.000 | python | Weird behavior of send() and recv() | 1 | 1 | 2 | 39,687,703 | 0 |
0 | 0 | I want to read and write custom data to TCP options field using Scapy. I know how to use TCP options field in Scapy in "normal" way as dictionary, but is it possible to write to it byte per byte? | true | 39,713,540 | 1.2 | 0 | 0 | 1 | You can not directly write the TCP options field byte per byte, however you can either:
write your entire TCP segment byte per byte: TCP("\x01...\x0n")
add an option to Scapy's code manually in scapy/layers/inet.py TCPOptions structure
These are workarounds and a definitive solution to this would be to implement a byte per byte TCP options field and commit on Scapy's github of course. | 0 | 402 | 0 | 0 | 2016-09-26T22:46:00.000 | python,tcp,scapy | Read/Write TCP options field | 1 | 1 | 1 | 40,023,525 | 0 |
0 | 0 | i have made a python desktop app which takes voice commands now i want to present it via Skype to someone and so i want the people to hear the response , is there a way to do it, so that everyone on the call can hear the response and give voice command to it.. | false | 39,716,983 | 0 | 0 | 0 | 0 | Currently there is no way to present content through the Skype Web SDK. This might be something we add in a future release. | 0 | 439 | 0 | 0 | 2016-09-27T05:55:00.000 | python-2.7,skype-for-business,skypedeveloper | Skype integration with python desktop app | 1 | 1 | 1 | 39,847,739 | 0 |
0 | 0 | Need to download image from the tableau server using python script. Tableau Rest API doesn't provide any option to do so.I like to know what is proper way of downloading high resolution/full size image from tableau server using python or any other server scripting language. | true | 39,717,464 | 1.2 | 0 | 0 | 3 | The simplest approach is to issue an HTTP GET request from Python to your Tableau Server and append a format string to the URL such as ".png" or ".pdf".
There are size options you can experiment with as well -- press the Share button to see the syntax.
You can also pass filter settings in the URL as query parameters | 0 | 6,844 | 0 | 2 | 2016-09-27T06:27:00.000 | python,tableau-api | Tableau download/export images using Rest api python | 1 | 1 | 2 | 39,732,617 | 0 |
0 | 0 | I'm working with a networking appliance that has vague API documentation. I'm able to execute PATCH and GET requests fine, but POST isn't working. I receive HTTP status error 422 as a response, I'm missing a field in the JSON request, but I am providing the required fields as specified in the documentation. I have tried the Python Requests module and the vendor-provided PyCurl module in their sample code, but have encountered the same error.
Does the REST API have a debug method that returns the required fields, and its value types, for a specific POST? I'm speaking more of what the template is configured to see in the request (such as JSON {str(ServerName) : int(ServerID)}, not what the API developer may have created. | true | 39,726,577 | 1.2 | 0 | 0 | 1 | No this does not exist in general. Some services support an OPTIONS request to the route in question, which should return you documentation about the route. If you are lucky this is machine generated from the same source code that implements the route, so is more accurate than static documentation. However, it may just return a very simple summary, such as which HTTP verbs are supported, which you already know.
Even better, some services may support a machine description of the API using WSDL or WADL, although you probably will only find that if the service also supports XML. This can be better because you will be able to find a library that can parse the description and generate a local object model of the service to use to interact with the API.
However, even if you have OPTIONS or WADL file, the kind of error you are facing could still happen. If the documents are not helping, you probably need to contact the service support team with a demonstration of your problem and request assistance. | 1 | 2,845 | 0 | 3 | 2016-09-27T13:54:00.000 | python,rest,pycurl,http-status-code-422 | How to determine what fields are required by a REST API, from the API? | 1 | 1 | 1 | 39,726,781 | 0 |
0 | 0 | I wrote a little Python script that parses a website.
I got a "ä" character in form of \u00e4 in a url from a link like http://foo.com/h\u00e4ppo, and I need http://foo.com/häppo. | false | 39,778,909 | 0 | 0 | 0 | 0 | Unluckily this depends heavily on the encoding of the site you parsed, as well as your local IO encoding.
I'm not really sure if you can translate it after parsing, and if it's really worth the work. If you have the chance to parse it again you can try using python's decode() function, like:
text.decode('utf8')
Besides that, check that the encoding used above is the same that in your local environment. This is specially important on Windows environments, since they use cp1252 as their standard encoding.
In Mac and Linux: export PYTHONIOENCODING=utf8
In Windows: set PYTHONIOENCODING=utf8
It's not much, but I hope it helps. | 1 | 552 | 0 | 1 | 2016-09-29T19:55:00.000 | python,url,encoding | UTF8 Character in URL String | 1 | 1 | 2 | 39,779,578 | 0 |
1 | 0 | I want to scrape a lot (a few hundred) of sites, which are basically like bulletin boards. Some of these are very large (up to 1.5 million) and also growing very quickly. What I want to achieve is:
scrape all the existing entries
scrape all the new entries near real-time (ideally around 1 hour intervals or less)
For this we are using scrapy and save the items in a postresql database. The problem right now is, how can I make sure I got all the records without scraping the complete site every time? (Which would not be very agressive traffic-wise, but also not possible to complete within 1 hour.)
For example: I have a site with 100 pages and 10 records each. So I scrape page 1, and then go to page 2. But on fast growing sites, at the time I do the request for page 2, there might be 10 new records, so I would get the same items again. Nevertheless I would get all items in the end. BUT next time scraping this site, how would I know where to stop? I can't stop at the first record I already have in my database, because this might be suddenly on the first page, because there a new reply was made.
I am not sure if I got my point accross, but tl;dr: How to fetch fast growing BBS in an incremental way? So with getting all the records, but only fetching new records each time. I looked at scrapy's resume function and also at scrapinghubs deltafetch middleware, but I don't know if (and how) they can help to overcome this problem. | false | 39,805,237 | 0.197375 | 0 | 0 | 1 | For example: I have a site with 100 pages and 10 records each. So I scrape page 1, and then go to page 2. But on fast growing sites, at the time I do the request for page 2, there might be 10 new records, so I would get the same items again. Nevertheless I would get all items in the end. BUT next time scraping this site, how would I know where to stop? I can't stop at the first record I already have in my database, because this might be suddenly on the first page, because there a new reply was made.
Usually each record has a unique link (permalink) e.g. the above question can be accessed by just entering https://stackoverflow.com/questions/39805237/ & ignoring the text beyond that. You'll have to store the unique URL for each record and when you scrape next time, ignore the ones that you already have.
If you take the example of tag python on Stackoverflow, you can view the questions here : https://stackoverflow.com/questions/tagged/python but the sorting order can't be relied upon for ensuring unique entries. One way to scrape would be to sort by newest questions and keep ignoring duplicate ones by their URL.
You can have an algorithm that scrapes first 'n' pages every 'x' minutes until it hits an existing record. The whole flow is a bit site specific, but as you scrape more sites, your algorithm will become more generic and robust to handle edge cases and new sites.
Another approach is to not run scrapy yourself, but use a distributed spider service. They generally have multiple IPs and can spider large sites within minutes. Just make sure you respect the site's robots.txt file and don't accidentally DDoS them. | 0 | 265 | 0 | 0 | 2016-10-01T09:56:00.000 | python,postgresql,web-scraping,scrapy | How to go about incremental scraping large sites near-realtime | 1 | 1 | 1 | 39,805,342 | 0 |
0 | 0 | So I'm currently using chromedriver for selenium with Python, responses are quite slow, so I'm trying to reduce how much chromedriver loads..
is there anyway I can remove the address bar, tool bar and most of the gui from chrome its self using chrome arguments? | false | 39,807,281 | 0.197375 | 0 | 0 | 1 | I don't think hiding the address bar and other GUI elements will have any effect. I would like to suggest using PhantomJS, a headless browser without a GUI at all. This will certainly speed up your tests. | 0 | 206 | 0 | 0 | 2016-10-01T13:35:00.000 | python,selenium,selenium-chromedriver | Python Selenium CHROMEDRIVER | 1 | 1 | 1 | 39,807,531 | 0 |
0 | 0 | Im using python.
I dont understand the purpose of empty string in IP to connect to if its not to connect between two Computers in the same LAN router.
My knowledge in network is close to zero, so when I reading in the internet somthing like this:
empty string represents INADDR_ANY, and the string ''
represents INADDR_BROADCAST
So if you please will be able to explain me, like you explain to a baby that dont know nothing - what is the purpose of any of the follows in the IP location in socket object:
broadcast
''
localhost
and if there is more, so I will be glad to know about them too. Tanks. | true | 39,815,633 | 1.2 | 0 | 0 | 2 | 'localhost' (or '127.0.0.1') is used to connect with program on the same computer - ie. database viewer <-> local database server, Doom client <-> local Doom server. This way you don't have to write different method to connect to local server.
Computer can have more then one network card (NIC) and every NIC has own IP address. You can use this IP in program and then program will use only this one NIC to receive requests/connections. This way you may have server which receives requests only from LAN but not from Internet - it is very popular for databases used by web servers.
Empty string means '0.0.0.0' which means that program will receive requests from all NICs. | 0 | 1,707 | 0 | 2 | 2016-10-02T09:30:00.000 | python,sockets | I have get really confused in IP types with sockets (empty string, 'local host', etc...) | 1 | 1 | 1 | 39,815,976 | 0 |
1 | 0 | I have a .csv file with a list of URLs I need to extract data from. I need to automate the following process: (1) Go to a URL in the file. (2) Click the chrome extension that will redirect me to another page which displays some of the URL's stats. (3) Click the link in the stats page that enables me to download the data as a .csv file. (4) Save the .csv. (5) Repeat for the next n URLs.
Any idea how to do this? Any help greatly appreciated! | false | 39,836,893 | 0 | 0 | 0 | 0 | There is a python package called mechanize. It helps you automate the processes that can be done on a browser. So check it out.I think mechanize should give you all the tools required to solve the problem. | 0 | 282 | 0 | 0 | 2016-10-03T17:11:00.000 | python,automation,imacros | Automate file downloading using a chrome extension | 1 | 1 | 1 | 39,837,450 | 0 |
1 | 0 | I have a webservice (web.py+cx_Oracle) and now I will call it with localhost:8080/...!
On the local pc it is working. But after installation on a second pc for testing purposes it is not working there. All versions are the same!
On the second pc the browser is asking for a username and password from XDB. What is XDB and why is he asking only on the second pc?
On the first pc everything works fine and he is not asking for username and password...Can someone explain to me what is going on? | false | 39,869,000 | 0 | 0 | 0 | 0 | XDB is an Oracle database component. It would appear that on your first PC, you're able to automatically log on to the database which is why you're not prompted. However, the second PC isn't able to, so you're prompted.
Compare using SQL*Plus (or other oracle client) from your two PCs & configure PC #2 so that it won't require a login (or modify your cx_oracle connect() call to provide the correct connection parameters (user, password, dsn, etc.) | 0 | 809 | 0 | 0 | 2016-10-05T08:30:00.000 | python,web.py,cx-oracle | Asking for username and password from XDB | 1 | 1 | 1 | 40,708,891 | 0 |
0 | 0 | I have a repeatable problem with my laptop (an HP G4 250 that came with windows 10). I can be happily on the Internet, but opening Spyder causes the Internet to immediately die. Now, the system does something rather unusual. I am not disconnected from the router, and the wireless icon still says I am connected and have Internet access. But streams crash, webpages refuse to load and say there is no internet connection, and I can;t even access my router's config page.
Closing Spyder fixes the problem. Not instantly, but when Spyder is open, it creates several pythonw.exe network requests (seen from resource manager) and the Internet is restored when those processes close themselves upon exiting Spyder (typically 10 seconds to 2 minutes, depending on system load).
I have added Spyder to my firewall, but that has done nothing. I haven't added (nor found) pythonw.exe, but it's not Spyder that has the problem with connecting, it's my entire machine.
It's not coincidental. It's happened now, 2 days in a row, and is highly repeatable. After a while with Spyder being open, I can sometimes receive intermittent Internet function, but it frequently drops until I close the program.
After experiencing it last night, I purged my driver and reinstalled it fresh, and that has fixed nothing. I am running the latest wireless driver provided by HP for my machine. As this problem only occurs when running Spyder, I doubt it's a driver or hardware issue.
Any ideas? | false | 39,902,458 | 0.664037 | 0 | 0 | 4 | I had the same problem when Spyder was open, with all of my Internet browsers, on both Windows 7 and Windows 10. The newest update of Spyder has fixed most of this for me. Try opening up the command prompt and typing:
conda update spyder.
Hope this helps! | 0 | 1,453 | 0 | 3 | 2016-10-06T17:42:00.000 | python,windows-10,wireless,spyder | Anaconda 3 Spyder appears to be causing internet outages on Windows 10 | 1 | 1 | 1 | 40,372,092 | 0 |
1 | 0 | I have a Java process which interacts with its REST API called from my program's UI. When I receive the API call, I end up calling the (non-REST based) Python script(s) which do a bunch of work and return me back the results which are returned back as API response.
- I wanted to convert this interaction of UI API -> JAVA -> calling python scripts to become end to end a REST one, so that in coming times it becomes immaterial which language I am using instead of Python.
- Any inputs on whats the best way of making the call end-to-end a REST based ? | false | 39,906,167 | 0 | 0 | 0 | 0 | Furthermore, in the future you might want to separate them from the same machine and use network to communicate.
You can use http requests.
Make a contract in java of which output you will provide to your python script (or any other language you will use) send the output as a json to your python script, so in that way you can easily change the language as long as you send the same json. | 0 | 2,327 | 0 | 1 | 2016-10-06T21:51:00.000 | java,python,rest,api | Inputs on how to achieve REST based interaction between Java and Python? | 1 | 1 | 2 | 39,906,371 | 0 |
0 | 0 | Using proxy connection (HTTP Proxy : 10.3.100.207, Port 8080).
Using python's request module's get function, getting following error:
"Unable to determine SOCKS version from socks://10.3.100.207:8080/" | false | 39,906,836 | 0.197375 | 0 | 0 | 2 | I resolved this problem by removing "socks:" in_all_proxy. | 0 | 9,897 | 0 | 3 | 2016-10-06T22:53:00.000 | proxy,python-requests,socks | Unable to determine SOCKS version from socks | 1 | 2 | 2 | 46,545,628 | 0 |
0 | 0 | Using proxy connection (HTTP Proxy : 10.3.100.207, Port 8080).
Using python's request module's get function, getting following error:
"Unable to determine SOCKS version from socks://10.3.100.207:8080/" | true | 39,906,836 | 1.2 | 0 | 0 | 9 | Try export all_proxy="socks5://10.3.100.207:8080" if you want to use socks proxy.
Else export all_proxy="" for no proxy.
Hope This works. :D | 0 | 9,897 | 0 | 3 | 2016-10-06T22:53:00.000 | proxy,python-requests,socks | Unable to determine SOCKS version from socks | 1 | 2 | 2 | 40,343,534 | 0 |
1 | 0 | I don't want to use selenium since I dont want to open any browsers.
The button triggers a Javascript method that changes something in the page.
I want to simulate a button click so I can get the "output" from it.
Example (not what the button actually do) :
I enter a name such as "John", press the button and it changes "John" to "nhoJ".
so I already managed to change the value of the input to John but I have no clue how I could simulate a button click so I can get the output.
Thanks. | false | 39,963,972 | 0.291313 | 0 | 0 | 3 | You can't do what you want. Beautiful soup is a text processor which has no way to run JavaScript. | 0 | 5,581 | 0 | 0 | 2016-10-10T17:50:00.000 | python | Python: How to simulate a click using BeautifulSoup | 1 | 2 | 2 | 39,964,037 | 0 |
1 | 0 | I don't want to use selenium since I dont want to open any browsers.
The button triggers a Javascript method that changes something in the page.
I want to simulate a button click so I can get the "output" from it.
Example (not what the button actually do) :
I enter a name such as "John", press the button and it changes "John" to "nhoJ".
so I already managed to change the value of the input to John but I have no clue how I could simulate a button click so I can get the output.
Thanks. | false | 39,963,972 | 0 | 0 | 0 | 0 | BeautifulSoup is an HtmlParser you can't do such thing. Buf if that button calls an API, you could make a request to that api and I guess that would simulate clicking the button. | 0 | 5,581 | 0 | 0 | 2016-10-10T17:50:00.000 | python | Python: How to simulate a click using BeautifulSoup | 1 | 2 | 2 | 39,964,061 | 0 |
0 | 0 | I am now trying the python module google which only return the url from the search result. And I want to have the snippets as information as well, how could I do that?(Since the google web search API is deprecated) | false | 39,989,680 | 0 | 0 | 0 | 0 | I think you're going to have to extract your own snippets by opening and reading the url in the search result. | 0 | 167 | 0 | 1 | 2016-10-12T02:41:00.000 | python | How can I get the google search snippets using Python? | 1 | 1 | 1 | 39,989,752 | 0 |
0 | 0 | I'm wondering, I'd love to find or write condition to check if some element exists. If it does than I want to execute body of IF condition. If it doesn't exist than to execute body of ELSE.
Is there some condition like this or is it necessary to write by myself somehow? | true | 39,997,176 | 1.2 | 0 | 0 | 9 | By locating the element using xpath, I assume that you're using Sselenium2Library. In that lib there is a keyword named:
Page Should Contain Element which requires an argument, which is a selector, for example the xpath that defines your element.
The keyword failes, if the page does not contain the specified element.
For the condition, use this:
${Result}= Page Should Contain Element ${Xpath}
Run Keyword Unless '${RESULT}'=='PASS' Keyword args*
You can also use an other keyword: Xpath Should Match X Times | 0 | 22,789 | 0 | 4 | 2016-10-12T11:14:00.000 | python,testing,xpath,automated-tests,robotframework | Robot Framework - check if element defined by xpath exists | 1 | 1 | 3 | 40,018,399 | 0 |
0 | 0 | I'm using the Python requests library to send a POST request. The part of the program that produces the POST data can write into an arbitrary file-like object (output stream).
How can I make these two parts fit?
I would have expected that requests provides a streaming interface for this use case, but it seems it doesn't. It only accepts as data argument a file-like object from which it reads. It doesn't provide a file-like object into which I can write.
Is this a fundamental issue with the Python HTTP libraries?
Ideas so far:
It seems that the simplest solution is to fork() and to let the requests library communicate with the POST data producer throgh a pipe.
Is there a better way?
Alternatively, I could try to complicate the POST data producer. However, that one is parsing one XML stream (from stdin) and producing a new XML stream to used as POST data. Then I have the same problem in reverse: The XML serializer libraries want to write into a file-like object, I'm not aware of any possibility that an XML serializer provides a file-like object from which other can read.
I'm also aware that the cleanest, classic solution to this is coroutines, which are somewhat available in Python through generators (yield). The POST data could be streamed through (yield) instead of a file-like object and use a pull-parser.
However, is possible to make requests accept an iterator for POST data? And is there an XML serializer that can readily be used in combination with yield?
Or, are there any wrapper objects that turn writing into a file-like object into a generator, and/or provide a file-like object that wraps an iterator? | false | 40,015,869 | -0.099668 | 0 | 0 | -1 | The only way of connecting a data producer that requires a push interface for its data sink with a data consumer that requires a pull interface for its data source is through an intermediate buffer. Such a system can be operated only by running the producer and the consumer in "parallel" - the producer fills the buffer and the consumer reads from it, each of them being suspended as necessary. Such a parallelism can be simulated with cooperative multitasking, where the producer yields the control to the consumer when the buffer is full, and the consumer returns the control to the producer when the buffer gets empty. By taking the generator approach you will be building a custom-tailored cooperative multitasking solution for your case, which will hardly end up being simpler compared to the easy pipe-based approach, where the responsibility of scheduling the producer and the consumer is entirely with the OS. | 0 | 10,296 | 0 | 6 | 2016-10-13T08:25:00.000 | python,xml,http,python-requests,generator | How to stream POST data into Python requests? | 1 | 1 | 2 | 40,018,118 | 0 |
0 | 0 | Elasticsearch has a very useful feature called profile API. That is you can get very useful information on the queries performed. I am using the python elasticsearch library to perform the queries and want to be able to get those information back but I don't see anywhere in the docs that this is possible. Have you managed to do it someway? | true | 40,023,005 | 1.2 | 0 | 0 | 0 | adding profile="true" to the body did the trick. In my opinion this should be an argument like size etc in the search method of the Elasticsearch class | 0 | 223 | 0 | 1 | 2016-10-13T13:54:00.000 | python,elasticsearch,profiler | Elasticsearch profile api in python library | 1 | 1 | 1 | 40,023,342 | 0 |
1 | 0 | So I want to implement async file upload for a website. It uses python and javascript for frontend. After googling, there are a few great posts on them. However, the posts use different methods and I don't understand which one is the right one.
Method 1:
Use ajax post to the backend.
Comment: does it make a difference? I thought async has to be in the backend not the front? So when the backend is writing files to disk, it will still be single threaded.
Method 2:
Use celery or asyncio to upload file in python.
Method 3:
use background thread to upload file in python.
Any advice would be thankful. | true | 40,034,010 | 1.2 | 0 | 0 | 1 | Asynchronous behavior applies to either side independently. Either side can take advantage of the capability to take care of several tasks as they become ready rather than blocking on a single task and doing nothing in the meantime. For example, servers do things asynchronously (or at least they should) while clients usually don't need to (though there can be benefits if they do and modern programming practices encourage that they do). | 1 | 786 | 0 | 0 | 2016-10-14T02:33:00.000 | python,ajax,asynchronous | Confusion on async file upload in python | 1 | 1 | 1 | 40,034,220 | 0 |
0 | 0 | streamer.filter(locations=[-180, -90, 180, 90], languages=['en'], async=True)
I am trying to extract the tweets which have been geotagged from the twitter streaming API using the above call. However, I guess tweepy is not able to handle the requests and quickly falls behind the twitter rate. Is there a suggested workaround the problem ? | false | 40,071,459 | 0 | 1 | 0 | 0 | There is no workaround to rate limits other than polling for the rate limit status and waiting for the rate limit to be over. you can also use the flag 'wait_on_rate_limit=True'. This way tweepy will poll for rate limit by itself and sleep until the rate limit period is over.
You can also use the flag 'monitor_rate_limit=True' if you want to handle the rate limit "Exception" by yourself.
That being said, you should really devise some smaller geo range, since your rate limit will be reached every 0.000000001 seconds (or less... it's still twitter). | 0 | 74 | 0 | 0 | 2016-10-16T14:34:00.000 | python,twitter,tweepy | Prevent Tweepy from falling behind | 1 | 1 | 1 | 40,072,173 | 0 |
0 | 0 | Here's the scenario: a main thread spawns upto N worker threads that each will update a counter (say they are counting number of requests handled by each of them).
The total counter also needs to be read by the main thread on an API request.
I was thinking of designing it like so:
1) Global hashmap/array/linked-list of counters.
2) Each worker thread accesses this global structure using the thread-ID as the key, so that there's no mutex required to protect one worker thread from another.
3) However, here's the tough part: no example I could find online handles this: I want the main thread to be able to read and sum up all counter values on demand, say to serve an API request. I will NEED a mutex here, right?
So, effectively, I will need a per-worker-thread mutex that will lock the mutex before updating the global array -- given each worker thread only contends with main thread, the mutex will fail only when main thread is serving the API request.
The main thread: when it receives API request, it will have to lock each of the worker-thread-specific mutex one by one, read that thread's counter to get the total count.
Am I overcomplicating this? I don't like requiring per-worker-thread mutex in this design.
Thanks for any inputs. | true | 40,076,481 | 1.2 | 0 | 0 | 0 | Your design sounds like the correct approach. Don't think of them as per-thread mutexes: think of them as per-counter mutexes (each element of your array should probably be a mutex/counter pair).
In the main thread there may be no need to lock all of the mutexes and then read all of the counters: you might be able to do the lock/read/unlock for each counter in sequence, if the value is something like your example (number of requests handled by each thread) where reading all the counters together doesn't give a "more correct" answer than reading them in sequence.
Alternatively, you could use atomic variables for the counters instead of locks if your language/environment offers that. | 1 | 89 | 0 | 0 | 2016-10-16T23:08:00.000 | java,python,c++,multithreading,pthreads | Global array storing counters updated by each thread; main thread to read counters on demand? | 1 | 2 | 2 | 40,077,181 | 0 |
0 | 0 | Here's the scenario: a main thread spawns upto N worker threads that each will update a counter (say they are counting number of requests handled by each of them).
The total counter also needs to be read by the main thread on an API request.
I was thinking of designing it like so:
1) Global hashmap/array/linked-list of counters.
2) Each worker thread accesses this global structure using the thread-ID as the key, so that there's no mutex required to protect one worker thread from another.
3) However, here's the tough part: no example I could find online handles this: I want the main thread to be able to read and sum up all counter values on demand, say to serve an API request. I will NEED a mutex here, right?
So, effectively, I will need a per-worker-thread mutex that will lock the mutex before updating the global array -- given each worker thread only contends with main thread, the mutex will fail only when main thread is serving the API request.
The main thread: when it receives API request, it will have to lock each of the worker-thread-specific mutex one by one, read that thread's counter to get the total count.
Am I overcomplicating this? I don't like requiring per-worker-thread mutex in this design.
Thanks for any inputs. | false | 40,076,481 | 0 | 0 | 0 | 0 | Just use an std::atomic<int> to keep a running count. When any thread updates its counter it also updates the running count. When the main thread needs the count it reads the running count. The result may be less than the actual total at any given moment, but whenever things settle down, the total will be right. | 1 | 89 | 0 | 0 | 2016-10-16T23:08:00.000 | java,python,c++,multithreading,pthreads | Global array storing counters updated by each thread; main thread to read counters on demand? | 1 | 2 | 2 | 40,076,739 | 0 |
1 | 0 | I'm using Robotframework selenium2Library with python base and Firefox browser for automating our web application. Having below issue when ever a Click event is about occur,
Header in the web application is immovable during page scroll(ie., whenever page scroll happens header would always be available for user view, only the contents get's scrolled) now the issue is, when a element about to get clicked is not available in page view, click event tries to scroll page to bring the element on top of the webpage,which is exactly below the header(overlap) and click event never occurs, getting below exception.
WebDriverException: Message: Element is not clickable at point (1362.63330078125, 15.5). Other element would receive the click: https://url/url/chat/chat.asp','popup','height=600, width=680, scrollbars=no, resizable=yes, directories=no, menubar=no, status=no, toolbar=no'));">
I have tried Wait Until Page is Visible keyword, but still this doesn't help, as the next statement, Click event(Click Element, Click Link etc) is again scrolling up to the header.
Header being visible all time is a feature in our web application and due this scrips are failing, Can some one please help to over come this issue and make the click event to get executed successfully? | false | 40,100,528 | 0 | 0 | 0 | 0 | If you know the element is clickable and just want to click anyway, try using Click Element At Coordinates with a 0,0 offset. It'll ignore the fact that it's obscured and will just click. | 0 | 1,072 | 0 | 1 | 2016-10-18T05:53:00.000 | python-2.7,robotframework,selenium2library | Robotframework Selenium2Library header overlay on element to be clicked during page scroll | 1 | 1 | 3 | 61,486,889 | 0 |
1 | 0 | Firefox can display '囧' in gb2312 encoded HTML. But u'囧'.encode('gb2312') throws UnicodeEncodeError.
1.Is there a map, so firefox can lookup gb2312 encoded characters in that map, find 01 display matrix and display 囧.
2.Is there a map for tranlating unicode to gb2312 but u'囧' is not in that map? | false | 40,100,596 | 0.291313 | 0 | 0 | 3 | 囧 not in gb2312, use gb18030 instead. I guess firefox may extends encode method when she face unknown characters. | 0 | 211 | 0 | 1 | 2016-10-18T05:58:00.000 | python,unicode,encode,gb2312 | u'囧'.encode('gb2312') throws UnicodeEncodeError | 1 | 1 | 2 | 40,100,834 | 0 |
1 | 0 | If your seeing this I guess you are looking to run chromium on a raspberry pi with selenium.
like this Driver = webdriver.Chrome("path/to/chomedriver") or like this webdriver.Chrome() | false | 40,141,260 | 0.761594 | 0 | 0 | 5 | I have concluded, after hours and a whole night of debugging that you can't install it, because there is no chromedriver compatible with a raspberry pi processor. Even if you download the linux 32bit. You can confirm it by running this line in a terminal window path/to/chromedriver it will give you this error
cannot execute binary file: Exec format error
Hope this helps anyone that wanted to do this :) | 0 | 729 | 0 | 2 | 2016-10-19T20:48:00.000 | python,selenium,raspberry-pi | selenium run chrome on raspberry pi | 1 | 1 | 1 | 40,141,261 | 0 |
1 | 0 | Is it possible with the Google DoubleClick Bid Manager API to create campaigns, set bids and buy adds?, I have checked the documentation and it seems that there are limited endpoints.
These are all the available endpoints according to the documentation:
doubleclickbidmanager.lineitems.downloadlineitems Retrieves line items in CSV format.
doubleclickbidmanager.lineitems.uploadlineitems Uploads line items in
CSV format.
doubleclickbidmanager.queries.createquery Creates a query.
doubleclickbidmanager.queries.deletequery Deletes a stored query as
well as the associated stored reports.
doubleclickbidmanager.queries.getquery Retrieves a stored query.
doubleclickbidmanager.queries.listqueries Retrieves stored queries.
doubleclickbidmanager.queries.runquery Runs a stored query to
generate a report.
doubleclickbidmanager.reports.listreports Retrieves stored reports.
doubleclickbidmanager.sdf.download Retrieves entities in SDF format.
None of these endpoints can do tasks as buy ads, set bids or create campaigns, so I think those tasks can only be done through the UI and not with the API.
Thanks in advance for your help. | true | 40,180,601 | 1.2 | 0 | 0 | 1 | I found the way to solve this problem. The actual API v1 has this capabilities but the documentation is not very clear about it.
You need to download your Line Items file as CSV or any other supported format, then from that downloaded file you must edit it with any script you want, so you must edit the columns of Status to perform this operation. Also, if you want to create a new campaign, you will need to do the same for new Line Items. After editing the CSV or created one, you must uploaded back to google with the relative endpoint: uploadlineitems.
Google will answer to the owner of the Bid Manager account what changes were accepted from that file that you sent.
I have confirmed that this is the same behaviour that Google uses for other products where they consume their own API:
Download or Create Line Items file as CSV or any other supported format.
Edit Line Items.
Upload Line Items.
So basically you only need to create a script that edits CSV files and another to authenticate with the API. | 0 | 1,015 | 0 | 3 | 2016-10-21T15:38:00.000 | google-api-python-client,double-click-advertising | Create Campaigns, set bids and buy adds from DoubleClick Bid Manager API | 1 | 1 | 2 | 40,370,299 | 0 |
0 | 0 | I have a script that I found on the internet that worked in Python 3.4, but not Python 3.5. I'm not too familiar in python, but it has the
#!/usr/bin/env python3
schlebang at the top of the file. And it also throws this exception when I try to run it:
Traceback (most recent call last):
File "/home/username/folder/script.py", line 18, in
doc = opener.open(url)
File "/usr/lib/python3.5/urllib/request.py", line 472, in open
response = meth(req, response)
File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.5/urllib/request.py", line 504, in error
result = self._call_chain(*args)
File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
result = func(*args)
File "/usr/lib/python3.5/urllib/request.py", line 968, in http_error_401
url, req, headers)
File "/usr/lib/python3.5/urllib/request.py", line 921, in http_error_auth_reqed
return self.retry_http_basic_auth(host, req, realm)
File "/usr/lib/python3.5/urllib/request.py", line 931, in retry_http_basic_auth
return self.parent.open(req, timeout=req.timeout)
File "/usr/lib/python3.5/urllib/request.py", line 472, in open
response = meth(req, response)
File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.5/urllib/request.py", line 510, in error
return self._call_chain(*args)
File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
result = func(*args)
File "/usr/lib/python3.5/urllib/request.py", line 590, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
Python isn't really my preferred langage, so I don't know what to do. This is a script that's supposed to access my Gmail account and pull new mails from it. Do you guys have any suggestions? I'm using Arch Linux, if that helps. | true | 40,204,380 | 1.2 | 0 | 0 | 2 | Variant A:
Run this script as python3.4 /path/to/script
Variant B:
Change the schebang to #!/usr/bin/python3.4 | 0 | 585 | 1 | 0 | 2016-10-23T15:03:00.000 | python-3.x,python-3.4,python-3.5 | use python 3.4 instead of python 3.5 | 1 | 1 | 1 | 49,835,752 | 0 |
0 | 0 | I've created tool, that runs as a server, and allow clients to connect to it through TCP, and run some commands. It's written on python 3
Now I'm going to build package and upload it to Pypi, and have conceptual problem.
This tool have python client library inside, so, after installation of the package, it'll be possible to just import library into python script, and use for connection to the daemon without dealing with raw TCP/IP.
Also, I have PHP library, for connection to me server, and the problem is - I don't know how to include it into my python package the right way.
Variants, that I found and can't choose the right one:
Just include library.php file into package, and after running "pip install my_package", I would write "require('/usr/lib/python3/dist-packages/my_package/library.php')" into my php file. This way allows to distribute library with the server, and update it synchronously, but add long ugly paths to php require instruction.
As library.php in placed on github repository, I could just publish it's url in the docs, and it'll be possible to just clone repository. It makes possible to clone repo, and update library by git pull.
Create separate package with my library.php, upload it into packagist, and use composer to download it when it's needed. Good for all composer users, and allow manual update, but doens't update with server's package.
Maybe I've missed some other variants.
I want to know what would be true python'ic and php'ic way to do this. Thanks. | true | 40,236,281 | 1.2 | 1 | 0 | 0 | I've decided to create separate PHP package for my PHP library, and upload it to a packagist.org, so, user could get it using php composer, but not forced to, as it would be in case of including library.php into python package. | 0 | 605 | 0 | 0 | 2016-10-25T09:29:00.000 | php,python,python-3.x,pip,composer-php | How to include PHP library in Python package (on not do it) | 1 | 1 | 1 | 40,252,615 | 0 |
1 | 0 | I am using a proxy (from proxymesh) to run a spider written in scrapy python, the script is running normally when I don't use the proxy, but when I use it, I am having the following error message:
Could not open CONNECT tunnel with proxy fr.proxymesh.com:31280 [{'status': 408, 'reason': 'request timeout'}]
Any clue about how to figure out?
Thanks in advance. | true | 40,266,219 | 1.2 | 0 | 0 | 1 | Thanks.. I figure out here.. the problem is that some proxy location doesn't work with https.. so I just changed it and now it is working. | 0 | 497 | 0 | 0 | 2016-10-26T15:24:00.000 | python,proxy,web-scraping,scrapy,web-crawler | Proxy Error 408 when running a script written in Scrapy Python | 1 | 1 | 1 | 40,296,417 | 0 |
1 | 0 | Is there a way to upload questions to "Retrieve and Rank" (R&R) using cURL and have them be visible in the web tool?
I started testing R&R using web tool (which I find very intuitive). Now, I have started testing the command line interface (CLI) for more efficient uploading of question-and-answer pairs using train.py. However, I would still like to have the questions visible in web tool so that other people can enter the collection and perform training there as well. Is it possible in the present status of R&R? | false | 40,293,466 | 0.197375 | 1 | 0 | 1 | Sorry, no - there isn't a public supported API for submitting questions for use in the tool.
(That wouldn't stop you looking to see how the web tool does it and copying that, but I wouldn't encourage that as the auth step alone would make that fairly messy). | 0 | 85 | 0 | 0 | 2016-10-27T20:14:00.000 | python,curl,ibm-watson,retrieve-and-rank | Upload questions to Retrieve and Rank using cURL, visible in webtool | 1 | 1 | 1 | 40,302,783 | 0 |
0 | 0 | How do I find (in Python 3.x) the default location where flash drives automatically mount when plugged in on a computer that I happen to be using at the time? (It could be any of various non-specific Linux distributions and older/new versions. Depending on which one it is, it may mount at such locations as /media/driveLabel, /media/userName/driveLabel, /mnt/driveLabel, etc.)
I was content just assuming /media/driveLabel until Ubuntu updated its default mount location to include the username (so, now I can't use a static location for bookmarked file settings of a portable app I made across my computers, since I use multiple usernames). So, the paths for the bookmarked files need to be updated every time I use a new computer or user. Note that files on the hard drives are also bookmarked (so, those don't need to be changed; they're set not to load if you're not on the right computer for them).
Anyway, now I'm not content just going with /media mounts, if there's a solution here. I would prefer to be able to find this location without having to mount something and find a path's mount location first, if possible (even though that may help me with the problem that sparked the question). It seems like there should be some provision for this, whether in Python, or otherwise.
In other words, I want to be able to know where my flash drive is going to mount (sans the drive label part)—not where it's already mounted.
EDIT: If /media/username/drivelabel is pretty standard for the automatic mounts across all the major distributions that support automatic mounting (the latest versions, at least, as I seem to recall that Ubuntu didn't always include the username), feel free to let me know, as that pretty much answers the question. Or, you could just tell me a list of automatic flash drive mount locations specific to which major distributions. I guess that could work (though I'd have to update it if they changed things).
FYI EDIT: For my problem I'll probably just save the mount location with the bookmark (so my program knows what part of the bookmark path it was when I open it), and replace that in the bookmark path with the new current mount location when a user loads the bookmark. | false | 40,306,972 | 0.066568 | 0 | 0 | 1 | Why don't you use the Udev to force the location by your self, simply you can create a UDEV script that keep listening on the drives insertion and map the inserted USB drive to specific location on the machine | 0 | 607 | 0 | 0 | 2016-10-28T14:11:00.000 | linux,python-3.x | How do I find the location where flash drives automatically mount in Python? | 1 | 1 | 3 | 40,307,020 | 0 |
0 | 0 | I am developing a new rest service , lets call serviceA which will internally invoke another rest service ,lets call it serviceB and do some data manipulation and return the response. I am trying to determine what http error status codes returned in below scenarios when client invokes serviceA
serviceB is down
serviceB returns the exception to serviceA because data does not exist as per the request.
serviceA gets the correct response from serviceB , but fails to complete the internal processing and errors out.
Thanks, any comments are appreciated. | false | 40,312,526 | 0 | 0 | 0 | 0 | For the client which is calling serviceA , serviceB doesn't exists. serviceB is for an internal mechanism of serviceA. So in my opinion either point 1 or point 3, it should just be 500 internal server error.
For point 2, I think serivceA should catch the serviceB exception for no data and return 204 No content found.
Now, additional points. If you have some logic on your client side when serviceB is down and you must know that, you can return 503 or 504 for point 1. | 0 | 28 | 0 | 0 | 2016-10-28T20:08:00.000 | python,rest,http,error-handling | Error Scenarios in rest service | 1 | 1 | 1 | 40,322,779 | 0 |
0 | 0 | I wrote some testing scripts with selenium, and they were working fine as long as I started them from my account, on a Windows 7 machine. But when a colleague started it from his account, on the same machine, some of the tests had a NoSuchElementException. What can cause that difference, maybe something the graphic-settings like the display resolution?
The scripts are written in Python, they are using Selenium-Webdriver with Firefox. The PC has Windows 7 Enterprise, 64 Bit, Service Pack 1, with Python 2.7.12 installed. | false | 40,314,215 | 0 | 0 | 0 | 0 | It looks like I’ve found the explanation: It’s the screen resolution. My colleague and I were connecting to the PC using RDP, and his connection had a smaller screen resolution than mine.
When I started a connection with the resolution explicitly set to his values, the tests produced errors; when I then started a new connection with my original settings, the tests ran fine again. This was reproducible.
I think the reason is that the page has many elements that move dynamically with the page size, and some elements that are fixed or have a minimum size. So if the screen, and with it the browser window, is too small, one element can be covered by another. Also, a fixed element could end up partly outside the viewport. | 0 | 44 | 0 | 0 | 2016-10-28T22:48:00.000 | python,windows,selenium,selenium-webdriver | What difference can the windows-user-settings make, when running a selenium-script? | 1 | 1 | 1 | 40,421,062 | 0
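One way to make such tests less dependent on the RDP screen resolution (my own suggestion, not something the answer above prescribes) is to force a fixed browser window size at the start of every test:

from selenium import webdriver

driver = webdriver.Firefox()
# Force a known window size so element layout does not depend on the desktop resolution
driver.set_window_size(1920, 1080)
# Alternatively, take whatever screen is available:
# driver.maximize_window()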
0 | 0 | I'm using chromedriver with Selenium in Python scripts.
When I run the scripts,
Remote end closed connection without response was raised.
does anyone solve this?
chrome: 55.0.2883.28
chromedriver: 2.25 | false | 40,331,375 | -0.197375 | 0 | 0 | -1 | I had issues with connections closing unexpectedly after updating to selenium 3.8.1, using Chrome and Java. I was able to resolve the issue by re-trying the driver setup when it quit unexpectedly. | 0 | 2,445 | 0 | 6 | 2016-10-30T16:52:00.000 | python,google-chrome,selenium,selenium-chromedriver | Remote end closed connection without response chromedriver | 1 | 1 | 1 | 52,170,277 | 0 |
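The answer above only describes re-trying the driver setup; a minimal retry wrapper along those lines might look like this (the retry count and delay are my own assumptions):

import time
from selenium import webdriver
from selenium.common.exceptions import WebDriverException

def create_driver(retries=3, delay=2):
    # Try to start ChromeDriver a few times before giving up
    for attempt in range(1, retries + 1):
        try:
            return webdriver.Chrome()
        except WebDriverException:
            if attempt == retries:
                raise
            time.sleep(delay)

driver = create_driver()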
0 | 0 | I am having an issue with a regular expression: I need the most efficient regex that
matches an IP address whose octets are in the range 0 to 255 only.
I tried this one, "ip_pattern = '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'", but it also matches numbers over 255, such as 321.222.11.4 | false | 40,370,552 | 0.099668 | 0 | 0 | 1 | Use this regex. It matches an IP address and checks that each octet is within 0 to 255.
\b(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\.(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\.(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\.(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])\b | 1 | 1,682 | 0 | 2 | 2016-11-02T00:14:00.000 | python,regex,ip,analysis | IP address regex python | 1 | 1 | 2 | 53,222,067 | 0
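A small usage sketch of that pattern in Python, anchoring the whole string with re.fullmatch instead of word boundaries (that choice is mine, not from the answer):

import re

OCTET = r"(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]?[0-9])"
IP_RE = re.compile(r"{0}\.{0}\.{0}\.{0}".format(OCTET))

def is_valid_ip(text):
    # fullmatch (Python 3.4+) ensures the whole string is an IP, not just a substring
    return IP_RE.fullmatch(text) is not None

print(is_valid_ip("192.168.0.1"))   # True
print(is_valid_ip("321.222.11.4"))  # False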
0 | 0 | I am developing load balancing between multiple controllers in SDN. Once the load is calculated on controller-1, I need to migrate some part of it to controller-2. I have created the topology using Mininet and am running 2 remote POX controllers, one on 127.0.0.1:6633 and the other on 127.0.0.1:6634. How do I communicate between these controllers? How can I send load information from controller-1 to controller-2 and migrate some flows there? | false | 40,384,775 | 0 | 0 | 0 | 0 | POX is not a distributed controller. I would really recommend migrating immediately to ONOS or OpenDaylight. You would implement your solution on top of ONOS. | 0 | 1,009 | 1 | 0 | 2016-11-02T16:17:00.000 | python,sdn,pox | Communicating between multiple pox controllers | 1 | 1 | 2 | 43,265,462 | 0
0 | 1 | I know this question has been asked before but those answers seem to revolve around Hadoop. For Spark you don't really need all the extra Hadoop cruft. With the spark-ec2 script (available via GitHub for 2.0) your environment is prepared for Spark. Are there any compelling use cases (other than a far superior boto3 sdk interface) for running with EMR over EC2? | true | 40,410,975 | 1.2 | 0 | 0 | 3 | This question boils down to the value of managed services, IMHO.
Running Spark standalone in local mode only requires that you get the latest Spark, untar it, cd to its bin path and then run spark-submit, etc.
However, creating a multi-node cluster that runs in cluster mode requires that you actually do real networking, configuring, tuning, etc. This means you've got to deal with IAM roles, Security groups, and there are subnet considerations within your VPC.
When you use EMR, you get a turnkey cluster in which you can 1-click install many popular applications (spark included), and all of the Security Groups are already configured properly for network communication between nodes, you've got logging already setup and pointing at S3, you've got easy SSH instructions, you've got an already-installed apparatus for tunneling and viewing the various UI's, you've got visual usage metrics at the IO level, node level, and job submission level, you also have the ability to create and run Steps -- which are jobs that can be run in the command line of the drive node or as Spark applications that leverage the whole cluster. Then, on top of that, you can export that whole cluster, steps included, and copy paste the CLI script into a recurring job via DataPipeline and literally create an ETL pipeline in 60 seconds flat.
You wouldn't get any of that if you built it yourself in EC2. I know which one I would choose... EMR. But that's just me. | 0 | 129 | 0 | 3 | 2016-11-03T20:48:00.000 | python-3.x,apache-spark,amazon-ec2 | Does EMR still have any advantages over EC2 for Spark? | 1 | 1 | 1 | 40,413,120 | 0 |
0 | 0 | What's the difference between getting text and innerHTML when using Selenium? Even though there is text under a particular element, when we call .text we get empty values, but .get_attribute("innerHTML") works fine.
Can someone point out the difference between the two? When should someone use '.get_attribute("innerHTML")' over .text? | false | 40,416,048 | 0.119427 | 0 | 0 | 3 | .text will retrieve an empty string if the text is not present in the viewport, so you can scroll the element into the viewport and try .text again; it should retrieve the value.
By contrast, innerHTML can get the value even if the element is outside the viewport | 0 | 13,511 | 0 | 11 | 2016-11-04T05:46:00.000 | python,selenium,web-scraping,properties,attributes | Difference between text and innerHTML using Selenium | 1 | 2 | 5 | 40,416,749 | 0
0 | 0 | Whats the difference between getting text and innerHTML when using selenium. Even though we have text under particular element, when we perform .text we get empty values. But doing .get_attribute("innerHTML") works fine.
Can someone point out the difference between two? When someone should use '.get_attribute("innerHTML")' over .text? | false | 40,416,048 | 0.119427 | 0 | 0 | 3 | For instance, <div><span>Example Text</span></div>
.get_attribute("innerHTML") gives you the actual HTML inside the current element. So theDivElement.get_attribute("innerHTML") returns "<span>Example Text</span>"
.text gives you only text, not include HTML node. So theDivElement.text returns "Example Text"
Please note that the algorithm for .text depends on each browser's webdriver. In some cases, such as when the element is hidden, you might get different text with different webdrivers.
I usually get text from .get_attribute("innerText") instead of .text so I can handle all the cases. | 0 | 13,511 | 0 | 11 | 2016-11-04T05:46:00.000 | python,selenium,web-scraping,properties,attributes | Difference between text and innerHTML using Selenium | 1 | 2 | 5 | 40,416,415 | 0
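A small sketch comparing the three accessors discussed above; the URL and locator are placeholders I made up:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")             # placeholder URL
el = driver.find_element(By.TAG_NAME, "h1")   # placeholder locator

print(el.text)                                # rendered, visible text only
print(el.get_attribute("innerText"))          # rendered text, even off-screen (browser-dependent)
print(el.get_attribute("innerHTML"))          # raw HTML markup inside the element

driver.quit()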
1 | 0 | I have a web application which is dynamically deployed on EC2 instances (scalable). I also have an RDS MySQL instance which is dynamically created from Python with boto3. Right now port 3306 of RDS is public, but I want to allow connections only from my EC2 instances in a specific VPC. Can I create the RDS instance in a specific VPC (the same one as the EC2 instances)? What is the best practice for creating such an EC2 + RDS setup? | false | 40,426,863 | 0.099668 | 0 | 1 | 1 | It is certainly best practice to have your Amazon EC2 instances in the same VPC as the Amazon RDS database. Recommended security is:
Create a Security Group for your web application EC2 instances (Web-SG)
Launch your Amazon RDS instance in a private subnet in the same VPC
Configure the Security Group on the RDS instance to allow incoming MySQL (3306) traffic from the Web-SG security group
If your RDS instance is currently in a different VPC, you can take a snapshot and then create a new database from the snapshot.
If you are using an Elastic Load Balancer, you could even put your Amazon EC2 instances in a private subnet since all access will be via the Load Balancer. | 0 | 214 | 0 | 0 | 2016-11-04T15:50:00.000 | python,amazon-web-services,deployment,amazon-ec2,boto3 | Create AWS RDS on specific VPC | 1 | 1 | 2 | 40,433,014 | 0 |
0 | 0 | I can run my python file with imported functionalities from GraphLab from the Terminal (first use the source activate gl-env and then run the file). So the file and installations are alright in that sense.
However, I can't figure out how to run the file directly in the Spyder IDE. I only get ImportError: No module named 'graphlab'. Spyder runs with Python 3.5 and I've tried to change to 2.7, as GraphLab seems to use it, but it doesn't work either (I redirected to the same Python 2.7 'scientific_startup.py' used in the GraphLab lib).
I wonder if anyone knows how to run the file directly from Spyder?? | true | 40,430,960 | 1.2 | 0 | 0 | 2 | Following method will solve this:
Open Spyder --> tools --> preferences --> python interpreter --> change from default to custom and select the python executable under gl-env environment.
Restart spyder. It will work. | 1 | 391 | 0 | 2 | 2016-11-04T20:07:00.000 | python-2.7,python-3.x,spyder,graphlab | run graphlab from Spyder | 1 | 1 | 1 | 40,952,297 | 0 |
0 | 0 | I have a second laptop running kali linux which is not used at all, meaning it can be running anytime as a server for my application. So what I actually want to do is connect from my application to my server and send some data, on the server run a python program that uses this code and return some data back. I never tried to work with servers, can I even turn my computer into a server for my application? does this cost any money? can I run a python code on the server and return the results?
I know I haven't published any code, but I actually don't know how to start this project and I could use some help, so can someone refer me to something to start with? Thanks.. | false | 40,438,629 | 0 | 0 | 0 | 0 | Your problem is definitely not Android-related.
You simply need to educate yourself about networking. Yes, it will cost you some money - you will spend it buying a few books and some hardware for building a home network.
After about 3-6-12 months of playing with your home network you will find your question rather simple to answer. | 0 | 259 | 0 | 0 | 2016-11-05T13:10:00.000 | android,python | Using a server in android to run code | 1 | 1 | 2 | 40,438,672 | 0 |
0 | 0 | I want to make a simple python program which controls my laptop's USB hubs. Nothing extra, just put the first USB port's DATA+ channel into a HIGH (aka 5 V) or LOW (aka 0 V) state. | false | 40,442,917 | 0 | 0 | 0 | 0 | Python is way too high-level for this problem; this behavior would require you to rewrite the USB driver of your OS. | 0 | 651 | 0 | 1 | 2016-11-05T20:32:00.000 | python,usb | Python - Low Level USB Port Control | 1 | 1 | 2 | 40,442,964 | 0
1 | 0 | I am doing a challenge for Google FooBar, and am having trouble submitting my code. My code is correct, and I have checked my program output against the answers provided by Google, and my output is correct. However, when I try and submit, I get a Error 403: Permission denied message. I cannot submit feedback either because I receive the same error message. Does any one have any advice? | false | 40,456,337 | 0 | 0 | 0 | 0 | you have to sign in and associate your foobar to your Gmail then you should be able to request a new challenge. | 0 | 604 | 0 | 1 | 2016-11-07T00:55:00.000 | python | foobar Google - Error 403 permission denied - programming challenge | 1 | 2 | 2 | 60,907,357 | 0 |
1 | 0 | I am doing a challenge for Google FooBar, and am having trouble submitting my code. My code is correct, and I have checked my program output against the answers provided by Google, and my output is correct. However, when I try and submit, I get a Error 403: Permission denied message. I cannot submit feedback either because I receive the same error message. Does any one have any advice? | false | 40,456,337 | 0 | 0 | 0 | 0 | I also faced the same issue. You can solve this by closing the current foobar session and opening a new in another tab.
This will definitely solve this problem. | 0 | 604 | 0 | 1 | 2016-11-07T00:55:00.000 | python | foobar Google - Error 403 permission denied - programming challenge | 1 | 2 | 2 | 44,912,746 | 0 |
0 | 0 | I am using Exchange Web Services (EWS) with python,
I used "UpdateItem" (soap request) to update message (IPF.Note) body and subject,
I can see the changes in OWA, but Outlook is not fetching the updated message under any circumstances.
Is there any property or another method I need to use to make Outlook notice the change and download the message again?
I tried to use the Update Folder button and still nothing.
I am using outlook 2016 with Exchange online (Office 365). | false | 40,469,724 | 0 | 0 | 0 | 0 | OK,
I have found a kind of solution:
Outlook only pulls the message again if the message is moved to a different folder,
so I moved the message to another folder (Junk) and back to the original folder,
and then Outlook fetched the updated message.
I know it's not the best solution though | 0 | 77 | 0 | 0 | 2016-11-07T16:17:00.000 | python,outlook,vsto,exchangewebservices | EWS updating message body not triggering outlook redownload of the message | 1 | 1 | 1 | 40,484,743 | 0 |
0 | 0 | Which is better analogy for describing the communication channel between two INET sockets:
one two-directional "pipe"
two unidirectional "pipes"
If I'm sending something to a two-directional "pipe" and then right away try to receive something from there, I'm expecting to get back what I just sent (unless other end managed to consume it in the meanwhile).
If there are two unidirectional pipes, one for sending and other for receiving (and vice versa for the other end), then I expect writes in one end don't affect the reads in the same end.
I'm new to sockets and after reading Python Socket HOWTO I wasn't able to tell which model is being used. I tried to deduce it by an experiment, but I'm not sure I set it up correctly.
So, can sending in one end affect receiving in the same end, or are these directions separated as if there were two "pipes"? | false | 40,513,969 | 0.099668 | 0 | 0 | 1 | A socket is like two unidirectional pipes. You won't ever read back data that you wrote. You'll only get data written by the other side. | 0 | 98 | 0 | 1 | 2016-11-09T18:48:00.000 | python,sockets | Sockets analogy: a pipe or two pipes? | 1 | 1 | 2 | 40,514,101 | 0 |
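A quick way to see the "two unidirectional pipes" behaviour described above is socket.socketpair(), which gives two connected sockets in one process (available on Unix, and on Windows from Python 3.5):

import socket

a, b = socket.socketpair()
a.setblocking(False)

a.sendall(b"hello")
print(b.recv(1024))       # b'hello' - the other end receives it

try:
    a.recv(1024)          # the sending end does NOT read back its own data
except BlockingIOError:
    print("nothing to read on the sending side")

a.close()
b.close()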
1 | 0 | I ran a Python program that uses Selenium and PhantomJS and got errors 2) and 3) below; then when I ran pip install selenium I got error 1):
1) The program 'pip' is currently not installed.
2) ImportError: No module named 'selenium'
3) selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
All done on Ubuntu 14.04 x64 | false | 40,514,084 | 0.761594 | 0 | 0 | 5 | Here are the answers:
1) sudo apt-get install python-pip
2) sudo pip install selenium
3) sudo apt-get install phantomjs
tested working. i hope it helps you. | 1 | 4,788 | 0 | 0 | 2016-11-09T18:57:00.000 | python,selenium,phantomjs,pip | How to install pip and selenium and phantomjs on ubuntu | 1 | 1 | 1 | 40,514,085 | 0 |
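After installing the packages listed above, a minimal PhantomJS check might look like this; the executable path is an assumption (apt usually puts phantomjs on the PATH, in which case executable_path can be omitted), and note that newer Selenium releases deprecate the PhantomJS driver:

from selenium import webdriver

# Point Selenium at the phantomjs binary if it is not already on PATH
driver = webdriver.PhantomJS(executable_path="/usr/bin/phantomjs")
driver.get("https://example.com")
print(driver.title)
driver.quit()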
1 | 0 | Is there a way to scrape Facebook comments and IDs from a Facebook page like nytimes or the guardian for analytical purposes !? | false | 40,557,678 | -0.099668 | 0 | 0 | -1 | for using their API, you'll need to "verify" your app to get access to their "pages_read_user_content" or "Page Public Content Access"
At first, using the API you might GET the page id / page post id / the permalink to the post on the page on your own, but to scrape the comments with the API you'll need to verify a business account. | 0 | 827 | 0 | 1 | 2016-11-11T23:16:00.000 | python,web-scraping,facebook-apps | Is there a way to scrape Facebook comments and IDs from a Facebook page like nytimes or the guardian for analytical purposes? | 1 | 1 | 2 | 68,362,541 | 0
1 | 0 | i'm relatively new to python, hence the perhaps low level of my question. Anyway, i am trying to create a basic program for just displaying a couple of key statistics for different stocks (beta-value, 30-day high/low, p/e, p/s etc...). I have the GUI finished, but i'm not sure how to proceed with my project. Have been researching for a few hours but can't seem to decide which way to go.
Would you recommend HTML-scraping or yahoo/google finance API or anything else for downloading the data? After i have it downloaded i am pretty much just going to print it on the GUI. | false | 40,565,660 | 0 | 0 | 0 | 0 | It's always best to use the provided API if you get all the information you need from it. If the API doesn't exist or is not good enough, then you go on the scraping path and it usually is more work than using API.
So I would definitely use try using APIs first. | 0 | 204 | 0 | 0 | 2016-11-12T17:31:00.000 | python,html,web-scraping,yahoo-finance,google-finance | Creation of basic "stock-program" | 1 | 1 | 2 | 40,565,682 | 0 |
0 | 0 | I want to read an existing SVG file, traverse all elements and remove them if they match certain conditions (e.g. remove all objects with red border).
There is the svgwrite library for Python2/3 but the tutorials/documentation I found only show how to add some lines and save the file.
Can I also manipulate/remove existing elements inside an SVG document with svgwrite? If not - is there an alternative for Python? | false | 40,574,548 | 0 | 0 | 0 | 0 | The svgwrite package only creates svg. It does not read a svg file. I have not tried any packages to read and process svg files. | 0 | 414 | 0 | 1 | 2016-11-13T13:49:00.000 | python,svg | manipulating SVGs with python | 1 | 1 | 1 | 41,472,508 | 0 |
I'm using PyCharm Community Edition 2.2 with Python 2.7.
I have installed Selenium through the pip install selenium command, but whenever I import the selenium module (from selenium import webdriver) I hit this error: "from selenium import webdriver
ImportError: No module named selenium"
Please help me.. | false | 40,597,058 | 0.197375 | 0 | 0 | 1 | Try installing it through PyCharm:
File -> Settings -> Project:your_project -> Project Interpreter -> green '+' -> find 'selenium' -> install | 0 | 1,262 | 0 | 0 | 2016-11-14T20:15:00.000 | python,selenium | i have getting this error 'from selenium import webdriver ImportError: No module named selenium" even though i have installed selenium module | 1 | 1 | 1 | 40,597,145 | 0 |
1 | 0 | I am clearly confused but not sure if I am screwing up the code, or curl
I would like to use a REST API to pass a schemaname, a queryname, and a number of rows. I've written the Python code using a simple -s schemaname -q queryname -r rows structure. That seems easy enough. But I am having trouble finding a good example of passing multiple arguments in a REST API. No matter which version of the todos example I choose as a model, I just cannot figure out how to extend it for the second and third argument. If it uses a different structure (JSON) for input, I am fine. The only requirement is that it run from curl. I can find examples of passing lists, but not multiple arguments.
If there is a code example that does it and i have missed it, please send me along. As long as it has a curl example I am good.
Thank you | false | 40,629,548 | 0 | 0 | 0 | 0 | I am nominally embarrassed. The issue was NOT the python code at all, it was within Curl. So I both switched to HTTPie and changed the format to Schema=LONGSCHEMANAME
All of my tests started working so clearly I was not specifying the right string in curl. The -d option was beating me. So I apologize for wasting time. Thanks | 0 | 171 | 0 | 0 | 2016-11-16T10:19:00.000 | python,rest,curl | Passing and receiving multiple arguments with Python in Flask RestAPI | 1 | 1 | 1 | 40,684,339 | 0 |
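For reference, one straightforward way to accept the three values as query parameters in Flask, callable from curl; the endpoint and parameter names are illustrative assumptions, not the poster's actual code:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/query")
def run_query():
    schema = request.args.get("schema")
    query = request.args.get("query")
    rows = request.args.get("rows", default=10, type=int)
    # ... run the real query here ...
    return jsonify(schema=schema, query=query, rows=rows)

# Example call:
#   curl "http://localhost:5000/query?schema=LONGSCHEMANAME&query=myquery&rows=25"

if __name__ == "__main__":
    app.run()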
0 | 0 | I'm using lxml to extract a long string value.
I pass this value to a def located in another module for string.split('|').
When I got the list [], its len() was 0.
The problem was somehow the pipe was interpreted as '\n\t\t\t\t'.
When I do string.split('\n\t\t\t\t'), the problem is solved, crisis avoided.
I know it's a representation of escape sequences.
But why?
I know I didn't voluntarily change this in any of my code.
!!! EDIT !!!
Sorry for the trouble, someone kept editing my xml file from the network...
I guess they thought it was funny... | true | 40,633,748 | 1.2 | 0 | 0 | 0 | The changes were not introduced by the interpreter but rather by a prankster in the network, I debuuged him. So no problem. | 1 | 48 | 0 | 0 | 2016-11-16T13:45:00.000 | string,visual-studio,python-3.x,split | string.split() value passed to python3 module interprets '|' like \n\t\t\t\t. Why? | 1 | 1 | 1 | 40,672,518 | 0 |
1 | 0 | I'm trying to figure out how to scrape dynamic AEM sign-in forms using python.
The thing is I've been trying to figure out which module would be best to use for a sign-in form field that dynamically pops up over a webpage.
I've been told Selenium is a good choice, but so is BeautifulSoup.
Any pointers to which one would be best to use for dynamically scraping these? | true | 40,650,154 | 1.2 | 0 | 0 | 1 | I would recommend Selenium, as it provides a complete browser interface and is mostly used for automation. Selenium will make this easier to implement and, most importantly, to maintain. | 0 | 85 | 0 | 0 | 2016-11-17T08:42:00.000 | python,selenium,beautifulsoup,aem | How to scrape AEM forms? | 1 | 1 | 1 | 40,650,411 | 0
1 | 0 | I have created a testsuite which has 2 testcases that are recorded using selenium in firefox. Both of those test cases are in separate classes with their own setup and teardown functions, because of which each test case opens the browser and closes it during its execution.
I am not able to use the same web browser instance for every test case called from my test suite. Is there a way to achieve this? | false | 40,651,064 | 0.197375 | 0 | 0 | 1 | This is how it is supposed to work.
Tests should be independent else they can influence each other.
I think you would want a clean browser each time so that you don't have to clear the session/cookies yourself - maybe not now, but once you have a larger suite you will for sure.
Each scenario will start the browser and close it at the end; you would have to research which methods do this and do some overriding, which is not recommended at all. | 0 | 35 | 0 | 1 | 2016-11-17T09:26:00.000 | python,unit-testing,selenium,selenium-webdriver | Using same webInstance which executing a testsuite | 1 | 1 | 1 | 40,651,174 | 0
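If, despite the caveats in the answer above, a shared browser is really wanted, the usual unittest approach is to create the driver once per class in setUpClass; this is my own suggestion rather than something the answer recommends, and the URLs are placeholders:

import unittest
from selenium import webdriver

class SharedBrowserTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # One browser instance shared by every test in this class
        cls.driver = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

    def test_home_page(self):
        self.driver.get("https://example.com")   # placeholder URL
        self.assertTrue(self.driver.title)

    def test_other_page(self):
        self.driver.get("https://example.com/")  # placeholder URL
        self.assertTrue(self.driver.title)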
0 | 0 | I can't find a way to retrieve the HTTP error code and the error message from a call to a Google API using the Google API Client (in Python).
I know the call can raise a HttpError (if it's not a network problem) but that's all I know.
Hope you can help | false | 40,659,253 | 0.066568 | 0 | 0 | 1 | Actually, found out that e.resp.status is where the HTTP error code is stored (e being the caught exception). Still don't know how to isolate the error message. | 0 | 1,914 | 0 | 1 | 2016-11-17T15:48:00.000 | python-2.7,google-api-client | Retrieve error code and message from Google Api Client in Python | 1 | 1 | 3 | 40,680,753 | 0 |
0 | 0 | When I run the api.py the default IP address is 127.0.0.1:5000 which is the local host. I am running the eve scripts on the server side. Am I able to change that IP address to server's address? or Am I just access it using server's address.
For example,
if the server's address is 11.5.254.12, then I run the api.py.
Am I able to access it outside of the server using 11.5.254.12:5000 or is there any way to change it from 127.0.0.1 to 11.5.254.12? | true | 40,681,689 | 1.2 | 0 | 0 | 6 | Add a parameter to your app.run(). By default it runs on localhost, change it to app.run(host= '0.0.0.0') to run on your machine IP address. | 0 | 1,424 | 0 | 2 | 2016-11-18T16:19:00.000 | python,mongodb,ip,eve | How to change eve's IP address? | 1 | 1 | 1 | 40,690,062 | 0 |
0 | 0 | I want to ask clients to start my chat bot and send me a username and password; then I store their chat_id and use it whenever I want to send a message to one of them.
Is it possible, or will the chat_id expire? | false | 40,689,819 | 0.066568 | 1 | 0 | 1 | When a user registers on Telegram, the server chooses a unique chat_id for that user; the server does this automatically. Thus, if the user sends the /start message to your bot for the first time, this chat_id can be stored in the bot's database (if you code a webhook that records user statistics).
The answer is: if the user hasn't blocked your bot, you can successfully send him/her a message. On the other hand, if the user has deleted their account, there is no way to send a message to the new chat_id!
I hope you got it | 0 | 2,478 | 0 | 1 | 2016-11-19T06:14:00.000 | telegram,telegram-bot,python-telegram-bot,php-telegram-bot,lua-telegram-bot | Can I use chat_id to send message to clients in Telegram bot after a period of time? | 1 | 1 | 3 | 41,173,517 | 0
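A minimal sketch of storing a chat_id and messaging it later through the Bot API's sendMessage endpoint; the token placeholder and the dict-based "storage" are assumptions for illustration:

import requests

BOT_TOKEN = "123456:ABC-your-bot-token"   # placeholder
API_URL = "https://api.telegram.org/bot{}/".format(BOT_TOKEN)

stored_chat_ids = {}   # in practice this would live in a database

def remember_user(username, chat_id):
    stored_chat_ids[username] = chat_id

def message_user(username, text):
    # chat_ids do not expire: this works long after /start,
    # as long as the user has not blocked the bot
    chat_id = stored_chat_ids[username]
    requests.post(API_URL + "sendMessage", data={"chat_id": chat_id, "text": text})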
0 | 0 | I have trouble using imaplib to search for emails matching more than two subject criteria, for example:
import imaplib
m = imaplib.IMAP4_SSL("imap.gmail.com")
m.login('myname', 'mypwd')
m.select("Inbox")
resp, items = m.uid('search', None, "(SUBJECT baseball SUBJECT basketball)")
will have no problem getting data when searching those subjects. However, if I search with more than two subjects,
resp, items = m.uid('search', None, "(SUBJECT baseball SUBJECT basketball SUBJECT football)")
it won't return any data. Also, subjects like "space jam" or "matchbox 20" have trouble being parsed in the field | false | 40,695,276 | 0.197375 | 0 | 0 | 1 | You can search like this:
m.uid('search', None, "(OR (SUBJECT baseball) (SUBJECT basketball))") | 1 | 2,629 | 0 | 2 | 2016-11-19T16:34:00.000 | python | Python Imaplib search multiple SUBJECT criteria and special characters | 1 | 1 | 1 | 50,421,462 | 0 |
0 | 0 | Okay so i know that you can route web-requests through a proxy in Python, is there any way to route ALL traffic from your system through a server. Much like a VPN client such as Hotspot Shield or CyberGhost, but a custom-build client using the Python language?
Any links/help is greatly appreciated.
Thanks. | true | 40,708,834 | 1.2 | 0 | 0 | 2 | The short answer is no.
The long answer is: network routing is managed by OS and could be managed by other utilities, like iptables. Adding such capabilities to standard libraries is out of the scope of programming language. So, what you are probably looking for is a binding of a VPN library (e.g. libpptp) or making syscalls in Cython, which is not much different than writing in C. | 0 | 3,290 | 0 | 1 | 2016-11-20T20:10:00.000 | python,proxy,routing,vpn | How to connect to a VPN/Proxy server via Python? | 1 | 1 | 1 | 40,709,023 | 0 |
1 | 0 | I have been exploring ways to use python to log into a secure website (eg. Salesforce), navigate to a certain page and print (save) the page as pdf at a prescribed location.
I have tried using:
pdfkit.from_url: Use Request to get a session cookie, parse it then pass it as cookie into the wkhtmltopdf's options settings. This method does not work due to pdfkit not being able to recognise the cookie I passed.
pdfkit.from_file: Use Request.get to get the html of the page I want to print, then use pdfkit to convert the html file to pdf. This works but the page format and images are all missing.
Selenium: Use a webdriver to log in then navigate to the wanted page, call the windows.print function. This does not work because I can't pass any arguments to the window's SaveAs dialog.
Does anyone have an idea how to get around this? | false | 40,731,567 | 0 | 0 | 0 | 0 | log in using requests
use requests session mechanism to keep track of the cookie
use session to retrieve the HTML page
parse the HTML (use beautifulsoup)
identify img tags and css links
download locally the images and css documents
rewrite the img src attributes to point to the locally downloaded images
rewrite the css links to point to the locally downloaded css
serialize the new HTML tree to a local .html file
use whatever "HTML to PDF" solution to render the local .html file | 0 | 237 | 0 | 1 | 2016-11-21T23:53:00.000 | python,selenium,pdf,salesforce,pdfkit | Log into secured website, automatically print page as pdf | 1 | 1 | 1 | 40,732,242 | 0 |
0 | 0 | I am currently beginning to use BeautifulSoup to scrape websites. I think I have the basics, even though I lack theoretical knowledge about webpages, so I will do my best to formulate my question.
What I mean with dynamical webpage is the following: a site whose HTML changes based on user action, in my case its collapsible tables.
I want to obtain the data inside some "div" tag, but when you load the page the data seems unavailable in the HTML code. When you click on the table it expands, and the "class" of this "div" changes from something like "something blabla collapsible" to "something blabla collapsible active", and that I can scrape with my current knowledge.
Can I get this data using beautifulsoup? In case I can't, I thought of using something like selenium to click on all the tables and then download the html, which I could scrape, is there an easier way?
Thank you very much. | false | 40,732,906 | 0 | 0 | 0 | 0 | It depends. If the data is already loaded when the page loads, then the data is available to scrape, it's just in a different element, or being hidden. If the click event triggers loading of the data in some way, then no, you will need Selenium or another headless browser to automate this.
Beautiful soup is only an HTML parser, so whatever data you get by requesting the page is the only data that beautiful soup can access. | 0 | 278 | 0 | 4 | 2016-11-22T02:35:00.000 | python,html,selenium,beautifulsoup | Is it possible to scrape a "dynamical webpage" with beautifulsoup? | 1 | 1 | 1 | 40,733,402 | 0 |
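If the tables only render their data after a click (the Selenium route mentioned above), one common pattern is to let Selenium click and then hand the resulting HTML to BeautifulSoup; the URL and class names here are made up:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/collapsible-tables")   # placeholder URL

# Expand every collapsible table so its data gets rendered
for toggle in driver.find_elements(By.CSS_SELECTOR, "div.collapsible"):
    toggle.click()

soup = BeautifulSoup(driver.page_source, "html.parser")
for div in soup.select("div.collapsible.active"):
    print(div.get_text(strip=True))

driver.quit()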
0 | 0 | I want to find out what is the fastest way in Node to parse a URL query value. For example, for hello.org/post.html?action=newthread&fid=32&fpage=1, if I want to get the fid value, I have 3 choices:
1 str.match(/[?/&]fid=(.*?)($|[&#])/)
2 req.query.fid in express, which I found is actually calling https://github.com/ljharb/qs/blob/master/lib/parse.js, which is I found is using str.split('&') in behind
3 str.split('/[&/#?]/') and then use for loop to determine which is start with fid
I'm guessing the 1st is the slowest and the 2nd is the fastest, but I don't know if that's correct (though I could run a test); I do want to know the deeper reason, thanks. | true | 40,736,600 | 1.2 | 0 | 0 | 1 | 1) Regexp is an advanced string operation. For every character encountered while parsing, it has to be matched against each token in the entire regexp string. The complexity is a non-linear function of the length of the source and the length of the regexp string.
2) With a string tokenizer (split) on a single char, the task is clearly cut out: you sequentially traverse the source string, cut and tokenize a word whenever the pattern char is encountered, and move forward. The complexity is on the order of n, where n is the number of chars in the string.
3) is actually a variant of (2), but with more chars in the splitter. So if the first char matches, there is additional work involved to match the subsequent chars, etc. So the complexity increases and moves towards that of regexp. The performance is still better than regexp, as regexp requires further interpretation of its own tokens.
Hope this helps. | 1 | 48 | 0 | 1 | 2016-11-22T08:02:00.000 | javascript,python,node.js | Comparing speed of regexp and split when parsing url query | 1 | 1 | 1 | 40,736,699 | 0 |
0 | 0 | So I'm trying to web crawl clothing websites to build a list of great deals/products to look out for, but I notice that some of the websites that I try to load, don't. How are websites able to block selenium webdriver http requests? Do they look at the header or something. Can you give me a step by step of how selenium webdriver sends requests and how the server receives them/ are able to block them? | true | 40,750,049 | 1.2 | 0 | 0 | 5 | Selenium uses a real web browser (typically Firefox or Chrome) to make its requests, so the website probably has no idea that you're using Selenium behind the scenes.
If the website is blocking you, it's probably because of your usage patterns (i.e. you're clogging up their web server by making 1000 requests every minute. That's rude. Don't do that!)
One exception would be if you're using Selenium in "headless" mode with the HtmlUnitDriver. The website can detect that. | 0 | 7,264 | 0 | 3 | 2016-11-22T19:25:00.000 | python,selenium,firefox,server,phantomjs | Some websites block selenium webdriver, how does this work? | 1 | 2 | 2 | 40,750,148 | 0 |
0 | 0 | So I'm trying to web crawl clothing websites to build a list of great deals/products to look out for, but I notice that some of the websites that I try to load, don't. How are websites able to block selenium webdriver http requests? Do they look at the header or something. Can you give me a step by step of how selenium webdriver sends requests and how the server receives them/ are able to block them? | false | 40,750,049 | 0 | 0 | 0 | 0 | It's very likely that the website is blocking you due to your AWS IP.
Not only does that tell the website that somebody is likely scraping them programmatically, but most websites also have a limited number of queries they will accept from any one IP address.
You most likely need a proxy service to pipe your requests through. | 0 | 7,264 | 0 | 3 | 2016-11-22T19:25:00.000 | python,selenium,firefox,server,phantomjs | Some websites block selenium webdriver, how does this work? | 1 | 2 | 2 | 51,425,585 | 0 |
0 | 0 | I have a server program in python which sends and receives data packets to and from android client. I am using TCP with sockets in android client to communicate with python program.
Everything is working fine, but when the server is shut down by an NSF power failure it gets disconnected from the client.
My question is how to check for the availability of the server at all times, using Android services or a BroadcastReceiver, or inside my MainActivity when the socket is closed or disconnected.
I don't want to restart my client application again after server power failure. | false | 40,782,240 | 0 | 0 | 0 | 0 | You can send ping packets to check if the server is alive. | 0 | 698 | 0 | 2 | 2016-11-24T09:18:00.000 | java,android,python,sockets | How to reconnect with socket after server power failure from Android Client | 1 | 1 | 3 | 40,782,291 | 0 |
0 | 0 | Every time I open up Chrome driver in my python script, it says "chromedriver.exe has stopped working" and crashes my script with the error: [Errno 10054] An existing connection was forcibly closed by the remote host.
I read the other forum posts on this error, but I'm very new to this and a lot of it was jargon that I didn't understand. One said something about graceful termination, and one guy said "running the request again" solved his issue, but I have no idea how to do that. Can someone explain to me in more detail how to fix this? | false | 40,782,641 | 0 | 0 | 0 | 0 | Fixed. It was a compatibility error. Just needed to downloaded the latest chrome driver version and it worked. | 0 | 298 | 0 | 0 | 2016-11-24T09:36:00.000 | python-2.7,selenium-webdriver,selenium-chromedriver | [Errno 10054], selenium chromedriver crashing each time | 1 | 1 | 1 | 40,795,935 | 0 |
0 | 0 | I need to import an existing svg picture and add elements like circle and square on it. My file is 'test.svg' so i tryed dwg = svgwrite.Drawing('test.svg') but it create a new svg file without anything.
I use the python lib svgwrite, do you have any idea for me?
Thank you, and sorry for my english... I do my best! | true | 40,793,649 | 1.2 | 0 | 0 | 4 | svgwrite will only create svg files. svgwrite does not read svg files. If you want to read, modify and then write the svg, the svgwrite package is not the package to use but I do not know of an appropriate package for you.
It might be possible to create svg which uses a reference to another image. That is there is one svg which you then put a second svg on top of the first. I have not done this and do not know if it would actually work. | 0 | 2,288 | 0 | 9 | 2016-11-24T19:47:00.000 | python,svg,svgwrite | How to import an existing svg with svgwrite - Python | 1 | 1 | 1 | 41,472,623 | 0 |
1 | 0 | I have a use case where i want to invoke my lambda function whenever a object has been pushed in S3 and then push this notification to slack.
I know this is vague but how can i start doing so ? How can i basically achieve this ? I need to see the structure | false | 40,800,757 | 0 | 1 | 0 | 0 | You can use S3 Event Notifications to trigger the lambda function.
In bucket's properties, create a new event notification for an event type of s3:ObjectCreated:Put and set the destination to a Lambda function.
Then for the lambda function, write a code either in Python or NodeJS (or whatever you like) and parse the received event and send it to Slack webhook URL. | 0 | 1,589 | 0 | 3 | 2016-11-25T08:40:00.000 | python,amazon-web-services,lambda | How to write a AWS lambda function with S3 and Slack integration | 1 | 1 | 2 | 65,574,478 | 0 |
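A minimal Python Lambda handler along the lines described above; the Slack webhook URL is a placeholder, and the event parsing assumes the standard S3 notification structure:

import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"   # placeholder

def lambda_handler(event, context):
    # Standard s3:ObjectCreated:Put event structure
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    payload = {"text": "New object uploaded: s3://{}/{}".format(bucket, key)}
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
    return {"statusCode": 200}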
1 | 0 | Front-end part
I have an AJAX request which tries to GET data from my back-end handler every second.
If there is any data, I get this data, add it to my HTML page (without reloading), and continue pulling data every second waiting for further changes.
Back-end part
I parse web-pages every minute with Celery.
Extract data from them and pass it to an array (that is a trigger for AJAX request that there is new data).
Question
It seems to me that there is another solution for this issue.
I don't want to ask for data from JS to back-end. I want to pass data from back-end to JS, when there are any changes. But without page reload.
How can I do this? | false | 40,806,743 | 0 | 0 | 0 | 0 | Try using Socket.IO: the backend creates an event with data on Socket.IO, and your frontend receives the event and downloads the data.
I resolved a similar problem this way; I call the backend only when a Socket.IO event has been created by the backend.
You must set up a Socket.IO server with Node.js somewhere. | 0 | 1,931 | 0 | 1 | 2016-11-25T14:02:00.000 | javascript,python | Push data from backend (python) to JS | 1 | 1 | 4 | 40,806,824 | 0
0 | 0 | So, I am trying to import win32com.client and when I am running the script in Windows Server 2012 using python 3.5 I get the next error :
import win32api, sys, os
ImportError: DLL load failed: The specified module could not be found.
I've tried the next things:
-Copied the pywintypes35.dll and pythoncom35.dll to Python35\Lib\site-packages\win32 and win32com
-Run the Python35\Scripts\pywin32_postinstall.py
-Copied the file from step 1 into an virtualenv
None of this seems to work. It is a problem with python 3.5 in Windows Server 2012? | false | 40,822,073 | 0 | 0 | 0 | 0 | You likely installed incompatible versions like Python 32bit but win32com 64bit. | 0 | 589 | 0 | 0 | 2016-11-26T19:10:00.000 | python-3.x,windows-server-2012-r2,win32com | python win32com importing issue windows server 2012 r2 | 1 | 1 | 1 | 40,835,895 | 0 |
0 | 0 | This question has been asked a few times, but the remedy appears to complicated enough that I'm still searching for a user specific solution. I recently re-installed anaconda; now, after entering
"pip install splinter"
in the Terminal on my Mac I get the response:
"Requirement already satisfied: splinter in /usr/local/lib/python2.7/site-packages
Requirement already satisfied: selenium>=2.53.6 in /usr/local/lib/python2.7/site-packages (from splinter)"
But, I get the following error in python (Anaconda) after entering import splinter
Traceback (most recent call last):
File "", line 1, in
import splinter
ImportError: No module named splinter"
When I enter which python in the terminal, this is the output: "/usr/local/bin/python"
I am editing the question here to add the solution: ~/anaconda2/bin/pip install splinter | false | 40,829,645 | 0 | 0 | 0 | 0 | I had the same issue, I uninstalled and reinstalled splinter many times but that didn't work. Then I typed source activate (name of my conda environment) and then did pip install splinter. It worked for me. | 1 | 5,785 | 1 | 0 | 2016-11-27T13:44:00.000 | python,pip,splinter | Module not found in python after installing in terminal | 1 | 1 | 3 | 53,305,576 | 0 |
1 | 0 | I have been trying to get this web scraping script working properly, and am not sure what to try next. Hoping someone here knows what I should do.
I am using BS4 and the problem is whenever a URL takes a long time to load it skips over that URL (leaving an output file with fewer inputs in times of high page load times). I have been trying to add on a timer so that it only skips over the url if it doesn't load in x seconds.
Can anyone point me in the right direction?
Thanks! | false | 40,857,813 | 0 | 0 | 0 | 0 | Try using multi threading or multiprocessing to spawn threads, i think it will spawn a thread for every request and it won't skip over the url if it's taking too long. | 0 | 243 | 0 | 0 | 2016-11-29T04:26:00.000 | python,web-scraping,beautifulsoup,bs4 | Python BS4 Scraping Script Timer | 1 | 1 | 1 | 40,860,108 | 0 |
1 | 0 | I've made a simple web crawler with Python. So far all it does is build a set of urls that should be visited and a set of urls that have already been visited. While parsing a page it adds all the links on that page to the should-be-visited set and the page url to the already-visited set, and so on while the length of should_be_visited is > 0. So far it does everything in one thread.
Now I want to add parallelism to this application, so I need to have the same kind of sets of links and a few threads/processes, where each will pop one url from should_be_visited and update already_visited. I'm really lost on threading vs multiprocessing - which should I use, and do I need some Pools or Queues? | true | 40,894,487 | 1.2 | 0 | 0 | 3 | The rule of thumb when deciding whether to use threads in Python or not is to ask whether the task the threads will be doing is CPU intensive or I/O intensive. If the answer is I/O intensive, then you can go with threads.
Because of the GIL, the Python interpreter will run only one thread at a time. If a thread is doing some I/O, it will block waiting for the data to become available (from the network connection or the disk, for example), and in the meanwhile the interpreter will context switch to another thread. On the other hand, if the thread is doing a CPU intensive task, the other threads will have to wait till the interpreter decides to run them.
Web crawling is mostly an I/O oriented task, you need to make an HTTP connection, send a request, wait for response. Yes, after you get the response you need to spend some CPU to parse it, but besides that it is mostly I/O work. So, I believe, threads are a suitable choice in this case.
(And of course, respect the robots.txt, and don't storm the servers with too many requests :-) | 1 | 1,352 | 0 | 0 | 2016-11-30T17:23:00.000 | python,multithreading,web-crawler,python-multithreading | Python threading or multiprocessing for web-crawler? | 1 | 1 | 2 | 40,894,613 | 0 |
0 | 0 | Right, i have a bot that has 2 shards, each on their own server. I need a way to share data between the two, preferably as files, but im unsure how to achieve this.
The bot is completely python3.5 based
The servers are both running Headless Debian Jessie
The two servers aren't connected via LAN, so the data has to be shared over the internet
The data dosent need to be encrypted, as no sensitive data is shared | true | 40,898,087 | 1.2 | 1 | 0 | 0 | Probably the easiest to achive, that is also secure is to use sshfs between the servers. | 0 | 27 | 0 | 0 | 2016-11-30T20:58:00.000 | debian,python-3.5,file-sharing | Share data between two scripts on different servers | 1 | 1 | 1 | 40,911,268 | 0 |
1 | 0 | I have a web system where staff can log in with a username and password, then enter data. Is there a way to add the option for users to seamlessly log in just by swiping the card against an NFC scanner? The idea is to have multiple communal PCs people can walk up to and quickly authenticate.
It's important that the usual text login form works too for people using the site on PCs or phones without the NFC option.
The web client PCs with an NFC scanner could be linux or windows.
(The web system is a bootstrap/jquery site which gets supplied with JSON data from a python web.py backend. I'm able to modify the server and the client PCs.) | true | 40,927,573 | 1.2 | 0 | 0 | 0 | The login flow: the user starts the browser, goes to your website and, instead of manually entering credentials, clicks "log in via NFC." The server stores an identification session for that IP and timestamp (and maybe other info about the client hardware, for safety) in the database and "expects" incoming NFC data.
On the client PC/phone you'll have to install your application/service, which will be able to receive data from the NFC scanner (which usually works as a keyboard) and send it to your server, e.g. via an ASP.NET WebAPI or another REST service...
The server will accept the data from that IP, find the database record for that IP and perform the login (+ a time limit? + checking the client hardware for safety?). Then, on the server side, you have a confirmed logon and the user can proceed (you can redirect him to your secure site).
Note 1: The critical point is to correctly and safely pair the identified client browser with the PC/mobile application which reads the NFC tags.
Note 2: You will need to select an appropriate NFC scanner, which will ideally have standardized drivers built into the Win/Linux OS (otherwise you often have to solve the problem of missing/non-functional NFC drivers). | 0 | 1,332 | 0 | 0 | 2016-12-02T08:17:00.000 | jquery,python,json,linux,nfc | Optional NFC login to web based system | 1 | 1 | 1 | 40,928,045 | 0
0 | 0 | I am trying to get pictures from photo type posts on Facebook. I am using Python. I tried to access post_id/picture, but I keep getting:
facebook.GraphAPIError: (#12) picture edge for this type is deprecated for versions v2.3 and higher
Is there any alternative to the picture edge in v2.8? The documentation still lists picture as an option.
Thanks. | false | 40,939,604 | 0 | 0 | 0 | 0 | I faced a similar problem. Now to get the picture of a post you can call
{post-id}?fields=picture,full_picture
where picture returns the url of a thumbnail of the image, while full_picture returns the url of the real image. | 0 | 298 | 0 | 0 | 2016-12-02T19:28:00.000 | python,facebook | Facebook API get picture from post | 1 | 1 | 1 | 45,242,035 | 0 |
0 | 0 | Imagine you have two python processes, one server and one client, that interact with each other.
Both processes/programs run on the same host and communicate via TCP, eg. by using the AMP protocol of the twisted framework.
Could you think of an efficient and smart way how both python programs can authenticate each other?
What I want to achieve is, that for instance the server only accepts a connection from an authentic client and where not allowed third party processes can connect to the server.
I want to avoid things like public-key cryptography or SSL-connections because of the huge overhead. | false | 40,964,465 | 0 | 0 | 0 | 0 | If you do not want to use SSL - there are a few options:
Client must send some authentication token (you may call it password) to server as a one of the first bunch of data sent through the socket. This is the simplest way. Also this way is cross-platform.
Client must send id of his process (OS-specific). Then server must make some system calls to determine path to executable file of this client process. If it is a valid path - client will be approved. For example valid path should be '/bin/my_client' or "C:\Program Files\MyClient\my_client.exe" and if some another client (let's say with path '/bin/some_another_app' will try to communicate with your server - it will be rejected. But I think it is also overhead. Also implementation is OS-specific. | 0 | 79 | 0 | 0 | 2016-12-04T22:28:00.000 | python,authentication | Mutual authentication of python processes | 1 | 1 | 1 | 40,964,673 | 0 |
0 | 0 | I am now studying and developing a CANopen client with a python stack and i'm struggling to find out how to communicate with a slave Modbus through a gateway.
Since the gateway address is the one present in the Object Dictionary of the CANopen, and the Gateway has addresses of modbus Slaves I/O, how to specify the address of the modbus input ?
As i can see it CANopen uses the node-ID to select the server and an address to select the property to read/write, but in this case i need to go farther than that and point an input.
just to be clear i'm in the "studying" phase i have no CANopen/Modbus gateway in mind.
Regards. | true | 40,974,077 | 1.2 | 0 | 0 | 0 | This will be the gateway's business to fix. There is no general answer, nor is there a standard for how such gateways work. Gateways have some manner of software that allows you to map data between the two field buses. In this case I suppose it would be either a specific CANopen PDO or a specific CAN id that you map to a Modbus address.
In case you are just writing a CANopen client, neither you or the firmware should need to worry about Modbus. Just make a CANopen node that is standard compliant and let the gateway deal with the actual protocol conversion.
You may however have to do the PDO mapping in order to let your client and the gateway know how to speak with each other, but that should preferably be a user-level configuration of the finished product, rather than some hard-coded mapping. | 0 | 339 | 0 | 0 | 2016-12-05T12:15:00.000 | python,modbus,can-bus,canopen | How does CANopen client communicate with Modbus slave through CANopen/Modbus gateway ? | 1 | 1 | 1 | 41,042,839 | 0 |
0 | 0 | We have a back end that exposes 50-60 Rest APIs. These will largely be consumed by standalone applications like a Python script or a Java program.
One issue we have is the APIs are at a very granular level, they do not match the business use case. For example to perform a business use case end user might have to call 4 to 5 APIs.
I want to develop a DSL or some solution that will provide a high-level abstraction enabling end users to implement business use cases with ease. This can either be a standalone abstraction or a "library" for Python or some other higher-level programming language.
For the specific purpose of combining multiple Rest API calls to create a business use case transaction, what are the approaches available.
Thanks | true | 40,978,516 | 1.2 | 0 | 0 | 1 | I think this is a nice idea. To determine what kind of solution you could build you should consider different aspects:
Who would write these API combinations?
What kind of tool support would be appropriate? I mean validation, syntax highlighting, autocompletion, typesystem checks, etc
How much time would make sense to invest on it?
Depending on these answers you could consider different options. The simplest one is to build a DSL using ANTLR. You get a parser, then you build some program to process the AST and generate the code to call the APIs. Your users will just have to edit these programs in a text editor with no special support. The benefit is that the implementation cost is reduced, and your users can write these programs using a simple text editor.
Alternatively you could use a Language Workbench like Xtext or Jetbrains MPS to build some specific editors for your language and provide a better editing experience to your users. | 0 | 190 | 0 | 1 | 2016-12-05T16:11:00.000 | python,web-services,rest,dsl,mps | Meta language for rest client | 1 | 1 | 1 | 40,981,915 | 0 |
0 | 0 | I'm new to AWS Lambda and pretty new to Python.
I wanted to write a python lambda that uses the AWS API.
boto is the most popular python module to do this so I wanted to include it.
Looking at examples online I put import boto3 at the top of my Lambda and it just worked- I was able to use boto in my Lambda.
How does AWS know about boto? It's a community module. Are there a list of supported modules for Lambdas? Does AWS cache its own copy of community modules? | false | 40,981,908 | 0.197375 | 1 | 0 | 3 | AWS Lambda's Python environment comes pre-installed with boto3. Any other libraries you want need to be part of the zip you upload. You can install them locally with pip install whatever -t mysrcfolder. | 0 | 70 | 0 | 1 | 2016-12-05T19:33:00.000 | python,amazon-web-services,aws-lambda | How does AWS know where my imports are? | 1 | 1 | 3 | 40,981,986 | 0 |
0 | 0 | I am wondering if there is any way to wirelessly connect to a computer/server using python's socket library. The dir(socket) brought up a lot of stuff and I wanted help sorting it out. | false | 40,987,412 | 0.099668 | 0 | 0 | 1 | but one question. Is the socket server specific to python, or can
another language host and python connect or vise-versa?
As long as you are using sockets - you can connect to any socket-based server (made with any language). And vice-versa: any socket-based client will be able to connect to your server. Moreover it's cross-platform: socket-based client from any OS can connect to any socket-based server (from any OS). | 0 | 102 | 0 | 1 | 2016-12-06T03:38:00.000 | python,sockets,connect,wireless | How do I wirelessly connect to a computer using python | 1 | 1 | 2 | 40,987,772 | 0 |
0 | 0 | Years ago for a masters project my friend took a bunch of data from an excel sheet and used them in a powerpoint graph. He told me he made the graph in excel then copied it into powerpoint. Now, when I hover over the graph I see the points associated to where my mouse hovers. My friend lost the original excel sheet and is asking me to help pull the data from the powerpoint graph and put it in an excel sheet.
How would I go about doing this? If theres away to get the points into a json file I can do the rest. I just know nothing about powerpoint graphs. | false | 41,044,395 | 0.132549 | 0 | 0 | 2 | Right click the chart, choose Edit Data.
If it's an embedded chart, the chart and its workbook will open in Excel.
From there you can File | Save As and save your new Excel file. | 0 | 150 | 0 | 0 | 2016-12-08T16:35:00.000 | python,json,excel,powerpoint | Take data points from my power point graph and put them into an excel sheet | 1 | 1 | 3 | 41,045,163 | 0 |
0 | 0 | I know how to send email through Outlook/Gmail using the Python SMTP library. However, I was wondering if it was possible to receive replys from those automated emails sent from Python.
For example, if I sent an automated email from Python (Outlook/Gmail) and I wanted the user to be able to reply "ok" or "quit" to the automated email to either continue the script or kick off another job or something, how would I go about doing that in Python?
Thanks | false | 41,107,643 | 0 | 1 | 0 | 0 | SMTP is only for sending. To receive (read) emails, you will need to use other protocols, such as POP3, IMAP4, etc. | 0 | 308 | 0 | 0 | 2016-12-12T18:57:00.000 | python,email,outlook | Python SMTP Send Email and Receive Reply | 1 | 1 | 1 | 41,108,481 | 0 |
0 | 0 | I'm trying to make a usable tests for my package, but using Flask.test_client is so different from the requests API that I found it hard to use.
I have tried to make requests.adapters.HTTPAdapter wrap the response, but it looks like werkzeug doesn't use httplib (or urllib for that matter) to build it own Response object.
Any idea how it can be done? Reference to existing code will be the best (googling werkzeug + requests doesn't give any useful results)
Many thanks!! | false | 41,108,551 | 0.066568 | 0 | 0 | 1 | A PyPI package now exists for this so you can use pip install requests-flask-adapter. | 0 | 1,360 | 0 | 11 | 2016-12-12T19:57:00.000 | python,python-requests,werkzeug | requests-like wrapper for flask's test_client | 1 | 1 | 3 | 59,821,311 | 0 |
0 | 0 | I am currently having trouble using requests. I use the import requests command yet I get the import error that says no module named 'requests'.
To install it I first installed SetupTools, then pip and finally used the pip install requests command. This didn't work so I ended up uninstalling and reinstalling (with pip3 and pip3.5 commands) yet it still doesn't work.
I am using python 3.5 which is installed directly to my c:\ drive.
Thank you in advance. | true | 41,111,623 | 1.2 | 0 | 0 | 0 | I resolved this issue by just reinstalling requests to the c: drive (which didn't fully solve it) and then just moving the requests folder to c:\Lib which now works fine and allows me to import it properly. | 1 | 44 | 0 | 0 | 2016-12-12T23:54:00.000 | python,python-3.x,pip,python-requests,importerror | Resolution for Import Error when trying to import 'Requests' | 1 | 1 | 1 | 41,123,054 | 0 |
1 | 0 | We have two servers (client-facing, and back-end database) between which we would like to transfer PDFs. Here's the data flow:
User requests PDF from website.
Site sends request to client-server.
Client server requests PDF from back-end server (different IP).
Back-end server sends PDF to client server.
Client server sends PDF to website.
1-3 and 5 are all good, but #4 is the issue.
We're currently using Flask requests for our API calls and can transfer text and .csv easily, but binary files such as PDF are not working.
And no, I don't have any code, so take it easy on me. Just looking for a suggestion from someone who may have come across this issue. | false | 41,154,360 | 0.099668 | 0 | 0 | 1 | I wanted to share my solution to this, but give credit to @CoolqB for the answer. The key was including 'rb' to properly read the binary file and including the codecs library. Here are the final code snippets:
Client request:
response = requests.get('https://www.mywebsite.com/_api_call')
Server response:
f = codecs.open(file_name, 'rb').read()
return f
Client handle:
with codecs.open(file_to_write, 'wb') as f:  # 'wb' so the PDF bytes are written unmodified; the with-block closes the file
f.write(response.content)
And all is right with the world. | 0 | 1,341 | 0 | 0 | 2016-12-15T00:10:00.000 | python,api,pdf | Transfer PDF files between servers in python | 1 | 2 | 2 | 41,170,783 | 0 |