Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, 28 to 6.1k chars) | is_accepted (bool, 2 classes) | Q_Id (int64, 337 to 51.9M) | Score (float64, -1 to 1.2) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Users Score (int64, -8 to 412) | Answer (string, 14 to 7k chars) | Python Basics and Environment (int64, 0 to 1) | ViewCount (int64, 13 to 1.34M) | System Administration and DevOps (int64, 0 to 1) | Q_Score (int64, 0 to 1.53k) | CreationDate (string, 23 chars) | Tags (string, 6 to 90 chars) | Title (string, 15 to 149 chars) | Networking and APIs (int64, 1 to 1) | Available Count (int64, 1 to 12) | AnswerCount (int64, 1 to 28) | A_Id (int64, 635 to 72.5M) | GUI and Desktop Applications (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | I have a graph and want to calculate its indegree and outdegree centralization. I tried to do this by using Python's NetworkX, but there I can only find a method to calculate indegree and outdegree centrality for each node. Is there a way to calculate in- and outdegree centralization of a graph in NetworkX? | false | 35,243,795 | 0 | 0 | 0 | 0 | This answer has been taken from a Google Groups discussion on the issue (in the context of using R), which helps clarify the maths when taken along with the above answer:
Freeman's approach measures "the average difference in centrality
between the most central actor and all others".
This 'centralization' is exactly captured in the mathematical formula
sum(max(x)-x)/(length(x)-1)
x refers to any centrality measure! That is, if you want to calculate
the degree centralization of a network, x has simply to capture the
vector of all degree values in the network. To compare various
centralization measures, it is best to use standardized centrality
measures, i.e. the centrality values should always be smaller than 1
(best position in any possible network) and greater than 0 (worst
position)... if you do so, the centralization will also be in the
range of [0,1].
For degree, e.g., the 'best position' is to have an edge to all other
nodes (i.e. incident edges = number of nodes minus 1) and the 'worst
position' is to have no incident edge at all. | 0 | 2,646 | 0 | 4 | 2016-02-06T17:03:00.000 | python,networkx,graph-theory | calculate indegree centralization of graph with python networkx | 1 | 1 | 3 | 45,540,101 | 0 |
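For reference, the quoted formula can be applied directly to NetworkX's normalized in-/out-degree centrality dicts. This is only a minimal sketch of that idea; the random example graph and the helper function name are illustrative, not part of the original answer.

```python
import networkx as nx

def centralization(centrality):
    # Freeman-style centralization of a standardized centrality dict,
    # following the quoted formula: sum(max(x) - x) / (len(x) - 1)
    values = list(centrality.values())
    top = max(values)
    return sum(top - v for v in values) / (len(values) - 1)

G = nx.gnp_random_graph(20, 0.2, directed=True)  # illustrative random digraph

print("in-degree centralization:", centralization(nx.in_degree_centrality(G)))
print("out-degree centralization:", centralization(nx.out_degree_centrality(G)))
```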
0 | 0 | I have written a python selenium script to log me into a website. This website is where I need to book a court at precisely 7am. I cannot run the script using cron scheduler because that only runs the script and by the time selenium has logged in 7am will have passed. I've tried time() and Webdriverwait but these only allow me to delay hitting a web page button. I need to synchronise the click of a button at a precise time from within the python script. | false | 35,254,240 | 0.099668 | 0 | 0 | 1 | It has to be in combination with a cron job.
You can start the cron job 1-2 minutes earlier, open the login page and, in your Python script, sleep until 7am and then just log in. | 0 | 705 | 0 | 1 | 2016-02-07T13:44:00.000 | java,python,selenium | How can I run code in selenium to execute a function at a specific hour of the day within the python script not cron | 1 | 1 | 2 | 35,255,120 | 0
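A rough sketch of that "start early, sleep until 7am, then click" approach. The URL, the element id and the older find_element_by_id API are assumptions for illustration only.

```python
import datetime
import time
from selenium import webdriver

TARGET = datetime.time(7, 0, 0)            # click at 07:00 local time

driver = webdriver.Firefox()
driver.get("https://example.com/booking")  # placeholder URL: log in / navigate early

while True:
    now = datetime.datetime.now()
    remaining = (datetime.datetime.combine(now.date(), TARGET) - now).total_seconds()
    if remaining <= 0:
        break
    time.sleep(min(remaining, 0.5))        # short sleeps so we wake up right at 7am

driver.find_element_by_id("book-court").click()  # placeholder element id
```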
0 | 0 | I am getting this error while web scraping
http protocol error', 0, 'got a bad status line
What does it mean? And
How can I avoid this? | false | 35,264,655 | 0 | 0 | 0 | 0 | The first line of an HTTP (>= 1.0) response is the status line.
This error says that the status line is broken. That can happen if the server response starts with a non-RFC status line (which I don't think is likely), or if the response is HTTP/0.9 (no status line and no headers). A server may answer with HTTP/0.9 if your request (for example its header lines) was broken.
Check the response using Wireshark (or a similar tool) and give us a bit more info about your error. | 0 | 181 | 0 | 0 | 2016-02-08T07:49:00.000 | python,web-scraping,beautifulsoup | How can I avoid Http protocol error 0 while WebScraping | 1 | 1 | 1 | 35,265,134 | 0
0 | 0 | I am trying to insert millions of rows into Redis.
I have gone through the Redis mass insertion tutorials and tried
cat data.txt | python redis_proto.py | redis-cli -p 6321 -a "myPassword" --pipe
Here redis_proto.py is the Python script which reads data.txt and converts it to the Redis protocol.
I got an error like the one below:
All data transferred. Waiting for the last reply...
NOAUTH Authentication required.
NOAUTH Authentication required.
Any help or suggestions would be appreciated. | true | 35,267,280 | 1.2 | 1 | 0 | 1 | I guess there is a "$" in your password. If there is, remove it (or single-quote the password so the shell does not expand it) and it will work. | 0 | 3,200 | 0 | 0 | 2016-02-08T10:23:00.000 | python,redis,redis-py | No Auth "Authentication required" Error in redis Mass Insertion? | 1 | 1 | 2 | 35,913,988 | 0
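For context, a minimal sketch of what a redis_proto.py generator can look like: it frames each command in the Redis protocol (RESP) so the output can be piped into redis-cli --pipe. The comma-separated data.txt layout and the SET command are assumptions.

```python
import sys

def gen_redis_proto(*args):
    # RESP framing: "*<arg count>", then "$<byte length>" + value for each argument
    out = b"*%d\r\n" % len(args)
    for arg in args:
        data = str(arg).encode("utf-8")
        out += b"$%d\r\n" % len(data) + data + b"\r\n"
    return out

for line in sys.stdin:
    key, value = line.rstrip("\n").split(",", 1)  # assumes "key,value" lines in data.txt
    sys.stdout.buffer.write(gen_redis_proto("SET", key, value))
```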
0 | 0 | I would like to create a GUI that pops up asking where to download a file, using Python. I would like it to be similar to the interface that Google Chrome uses when downloading a file, as that looks pretty standard. Is there a default module or add-on that I can use to create this GUI, or will I have to create it myself? Any help would be appreciated. | false | 35,282,363 | 0 | 0 | 0 | 0 | There are a number of GUI toolkits you could use, including:
Kivy (modern touch-enabled)
Tkinter (bundled with Python)
These have file chooser widgets, which you could use to provide standard-looking interfaces to your file system.
How do you want to run this program? | 0 | 60 | 0 | 0 | 2016-02-09T01:18:00.000 | python,user-interface,python-3.x,download,tkinter | python 3 create a GUI to set directory in a program | 1 | 1 | 2 | 35,282,454 | 0 |
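A minimal sketch of the Tkinter option mentioned above, using its standard save dialog; the default file name is illustrative.

```python
import tkinter as tk
from tkinter import filedialog

root = tk.Tk()
root.withdraw()  # hide the empty root window; we only want the dialog

# Ask the user where to save the download, much like a browser's "Save As" dialog
save_path = filedialog.asksaveasfilename(
    title="Save download as...",
    initialfile="download.zip",   # illustrative default name
    defaultextension=".zip",
)
if save_path:
    print("Would download to:", save_path)
```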
1 | 0 | I am writing a scrapy spider that takes as input many urls and classifies them into categories (returned as items). These URLs are fed to the spider via my crawler's start_requests() method.
Some URLs can be classified without downloading them, so I would like to yield directly an Item for them in start_requests(), which is forbidden by scrapy. How can I circumvent this?
I have thought about catching these requests in a custom middleware that would turn them into spurious Response objects, which I could then convert into Item objects in the request callback, but any cleaner solution would be welcome. | true | 35,300,052 | 1.2 | 0 | 0 | 1 | I think using a spider middleware and overriding start_requests() would be a good start.
In your middleware, you should loop over all URLs in start_urls, and you could use conditional statements to deal with different types of URLs.
For your special URLs which do not require a request, you can
directly call your pipeline's process_item() (do not forget to import your pipeline and create a scrapy.Item from your URL for this)
as you mentioned, pass the URL as meta in a Request, and have a separate parse function which would only return the URL
For all remaining URLs, you can launch a "normal" Request as you probably already have defined. | 0 | 1,541 | 0 | 5 | 2016-02-09T18:57:00.000 | python,scrapy | Returning Items in scrapy's start_requests() | 1 | 1 | 2 | 35,317,246 | 0
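A rough sketch of the second option above (carrying the precomputed result in the request meta and yielding the item from a small callback). The URL list and the classify() helper are hypothetical placeholders.

```python
import scrapy

class ClassifyingSpider(scrapy.Spider):
    name = "classify"

    # (url, precomputed_category_or_None) -- illustrative input
    seeds = [
        ("http://example.com/a", "known-category"),
        ("http://example.com/b", None),
    ]

    def start_requests(self):
        for url, category in self.seeds:
            if category is not None:
                # Already classified: carry the result in meta, emit it from a tiny callback
                yield scrapy.Request(url, callback=self.emit_precomputed,
                                     meta={"category": category}, dont_filter=True)
            else:
                yield scrapy.Request(url, callback=self.parse)

    def emit_precomputed(self, response):
        yield {"url": response.url, "category": response.meta["category"]}

    def parse(self, response):
        # normal classification based on the downloaded page
        yield {"url": response.url, "category": classify(response)}  # classify() is hypothetical
```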
0 | 0 | I've been trying to figure this out all day and I've reached many dead ends, so I thought I'd reach out to the fine people here @ stackoverflow.
Here's what I'm working against. I've had Python 3.5.1 installed onto a Linux (Linux [xxx] 2.6.9-42.0.2.ELsmp #1 SMP Thu Aug 17 17:57:31 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux) server I don't, or didn't at the time, have root access to. For whatever reason PIP was not included in the installation of Python (even though every posting I've found about installing PIP for Python >3.4 insists it's installed by default).
I've tried installing PIP by using get-pip.py, but attempts to run it give a long run of errors (I can provide the errors, if it makes a difference).
I've tried installing PIP by using ensurepip, but I'm blocked by the following error:
python -m ensurepip
Ignoring ensurepip failure: pip 7.1.2 requires SSL/TLS
even though I have OpenSSL installed,
openssl version
OpenSSL 0.9.7a Feb 19 2003
Unfortunately, I am stuck here. I don't know why PIP wasn't included in the Python 3.5.1 build, but I need to correct this. Any advice would be appreciated.
Dan | false | 35,301,369 | -0.197375 | 0 | 0 | -1 | I get the exact same problem and am searching for solutions here.
You can try yum install pip | 0 | 790 | 0 | 1 | 2016-02-09T20:17:00.000 | pip,python-3.5 | python 3.5.1 install pip | 1 | 1 | 1 | 43,424,008 | 0 |
0 | 0 | I am writing a script in Python that establishes more than 100 parallel SSH connections, starts a script on each machine (the output is 10-50 lines), then waits for the results of the bash script and processes it. However it is run on a web server and I don't know whether it would be better to first store the output in a file on the remote machine (that way, I suppose, I can start more remote scripts at once) and later establish another SSH connection (1 command / 1 connection) and read from those files? Now I am just reading the output but the CPU usage is really high, and I suppose the problem is that a lot of data comes to the server at once. | false | 35,307,829 | 0 | 1 | 0 | 0 | For creating lots of parallel SSH connections there is already a tool called pssh. You should use that instead.
But if we're really talking about 100 machines or more, you should really use a dedicated cluster management and automation tool such as Salt, Ansible, Puppet or Chef. | 0 | 51 | 1 | 0 | 2016-02-10T05:52:00.000 | python,linux,bash,ssh,parallel-processing | Script on a web server that establishes a lot of parallel SSH connections, which approach is better? | 1 | 1 | 1 | 35,316,821 | 0 |
1 | 0 | I am trying to scrape a website using the Scrapy framework in Python, but I am getting captchas. The server implements bot detection using Distil Networks bot detection. Is there any way I can work around it? | false | 35,314,206 | 0 | 0 | 0 | 0 | I personally drown it in proxies: 1 proxy for 4 requests before it gets blocked, then I change proxy. I've got several tens of thousands of free proxies, so it's not a big problem. But it's not very fast, so I set concurrency to 1k or about that. | 0 | 4,932 | 0 | 2 | 2016-02-10T11:40:00.000 | python,security,web-scraping | Working of the distil networks bot detection | 1 | 2 | 2 | 61,044,312 | 0
1 | 0 | I am trying to scrape a website using the Scrapy framework in Python, but I am getting captchas. The server implements bot detection using Distil Networks bot detection. Is there any way I can work around it? | false | 35,314,206 | -1 | 0 | 0 | -8 | You can get over it by using tools like Selenium. It is a web testing framework that automatically loads the web browser to mimic a normal user. Once a page loads, you can scrape the content with tools such as Scrapy or BS4. Continue loading the next page, then scrape. It's slower than normal scrapers, but it does the job and gets through most detectors like Incapsula.
Hope that helps. | 0 | 4,932 | 0 | 2 | 2016-02-10T11:40:00.000 | python,security,web-scraping | Working of the distil networks bot detection | 1 | 2 | 2 | 35,330,104 | 0 |
0 | 0 | I am looking for some real help. I want to do web scraping using Python; I need it because I want to import some data into a database. How can we do that in Python, and what libraries do we need? | false | 35,328,278 | 0 | 0 | 0 | 0 | You can use
1) Beautiful Soup
2) Python Requests
3) Scrapy
4) Mechanize
... and many more. These are the most popular tools, and easy to learn for the beginner.
From there, you can branch out to more complex stuff such as user-agent spoofing, HTML load balancing, regex, XPath and CSS selectors. You will need these to scrape more difficult sites that have protection or login fields.
Hope that helps.
Cheers | 0 | 71 | 0 | 0 | 2016-02-10T23:50:00.000 | python-2.7 | Python Web scraping- Required Libraries and how to do it | 1 | 2 | 3 | 35,330,126 | 0 |
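As a starting point for the first two tools on the list, here is a minimal Requests + Beautiful Soup sketch; the URL and CSS selectors are illustrative only.

```python
import requests
from bs4 import BeautifulSoup

resp = requests.get("http://example.com/products")   # illustrative URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for item in soup.select("div.product"):              # illustrative selectors
    name = item.select_one("h2").get_text(strip=True)
    price = item.select_one("span.price").get_text(strip=True)
    rows.append((name, price))

print(rows)  # from here the rows could be inserted into a database
```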
0 | 0 | I am looking for some real help. I want to do web scraping using Python; I need it because I want to import some data into a database. How can we do that in Python, and what libraries do we need? | false | 35,328,278 | 0 | 0 | 0 | 0 | As others have suggested, I too would use Beautiful Soup and Python Requests, but if you run into problems with websites which load some data with JavaScript after the page has loaded, and you only get the incomplete HTML with Requests, try using Selenium and PhantomJS for the scraping. | 0 | 71 | 0 | 0 | 2016-02-10T23:50:00.000 | python-2.7 | Python Web scraping- Required Libraries and how to do it | 1 | 2 | 3 | 44,280,408 | 0
0 | 0 | I apologize if this question has been answered in a different thread, I have been looking everywhere for the last week but couldn't find anything that is specific to my case.
I created a .py program that is working as expected, however the moment that I try to convert it into an exe, it starts to generate the following error:
File "site-package\six.py", line82, in _import_module
ImportError: No module named urllib2
I understand that the six module was made to facilitate running the code whether using python 2 or 3 and I also understand that urllib2 has been split into request and error.
I went through the six.py file to check references of urllib2 but I am not sure what kind of modification I need to make, I am kind of new to Python.
I tried this with Python 2.7.10 and Python 3.4 and I really don't understand what I am missing. I also tried PyInstaller and py2exe and got the same error message.
I didn't include the code I wrote because the error is coming from the six.py file itself.
Any help that you can provide me to fix this is greatly appreciated.
Thank you!
Intidhar | false | 35,339,916 | 0 | 0 | 0 | 0 | Sometimes Python can't see modules in the site-packages (or dist-packages on *nix) folder, especially when you use an exe generator (like py2exe) or run the Python interpreter from a zip package.
Your module six is in the site-packages folder, but urllib2 is not. I solved my (similar) problem by copying all modules from site-packages to the top-level (site-packages/..) folder.
P.S. I know it is a bad way, but you can do it with your copy of the interpreter. | 1 | 287 | 0 | 0 | 2016-02-11T12:54:00.000 | python,urllib2 | no module named urllib2 when converting from .py to .exe | 1 | 1 | 1 | 35,340,559 | 0
1 | 0 | I'm writing a Chrome extension that injects a content script into every page the user goes to. What I want to do is to get the output of a Python function for some use in the content script (I can't write it in JavaScript, since it requires raw sockets to connect to my remote SSL server).
I've read that one might use CGI and Ajax or the like to get output from the Python code into the JavaScript code, but I ran into 3 problems:
I cannot allow hosting the Python code on a local server, since it is security-sensitive data that only the local computer should be able to know.
Chrome demands that HTTP and HTTPS cannot mix: if the user goes to an HTTPS website, I can't host the Python code on an HTTP server.
I don't think Chrome even supports CGI in extensions: when I try to access a local file, all it does is print out the text (the Python code itself) instead of what I defined to be its output (I tried to do so using Flask). As I said in 1, I shouldn't even try this anyway, but this is just a side note.
So my question is, how do I get the output of my Python functions inside a content script built with JavaScript? | false | 35,346,456 | 0 | 0 | 0 | 0 | The only way to get the output of a Python script inside a content script built with JavaScript is to call the file with XMLHttpRequest. As you noted, you will have to use an HTTPS connection if the page is served over HTTPS. A workaround for this is to make a call to your background script, which can then fetch the data in whichever protocol it likes, and return it to your content script. | 0 | 7,214 | 0 | 6 | 2016-02-11T17:46:00.000 | javascript,python,google-chrome-extension | Combining Python and Javascript in a chrome plugin | 1 | 1 | 2 | 35,349,167 | 0
0 | 0 | When I send a message to my Telegram Bot, it responds with no problems.
I want to limit access such that I, and only I, can send messages to it.
How can I do that? | false | 35,368,557 | 1 | 1 | 0 | 11 | Filter messages by field update.message.from.id | 0 | 27,010 | 0 | 23 | 2016-02-12T17:18:00.000 | telegram-bot,python-telegram-bot | How To Limit Access To A Telegram Bot | 1 | 1 | 6 | 35,375,185 | 0 |
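A minimal sketch of that check using the older (pre-v12) python-telegram-bot API; the user id and token are placeholders.

```python
from telegram.ext import Updater, MessageHandler, Filters

MY_ID = 123456789  # your own numeric Telegram user id (placeholder)

def handle(bot, update):
    if update.message.from_user.id != MY_ID:
        return                       # ignore everyone who is not the owner
    update.message.reply_text("Hello, it's really you!")

updater = Updater("BOT_TOKEN")       # placeholder token
updater.dispatcher.add_handler(MessageHandler(Filters.text, handle))
updater.start_polling()
updater.idle()
```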
0 | 0 | In a legacy system, we have created an init module which loads information that is used by various modules (via import statements). It's a big module which consumes more memory and takes longer to process, and some of the information is not needed or has not been used so far. There are two proposed solutions.
Can we determine in Python who is using this module? For example:
LoadData.py (init module)
contains 100 data members
A.py
import LoadData
b = LoadData.name
B.py
import LoadData
b = LoadData.width
In the above example, A.py is using name and B.py is using width, and the rest of the information is not required (98 data members are not required).
Is there any way which helps us determine usage of the LoadData module along with usage of its data members?
Put simply, we need to traverse A.py and B.py manually to identify usage of the objects.
I am trying to implement the first solution, as I have more than 1000 modules and it will be painful to determine this by traversing each module. I am open to any tool which can integrate into Python. | false | 35,428,278 | 0.197375 | 0 | 0 | 1 | Your question is quite broad, so I can't give you an exact answer. However, what I would generally do here is run a linter like flake8 over the whole codebase to show you where you have unused imports and whether you have references in your files to things that you haven't imported. It won't tell you if a whole file is never imported by anything, but if you remove all unused imports, you can then search your codebase for imports of a particular module, and if none are found, you can (relatively) safely delete that module.
You can integrate tools like flake8 with most good text editors, so that they highlight mistakes in real time.
As you're trying to work with legacy code, you'll more than likely have many errors when you run the tool, as it looks out for style issues as well as the kinds of import/usage issues that you mention. I would recommend fixing these as a matter of principle (as they are non-functional in nature), and then making sure that you run flake8 as part of your continuous integration to avoid regressions. You can, however, disable particular warnings with command-line arguments, which might help you stage things.
Another thing you can start to do, though it will take a little longer to yield results, is write and run unit tests with code coverage switched on, so you can see areas of your codebase that are never executed. With a large and legacy project, however, this might be tough going! It will, however, help you gain better insight into the attribute usage you mention in point 1. Because Python is very dynamic, static analysis can only go so far in giving you information about attribute usage.
Also, make sure you are using a version control tool (such as git) so that you can track any changes and revert them if you go wrong. | 1 | 41 | 0 | 0 | 2016-02-16T09:14:00.000 | python,performance,python-2.7,python-3.x,design-patterns | Determine usage/creation of object and data member into another module | 1 | 1 | 1 | 35,433,954 | 0 |
0 | 0 | I like to use google when I'm searching for documentation on things related to python. Many times what I am looking for turns out to be in the official python documentation on docs.python.org. Unfortunately, at time of writing, the docs for the python 2.x branch tend to rank much higher on google than the 3.x branch, and I often end up having to switch to the 3.x branch after loading the page for the 2.x documentation. The designers of docs.python.org have made it easy to switch between python versions, which is great; but I just find it annoying to have to switch python versions and wait an extra page load every time I follow a link from google.
Has anyone done anything to combat this? I'd love to hear your solutions.
Here's what I've tried so far:
clicking on the python 3.x link farther down - this works sometimes, but often the discrepancy in ranking between 2.x and 3.x results is quite big, and the 3.x things are hard to find.
copying the url from the search result and manually replacing the 2 with 3 - this works but is also inconvenient. | false | 35,488,268 | 0.379949 | 0 | 0 | 4 | Often when searching for Python stuff, I add the search term "python" anyway because many names refer to entirely different things in the world as well. Using "python3" here appears to solve your problem. I also feel it is a lot less obtrusive than the hacks you describe. | 0 | 710 | 0 | 17 | 2016-02-18T17:22:00.000 | python-3.x,google-search | How to make google search results default to python3 docs | 1 | 1 | 2 | 35,488,561 | 0
0 | 0 | What is the GraphLab equivalent to the following NetworkX code?
for nodeset in nx.connected_components(G):
In GraphLab, I would like to obtain a set of Vertex IDs for each connected component. | false | 35,489,583 | 0 | 0 | 0 | 0 | Here is the first cut at porting from NetworkX to GraphLab. However, iterating appears to be very slow.
temp1 = cc['component_id']            # SFrame of (__id, component_id) pairs
temp1.remove_column('__id')
id_set = set(temp1['component_id'])   # unique component ids
# cc_out is presumably the same (__id, component_id) SFrame as cc['component_id']
for item in id_set:
    nodeset = cc_out[cc_out['component_id'] == item]['__id'] | 0 | 156 | 0 | 0 | 2016-02-18T18:29:00.000 | python,networkx,graphlab | NetworkX to GraphLab Connected Component Conversion | 1 | 1 | 2 | 35,491,984 | 0
1 | 0 | I have a django application that I want to host a form on to use as the template for an ExternalHit on Amazon's Mechanical Turk. I've been trying to figure out ways that I can make it so only mturk is authorized to view this document.
One idea I've been considering is looking at the request headers and confirming that the request came from Amazon. However, I couldn't find any documentation regarding any of these topics and I am worried that if the source of the request ever changes the page will become inaccessible to mturk.
Anyone have any suggestions or solutions that they have implemented?
Fyi, I'm using python/django/boto. | true | 35,495,874 | 1.2 | 0 | 0 | 1 | Every request from AWS will include additional URL parameters: workerId, assignmentId, hitId. That's probably the easiest way to identify a request coming from MTurk. There may be headers, as well, but they're not documented anywhere. | 0 | 59 | 0 | 0 | 2016-02-19T01:44:00.000 | python,django,authorization,mechanicalturk | What options are there for verifying that mturk is requesting my ExternalQuestion and not a 3rd party? | 1 | 1 | 1 | 35,503,013 | 0 |
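A minimal Django sketch of that check. The template name is illustrative, and requiring exactly these parameters is an assumption; this is a lightweight filter, not real authentication.

```python
from django.http import HttpResponseForbidden
from django.shortcuts import render

def external_hit(request):
    # MTurk appends assignmentId and hitId to ExternalQuestion requests
    # (workerId shows up once a worker has accepted the HIT).
    if "assignmentId" not in request.GET or "hitId" not in request.GET:
        return HttpResponseForbidden("Not an MTurk request")
    return render(request, "hit_form.html", {
        "assignment_id": request.GET["assignmentId"],
        "hit_id": request.GET["hitId"],
    })
```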
1 | 0 | note: PhantomJS runs in PyCharm environment, but not IDLE
I have successfully used PhantomJS in Python in the past, but I do not know what to do to revert to that set up.
I am receiving this error in Python (2.7.11): selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
I have tried to symlink phantomjs into the path (/usr/local/bin, which is also in the PATH), and have even manually gone to /usr/local/bin to place phantomjs in the bin folder. However, there is still a path error in Python.
What am I missing? | false | 35,565,733 | 0.049958 | 0 | 0 | 1 | If you were able to execute in Terminal, just restart PyCharm and it will synchronize the environment variables from the system. (You can check in "RUN" => "Edit Configurations") | 1 | 8,984 | 0 | 4 | 2016-02-22T23:07:00.000 | python,selenium,phantomjs | PhantomJS was placed in path and can execute in terminal, but PATH error in Python | 1 | 1 | 4 | 46,802,328 | 0 |
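If the PATH is still not picked up, Selenium can also be pointed at the binary directly; the path below is illustrative (use `which phantomjs` to find yours).

```python
from selenium import webdriver

# Explicit path bypasses PATH lookup entirely
driver = webdriver.PhantomJS(executable_path="/usr/local/bin/phantomjs")
driver.get("http://example.com")
print(driver.title)
driver.quit()
```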
0 | 0 | I've created a docker image based on Ubuntu 14.04 which runs a python websocket client to read from a 3rd party service that sends variable length JSON encoded strings down. I find that the service works well until the encoded string is longer than 8192 bytes and then the JSON is malformed, as everything past 8192 bytes has been cut off.
If I use the exact same code on my mac, I see the data come back exactly as expected.
I am 100% confident that this is an issue with my linux configuration but I am not sure how to debug this or move forward. Is this perhaps a buffer issue or something even more insidious? Can you recommend any debugging steps? | true | 35,567,020 | 1.2 | 0 | 0 | 0 | So it turns out the problem came from the provided websocket module from google cloud sdk. It has a bug where after 8192 bytes it will not continue to read from the socket. This can be fixed by supplying the websocket library maintained by Hiroki Ohtani earlier on your PYTHONPATH than the google cloud sdk. | 0 | 91 | 1 | 1 | 2016-02-23T01:14:00.000 | python,linux,sockets | Websocket client on linux cuts off response after 8192 bytes | 1 | 1 | 1 | 37,220,484 | 0 |
0 | 0 | I'm using Selenium with Python API and Firefox to do some automatic stuff, and here's my problem:
Click a link on original page, let's say on page a.com
I'm redirected to b.com/some/path?arg=value
And immediately I'm redirected again to the final address c.com
So is there a way to get the intermediate redirect URL b.com/some/path?arg=value with the Selenium Python API? I tried driver.current_url, but when the browser is on b.com it seems the browser is still loading, and the result is returned only once the final address c.com is loaded.
Another question: is there a way to add event handlers to Selenium, for example for URL changes? PhantomJS has this capability but I'm not sure about Selenium. | true | 35,592,602 | 1.2 | 0 | 0 | 2 | Answering my own question.
If the redirect chain is very long, consider trying the methods @alecxe and @Krishnan provided. But in this specific case, I've found a much easier workaround:
When the page finally lands on c.com, use
driver.execute_script('return window.document.referrer') to get the
intermediate URL | 0 | 14,337 | 0 | 5 | 2016-02-24T03:19:00.000 | python,selenium,redirect,selenium-webdriver,event-handling | How can I get a intermediate URL from a redirect chain from Selenium using Python? | 1 | 1 | 4 | 35,671,022 | 0 |
0 | 0 | I am new to scraping.
I have a Scrapy spider that uses Selenium for item interaction.
I tried to run it on a DigitalOcean droplet, but it fails to run the PhantomJS driver all the time, as if it's blocked, raising the exception:
BadStatusLine: ''
and any other webdrivers are unstable because of the display issue and xvfb,
irregularly raising
Message: The browser appears to have exited before we could connect.
Is there any idea what I should do, or where I can deploy it? | true | 35,600,560 | 1.2 | 0 | 0 | 0 | As for my problem with PhantomJS, it was solved by reinstalling it and increasing the droplet memory. And for the other message with the other browsers, I used "xvfb-run -a"; it was a temporary solution but it worked. | 0 | 146 | 0 | 0 | 2016-02-24T11:13:00.000 | python,selenium,deployment,scrapy | deployment of scrapy selenium project | 1 | 1 | 1 | 35,968,793 | 0
0 | 0 | Our infrastructure is using Python for everything in the backend and Javascript for our "front-end" (it's a library we serve to other sites). The communication between the different components of the infrastructure is done via JSON messages.
In Python, json.load() and json.dump() are a safe way of dealing with a JSON string. In JavaScript, JSON.parse() would be used instead. But these functions only guarantee that the string has proper JSON format, am I right?
If I'm concerned about injection attacks, I would need to sanitize every field of the JSON by other means. Am I right in this assumption? Or just by using the previously mentioned functions we would be safe? | true | 35,630,725 | 1.2 | 0 | 0 | 0 | There is no such thing as sanitized and unsanitized data, without context.
Data is only considered unsafe if user controlled data is used in a context where it has special meaning.
e.g. ' in SQL, and <script> in HTML.
Contrary to <script> in SQL, which is completely safe.
The upshot is to encode/sanitize when the data is used, not when it is received from JSON. | 1 | 7,532 | 0 | 4 | 2016-02-25T14:54:00.000 | javascript,python,json,security | JSON parsing and security | 1 | 2 | 2 | 35,648,324 | 0 |
0 | 0 | Our infrastructure is using Python for everything in the backend and Javascript for our "front-end" (it's a library we serve to other sites). The communication between the different components of the infrastructure is done via JSON messages.
In Python, json.load() and json.dump() are a safe way of dealing with a JSON string. In JavaScript, JSON.parse() would be used instead. But these functions only guarantee that the string has proper JSON format, am I right?
If I'm concerned about injection attacks, I would need to sanitize every field of the JSON by other means. Am I right in this assumption? Or just by using the previously mentioned functions we would be safe? | false | 35,630,725 | 0.462117 | 0 | 0 | 5 | JSON.parse will throw an exception if the input string is not in valid JSON format.
It is safe to use, I can't think of any way to attack your code with just JSON.parse. It does not work like eval.
Of course you can check the resulting json object to make sure it has the structure you're expecting. | 1 | 7,532 | 0 | 4 | 2016-02-25T14:54:00.000 | javascript,python,json,security | JSON parsing and security | 1 | 2 | 2 | 35,630,951 | 0 |
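A small Python-side sketch of the "parse it, then check the structure yourself" idea from the answers; the expected "name" field is illustrative.

```python
import json

def parse_payload(raw):
    # json.loads() only guarantees valid JSON; the shape check is ours.
    try:
        data = json.loads(raw)
    except ValueError:  # invalid JSON
        raise ValueError("payload is not valid JSON")
    if not isinstance(data, dict) or not isinstance(data.get("name"), str):
        raise ValueError("payload does not match the expected schema")
    return data

# Parsing is safe; escaping still matters later, when the value is
# rendered into HTML or interpolated into SQL.
payload = parse_payload('{"name": "<script>alert(1)</script>"}')
print(payload["name"])
```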
0 | 0 | I want to create a website that you can register on, and enter your Power/Gas company login info.
I will then have a Python scraper that gets your power/gas usage and creates bar charts and other information about it. The Python script will run monthly to update with the latest info.
Is this a terrible idea? I believe the passwords for the power company login will have to be saved in plaintext; there wouldn't be any other way to use them again later if they were encrypted.
I'm not sure if anyone will trust giving their logins to my website either.
Is this a bad thing to do, and should it just not be done, or is there a better way to do this? | true | 35,654,314 | 1.2 | 0 | 0 | 1 | Whatever the context, having the password stored in plain-text is not good at all, even though I like your idea.
I don't think there could be another way to achieve this, except by having an agreement with the power company. | 0 | 44 | 0 | 0 | 2016-02-26T14:30:00.000 | python | Python HTTPS Login to account to scrape data, is this bad practice? | 1 | 1 | 1 | 35,654,506 | 0 |
0 | 0 | I'm using amazon s3-boto3 with python to put some files at amazon s3 bucket
before running the code i want to make a validation for the
access token , secret , region and bucket
to ensure all the data are correct without waiting for the exception when executing my code.
I checked the documentation and searched Google with no luck.
Any help? | false | 35,681,330 | 0 | 0 | 0 | 0 | If you don't want the credentials to become a showstopper before a massive batch run, you can try executing a small put_object() to test whether access is denied.
Because many different roles can be specified inside IAM policies and ACLs, this is the only feasible way to validate the access. | 0 | 161 | 0 | 1 | 2016-02-28T10:26:00.000 | python,validation,amazon-web-services,amazon-s3,boto3 | Amazon S3 validating access token , secret , region and bucket | 1 | 1 | 1 | 36,065,806 | 0
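A sketch of that up-front check with boto3, writing and then deleting a tiny test key; the key name is illustrative.

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

def validate_s3_access(key, secret, region, bucket):
    s3 = boto3.client(
        "s3",
        aws_access_key_id=key,
        aws_secret_access_key=secret,
        region_name=region,
    )
    try:
        # Fails fast on bad credentials, region, bucket name or permissions
        s3.put_object(Bucket=bucket, Key="_connection_check", Body=b"")
        s3.delete_object(Bucket=bucket, Key="_connection_check")
        return True
    except (ClientError, EndpointConnectionError) as exc:
        print("S3 validation failed:", exc)
        return False
```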
0 | 0 | I'm having a lot of problems with properly processing certain special characters in my application.
Here's what I'm doing at the moment:
User enters a location, data is retrieved via the Google Geolocation API (full name, lat and lon) and is sent via ajax (as a JSON string) to a python script
The python script parses the JSON string, reads the parameters and executes a http request to an API, which runs on nodejs
Data is inserted into MongoDB
The problem is when there's a special character in the location name. The original location name I'm testing on has the character è. When the json is parsed in Python I get a \xc9 and after the process completes I always end up with a completely different (mostly invalid) character in the database than what was originally entered. I've tried all sorts of encoding and decoding (mostly from similar questions on stackoverflow), but I don't properly understand what exactly I should do.
Any tips would be appreciated. Thanks! | false | 35,686,627 | 0.197375 | 0 | 0 | 1 | Have you tried var myEncodedString = encodeURIComponent('your string or whatever'); in your JS code? | 1 | 290 | 0 | 2 | 2016-02-28T18:38:00.000 | javascript,python,mongodb,special-characters | encoding special characters in javascript/python | 1 | 1 | 1 | 35,686,982 | 0
0 | 0 | I am using python WSGI module for servicing IPv4 http requests. Does WGSI support IPv6 requests? Can it listen to IPv6 IP port combination? | false | 35,702,619 | 0.379949 | 0 | 0 | 2 | WSGI is completely unconcerned with IP versions; it is a specification for communication between a webserver and Python code. It is up to your server - Apache, nginx, gunicorn, uwsgi, whatever - to listen to the port. | 0 | 394 | 0 | 1 | 2016-02-29T14:54:00.000 | python,wsgi | Does python WSGI supports IPv6? | 1 | 1 | 1 | 35,702,780 | 0 |
0 | 0 | I'm trying to build a Twitter crawler that would crawl all the tweets of a specified user and would save them in json format. While trying to convert the Status object into json format using _json attribute of Status, I'm getting the following error :
AttributeError : 'Status' object has no attribute '_json'
Can anyone please help me with this? | false | 35,714,894 | 0 | 1 | 0 | 0 | The _json attribute started working once I upgraded my tweepy version to 3.5.0. | 0 | 1,607 | 0 | 0 | 2016-03-01T04:51:00.000 | python,json,tweepy | Tweepy : AttributeError : Status object has no attribute _json | 1 | 1 | 1 | 35,715,292 | 0 |
0 | 0 | At the moment I'm writing a program to fill a database with product information. I've got a list of categories, for which I need the 20 best products from Amazon. Since I only have a keyword list for them and no BrowseNodes, I can't search in SearchIndexes other than "All" with the Amazon API. This leads to a lot of false products.
For example, the category "tennis bat" gives me about 12 real tennis bats and 8 other things for tennis, for example a damper, tennis bags or sometimes things completely unrelated.
I've also found no reliable way of getting the BrowseNode of a category.
I've tried searching for items with the keyword and getting the BrowseNode of the first item, but not only do those products have many different BrowseNodes, sometimes they don't relate to the keyword at all.
I also can't search for them manually, because the list of categories is over 700 by now.
Would love any input on my problem. | true | 35,721,894 | 1.2 | 0 | 0 | 0 | The only answer I've found so far is manually choosing the BrowseNode and SearchIndex. | 0 | 93 | 0 | 0 | 2016-03-01T11:38:00.000 | python,amazon,amazon-product-api | Get best products of an Amazon categorie | 1 | 1 | 1 | 35,895,419 | 0
1 | 0 | I'm researching WebSockets at the moment and just found Autobahn, with Autobahn|Python.
I'm not sure that I understand the function of that toolset (?) correctly.
My intention is to use a WebSocket-Server for communication between a C program and a HTML client.
The idea is to let the C program connect via WebSocket to the Server and send the calculation progress of the C program to every HTML client that is connected to that WebSocket-Server.
Am I able to write a WebSocket Server with Autobahn|Python and then connect with an HTML5-client and a C program client? | false | 35,742,708 | 0.197375 | 1 | 0 | 1 | Autobahn|Python provides both a WebSocket implementation for Python and an implementation for a client for the WAMP protocol on top of that.
You can use the WebSocket part on its own to implement your WebSocket server. | 0 | 59 | 0 | 0 | 2016-03-02T09:08:00.000 | python,c,html,websocket,autobahn | Is the following principle the right for autobahn python? | 1 | 1 | 1 | 35,795,470 | 0 |
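A sketch adapted from Autobahn|Python's usual asyncio echo-server pattern; the port is arbitrary, and a real setup would keep a registry of connected HTML clients so the C program's progress messages can be broadcast to them.

```python
import asyncio
from autobahn.asyncio.websocket import WebSocketServerProtocol, WebSocketServerFactory

class ProgressProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        # e.g. the C client sends progress updates; echo them back for now
        self.sendMessage(payload, isBinary)

factory = WebSocketServerFactory(u"ws://127.0.0.1:9000")
factory.protocol = ProgressProtocol

loop = asyncio.get_event_loop()
server = loop.run_until_complete(loop.create_server(factory, "0.0.0.0", 9000))
loop.run_forever()
```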
0 | 0 | I am trying to build a library of about 3000 IP Addresses. They will each be run through a program I have already written separately. It will scan 60 of them a day, so it needs to keep track of which is scanned and put them at the back of the queue.
I'm not looking for you to write the code, just a little bit of a push in the right direction. | false | 35,801,424 | 0 | 0 | 0 | 0 | I am not sure, but I hope this could help.
Make a priority queue.
The priority will be based on the timestamp at which the IP was accessed
(if it was accessed more recently, it will have lower priority).
{"ip": "time-stamp when it was accessed"}
keep the queue sorted. On top you would have the least recently used ones.
So for 50 days you will have unique IPs, and after that they would repeat.
You could keep a list of timestamps against each IP so you know when it was accessed:
{"ip": ["time-stamp when it was accessed", "time-stamp when it was accessed"], ...}
You can use pickle or MongoDB to dump and get back the data. | 1 | 59 | 0 | 0 | 2016-03-04T16:41:00.000 | python,queue | Building a queue in Python | 1 | 1 | 1 | 35,889,996 | 0
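A minimal sketch of that least-recently-scanned queue with heapq; all_ips and scan() stand in for your own address list and existing scanning program.

```python
import heapq
import time

# Min-heap of (last_scanned_timestamp, ip); never-scanned IPs get 0 so they come first.
queue = [(0, ip) for ip in all_ips]          # all_ips: your ~3000 addresses (assumed to exist)
heapq.heapify(queue)

def scan_batch(n=60):
    batch = [heapq.heappop(queue) for _ in range(min(n, len(queue)))]
    for _, ip in batch:
        scan(ip)                                  # scan(): your existing program (hypothetical)
        heapq.heappush(queue, (time.time(), ip))  # back of the queue with a fresh timestamp
```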
0 | 0 | I run this earlier in the code
watch("httpstream")
Subsequently, any py2neo commands that trigger HTTP traffic will result in verbose logging. How can I stop the effect of watch() logging without creating a new Graph instance? | false | 35,847,399 | 0.379949 | 0 | 0 | 4 | You don't. That is, I've not written a way to do that.
The watch function is intended only as a debugging utility for an interactive console session. You shouldn't need to use it in an application. | 0 | 183 | 0 | 0 | 2016-03-07T15:20:00.000 | python,neo4j,py2neo | How do I stop py2neo watch()? | 1 | 1 | 2 | 35,849,399 | 0 |
0 | 0 | How do I send an audio file as a message in CloudAMQP?
I'm guessing I need its byte stream and to send it as JSON. But I'm not sure if that is possible. Or do I just send a link to the location of the audio file for download? | false | 35,869,226 | 0 | 0 | 0 | 0 | The message body is a buffer; you can put whatever you prefer inside.
JSON, ASN.1, XML or raw audio bytes. | 0 | 404 | 0 | 0 | 2016-03-08T13:52:00.000 | python,rabbitmq,messages,bytestream,cloudamqp | RabbitMQ and Audiofiles | 1 | 1 | 1 | 35,869,733 | 0
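A minimal pika sketch that publishes the raw bytes of an mp3 as the message body; the CloudAMQP URL, queue name and file name are placeholders.

```python
import pika

params = pika.URLParameters("amqps://user:password@host/vhost")  # placeholder URL
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="audio")

with open("clip.mp3", "rb") as f:
    # The body is just bytes; no JSON wrapping is required
    channel.basic_publish(exchange="", routing_key="audio", body=f.read())

connection.close()
```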
1 | 0 | I am trying to show live webcam video stream on webpage and I have a working draft. However, I am not satisfied with the performance and looking for a better way to do the job.
I have a webcam connected to Raspberry PI and a web server which is a simple python-Flask server. Webcam images are captured by using OpenCV and formatted as JPEG. Later, those JPEGs are sent to one of the server's UDP ports. What I did up to this point is something like a homemade MJPEG(motion-jpeg) streaming.
At the server side I have a simple Python script that continuously reads the UDP port and puts the JPEG image into an HTML5 canvas. That is fast enough to create the perception of a live stream.
Problems:
This compresses the video very little. Actually it does not compress the video at all; it only decreases the size of a frame by formatting it as JPEG.
FPS is low and also quality of the stream is not that good.
It is not a major point for now but UDP is not a secure way to stream video.
Server is busy with image picking from UDP. Needs threaded server design.
Alternatives:
I have used FFmpeg before to convert video formats and also to stream pre-recorded video. I guess it is possible to encode (let's say H.264) and stream live webcam video using ffmpeg or avconv. (Encoding)
Is this applicable on Raspberry PI ?
VLC is able to play live videos streamed on network. (Stream)
Is there any Media Player to embed on HTML/Javascript to handle
network stream like the VLC does ?
I have read about HLS (HTTP Live Stream) and MPEG-DASH.
Do these apply to this case? If so, how should I use them?
Is there any other way to show live stream on webpage ?
RTSP is a secure protocol.
What is the best practice for transport layer protocol in video
streaming? | false | 35,880,417 | 0 | 0 | 0 | 0 | You could use FFmpeg to mux the video stream into H.264 in an MP4 container, and then that can be used directly in an HTML5 video element. | 0 | 8,409 | 0 | 6 | 2016-03-08T23:56:00.000 | python,video,ffmpeg,video-streaming,http-live-streaming | Live Video Encoding and Streaming on a Webpage | 1 | 1 | 2 | 35,886,856 | 0
0 | 0 | I'm using py.test for REST API automation using the Python requests library.
How do I get coverage using the pytest-cov tool? I'm running the automation on a build server and the code is executed on an application server. | false | 35,910,573 | 0.291313 | 1 | 0 | 3 | The usual coverage tools are built for the much more common case of the measured code being run inside the same process as the test runner. You not only are running in a different process, but a different machine.
You can use coverage.py directly on the remote machine when you start the process running the code under test. How you would do that depends on how you start that process today. The simple rule of thumb is that wherever you had been saying, "python my_prog.py", you can say, "coverage run my_prog.py". | 0 | 1,073 | 0 | 2 | 2016-03-10T07:51:00.000 | python,pytest,coverage.py | pytest-cov get automation coverage from remote server | 1 | 1 | 2 | 35,939,160 | 0 |
0 | 0 | Another way I could ask this question is:
How do I set pages served by Apache to have higher privileges? This would be similar to me setting an Application Pool in IIS to use different credentials.
I have multiple Perl and Python scripts I am publishing through a web front end. The front end is intended to run any script I have in a database. With most of the scripts I have no issues... but anything that seems to utilize the network returns nothing. No error messages or failures reported. Running from CLI as ROOT works, run from WEB GUI as www-data calling same command fails.
I am lumping Python and Perl together in this question because the issue is the same leading me to believe it isn't a code issue, it is a permissions issue. Also why I am not including code, initially.
These are running on linux using Apache and PHP5. Python 2.7 and Perl5 I believe. Here are examples of apps I have that are failing:
Python - Connecting out to VirusTotal API
Perl - Connecting to Domains and Creating a Graph with GraphViz
Perl - Performing a Wake On LAN function on a local network segment. | false | 35,961,414 | 0.197375 | 1 | 0 | 1 | So after I posted this I looked into Handlers like I use for IIS. That led me down the path of SUEXEC and through everything I tried I couldn't get Apache to load it. Even made sure that I set the bits for SETUID and SETGID.
When I was researching that I ran across .htaccess files and how they can enable CGI scripts. I didn't want to put in .htaccess files so I just made sure the apache.conf was configured to allow CGI. That also did not help.
So finally while I was studying .htaccess they referred to ScriptAlias. I believe this is what solved my issue. I modified the ScriptAlias section in an apache configuration file to point to my directory containing the script. After some fussing with absolute directories and permissions for the script to read/write a file I got everything to work except it isn't going through the proxy set by environment http_proxy. That is a separate issue though so I think I am good to go on this issue. I will attempt the same solution on my perl LAMP. | 0 | 79 | 0 | 0 | 2016-03-12T18:07:00.000 | php,python,apache,perl,privileges | Perl/Python Scripts Fail to Access Internet/Network through Web GUI | 1 | 1 | 1 | 35,962,761 | 0 |
0 | 0 | I need to get the user access matrix for Jenkins users using Python. What is the endpoint for the REST API? | true | 35,987,855 | 1.2 | 0 | 0 | -1 | The user matrix can be retrieved using a {jenkins_url}/computer/(master)/config.xml request. After that you can parse it and create a list of users with permissions. | 0 | 871 | 0 | 1 | 2016-03-14T12:42:00.000 | python,jenkins,jenkins-plugins,jenkins-cli | Jenkins get user access Matrix | 1 | 1 | 2 | 36,003,452 | 0
1 | 0 | How can I launch an EMR using spot block (AWS) using boto ? I am trying to launch it using boto but I cannot find any parameter --block-duration-minutes in boto, I am unable to find how to do this using boto3. | false | 36,029,866 | 0.099668 | 0 | 0 | 1 | According to the boto3 documentation, yes it does support spot blocks.
BlockDurationMinutes (integer) --
The defined duration for Spot instances (also known as Spot blocks) in minutes. When specified, the Spot instance does not terminate before the defined duration expires, and defined duration pricing for Spot instances applies. Valid values are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as a Spot instance receives its instance ID. At the end of the duration, Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates.
Inside the LaunchSpecifications dictionary, you need to assign a value to BlockDurationMinutes. However, the maximum value is 360 (6 hours) for a spot block. | 0 | 454 | 0 | 4 | 2016-03-16T08:02:00.000 | python-2.7,boto,emr,boto3 | How can I launch an EMR using SPOT Block using boto? | 1 | 1 | 2 | 46,980,003 | 0
1 | 0 | I'm working with the django-websocket-redis lib, which allows establishing websockets over uWSGI in a separate Django loop.
From the documentation I understand well how to send data from the server through websockets, but I don't understand how to receive.
Basically I have a client and I want to periodically send status from the client to the server. What do I need to do to handle receiving messages from the client on the server side? What URL should I use on the client? | false | 36,037,286 | 0 | 0 | 0 | 0 | You can achieve that by using periodic Ajax calls from the client to the server. From the documentation:
A client wishing to trigger events on the server side, shall use
XMLHttpRequests (Ajax), as they are much more suitable, rather than
messages sent via Websockets. The main purpose for Websockets is to
communicate asynchronously from the server to the client.
Unfortunately I was unable to find the way to achieve it using just websocket messages. | 0 | 312 | 0 | 0 | 2016-03-16T13:37:00.000 | python,ios,django,sockets,websocket | Receive data on server side with django-websocket-redis? | 1 | 1 | 1 | 36,880,964 | 0 |
0 | 0 | As I'm using AWS, I'm looking at two tutorials just to make sure I do it right. I'm at the step to do eb init, and it asks "do you want SSH for your instance?" One tutorial says I should say yes and the other says I should put no. Isn't SSH the thing that connects my laptop to Amazon's network? Shouldn't I put yes for this? And why does the other tutorial say I should put no? | false | 36,051,581 | 0.379949 | 0 | 0 | 2 | The advice against using ssh comes from a well-meaning bunch that wants us all to create reproducible configurations that don't require administrators to login in order to periodically tweak the config. It also becomes another interface that must be secured.
Ideally, everything you need is independent of ssh because you are providing some internet-accessible service like a webserver/database/etc.
If your process isn't that mature yet, it's acceptable to enable ssh, but you should strive towards not needing it. | 0 | 35 | 1 | 0 | 2016-03-17T04:17:00.000 | python,django,amazon-web-services,ssh | do you want ssh for your instance? | 1 | 1 | 1 | 36,051,700 | 0 |
0 | 0 | I have a machine with multiple NICs that can be connected to one net, to different nets, or in every other possible way. A Python script that uses the requests module to send POST/GET requests to the server is run on that machine.
So the question is: how can I know, in the Python script, which interface the requests will be sent from? | true | 36,054,602 | 1.2 | 0 | 0 | 0 | There are two different answers here. One is for the case where you want to specify the NIC to send the request from, and the other is for what you're asking: finding the correct NIC.
For the second answer, I can only say that it depends. Are these NICs on the same network / subnet? Are they bonded?
If they are on a different network then you can know the host IP Address and use the computer's routing table to see which NIC the packet will go through. If they are bonded on the same network then the request will leave from the bond interface since it is (normally) the one with an IP Address. If they are on the same network and have a different IP on the same subnet then it depends. Can you provide some more information in that case? | 0 | 204 | 0 | 1 | 2016-03-17T07:58:00.000 | python,python-3.x,post,network-programming,python-requests | Python 3: how to get nic from which will be sent packets to certain ip? | 1 | 1 | 1 | 36,056,276 | 0 |
0 | 0 | I'm trying to represent a conversation as a series of decisions for a chatbot I'm building. I'm not sure if a tree is the best data structure for this, but it was the first that came to mind.
For example, the chatbot may ask the user "How are you?", to which the user might respond positively or negatively. If the user responds positively, I want the chatbot to traverse the tree in that direction where the next node would be the set of possible responses to a positive answer (and vice versa).
Is this the right way to represent a conversation like this? If so, what is the best way to implement it? | false | 36,063,280 | 0 | 0 | 0 | 0 | I think it's difficult. Your approach is too simple to meet your requirements.
First, how do you judge whether the answer a user gives is positive or negative?
How about the user answering "Who are you?"
If the answers users give are also simple, it might work. Just search the tree to find the response to the user's reply.
Maybe you can learn some Artificial Intelligence methods to do it. | 0 | 572 | 0 | 0 | 2016-03-17T14:20:00.000 | python,data-structures,tree,chat,chatbot | Tree Data Structure for Conversation | 1 | 1 | 1 | 36,063,802 | 0
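For what it's worth, a nested-dict decision tree is one simple way to implement the structure the question describes; the prompts and the two-way positive/negative split are illustrative.

```python
conversation_tree = {
    "prompt": "How are you?",
    "branches": {
        "positive": {"prompt": "Great to hear! Want to talk about your day?", "branches": {}},
        "negative": {"prompt": "Sorry to hear that. What's wrong?", "branches": {}},
    },
}

def step(node, sentiment):
    # Move to the child matching the classified user response (stay put if unknown)
    return node["branches"].get(sentiment, node)

node = conversation_tree
print(node["prompt"])
node = step(node, "positive")   # sentiment would come from classifying the user's reply
print(node["prompt"])
```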
1 | 0 | I am trying to find out whether I follow someone or not. I realized that although it is written in the official Tweepy documentation, I cannot use API.exists_friendship anymore. Therefore, I tried to use API.show_friendship(source_id/source_screen_name, target_id/target_screen_name) and, as the documentation says, it returns me a friendship object. (<tweepy.models.Friendship object at 0x105cd0890>, <tweepy.models.Friendship object at 0x105cd0ed0>)
When I write screen_names = [user.screen_name for user in connection.show_friendship(target_id=someone_id)] it returns my_username and username for someone_id.
Can anyone tell me how I can use it properly? Or is there any other method that simply returns True/False, because I just want to know whether I follow them or not. | false | 36,118,490 | 0 | 0 | 0 | 0 | I am using the code below; it works fine:
# show_friendship() returns a (source, target) pair of Friendship objects;
# source.following tells you whether the source account already follows the target.
source, target = api.show_friendship(source_screen_name='venky6amesh',
                                     target_screen_name='vag')
if not source.following:
    print("the user is not a friend of ")
    try:
        api.create_friendship(j)   # j: presumably the target's screen name
        print('success')
    except tweepy.TweepError as e:
        print('error')
        print(e.message[0]['code'])
else:
    print("user is a friend of ", j) | 0 | 3,016 | 0 | 0 | 2016-03-20T19:31:00.000 | python,tweepy | Checking friendship in Tweepy | 1 | 1 | 4 | 55,508,308 | 0
1 | 0 | My 10 year old and I are implementing a project which calls for audio to be played by a Chromecast Audio after a physical button is pressed.
She is using python and pychromecast to connect up to a chromecast audio.
The audio files are 50k mp3 files and hosted over wifi on the same raspberry pi running the button tools. They are hosted using nginx.
Delay from firing the play_media function in pychromecast to audio coming out of the chromecast is at times in excess of 3 seconds, and never less than 1.5 seconds. This seems, anecdotally, to be much slower than casting from spotify or pandora. And, it's definitely too slow to make pushing the button 'fun'.
File access times can matter on the pi, but reading the entire file using something like md5sum takes less than .02 seconds, so we are not dealing with filesystem lag.
Average file download times for the mp3 files from the pi is 80-100ms over wifi, so this is not the source of the latency.
Can anyone tell me
What the expected delay is for the chromecast audio to play a short file
If pychromecast is particularly inefficient here, and if so, any suggestions for go, python or lisp-based libraries that could be used.
Any other tips for minimizing latency? We have already downconverted from wav files thinking raw http speed could be an issue.
Thanks in advance! | false | 36,122,859 | 0.197375 | 1 | 0 | 1 | I've been testing notifications with pychromecast. I've got a delay of 7 sec.
Since you can't play a local file, but only a file hosted on a webserver, I guess the chromecast picks up the file externally.
Routing is via google's servers, which is what google does with all its products. | 0 | 657 | 0 | 6 | 2016-03-21T03:51:00.000 | python,audio,raspberry-pi,chromecast | Expected Chromecast Audio Delay? | 1 | 1 | 1 | 41,686,041 | 0 |
0 | 0 | I am currently implementing a socket server using Python's socketServer module. I am struggling to understand how a client 'signals' the server to perform certain tasks.
As you can tell, I am a beginner in this area. I have looked at many tutorials, however, these only tell you how to perform singular tasks in the server e.g. modify a message from the client and send it back.
Ideally what I want to know is there a way for the client to communicate with the server to perform different kinds of tasks.
Is there a standard approach to this issue?
Am I even using the correct type of server?
I was thinking of implementing some form of message passing from the client that tells the server which task it should perform. | true | 36,176,502 | 1.2 | 0 | 0 | 1 | I was thinking of implementing some form of message passing from the client that tells the server which task it should perform.
That's exactly what you need: an application protocol.
A socket (assuming a streaming Internet socket, or TCP) is a stream of bytes, nothing more. To give those bytes any meaning, you need a protocol that determines which byte (or sequence thereof) means what.
The main problem to tackle is that the stream that such a socket provides has no notion of "messages". So when one party sends "HELLO", and "BYE" after that, it all gets concatenated into the stream: "HELLOBYE". Or worse even, your server first receives "HELL", followed by "OBYE".
So you need message framing, or rules how to interpret where messages start and end.
You generally don't want to invent your own application protocol. Usually HTTP or other existing protocols are leveraged to pass messages around. | 0 | 70 | 0 | 1 | 2016-03-23T11:09:00.000 | python,sockets,tcp,server | how does a server execute different tasks based on client input? | 1 | 1 | 1 | 36,176,937 | 0 |
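To make the framing point concrete, here is one simple scheme (a 4-byte length prefix followed by a JSON command); this is only a sketch, not part of the socketserver API itself, and the task names are illustrative.

```python
import json
import socket
import struct

def send_msg(sock, obj):
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))

def _recv_exact(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data

# Client: send_msg(client_sock, {"task": "resize", "args": {"width": 100}})
# Server: call recv_msg() inside the handler's handle() and dispatch on the "task" field.
```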
1 | 0 | I recently came to know about protractor framework which provides end to end testing for angular applications.
I would like to know which test framework suits the following web stack better: Selenium or Protractor.
Angular, Python and MongoDB.
I am going to use mozilla browser only.
Can anyone please provide your valuable suggestions? | false | 36,178,187 | 0.099668 | 1 | 0 | 1 | Protractor is based on Selenium WebDriver. If you have an Angular app for your entire front-end, I would go with Protractor. If you are going to have a mixed front-end environment, you may want to go with Selenium only. | 0 | 548 | 0 | 0 | 2016-03-23T12:26:00.000 | python,angularjs,selenium,testing,protractor | For E2E Testing: which is better selenium or protractor for following web stack (Angular, Python and MongoDB)? | 1 | 1 | 2 | 36,178,367 | 0
0 | 0 | I'm trying to setup a basic socket program using TCP connections in Python, such as creating a socket, binding it to a specific address and port, as well as sending and receiving a HTTP packet.
I'm having trouble receiving the request message from the client.
I'm using the following:
message = serverSocket.receive
But when I go to the socket, port and file that's in my server directory through my browser I get the following error from IDLE:
AttributeError: 'socket' object has no attribute 'receive'
Is this the wrong approach to receive a request message from a client? | false | 36,187,372 | 0.099668 | 0 | 0 | 1 | In order to receive data from a client/server via TCP in python using sockets:
someData = sock.recv(1024) | 0 | 73 | 0 | 0 | 2016-03-23T19:45:00.000 | python | Python Socket Setup | 1 | 1 | 2 | 37,614,849 | 0 |
0 | 0 | I'm storing some information in Base64-encoded Python lists, then decoding them in JavaScript. However, it does not parse my "list" as an array (the syntax looks the same), because it gives me this error:
SyntaxError: JSON.parse: unexpected character at line 1 column 2 of
the JSON data
So it turns out, myString = "['foo']" returns this error, but myString = '["foo"]' works just fine. (In firefox at least)
Why does this happen? It makes zero sense, the quotation marks are not the same, so why does it throw an error?
Python always returns the string wrapped in "" and the actual contents of the list wrapped in '' so there is no way to change that. | true | 36,191,804 | 1.2 | 0 | 0 | 2 | JSON uses " to wrap strings, not ', thus 'foo' is not a valid JSON string. | 1 | 36 | 0 | 0 | 2016-03-24T01:38:00.000 | javascript,python,json | Weird error on trying to parse an stringed array with JSON | 1 | 1 | 1 | 36,191,835 | 0
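One way to avoid the quoting problem on the Python side (a sketch, not from the original answer) is to serialize the list with json.dumps before Base64-encoding it, so JavaScript's JSON.parse always receives double-quoted strings:
import base64
import json

data = ['foo', 'bar']
encoded = base64.b64encode(json.dumps(data).encode('utf-8')).decode('ascii')
# In the browser: JSON.parse(atob(encoded)) -> ["foo", "bar"]
print(encoded)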
1 | 0 | I am trying to intercept HTTP requests sent via an application I have installed on my Windows 7 machine. I'm not sure what platform the application is built on, I just know that Fiddler isn't correctly intercepting anything that this program is sending/receiving. Requests through Chrome are intercepted fine.
Can Fiddler be set up as a proxy for ALL applications, and if so, how would I go about doing this? I have no control over the application code, it's just something I installed. It is a live bidding auction program which seems to mainly display HTML pages inside the application window. | false | 36,193,175 | 0 | 0 | 0 | 0 | Fiddler isn't correctly intercepting anything that this program is sending/receiving
That means the program is either firing requests to localhost (very unlikely), or ignoring the proxy settings for the current user (most likely). The latter also means this application won't function on a machine where a proxy connection is required in order to make HTTP calls to the outside.
The alternative would be to use a packet inspector like Wireshark, or to let the application be fixed to respect proxy settings, or to capture all HTTP requests originating from that machine on another level, for example the next router in your network. | 0 | 767 | 0 | 0 | 2016-03-24T04:13:00.000 | http,python-requests,fiddler | Using Fiddler to intercept requests from Windows program | 1 | 1 | 1 | 36,207,761 | 0 |
0 | 0 | I recently bought a Windows 10 machine and now I want to run a server locally for testing a webpage I am developing.
On Windows 7 it was always very simple to start an HTTP server via Python and the command prompt. For example, writing the below command would fire up an HTTP server and I could watch the website through localhost.
C:\pathToIndexfile\python -m SimpleHTTPServer
This does, however, not seem to work on Windows 10...
Does anyone know how to do this on Windows 10? | true | 36,223,345 | 1.2 | 0 | 0 | 10 | Ok, so a different command is apparently needed.
This works:
C:\pathToIndexfile\py -m http.server
As pointed out in a comment, the change to "http.server" is not because of windows, but because I changed from python 2 to python 3. | 0 | 25,052 | 1 | 5 | 2016-03-25T15:55:00.000 | python,windows-10,simplehttpserver | How to start python simpleHTTPServer on Windows 10 | 1 | 1 | 3 | 36,223,473 | 0 |
0 | 0 | I am using NameCheap to host my domain, and I use their privateemail.com to host my email. I'm looking to create a python program to retrieve specific/all emails from my inbox and to read the HTML from them (HTML instead of .body because there is a button that has a hyperlink which I need and is only accessible via HTML). I had a couple of questions for everyone.
Would the best way to do this be via IMAPlib? If it is, how do I find out the imap server for privateemail.com?
I could do this via selenium, but it would be heavy and I would prefer a lighter weight and faster solution. Any ideas on other possible technologies to use?
Thanks! | true | 36,234,690 | 1.2 | 1 | 0 | 1 | Well, just a little bit of testing with telnet will give you the answer to the question 'how do I find the imap server for privateemail.com'.
mail.privateemail.com is their IMAP server. | 0 | 411 | 0 | 0 | 2016-03-26T11:24:00.000 | python,email | Retrieving emails from NameCheap Private Email | 1 | 1 | 1 | 36,242,008 | 0 |
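A minimal imaplib sketch for this setup; the server name comes from the answer above, while the account credentials and the SSL port 993 are assumptions:
import email
import imaplib

mail = imaplib.IMAP4_SSL('mail.privateemail.com', 993)
mail.login('you@yourdomain.com', 'your-password')   # hypothetical credentials
mail.select('INBOX')
status, data = mail.search(None, 'ALL')
for num in data[0].split():
    status, msg_data = mail.fetch(num, '(RFC822)')
    msg = email.message_from_bytes(msg_data[0][1])
    for part in msg.walk():
        if part.get_content_type() == 'text/html':
            html = part.get_payload(decode=True).decode(part.get_content_charset() or 'utf-8')
            print(html[:200])   # the HTML body containing the button/hyperlink
mail.logout()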
1 | 0 | I have a node.js server which is communicating from a net socket to python socket. When the user sends an asynchronous ajax request with the data, the node server passes it to the python and gets data back to the server and from there to the client.
The problem occurs when the user sends the ajax request: he has to wait for the response and if the python process takes too much time then the ajax is already timed out.
I tried to create a socket server in node.js and a client that connects to the socket server in python with the data to process. The node server responds to the client with a loading screen. When the data is processed the python socket client connects to the node.js socket server and passes the processed data. However the client can not request the processed data because he doesn't know when it's done. | true | 36,246,925 | 1.2 | 0 | 0 | 0 | So you have three systems, and an asynchronous request. I solved a problem like this recently using PHP and the box.com API. PHP doesn't allow keeping a connection open indefinitely so I had a similar problem.
To solve the problem, I would use a recursive request. It's not 'real-time' but that is unlikely to matter.
How this works:
The client browser sends the "Get my download thing" request to the Node.js server. The Node.js server returns a unique request id to the client browser.
The client browser starts a 10 second poll, using the unique request id to see if anything has changed. Currently, the answer is no.
The Node.js server receives this and sends a "Go get his download thing" request to the Python server. (The client browser is still polling every 10 seconds, the answer is still no)
The python server actually goes and gets his download thing, sticks it in a place, creates a URL and returns that to the Node.js server. (The client browser is still polling every 10 seconds, the answer is still no)
The Node.js server receives a message back from the Python server with the URL to the thing. It stores the URL against the request id it started with. At this point, its state changes to "Yes, I have your download thing, and here it is! - URL).
The client browser receives the lovely data packet with its URL, stops polling now, and skips happily away into the sunset. (or similar more appropriate digital response).
Hope this helps to give you a rough idea of how you might solve this problem without depending on push technology. Consider tweaking your poll interval (I suggested 10 seconds to start) depending on how long the download takes. You could even get tricky, wait 30 seconds, and then poll every 2 seconds. Fine tune it to your problem. | 0 | 99 | 0 | 0 | 2016-03-27T11:46:00.000 | javascript,python,ajax,node.js,sockets | Node.js long term tasks | 1 | 1 | 1 | 36,248,252 | 0 |
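A rough Python sketch of the polling pattern described above, seen from the client side; the endpoint paths and JSON field names are hypothetical:
import time
import requests

BASE = 'http://example.com'                      # hypothetical Node.js server
job = requests.post(BASE + '/downloads').json()  # returns {'request_id': ...}
request_id = job['request_id']

while True:
    status = requests.get(BASE + '/downloads/' + request_id).json()
    if status.get('state') == 'done':            # hypothetical field names
        print('Result URL:', status['url'])
        break
    time.sleep(10)                               # the 10 second poll interval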
1 | 0 | Has anyone had trouble verifying/submitting code to the google foobar challenges? I have been stuck unable to progress in the challenges, not because they are difficult but because I literally cannot send anything.
After I type "verify solution.py" it responds "Verifying solution..." then after a delay: "There was a problem evaluating your code."
I had the same problem with challenge 1. I waited an hour then tried verifying again and it worked. Challenge 2 I had no problems. But now with challenge 3 I am back to the same cryptic error.
To ensure it wasn't my code, I ran the challenge with no code other than "return 3" which should be the correct response to test 1. So I would have expected to see a "pass" for test 1 and then "fail" for all the rest of the tests. However it still said "There was a problem evaluating your code."
I tried deleting cookies and running in a different browser. Neither changed anything. I waited overnight, still nothing. I am slowly running out of time to complete the challenge. Is there anything I can do?
Edit: I've gotten negative votes already. Where else would I put a question about the google foobar python challenges? Also, I'd prefer not to include the actual challenge or my code since it's supposedly secret, but if necessary I will do so. | false | 36,265,728 | 0.197375 | 1 | 0 | 2 | Re-indenting the file seemed to help, but that might have just been coincidental. | 0 | 9,217 | 0 | 5 | 2016-03-28T15:44:00.000 | python,python-2.7,google-chrome | Error with the google foobar challenges | 1 | 1 | 2 | 36,270,465 | 0 |
0 | 0 | I am trying to get information on a large number of scholarly articles as part of my research study. The number of articles is on the order of thousands. Since Google Scholar does not have an API, I am trying to scrape/crawl scholar. Now I now, that this is technically against the EULA, but I am trying to be very polite and reasonable about this. I understand that Google doesn't allow bots in order to keep traffic within reasonable limits. I started with a test batch of ~500 hundred requests with 1s in between each request. I got blocked after about the first 100 requests. I tried multiple other strategies including:
Extending the pauses to ~20s and adding some random noise to them
Making the pauses log-normally distributed (so that most pauses are on the order of seconds but every now and then there are longer pauses of several minutes and more)
Doing long pauses (several hours) between blocks of requests (~100).
I doubt that at this point my script is adding any considerable traffic over what any human would. But one way or the other I always get blocked after ~100-200 requests. Does anyone know of a good strategy to overcome this (I don't care if it takes weeks, as long as it is automated)? Also, does anyone have experience dealing with Google directly and asking for permission to do something similar (for research etc.)? Is it worth trying to write them and explain what I'm trying to do and how, and see whether I can get permission for my project? And how would I go about contacting them? Thanks! | false | 36,270,867 | 0.197375 | 0 | 0 | 2 | Without testing, I'm still pretty sure one of the following does the trick:
Easy, but small chance of success :
Delete all cookies from site in question after every rand(0,100) request,
then change your user-agent, accepted language, etc. and repeat.
A bit more work, but a much sturdier spider as result :
Send your requests through Tor, other proxies, mobile networks, etc. to mask your IP (also do suggestion 1 at every turn)
Update regarding Selenium
I missed the fact that you're using Selenium, took for granted it was some kind of modern programming language only (I know that Selenium can be driven by most widely used languages, but also as some sort of browser plug-in, demanding very little programming skills).
As I then presume your coding skills aren't (or weren't?) mind-boggling, and for others with the same limitations when using Selenium, my answer is to either learn a simple, scripting language (PowerShell?!) or JavaScript (since it's the web you're on ;-)) and take it from there.
If automating scraping smoothly was as simple as a browser plug-in, the web would have to be a much more messy, obfuscated and credential demanding place. | 0 | 4,309 | 0 | 9 | 2016-03-28T20:45:00.000 | python,web-crawler,google-scholar | Crawling Google Scholar | 1 | 1 | 2 | 37,187,565 | 0 |
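A sketch of the delay/user-agent rotation idea in Python with requests (the user-agent strings and the URL list are placeholders, and this does not route through Tor or proxies):
import random
import time
import requests

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',    # placeholder strings
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11)',
]
urls_to_fetch = []                                  # fill with the article URLs to visit

session = requests.Session()
for i, url in enumerate(urls_to_fetch):
    if i % random.randint(50, 100) == 0:
        session.cookies.clear()                     # "delete all cookies" step
        session.headers['User-Agent'] = random.choice(USER_AGENTS)
    response = session.get(url)
    time.sleep(random.lognormvariate(2.5, 0.5))     # log-normal pause, mostly ~10-15s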
1 | 0 | I'm developing an html web page which will visualize data. The intention is that this web page works only on one computer, so I don't want it to be online. Just offline. The web page uses only JS, CSS and HTML. It is very simple and is not using any database, the data is loaded through D3js XMLHttpRequest. Up to now it is working with a local python server for development, through python -m SimpleHTTPServer. Eventually I will want to launch it more easily. Is it possible to pack the whole thing in a launchable app? Do you recommend some tools to do it or some things to read? What about the server part? Is it possible to launch a "SimpleHTTPServer" kind of thing without the console? Or maybe just one command which launches the server plus the web?
Thanks in advance. | false | 36,275,331 | 0 | 0 | 0 | 0 | I'm not too sure what you are trying to achieve from reading the question, but from what I understand you'd like to be able to launch your application and serve it on the web when done.
You could use Heroku (www.heroku.com). In the same way you are hosting the application locally, you simply make a Procfile, put in it the command that you'd normally run locally, and push to Heroku. | 0 | 61 | 0 | 0 | 2016-03-29T04:16:00.000 | javascript,python,server | Serving locally a webapp | 1 | 1 | 2 | 36,275,505 | 0
0 | 0 | I know this question has already been asked several times on the site, but I have not seen any satisfactory answer. I have a project partitioned like this:
TLS project (folder)
|- sniffer_tls.py
|- tls (folder)
   |- __init__.py
   |- tls_1_2.py
   |- handshake (folder)
      |- __Init__.py
      |- client_hello.py
When I import tls_1_2.py in the main file sniffer_tls.py, there is no problem. On the other hand, when I import client_hello in tls_1_2.py, Python throws this error:
File "/home/kevin/Documents/Python/Projet TLS/tls/tls_1_2.py", line 8, in
import handshake.client_hello
ImportError: No module named 'handshake'
I tried to import this way
import handshake.client_hello
and then I tried another way that I read on the forum
import client_hello from handshake.client_hello
I deleted the __init__.py file to test; it does not work either. I really need help to solve this problem. | false | 36,328,123 | 0.379949 | 0 | 0 | 2 | You seem to have a typo in your "handshake" __init__.py file. Instead you've named it __Init__.py with a capital 'I'. | 0 | 258 | 0 | 1 | 2016-03-31T08:32:00.000 | python-3.x,import | ImportError: No module named xxxxx | 1 | 1 | 1 | 42,805,878 | 0
0 | 0 | So I have a script which uploads a file to a specific folder. I want to get the URL of the most recently uploaded item in that folder. How would I accomplish this in as simple a manner as possible?
For example, say I have a folder called "Photos", and I want to retrieve the latest item that was uploaded to that folder and display it somewhere. How can I get that URL? You may assume "Photos" is a shared folder. | false | 36,380,202 | 0 | 0 | 0 | 0 | Hash the file names and store the last file name in a database
Rename the files using a timestamp | 0 | 191 | 0 | 0 | 2016-04-03T00:18:00.000 | python,google-drive-api | Get the URL of the last item added to a folder | 1 | 1 | 2 | 36,380,299 | 0
0 | 0 | I'm using select.poll on Ubuntu with a socket and have registered POLLIN, POLLERR, and POLLHUP.
My understanding is that the condition when a POLLIN event occurs and recv() returns no data indicates that the peer has disconnected. My testing seems to verify this.
But why do I not get POLLHUP? Does this have different semantics? | true | 36,414,696 | 1.2 | 0 | 0 | 3 | The event value is a bitmap.
If you get POLLIN(value:1), you have something to read,
If you get POLLHUP(value:16), your input is ended,
So when you get POLLIN(1) | POLLHUP(16) = 17, that means that your input is ended and that you still have something to read from the buffer,
After you read everything from the buffer, you get only POLLHUP every time you call poll():
In that case, it is no use to keep a file descriptor in the poll list,
it is better to unregister this file descriptor immediately. | 0 | 607 | 0 | 1 | 2016-04-04T23:37:00.000 | python,python-3.x,select | Python: select.POLLHUP | 1 | 1 | 1 | 42,612,778 | 0 |
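A small sketch of how the event bitmap is usually checked in a poll loop; sock is assumed to be an already-connected socket, and the unregister-on-hangup step follows the advice above:
import select

def read_until_closed(sock):
    poller = select.poll()
    poller.register(sock, select.POLLIN | select.POLLERR | select.POLLHUP)
    while True:
        for fd, event in poller.poll(1000):       # timeout in milliseconds
            if event & select.POLLIN:
                data = sock.recv(4096)
                if not data:                      # POLLIN + empty read: peer is gone
                    poller.unregister(sock)
                    return
                print('got', data)
            elif event & (select.POLLHUP | select.POLLERR):
                poller.unregister(sock)           # no use polling a hung-up fd
                return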
0 | 0 | I'm working on a graph search problem that can be distilled to the following simpler example:
Updated to clarify based on response below
The Easter Bunny is hopping around the forest collecting eggs. He knows how many eggs to expect from every bush, but every bush has a unique number of eggs. It takes the Easter Bunny 30 minutes to collected from any given bush. The easter bunny searches for eggs 5 days a week, up to 8 hours per day. He typically starts and ends in his burrow, but on Tuesday he plans to end his day at his friend Peter Rabbit's burrow. Mrs. Bunny gave him a list of a few specific bushes to visit on specific days/times - these are intermediate stops that must be hit, but do not list all stops (maybe 1-2 per day). Help the Easter Bunny design a route that gives him the most eggs at the end of the week.
Given Parameters: undirected graph (g), distances between nodes are travel times, 8 hours of time per day, 5 working days, list of (node,time,day) tuples (r) , list of (startNode, endNode, day) tuples (s)
Question: Design a route that maximizes the value collected over the 5 days without going over the allotted time in any given day.
Constraints: visit every node in r on the prescribed time/day. for each day in s, start and end at the corresponding nodes, whose collection value is 0. Nodes cannot be visited more than once per week.
Approach: Since there won't be very many stops, given the time at each stop and the travel times (maybe 10-12 on a large day) my first thought was to brute force all routes that start/stop at the correct points, and just run this 5 times, removing all visited nodes. From there, separately compute the collected value of each allowable route. However, this doesn't account for the fact that my "best" route on day one may ruin a route that would be best on day 5, given required stops on that day.
To solve that problem I considered running one long search by concatenating all the days and just starting from t = 0 (beginning of week) to t = 40 (end of week), with the start/end points for each day as intermediate stops. This gets too long to brute force.
I'm struggling a little with how to approach the problem - it's not a TSP problem - I'm only going to visit a fraction of all nodes (maybe 50 of 200). It's also not a dijkstra's pathing problem, the shortest path typically would be to go nowhere. I need to maximize the total collected value in the allotted time making the required intermediate stops. Any thoughts on how to proceed would be greatly appreciated! Right now I've been approaching this using networkx in python.
Edit following response
In response to your edit - I'm looking for an approach to solve the problem - I can figure out the code later, I'm leaning towards A* over MDFS, because I don't need to just find one path (that will be relatively quick), I need to find an approximation of the best path. I'm struggling to create a heuristic that captures the time constraint (stay under time required to be at next stop) but also max eggs. I don't really want the shortest path, I want the "longest" path with the most eggs. In evaluating where to go next, I can easily do eggs/min and move to the bush with the best rate, but I need to figure out how to encourage it to slowly move towards the target. There will always be a solution - I could hop to the first bush, sit there all day and then go to the solution (there placement/time between is such that it is always solvable) | true | 36,438,428 | 1.2 | 1 | 0 | 1 | The way the problem is posed doesn't make full sense. It is indeed a graph search problem to maximise a sum of numbers (subject to other constraints) and it possibly can be solved via brute force as the number of nodes that will end up being traversed is not necessarily going to climb to the hundreds (for a single trip).
Each path is probably a few nodes long because of the 30 min constraint at each stop. With 8 hours in a day and negligible distances between the bushes that would amount to a maximum of 16 stops. Since the edge costs are not negligible, it means that each trip should have <<16 stops.
What we are after is the maximum sum of 5 days harvest (max of five numbers). Each day's harvest is the sum of collected eggs over a "successful" path.
A successful path is defined as the one satisfying all constraints which are:
The path begins and ends on the same node. It is therefore a cycle EXCEPT for Tuesday. Tuesday's harvest is a path.
The cycle of a given day contains the nodes specified in Mrs Bunny's
list for that day.
The sum of travel times is less than 8 hrs including the 30min harvesting time.
Therefore, you can use a modified Depth First Search (DFS) algorithm. DFS, on its own can produce an exhaustive list of paths for the network. But, this DFS will not have to traverse all of them because of the constraints.
In addition to the nodes visited so far, this DFS keeps track of the "travel time" and "eggs" collected so far and at each "hop" it checks that all constraints are satisfied. If they are not, then it backtracks or abandons the traversed path. This backtracking action "self-limits" the enumerated paths.
If the reasoning is so far inline with the problem (?), here is why it doesn't seem to make full sense. If we were to repeat the weekly harvest process for M times to determine the best visiting daily strategy then we would be left with the problem of determining a sufficiently large M to have covered the majority of paths. Instead we could run the DFS once and determine the route of maximum harvest ONCE, which would then lead to the trivial solution of 4*CycleDailyHarvest + TuePathHarvest. The other option would be to relax the 8hr constraint and say that Mr Bunny can harvest UP TO 8hr a day and not 8hr exactly.
In other words, if all parameters are static, then there is no reason to run this process multiple times. For example, if each bush was to give "up to k eggs" following a specific distribution, maybe we could discover an average daily / weekly visiting strategy with the largest yield. (Or my perception of the problem so far is wrong, in which case, please clarify).
Tuesday's task is easier, it is as if looking for "the path between source and target whose time sum is approximately 8hrs and sum of collected eggs is max". This is another sign of why the problem doesn't make full sense. If everything is static (graph structure, eggs/bush, daily harvest interval) then there is only one such path and no need to examine alternatives.
Hope this helps.
EDIT (following question update):
The update doesn't radically change the core of the previous response which is "Use a modified DFS (for the potential of exhaustively enumerating all paths / cycles) and encode the constraints as conditions on metrics (travel time, eggs harvested) that are updated on each hop". It only modifies the way the constraints are represented. The most significant alteration is the "visit each bush once per week". This would mean that the memory of DFS (the set of visited nodes) is not reset at the end of a cycle or the end of a day but at the end of a week. Or in other words, the DFS now can start with a pre-populated visited set. This is significant because it will reduce the number of "viable" path lengths even more. In fact, depending on the structure of the graph and eggs/bush the problem might even end up being unsolvable (i.e. zero paths / cycles satisfying the conditions).
EDIT2:
There are a few "problems" with that approach which I would like to list here with what I think are valid points not yet seen by your viewpoint but not in an argumentative way:
"I don't need to just find one path (that will be relatively quick), I need to find an approximation of the best path." and "I want the "longest" path with the most eggs." are a little bit contradicting statements but on average they point to just one path. The reason I am saying this is because it shows that either the problem is too difficult or not completely understood (?)
A heuristic will only help in creating a landscape. We still have to traverse the landscape (e.g. steepest descent / ascent) and there will be plenty of opportunity for oscillations as the algorithm might get trapped between two "too-low", "too-high" alternatives or discovery of local-minima / maxima without an obvious way of moving out of them.
A*s main objective is still to return ONE path and it will have to be modified to find alternatives.
When operating over a graph, it is impossible to "encourage" the traversal to move towards a specific target because the "traversing agent" doesn't know where the target is and how to get there in the sense of a linear combination of weights (e.g. "If you get too far, lower some Xp which will force the agent to start turning left heading back towards where it came from". When Mr Bunny is at his burrow he has all K alternatives, after the first possible choice he has K-M1 (M1
The MDFS will help in tracking the different ways these sums are allowed to be created according to the choices specified by the graph. (After all, this is a graph-search problem).
Having said this, there are possibly alternative, sub-optimal (in terms of computational complexity) solutions that could be adopted here. The obvious (but dummy one) is, again, to establish two competing processes that impose self-control. One is trying to get Mr Bunny AWAY from his burrow and one is trying to get Mr Bunny BACK to his burrow. Both processes are based on the above MDFS and are tracking the cost of MOVEAWAY+GOBACK and the path they produce is the union of the nodes. It might look a bit like A* but this one is reset at every traversal. It operates like this:
AWAY STEP:
Start an MDFS outwards from Mr Bunny's burrow and keep track of distance / egg sum, move to the lowestCost/highestReward target node.
GO BACK STEP:
Now, pre-populate the visited set of the GO BACK MDFS and try to get back home via a route NOT TAKEN SO FAR. Keep track of cost / reward.
Once you reach home again, you have a possible collection path. Repeat the above while the generated paths are within the time specification.
This will result in a palette of paths which you can mix and match over a week (4 repetitions + TuesdayPath) for the lowestCost / highestReward options.
It's not optimal because you might get repeating paths (the AWAY of one trip being the BACK of another) and because this quickly eliminates visited nodes it might still run out of solutions quickly. | 0 | 662 | 0 | 0 | 2016-04-05T22:47:00.000 | python,graph-theory,networkx | Graph search - find most productive route | 1 | 1 | 1 | 36,454,065 | 0 |
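As a rough illustration of the modified DFS idea (heavily simplified: one day, a single start/end node, no required intermediate stops; the graph and all numbers are made up):
# graph[u][v] = travel time in minutes; eggs[u] = value of harvesting bush u
graph = {'burrow': {'a': 20, 'b': 45},
         'a': {'burrow': 20, 'b': 30},
         'b': {'burrow': 45, 'a': 30}}
eggs = {'a': 12, 'b': 20}
DAY_LIMIT, HARVEST_TIME = 8 * 60, 30

def best_cycle(node, time_used, collected, visited):
    # value of the best cycle that returns to 'burrow' within the time limit
    best = float('-inf')
    for nxt, travel in graph[node].items():
        if nxt == 'burrow':                       # close the cycle and stop
            if time_used + travel <= DAY_LIMIT:
                best = max(best, collected)
        elif nxt not in visited and time_used + travel + HARVEST_TIME <= DAY_LIMIT:
            best = max(best, best_cycle(nxt, time_used + travel + HARVEST_TIME,
                                        collected + eggs[nxt], visited | {nxt}))
    return best

print(best_cycle('burrow', 0, 0, frozenset()))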
1 | 0 | I have already written a script that opens Firefox with a URL, scrapes data and closes. The page belongs to a gaming site where the page refreshes its contents via Ajax.
Now one way is to fetch those AJAX requests and get the data, or to refresh the page after a certain period of time within the open browser.
For the latter case, what should I do? Should I call the method after a certain time period, or what? | false | 36,445,162 | 0 | 0 | 0 | 0 | You can implement a so-called smart wait:
Identify the most frequently updated (and useful to you) web element on the page.
Get its data using JavaScript, since the DOM model will not be updated without a page refresh, e.g.:
driver.execute_script('return document.getElementById("demo").innerHTML')
Wait for a certain time, get it again and compare with the previous result. If it changed, refresh the page, fetch the data, etc. | 0 | 1,158 | 0 | 1 | 2016-04-06T08:11:00.000 | python,ajax,selenium | Python Selenium: How to get fresh data from the page get refreshed periodically? | 1 | 2 | 4 | 36,446,094 | 0
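A sketch of that compare-and-refresh loop; the element id "demo" is taken from the snippet above, and driver is assumed to be an already-started Selenium WebDriver:
import time

JS = 'return document.getElementById("demo").innerHTML'
previous = driver.execute_script(JS)
while True:
    time.sleep(5)                         # polling interval, tune to the site
    current = driver.execute_script(JS)
    if current != previous:
        driver.refresh()                  # page changed: reload and re-scrape
        previous = driver.execute_script(JS)
        # ... fetch/scrape the fresh data here ...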
1 | 0 | I have already written a script that opens Firefox with a URL, scrapes data and closes. The page belongs to a gaming site where the page refreshes its contents via Ajax.
Now one way is to fetch those AJAX requests and get the data, or to refresh the page after a certain period of time within the open browser.
For the latter case, what should I do? Should I call the method after a certain time period, or what? | false | 36,445,162 | 0 | 0 | 0 | 0 | Make sure to call findElement() again after waiting, because otherwise you might not get a fresh instance. Or use a page factory, which will get a fresh copy of the WebElement for you every time the instance is accessed. | 0 | 1,158 | 0 | 1 | 2016-04-06T08:11:00.000 | python,ajax,selenium | Python Selenium: How to get fresh data from the page get refreshed periodically? | 1 | 2 | 4 | 36,449,601 | 0
1 | 0 | I verify launched current activity if it's in browser or in app by comparing with current activity.
activity = driverAppium.current_activity
And then I verify if activity matches with browser activity name e.g. org.chromium.browser...
But can I verify the http response on the webpage e.g. 200 or 404?
With the above, the test always passes even though the webpage didn't load or returned a null response.
Can I verify both the current activity and the response? | false | 36,460,455 | 0 | 0 | 0 | 0 | There are two ways to do it that I can think of:
UI approach:
Capture the screenshot of the webview with the 200 response. Let's call it expectedScreen.png
Capture the screenshot of the response under test (be it 200, 400, etc.). Let's call this finalScreen.png
Compare both images to verify/assert.
API approach: Since the activity supposed to be displayed rarely changes (it only depends on the transitions between activities your application was designed with), verifying the current activity is a less important check during the test. You can instead verify these things using API calls and then (if you get a proper response) look for the presence of elements on the screen accordingly. | 0 | 438 | 0 | 0 | 2016-04-06T19:19:00.000 | android-activity,automation,appium,python-appium | Verify appium current activity response | 1 | 1 | 1 | 36,461,041 | 0
1 | 0 | I want to get a list of all the pages in a given section for a given notebook. I have the id for the section I want and use it in a GET call to obtain a dictionary of the section's information. One of the keys in the dictionary is "pagesUrl". A GET call on this returns a list of dictionaries where there's one dictionary for each page in this section.
Up until yesterday, this worked perfectly. However, as of today, pagesUrl only returns pages created within the last minute or so. Anything older isn't seen. Does anyone know why this is happening? | false | 36,489,891 | 0 | 0 | 0 | 0 | Yesterday (2016/04/08) there was an incident with the OneNote API which prevented us from updating the list of pages. This was resolved roughly at 11PM PST and the API should be returning all pages now. | 1 | 61 | 0 | 2 | 2016-04-08T01:13:00.000 | python,api,get,onenote,onenote-api | OneNote's PagesUrl Not Including All Pages in a Section | 1 | 1 | 1 | 36,504,337 | 0
0 | 0 | I am starting to make a python program to get user locations, I've never worked with the twitter API and I've looked at the documentation but I don't understand much. I'm using tweepy, can anyone tell me how I can do this? I've got the basics down, I found a project on github on how to download a user's tweets and I understand most of it. | false | 36,490,085 | 0.132549 | 1 | 0 | 2 | Once you have a tweet, the tweet includes a user, which belongs to the user model. To call the location just do the following
tweet.user.location | 0 | 13,479 | 0 | 5 | 2016-04-08T01:40:00.000 | python,python-2.7,twitter,tweepy | How to get twitter user's location with tweepy? | 1 | 1 | 3 | 42,499,529 | 0 |
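A short sketch using a classic (v1-style) tweepy API object; the credentials and the screen name are placeholders:
import tweepy

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')        # placeholders
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET')
api = tweepy.API(auth)

for tweet in api.user_timeline(screen_name='some_user', count=10):  # placeholder user
    print(tweet.user.location)   # may be an empty string if the user set no location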
1 | 0 | The standard (for me) OAuth flow is:
generate url flow.step1_get_authorize_url() and ask user to allow app access
get the code
get the credentials with flow.step2_exchange(auth_code)
But I am faced with another service, where I just need to initiate a POST request to the token_uri with client_id and client_secret passed as form values (application/x-www-form-urlencoded), grant_type set to client_credentials, and scope also passed as a form field value.
Does the oauth2client library support it? | false | 36,521,976 | 0 | 0 | 0 | 0 | Update the grant_type to implicit | 0 | 106 | 0 | 0 | 2016-04-09T19:59:00.000 | python,oauth,oauth2client | How to get OAuth token without user consent? | 1 | 1 | 1 | 36,541,353 | 0
0 | 0 | I'm having some trouble understanding the variant syntax format when creating sockets (I've started learning Python a few weeks ago).
Can someone please explain what is the difference (if any) between the below?
s = socket()
s = socket.socket()
s = socket(AF_INET, SOCK_STREAM)
Thanks. | false | 36,538,637 | 0 | 0 | 0 | 0 | The difference depends on which module or name you import.
If you use from socket import socket, AF_INET, SOCK_STREAM :
s = socket() will work; s will be a socket object created with default arguments.
s = socket.socket() will not work, because here socket is the constructor, not the module.
s = socket(AF_INET, SOCK_STREAM) will work; s will be a socket object initialized with the given family and type.
If you use import socket :
s = socket() will not work, because here socket is the module, not a constructor (you can't call it).
s = socket.socket() will work; s will be a socket object created with default arguments.
s = socket(AF_INET, SOCK_STREAM) will not work, for the same reason: the module is not callable.
Hope this helps | 1 | 58 | 0 | 0 | 2016-04-11T01:54:00.000 | python | Difference in socket syntax | 1 | 1 | 2 | 46,259,141 | 0
0 | 0 | I am using a 3rd party to send and receive SMS, which includes text plus the URL of an image. Is there any way that the latest smartphones show the picture instead of the link? Like downloadable content. | false | 36,540,431 | 0 | 1 | 0 | 0 | If you want to show the image content from the URL, all you can do is write a notification application which reads the SMS that you are sending (using the third-party number from which you are sending the SMS) and notifies the user with the image (by reading the content of the SMS and downloading it from the URL).
But then all your users will have to download your app, and it will need read permissions for SMS. | 0 | 107 | 0 | 0 | 2016-04-11T05:27:00.000 | c#,php,python,sms,sms-gateway | SMS with picture link | 1 | 1 | 1 | 36,540,940 | 0
0 | 0 | If I retweet a tweet, is there a way of finding out how many times "my retweet" has been retweeted / favorited? Are there provisions in the Twitter API to retrieve that? There is no key in retweeted_status which gives that information. What am I missing here? | false | 36,542,813 | -0.379949 | 1 | 0 | -2 | Yes, you can track it. Get the stats (favorite count, retweet_count) of your retweet at the time you retweet it and save these stats somewhere as a checkpoint. The next time someone retweets it again, fetch the updated stats of your previous retweet and compare the latest stats with the existing checkpoint. | 0 | 1,191 | 0 | 3 | 2016-04-11T07:50:00.000 | python,twitter,tweepy | Find the number of retweets of a retweet using Tweepy? | 1 | 1 | 1 | 36,543,167 | 0
0 | 0 | I am using Python with boto3 to upload a file into an S3 bucket. Boto3 provides upload_file() to create an S3 object, but this API takes a file name as its input parameter.
Can we give an actual data buffer as a parameter to the upload_file() function instead of a file name?
I know that we can use the put_object() function if we want to give a data buffer as the parameter to create an S3 object. But I want to use upload_file with a data buffer parameter. Is there any way around this?
Thanks in advance | true | 36,568,713 | 1.2 | 0 | 1 | 1 | There is currently no way to use a file-like object with upload_file. put_object and upload_part do support these, though you don't get the advantage of automatic multipart uploads. | 0 | 638 | 0 | 1 | 2016-04-12T09:13:00.000 | python-2.7,boto,boto3 | Boto3 : Can we use actual data buffer as parameter instaed of file name to upload file in s3? | 1 | 1 | 1 | 36,582,398 | 0 |
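A sketch of the put_object route with an in-memory buffer (bucket and key names are placeholders):
import io
import boto3

buf = io.BytesIO(b'some bytes generated in memory')   # any file-like object or bytes works
s3 = boto3.client('s3')
s3.put_object(Bucket='my-bucket', Key='path/to/object.bin', Body=buf)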
0 | 0 | I am currently developing a python program for a Raspberry Pi. This Raspberry is meant to control a solar panel. In fact, there will be many Raspberry(ies) controlling solar panels and they will be connected to each others by RJ wires. The idea is that every Raspberry has the same status, there is not any "server" Raspberry and "client" Raspberry.
The program will receive GPS data, i.e. position, time...
Except from the GPS data, the Raspberry(ies) will not have direct internet access. However, it will be possible to plug a 3G key in order to gain access to internet.
The problem is the following: I want to update my Python program remotely, over the internet provided by my 3G key (the solar panels are in a field, and I'm home for instance, so I do not want to drive a hundred miles to get my Raspberry(ies) back and update them manually...). How is it possible to make the update remotely considering that I do not have a real "server" in my network of Raspberry(ies)? | false | 36,575,268 | 0 | 1 | 0 | 0 | I think you do need a server, however (or it can be just a file-sharing service). If I got it correctly, you need to control (or just update) the Raspberry(ies), which are connected to the internet via 3G. So, here are the options I see:
Connect them to a VPN;
Write a script that keeps checking for a new update of your app on an HTTP/FTP file-sharing server;
Use a reverse shell, but whether that works depends on the NAT setup the 3G provider uses. | 0 | 223 | 0 | 0 | 2016-04-12T13:48:00.000 | python,linux,gps,updates,working-remotely | How can I update a python program remotely on linux? | 1 | 1 | 1 | 36,577,750 | 0
0 | 0 | I'm writing a script that uses paramiko to ssh onto several remote hosts and run a few checks. Some hosts are setup as fail-overs for others and I can't determine which is in use until I try to connect. Upon connecting to one of these 'inactive' hosts the host will inform me that you need to connect to another 'active' IP and then close the connection after n seconds. This appears to be written to the stdout of the SSH connection/session (i.e. it is not an SSH banner).
I've used paramiko quite a bit, but I'm at a loss as to how to get this output from the connection, exec_command will obviously give me stdout and stderr, but the host is outputting this immediately upon connection, and it doesn't accept any other incoming requests/messages. It just closes after n seconds.
I don't want to have to wait until the timeout to move onto the next host and I'd also like to verify that that's the reason for not being able to connect and run the checks, otherwise my script works as intended.
Any suggestions as to how I can capture this output, with or without paramiko, is greatly appreciated. | true | 36,576,158 | 1.2 | 1 | 0 | 0 | I figured out a way to get the data, it was pretty straight forward to be honest, albeit a little hackish. This might not work in other cases, especially if there is latency, but I could also be misunderstanding what's happening:
When the connection opens, the server spits out two messages, one saying it can't chdir to a particular directory, then a few milliseconds later it spits out another message stating that you need to connect to the other IP. If I send a command immediately after connecting (doesn't matter what command), exec_command will interpret this second message as the response. So for now I have a solution to my problem as I can check this string for a known message and change the flow of execution.
However, if what I describe is accurate, then this may not work in situations where there is too much latency and the 'test' command isn't sent before the server response has been received.
As far as I can tell (and I may be very wrong), there is currently no proper way to get the stdout stream immediately after opening the connection with paramiko. If someone knows a way, please let me know. | 0 | 391 | 0 | 1 | 2016-04-12T14:23:00.000 | python,ssh,paramiko | Paramiko get stdout from connection object (not exec_command) | 1 | 1 | 1 | 36,601,638 | 0 |
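A sketch of the workaround described above: connect with paramiko and immediately run a harmless command so the server's greeting text comes back as that command's output (host, credentials and the "known message" check are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('host.example.com', username='user', password='secret', timeout=10)

# The command itself doesn't matter; the inactive host answers with its
# "connect to the other IP" message instead of real command output.
stdin, stdout, stderr = client.exec_command('true')
banner_text = stdout.read().decode('utf-8', errors='replace')
if 'connect to' in banner_text.lower():      # the known-message check is site-specific
    print('failover host, skipping:', banner_text.strip())
client.close()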
1 | 0 | Django REST framework introduces a Request object that extends the regular HttpRequest, this new object type has request.data to access JSON data for 'POST', 'PUT' and 'PATCH' requests.
However, I can get the same data by accessing request.body parameter which was part of original Django HttpRequest type object.
One difference which I see is that request.data can be accessed only one time. This restriction doesn't apply to request.body.
My question is what are the differences between the two. What is preferred and what is the reason DRF provides an alternative way of doing same thing when There should be one-- and preferably only one --obvious way to do it.
UPDATE: Limiting the usecase where body is always of type JSON. Never XML/ image or conventional form data. What are pros/cons of each? | false | 36,616,309 | 1 | 0 | 0 | 8 | In rest_framework.request.Request
request.body is bytes, which is always available, thus there is no limit in usage
request.data is a "property" method and can raise an exception,
but it gives you parsed data, which are more convenient
However, the world is not perfect and here is a case when request.body win
Consider this example:
If client send:
content-type: text/plain
and your REST's endpoint doesn't accept text/plain
your server will return 415 Unsupported Media Type
if you access request.data
But what if you know that json.loads(request.body) is correct JSON?
So you want to use that, and only request.body allows that.
FYI: the described example is an AWS SNS notification message sent by AWS to an HTTP endpoint. AWS SNS acts as the client here and, of course, this case is a bug in their SNS.
Another case where request.body is beneficial is when you have your own custom parsing and use your own MIME format. | 0 | 34,630 | 0 | 30 | 2016-04-14T07:22:00.000 | python,django,django-rest-framework | request.data in DRF vs request.body in Django | 1 | 1 | 2 | 60,892,980 | 0
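A small sketch of the request.body fallback described above inside a DRF view (simplified, with no error handling for invalid JSON):
import json
from rest_framework.response import Response
from rest_framework.views import APIView

class NotificationView(APIView):
    def post(self, request):
        try:
            payload = request.data              # parsed by DRF; may raise for unsupported content types
        except Exception:
            payload = json.loads(request.body)  # raw bytes, always available
        return Response({'received': payload})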
1 | 0 | I'm looking for a solution that could help me out automating the already opened application in the Chrome web browser using Selenium and the Python web driver. The issue is that the application is super secured, and if it is opened in incognito mode as Selenium tries to do, it sends a special code to my phone. This defeats the whole purpose. Can someone provide a hacky way or any other workaround/open source tool to automate the application? | false | 36,618,103 | 0 | 0 | 0 | 0 | Selenium doesn't start Chrome in incognito mode; it just creates a new, fresh profile in the temp folder. You could force Selenium to use the default profile, or you could launch Chrome with the debug port opened and then let Selenium connect to it. There is also a third way, which is to preinstall the webdriver extension in Chrome. These are the only ways I've encountered to automate Chrome with Selenium. | 0 | 127 | 0 | 0 | 2016-04-14T08:51:00.000 | python,selenium | Selenium: How to work with already opened web application in Chrome | 1 | 1 | 1 | 36,622,332 | 0
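A sketch of the "debug port" route mentioned above: start Chrome yourself with --remote-debugging-port=9222 (outside Selenium, with your normal profile) and then attach to it; the port number is an assumption:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option('debuggerAddress', '127.0.0.1:9222')
# depending on your Selenium version the keyword may be chrome_options instead of options
driver = webdriver.Chrome(options=options)   # attaches to the already-running Chrome
print(driver.title)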
0 | 0 | I"m using python 2.7 and paramiko 1.16
While attempting an SSH to el capitan, paramiko throws the exception no acceptable kex algorithm. I tried setting kex, cyphers in sshd_config, but sshd can't be restarted for some reasons. I tried some client side fixes, but upgrading paramiko did not fix the problem. | false | 36,658,093 | 0 | 1 | 0 | 0 | Workaround from another stack overflow issue by putting the following cipher/mac/kex settings to sushi_config:
Ciphers [email protected],[email protected],aes256-ctr,aes128-ctr
MACs [email protected],[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 | 0 | 677 | 0 | 0 | 2016-04-15T22:53:00.000 | python,ssh,paramiko | paramiko no acceptable kex algorithm while ssh to el capitan | 1 | 1 | 1 | 36,700,655 | 0 |
1 | 0 | I have setup an Atlassian Bamboo deploy plan. One of its steps is to run a command to run automated UI tests written in Selenium for Python. This runs on a headless Centos 6 server.
I had to install the X-server to simulate the existence of a display
I made the following commands run in the system boot so that the X-server is always started when the machine starts
Xvfb :1 -screen 1600x900x16
export DISPLAY=:1
The command task in the deployment plan simply invokes the following
/usr/local/bin/python3.5 .py
The funny thing is that when I run that directly from the command line it works perfectly and the UI unit tests work. They start Firefox and start dealing with the site.
On the other hand, when this is done via the deployment command I keep getting the error "The browser appears to have exited "
17-Apr-2016 14:18:23 selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details" As if it still does not sense that there is a display.
I even added a task in the deployment job to run X-server again but it came back with error that the server is already running.
This is done on Bamboo version 5.10.3 build 51020.
So, any ideas why it would fail within the deployment job?
Thanks, | false | 36,686,661 | 0 | 0 | 0 | 0 | I solved the problem by changing the task type from a command task to a script task. My understanding is that not all tasks are run in the sequence as they were defined in the job. If this is not the case, then it might be a bug in Bamboo. | 0 | 837 | 0 | 0 | 2016-04-18T06:15:00.000 | python-3.x,selenium,centos6,bamboo | Atlassian Bamboo command tasks not running correctly | 1 | 1 | 1 | 36,808,047 | 0 |
0 | 0 | I try to import the PyDrive module in my PyCharm project : from pydrive.auth import GoogleAuth.
I tried different things :
Installing it directly from the project interpreter
Download it with a pip command and import it with the path for the project interpreter
The same thing in Linux
Nothing works. Each time PyCharm recognizes the module and even suggests the auto-completion, but when I run the project it keeps saying ImportError: No module named pydrive.auth
Any suggestion ?
EDIT : When I put directly the pydrive folder in my repository, and this time : ImportError: No module named httplib2 from the first import of PyDrive.
My path is correct and httplib2 is again in my PyCharm project | true | 36,713,581 | 1.2 | 0 | 0 | 1 | After noticing that the module is already installed, both by pip and by the project interpreter, and nothing worked, this is what did the trick (finally!):
make sure the module is indeed installed:
sudo pip{2\3} install --upgrade httplib2
locate the module on your computer:
find / | grep httplib2
you will need to reach the place in which pip is installing the module, the path would probably look like this:
/usr/local/lib/python2.7/dist-packages
get into the path specified there, search for the module and copy all the relevant files and folders into your local pycharm project environment. this will be a directory with a path like this:
/home/your_user/.virtualenvs/project_name/lib/python2.7
That's it. Note, however, that you may need to do this multiple times, since each module may have dependencies...
good luck! | 1 | 413 | 1 | 1 | 2016-04-19T09:01:00.000 | python,module,pycharm,pydrive | PyCharm recognize a module but do not import it | 1 | 1 | 1 | 54,055,599 | 0 |
1 | 0 | I have a Lambda function that resizes an image and stores it back into S3. However, I want to pass this image to my API to be returned to the client.
Is there a way to return a png image to the API gateway, and if so how can this be done? | false | 36,717,654 | 0 | 0 | 0 | 0 | You could return it base64-encoded... | 0 | 835 | 0 | 0 | 2016-04-19T11:56:00.000 | python,aws-lambda,aws-api-gateway | Passing an image from Lambda to API Gateway | 1 | 2 | 2 | 36,757,815 | 0 |
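A sketch of the base64 idea in a Python Lambda handler (the bucket/key and the exact response shape expected by your API Gateway mapping are assumptions):
import base64
import boto3

def handler(event, context):
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket='my-bucket', Key='resized/photo.png')  # placeholders
    image_bytes = obj['Body'].read()
    return {'image_base64': base64.b64encode(image_bytes).decode('ascii')}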
1 | 0 | I have a Lambda function that resizes an image and stores it back into S3. However, I want to pass this image to my API to be returned to the client.
Is there a way to return a png image to the API gateway, and if so how can this be done? | false | 36,717,654 | 0 | 0 | 0 | 0 | API Gateway does not currently support passing through binary data either as part of a request nor as part of a response. This feature request is on our backlog and is prioritized fairly high. | 0 | 835 | 0 | 0 | 2016-04-19T11:56:00.000 | python,aws-lambda,aws-api-gateway | Passing an image from Lambda to API Gateway | 1 | 2 | 2 | 36,727,013 | 0 |
1 | 1 | I found that some of the stock exchanges are not supported by datareader. Example: Singapore. Any workaround?
query = web.DataReader(("SGX:BLA"), 'google', start, now) returns this error:
IOError: after 3 tries, Google did not return a 200 for url 'http://www.google.com/finance/historical?q=SGX%3ABLA&startdate=Jan+01%2C+2015&enddate=Apr+20%2C+2016&output=csv
It works for IDX indonesia
query = web.DataReader(("IDX:CASS"), 'google', start, now) | false | 36,749,105 | 0 | 0 | 0 | 0 | That URL is a 404 - pandas isn't at fault, maybe just check the URL? Perhaps they're on different exchanges with different google finance support. | 0 | 850 | 0 | 1 | 2016-04-20T15:52:00.000 | python-2.7,pandas,datareader,google-finance,pandas-datareader | Pandas: datareader unable to get historical stock data | 1 | 1 | 1 | 36,783,492 | 0 |
0 | 0 | I have a bot written in Python running on Amazon EC2 with Django as a framework. The bot's end goal is to sustain conversations with multiple users on the same Slack team at once. As I understand it, Amazon will handle load-bearing between Slack teams, but I'm trying to figure out how to manage the load within a single Slack.
Right now, my bot sits in a busy loop waiting for a single user to respond. I've been doing some research on this - is Celery the right tool for the job? Should I split each conversation into a separate thread/task, or maybe have a dispatcher handle new messages? Is there a way for Slack to send an interrupt, or am I stuck with while loops?
Thanks for any help/guidance! I'm pretty new to this.
Edit: I managed to solve this problem by implementing a list of "Conversation" objects pertaining to each user. These objects save the state of each conversation, so that the bot can pick up where it left off when the user messages again. | true | 36,773,380 | 1.2 | 0 | 0 | 0 | Assumptions:
You're using the outgoing webhooks from slack, not the Real Time Messaging API
You're not trying to do some kind of multiple question-answer response where state between each question & answer needs to be maintained.
Skip all the Django stuff and just use AWS Lambda to respond to user requests. That only works for fairly simple "MyBot: do_something_for_me" style things but it's working pretty well for us. Lot easier to manage as well since there's no ec2, no rds, easy deployment, etc. Just make sure you set a reasonable time limit for each Lambda request. From my experience 3 seconds is generally enough time unless you've got a bit of a larger script.
If you really really really have to maintain all this state then you might be better off just writing some kind of quick thing in Flask rather than going through all the setup of django. You'll then have to deal with all the deployment, autoscaling, backup rigmarole that you would for any web service but if you need it, well then ya need it =) | 0 | 982 | 0 | 1 | 2016-04-21T14:57:00.000 | python,django,amazon-ec2,celery,slack-api | Serving multiple users at once with a Python Slack bot | 1 | 1 | 1 | 36,780,059 | 0 |
0 | 0 | I am trying to import urllib.request for python 2.7.10 on PyCharm 4.5.4 on Window 10 but getting the error "ImportError: No module named request". | false | 36,781,105 | 0.033321 | 0 | 0 | 1 | It may happen sometimes, usually in the Linux environment. You have both 2.x and 3.x versions of python installed.
So, in that case, if you are using the command python "file.py"
then by default, Python 2.x will run the file.
So, use the command python3 "file.py"
I was facing this issue. Maybe it can resolve someone's issue. | 0 | 101,433 | 0 | 37 | 2016-04-21T21:50:00.000 | python,importerror | Import urllib.request, ImportError: No module named request | 1 | 2 | 6 | 63,840,105 | 0 |
0 | 0 | I am trying to import urllib.request for python 2.7.10 on PyCharm 4.5.4 on Window 10 but getting the error "ImportError: No module named request". | false | 36,781,105 | 1 | 0 | 0 | 12 | You'll get this error if you try running a python 3 file with python 2. | 0 | 101,433 | 0 | 37 | 2016-04-21T21:50:00.000 | python,importerror | Import urllib.request, ImportError: No module named request | 1 | 2 | 6 | 55,922,381 | 0 |
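If the script has to run on both interpreters, a common shim (a sketch) is to fall back to the Python 2 module names:
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

print(urlopen('http://example.com').read()[:100])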
0 | 0 | I am trying to find a way to block a certain Mac address / Internal IP from accessing the internet (Blocking a device in the LAN to WAN) in python.
This option is available in every modern router in every home but mine is kinda old and doesn't have that feature.
I have a basic knowledge in networking stuff and consider myself an Advanced-Beginner in python, so I'm up for the challenge but still need your help.
*Of course with the option to enable the internet again for that device | false | 36,873,990 | 0 | 0 | 0 | 0 | Blocking of traffic has to happen inside the router. If the router does not have this feature, consider replacing it with a new one. | 0 | 1,760 | 0 | 1 | 2016-04-26T19:23:00.000 | python,networking,network-programming | How can I block Internet access for a certain IP in my local network in python? | 1 | 2 | 2 | 36,913,510 | 0
0 | 0 | I am trying to find a way to block a certain Mac address / Internal IP from accessing the internet (Blocking a device in the LAN to WAN) in python.
This option is available in every modern router in every home but mine is kinda old and doesn't have that feature.
I have a basic knowledge in networking stuff and consider myself an Advanced-Beginner in python, so I'm up for the challenge but still need your help.
*Of course with the option to enable the internet again for that device | false | 36,873,990 | 0 | 0 | 0 | 0 | I know I am kinda late now but... You can't necessarily block internet access to a machine like you would do in your router's config.
What you CAN do is implement something like an ARP Spoofer. Basically what you would do in a Man-in-the-Middle attack.
You send a malicious ARP packet to poison the target's ARP table, making it believe your machine is the router/default gateway. That way you can intercept every packet transmitted by the target. You can then choose whether you want to route them or not.
If you choose not to forward the packets, the connection to the internet is cut off.
If you want to forward the packets to the actual router (in order to allow the target to access the internet) you must enable IP Forwarding on your machine.
You can do this by running echo 1 >> /proc/sys/net/ipv4/ip_forward on Linux or changing the Registry Key in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter on Windows ('1' forwards the packets, '0' doesn't). By default IP forwarding is set to 0.
Remember you must resend the malicious ARP packet every couple of seconds as the ARP tables get updated quite frequently. This means you don't necessarily have to change the IP Forwarding configuration on your machine. After a minute or less of exiting the script the target's ARP table will go back to normal, giving them access to the internet again.
Here are some python modules you might want to take a look at:
Scapy (Packet Manipulation Tool)
winreg (Windows Registry) | 0 | 1,760 | 0 | 1 | 2016-04-26T19:23:00.000 | python,networking,network-programming | How can I block Internet access for a certain IP in my local network in python? | 1 | 2 | 2 | 70,466,752 | 0 |
0 | 0 | I am writing a chat application using crossbar.io. We have several chat nodes.
I need to write statistics about each of the nodes, which is why I need to get the host name where a specific node is running.
Is it possible to get the host name from a component instance?
I use the latest version of crossbar/autobahn and Python 3.4.
I expect to get 127.0.0.1 if I use a local environment. | false | 36,874,061 | 0.197375 | 0 | 0 | 2 | In case your machine has a resolvable hostname, try:
import socket
socket.gethostbyname(socket.getfqdn())
Update: this is a more complete solution, which should work fine on any OS:
import socket

ips = [ip for ip in socket.gethostbyname_ex(socket.gethostname())[2]
       if not ip.startswith('127.')]
if ips:
    print(ips[0])
else:
    # fall back to the local address a UDP socket towards a public host would use
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(('8.8.8.8', 53))
    print(s.getsockname()[0])
    s.close() | 0 | 121 | 0 | 1 | 2016-04-26T19:27:00.000 | python-3.x,wamp,autobahn,crossbar | How to get Crossbar.io host name? | 1 | 1 | 2 | 36,875,764 | 0
0 | 0 | I am 'kind of' new to programming and must have searched a large chunk of the web in connection with this question. I am sure the answer is somewhere out there but I am probably simply not using the right terminology to find it. Nevertheless, I did my best and I am totally stuck. I hope people here understand the feeling and won't mind helping.
I am currently working on a data driven web app that I am building together with an outsourced developer while also learning more about programming. I've got some rusty knowledge of it but I've been working in business-oriented non-technical roles for a few years now and the technical knowledge gathered some dust.
The said web app uses MySql database to store information. In this MySql database there is currently a table containing 200,000 variables (Company Names). I want to run those Company Names through a third-party json RESTful API to return some additional data regarding those Companies.
There are 2 questions here and I don't expect straight answers. Pointing me in the right learning direction would be sufficient:
1. How would I go about taking those 200,000 variables and executing a script that automatically makes 200,000 calls to the API to obtain the data I am after? How do I then save this data to a JSON or CSV file to import into MySQL? I know how to make single API requests using curl, but making automated large-volume requests like that is a mystery to me. I don't know whether I should create a JSON file out of it or somehow queue the requests; I am lost.
2. The API mentioned above is limited to 600 calls per 5-minute period. How do I introduce some sort of control system so that when the maximum number of API calls is reached, the script pauses and only resumes once the specified amount of time has passed? What language is best for interacting with the JSON RESTful API and writing the script described in question 1?
Thank you for your help.
Kam | true | 36,901,709 | 1.2 | 0 | 0 | 0 | You don't mention what server-side language you're using, but the concepts are the same for all of them: run your query to get the 200K variables, loop through the result set, make the API call for each one, store the results in an array, JSON-encode the array at the end of the loop, and then dump the result to a file. As for the limit on requests per time period, most languages have some sort of pause function; in PHP it's sleep(). If all else fails, you could put a loop that does nothing (except take time) into each call to introduce a delay. | 0 | 392 | 0 | 0 | 2016-04-27T22:01:00.000 | jquery,python,mysql,api,scripting | Most effective way to run execute API calls | 1 | 1 | 1 | 36,901,878 | 0
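A hedged Python sketch of the loop described in the answer above (the answer talks about PHP, but the idea is identical); the API URL, parameters and rate-limit handling are assumptions for illustration:
import json
import time
import requests
API_URL = "https://api.example.com/company"   # placeholder endpoint
MAX_CALLS = 600                               # allowed calls per window
WINDOW = 5 * 60                               # window length in seconds
def fetch_all(company_names):
    results = []
    calls_in_window = 0
    window_start = time.time()
    for name in company_names:
        if calls_in_window >= MAX_CALLS:
            # Sleep out the remainder of the 5-minute window, then reset the counter.
            elapsed = time.time() - window_start
            if elapsed < WINDOW:
                time.sleep(WINDOW - elapsed)
            calls_in_window = 0
            window_start = time.time()
        resp = requests.get(API_URL, params={"name": name}, timeout=30)
        resp.raise_for_status()
        results.append(resp.json())
        calls_in_window += 1
    return results
# company_names would come from your MySQL table; dump the results for import, e.g.:
# with open("companies.json", "w") as f:
#     json.dump(fetch_all(company_names), f)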
0 | 0 | I know writing print "website url" does not provide a clickable url. So is it possible to get a clickable URL in python, which would take you directly to that website? And if so, how can it be done? | false | 36,905,809 | 0 | 0 | 0 | 0 | This is typically a function of the terminal that you're using.
In iTerm2, you can open links with Cmd+Alt+left click.
In gnome-terminal, you can open links with Ctrl+left click, or by right-clicking and choosing "Open Link".
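As an aside that is not part of the original answer: if the goal is simply to send the user to the site from Python, the standard-library webbrowser module can open the URL directly, independent of any terminal link handling.
import webbrowser
webbrowser.open("http://example.com")   # opens the URL in the default browser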
0 | 0 | I am working on a school project and had an idea which I think will benefit me outside of school and goes a bit beyond what school requires from me.
That is why I have a bit of a knowledge gap around threads and dealing with multiple clients at once.
I had a few ideas, such as using UDP, waiting for 2 connections and handling each one, but it made my code really messy, hard to follow and not at all efficient.
I would like to know if there is a good way to handle such a problem, and how. | false | 36,926,832 | 0 | 0 | 0 | 0 | If you are the host, then you are creating a new socket for each new client. With that in mind, you can write a program that listens for connections and then creates a new thread for each connection (to a client). Each thread can do multiple tasks, control its socket and/or exchange data with the main thread.
The same applies when you are the client: you can create a new thread for each new connection.
I hope that helps. | 1 | 54 | 0 | 0 | 2016-04-28T22:55:00.000 | python,multithreading,sockets,udp,ports | Exchange data between 2 connections efficently python sockets | 1 | 1 | 1 | 36,927,011 | 0 |
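A minimal sketch of the thread-per-client idea from the answer above; the host, port and relay logic are illustrative assumptions, not code from the original post.
import socket
import threading
clients = []                  # sockets of currently connected clients
lock = threading.Lock()
def handle(conn):
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # Relay whatever one client sends to every other connected client.
            with lock:
                for other in clients:
                    if other is not conn:
                        other.sendall(data)
    with lock:
        clients.remove(conn)
def serve(host="0.0.0.0", port=5000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            with lock:
                clients.append(conn)
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
if __name__ == "__main__":
    serve()   # connect two clients (e.g. with netcat) and they exchange data through the server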
0 | 0 | I was using the browsercookie library and it was awesome. However, at some point it just stopped working and I cannot see why.
Basically, it throws an error saying it cannot locate the cookie file at /Users/UserName/Library/Application Support/Google/Chrome/Default/Cookies
Searching Google and Stack Overflow does not give a hint about where to look for the error. I would appreciate any help.
Mac OS 10.11.3, Chrome Version 50.0.2661.86 (64-bit), python2.7, pysqlite preinstalled. | true | 36,935,030 | 1.2 | 0 | 0 | 0 | The solution turned out to be tricky: when you add or change profiles, Chrome sometimes changes the folder where it stores cookies. In my case the fix was to change the "Default" part of the cookie path to "Profile 2". | 0 | 910 | 0 | 0 | 2016-04-29T09:47:00.000 | python-2.7,google-chrome,cookies | BrowserCookieError: Can not find cookie file | 1 | 1 | 1 | 36,935,556 | 0
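A small hedged helper in the spirit of that fix: list which Chrome profile folders on macOS actually contain a Cookies file, so you can see whether it moved from "Default" to e.g. "Profile 2" (paths are assumptions; whether you can pass the discovered path to browsercookie depends on your library version).
import glob
import os
base = os.path.expanduser("~/Library/Application Support/Google/Chrome")
candidates = glob.glob(os.path.join(base, "*", "Cookies"))
print(candidates)   # e.g. .../Default/Cookies or .../Profile 2/Cookies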
1 | 0 | I hosted a Python/Flask web service on my Amazon (AWS) EC2 instance and modified the security group rules so that all inbound traffic is allowed.
I can log in over SSH, and ping (with the public IP) works fine, but I can't open the service URL from a web browser. Could anyone please suggest how I can debug this issue?
Thanks, | false | 36,961,672 | 0.379949 | 0 | 0 | 2 | It seems that the web service isn't up and running, isn't listening on the right port, or is listening only on the 127.0.0.1 address. Check it with the 'sudo netstat -tnlp' command; you should see the process name and the IP and port it is listening on. | 0 | 543 | 0 | 0 | 2016-05-01T00:22:00.000 | web-services,amazon-web-services,amazon-ec2,flask-sqlalchemy,python-webbrowser | Web service hosted on EC2 host is not reachable from browser | 1 | 1 | 1 | 36,962,685 | 0
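If netstat shows the app bound only to 127.0.0.1, a common fix (an assumption on my part, not stated in the answer) is to bind Flask to all interfaces:
from flask import Flask
app = Flask(__name__)
@app.route("/")
def index():
    return "hello"
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)   # reachable from outside, provided the security group allows this port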
0 | 0 | My main Question:
How do I switch pages?
I did some things on a page and then switched to another one;
how do I update the driver to point at the current page? | false | 36,978,007 | 0.379949 | 0 | 0 | 2 | With .get(url), just like you got to the first page. | 0 | 108 | 0 | 0 | 2016-05-02T08:19:00.000 | python,selenium,switch-statement | Selenium Python, New Page | 1 | 1 | 1 | 36,978,124 | 0
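A tiny sketch of the answer above (the URLs are placeholders): the same driver object simply follows you to whatever page you load next.
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://example.com/first")
# ... interact with the first page ...
driver.get("https://example.com/second")   # the driver now represents the new page
print(driver.current_url)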
0 | 0 | In my Python application, the browser sends a request to my server to fetch certain information. I need to track the IP of the source the request was made from. Normally, I am able to fetch that info with this call:
request.headers.get('Remote-Addr')
But when I deploy the application behind a load balancer like HAProxy, the IP given is that of the load balancer and not the browser's.
How do I obtain the IP of the browser at my server when it's behind a load balancer?
Another problem in my case is that I am using a TCP connection from the browser to my server via HAProxy, not HTTP. | true | 36,997,104 | 1.2 | 0 | 0 | 3 | I had this issue with AWS ELB and Apache. The solution was mod_rpaf, which reads the X-Forwarded-For header and copies it into the standard remote-address field.
You should check that HAProxy is setting the X-Forwarded-For header (which contains the real client IP). You can then use mod_rpaf or another technique to read the real IP. | 0 | 242 | 0 | 4 | 2016-05-03T06:26:00.000 | python,ip,haproxy | How to track IP of the browser when there is a load balancer in the middle | 1 | 1 | 1 | 36,997,238 | 0
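A hedged sketch of reading the forwarded client IP in a Flask-style app, assuming the load balancer adds X-Forwarded-For (the header name is the usual convention; whether HAProxy sets it depends on your frontend/backend configuration):
from flask import Flask, request
app = Flask(__name__)
@app.route("/ip")
def client_ip():
    forwarded = request.headers.get("X-Forwarded-For", "")
    # The left-most entry is the original client when the header can be trusted.
    return forwarded.split(",")[0].strip() if forwarded else request.remote_addr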
0 | 0 | I'm using the Google Analytics API to build a Python program.
For now it's capable of making specific queries, but...
Is it possible to obtain one large JSON with all the data in a Google Analytics account?
I've been searching and haven't found any answer.
Does anyone know if it's possible, and how? | false | 36,998,698 | 0 | 0 | 1 | 0 | Google Analytics stores a ton (technical term) of data; there are a lot of metrics and dimensions, and some of them (such as the users metric) have to be calculated specifically for every query. It's easy to underestimate the flexibility of Google Analytics, but the fact that a carefully defined segment can be applied to three-year-old data in real time means the data is stored in a horrendously complicated format, which is kept away from you for proprietary purposes.
So the data set would be vast and incomprehensible. On top of that, there would be serious privacy ramifications because of the way Google stores the data (an issue they can circumvent as long as you can only access the data through their protocols).
Short answer: you can take as much data as you can accurately describe and ask for, but there's no 'download all' button. | 0 | 488 | 0 | 1 | 2016-05-03T07:59:00.000 | python,google-analytics,google-api,google-analytics-api | Query all data of a Google Analytcs account | 1 | 1 | 2 | 37,029,398 | 0
1 | 0 | I keep getting SSLError: ('bad handshake SysCallError(0, None)) anytime I try to make a request with python requests in my django app.
What could possibly be the issue? | true | 37,009,692 | 1.2 | 0 | 0 | 6 | I did a bunch of things, but I believe pip uninstall pyopenssl did the trick. | 0 | 1,752 | 0 | 3 | 2016-05-03T16:38:00.000 | python,django,python-requests | SSL Error: Bad handshake | 1 | 1 | 1 | 38,236,543 | 0
1 | 0 | I am trying to automate a process in which a user goes on a specific website, clicks a few buttons, selects the same values on the drop down lists and finally gets a link on which he/she can then download csv files of the data.
The third-party vendor does not have an API. How can I automate such a step?
The data I am looking for is processed by the third party and not available on the screen at any given point. | false | 37,010,482 | 0.197375 | 0 | 0 | 1 | Generally, you can inspect the web traffic to figure out what kind of request is being sent, e.g. with the Tamper Data plugin for Firefox or the Firebug Net panel.
Figure out what the browser is sending (e.g. a POST request to the server), which will include all the form data from buttons and dropdowns, and then replicate that in your own code using Apache HttpClient, jsoup or another HTTP client library. | 0 | 218 | 0 | 1 | 2016-05-03T17:20:00.000 | java,python,automation,web,automated-tests | How to automate user clicking through third-party website to retrieve data? | 1 | 1 | 1 | 37,010,752 | 0
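A hedged Python sketch of replaying such a request with the requests library (the question is tagged both java and python; the URL, form-field names and values are placeholders you would copy from the network panel):
import requests
session = requests.Session()
session.post("https://example.com/login", data={"user": "me", "pass": "secret"})
resp = session.post("https://example.com/export", data={"report": "monthly", "format": "csv"})
with open("report.csv", "wb") as f:
    f.write(resp.content)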
0 | 0 | I have not seen any questions about packet filters in Python, and I am wondering if it's possible to build one at all.
Is there any way to build a custom firewall in Python? For example, null-routing specific IPs, or blocking them when a request-rate limit is exceeded within 5 seconds.
What modules would it need, and would it be especially difficult? Is Python useful for things like firewalls?
Also, would it be possible to add powerful protection, so it can filter packets on all layers?
I'm not asking for a script or an exact tutorial; my condensed questions:
How feasible would it be to build a firewall in Python? Could I make it powerful enough to filter packets on all layers? Would it be easy to build a simple firewall? | false | 37,011,901 | 0.197375 | 1 | 0 | 1 | Yes, that would be possible; Python has a lot of networking support (I would start with the socket module — see the docs for that).
I would not say it will be easy or something you can build in a single weekend, but you should give it a try and spend some time on it! | 0 | 1,306 | 0 | 0 | 2016-05-03T18:38:00.000 | python,network-programming | Packet filter in Python? | 1 | 1 | 1 | 37,013,135 | 0
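A hedged sketch of the filtering idea (not a full firewall): count packets per source IP over a 5-second window with Scapy and flag offenders; actually dropping traffic would still need an OS hook such as netfilter/iptables. The threshold and window are assumptions.
import time
from collections import defaultdict
from scapy.all import sniff, IP
WINDOW = 5          # seconds
LIMIT = 100         # packets per window before a source is flagged
counts = defaultdict(int)
window_start = time.time()
def inspect(pkt):
    global window_start
    now = time.time()
    if now - window_start > WINDOW:
        counts.clear()
        window_start = now
    if IP in pkt:
        src = pkt[IP].src
        counts[src] += 1
        if counts[src] == LIMIT:
            print("rate limit exceeded by", src)   # here you would null-route or block the source
if __name__ == "__main__":
    sniff(prn=inspect, store=False)   # requires root privileges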
0 | 0 | I would like to create an AMI for an instance from one volume only, not all volumes, using boto. I have a script to automate AMI creation for a couple of instances. However, one of the instances has a huge volume used for backups (no worries about that data). We would like the AMI creation to snapshot only the root volume, not the other volumes. Is there any way to do this? | false | 37,018,770 | 0 | 0 | 0 | 0 | If the backup volume is a separate EBS/SSD volume, you have a chance to create a small snapshot from just your root volume. You only need to unmount it at the OS level (e.g. in Linux).
Load the instance OS
Unmount the huge volume from the OS level
Shut down the OS
Take a snapshot
Reload the instance
Mount the volume back
However, this will not work if your backup volume is also part of your root instance volume.
Important note: DO NOT run any AWS detach-volume command. An OS unmount is not the same as an AWS detach-volume. | 0 | 246 | 0 | 0 | 2016-05-04T04:55:00.000 | python,boto | How to Create AMI for 1 volume using boto? | 1 | 1 | 1 | 37,026,741 | 0
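As an aside that goes beyond the answer above (my assumption, shown with boto3 rather than legacy boto): the EC2 CreateImage call can also exclude a data volume from the AMI via a "NoDevice" block-device mapping. The instance ID and device name are placeholders.
import boto3
ec2 = boto3.client("ec2")
ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="root-volume-only-ami",
    BlockDeviceMappings=[{"DeviceName": "/dev/sdf", "NoDevice": ""}],   # skip the backup volume
)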
0 | 0 | Is it possible to use an HTTP GET request to stream YouTube videos?
I've looked at the Google YouTube API docs; it's not clear that this can be done. There are packages like pytube, but they are meant to be used directly, not through raw HTTP requests.
Any info would be appreciated. | true | 37,054,317 | 1.2 | 0 | 0 | 1 | You'll have to reverse-engineer YouTube's player code in order to stream it yourself, and it would not necessarily be possible with plain HTTP alone. | 0 | 189 | 0 | 0 | 2016-05-05T15:20:00.000 | python,youtube | Python - streaming Youtube videos using GET | 1 | 1 | 1 | 37,054,362 | 0