Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | I am trying to scrape a website where the targeted items are populated using the document.write method. How can I get the full browser-rendered HTML version of the website in Scrapy? | true | 22,757,755 | 1.2 | 0 | 0 | 2 | You can't do this directly, as Scrapy will not execute the JavaScript code.
What you can do:
Rely on a browser driven by Selenium, which will execute the JavaScript. Afterwards, use XPath (or simple DOM access) as before to query the web page once it has been rendered.
Understand where the contents come from, and load and parse that source directly instead. Chrome Dev Tools / Firebug might help you with that; have a look at the "Network" panel, which shows the fetched data.
Especially look for JSON, sometimes also XML. | 0 | 428 | 0 | 0 | 2014-03-31T09:14:00.000 | python,web-scraping,scrapy | Scrapy: scraping website where targeted items are populated using document.write | 1 | 1 | 1 | 22,757,917 | 0 |
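As a concrete illustration of the Selenium suggestion in the answer above, here is a minimal sketch (not taken from the original answer): it drives Firefox through Selenium so the JavaScript runs, then queries the rendered HTML with a Scrapy selector. The URL and the XPath are placeholders.

```python
from selenium import webdriver
from scrapy.selector import Selector

driver = webdriver.Firefox()          # executes JavaScript, unlike Scrapy's downloader
driver.get("http://example.com/page-built-with-document-write")  # placeholder URL
html = driver.page_source             # HTML after document.write has run

sel = Selector(text=html)
items = sel.xpath("//div[@class='item']/text()").extract()       # placeholder XPath
print(items)

driver.quit()
```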
0 | 0 | I'm creating a unittest- and Selenium-based test suite for a web application. It is reachable by several hostnames, e.g. implying different languages; but of course I want to be able to test e.g. my development instances as well without changing the code (and without fiddling with the hosts file which doesn't work for me anymore, because of network security considerations, I suppose).
Thus, I'd like to be able to specify the hostname by commandline arguments.
The test runner does argument parsing itself, e.g. for choosing the tests to execute.
What is the recommended method to handle this situation? | true | 22,805,650 | 1.2 | 1 | 0 | 0 | The solution I came up with finally is:
Have a module for the tests which fixes the global data, including the hostname, and provides my TestCase class (I added an assertLoadsOk method to simply check for the HTTP status code).
This module does commandline processing as well:
It checks for its own options
and removes them from the argument vector (sys.argv).
When finding an "unknown" option, stop processing the options, and leave the rest to the testrunner.
The commandline processing happens on import, before initializing my TestCase class.
It works well for me ... | 0 | 94 | 0 | 0 | 2014-04-02T08:30:00.000 | selenium,python-unittest | How to pass an argument (e.g. the hostname) to the testrunner | 1 | 1 | 1 | 22,939,972 | 0 |
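A minimal sketch of the approach described in the answer above, assuming a hypothetical --hostname option: parse_known_args() pulls out our own options and hands everything it does not recognise back to the unittest runner.

```python
import argparse
import sys
import unittest

parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("--hostname", default="www.example.com")   # hypothetical option
args, remaining = parser.parse_known_args()

HOSTNAME = args.hostname    # module-level "global data" used by the TestCase classes

class LoadTest(unittest.TestCase):
    def test_homepage(self):
        url = "http://%s/" % HOSTNAME
        # placeholder: drive Selenium against `url` and assert the page loads

if __name__ == "__main__":
    # give the test runner only the arguments it understands
    unittest.main(argv=[sys.argv[0]] + remaining)
```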
0 | 0 | I want to take input from a webpage, parse the submitted data using Python or Tcl, and start the script execution based on the inputs given.
Please suggest a solution for how this can be done.
I am not sure whether a web server needs to be started for this.
Thanks in Advance.
Regards,
Surya | false | 22,807,281 | 0 | 0 | 0 | 0 | Surya,
you should have a look at the ncgi and htmlparse packages in tcllib to extract the information you need.
Joachim | 0 | 514 | 0 | 0 | 2014-04-02T09:35:00.000 | python-2.7,tcl | Read input from the webpage, parse the data submitted using python or tcl and start the script execution based on the inputs given | 1 | 1 | 2 | 22,811,026 | 0 |
1 | 0 | I'm trying to create a bot for an online game. The values for the game are stored in Javascript variables, which I can access. However, running my bot code in Javascript freezes the browser, since my code is the only thing that executes.
I'm trying to code my bot in Python, then, since it can run synchronously with the browser. How can I pass the Javascript variables to a client-side Python program? | false | 22,903,625 | 0.197375 | 0 | 0 | 1 | I'm trying to code my bot in Python, then, since it can run synchronously with the browser. How can I pass the JavaScript variables to a client-side Python program?...
You can pass the JavaScript variables to the server via the query string.
I built the server in CherryPy (an object-oriented web application framework for Python) and the client side as an HTML file.
To repeat: the data can only be passed through the query string, because the server works statically while the client works dynamically.
That is a crude way of putting it, but it is how a typical client/server exchange works:
the server receives a call or message once, performs the service and returns a response.
I may be wrong; this is only my opinion.
There are also Mako Templates, with which you can include HTML pages (helpful for building the structure of the site) or pass variables from the server to the client.
I do not know of any program or language that lets you push a JavaScript variable to the server directly (I tried with Mako Templates, but it did not work). | 0 | 672 | 0 | 1 | 2014-04-07T04:29:00.000 | javascript,python,parameter-passing | Reading Javascript Variable in Python | 1 | 1 | 1 | 22,907,980 | 0 |
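To make the query-string idea from the answer above concrete, here is a minimal CherryPy sketch (not from the original answer): the browser-side JavaScript would simply request /save?score=42&level=3, and CherryPy delivers the query-string values as keyword arguments. All names and values are placeholders.

```python
import cherrypy

class Root(object):
    @cherrypy.expose
    def save(self, score=None, level=None):
        # `score` and `level` arrive from the JavaScript side via the query string
        print("got score=%s level=%s" % (score, level))
        return "ok"

if __name__ == "__main__":
    cherrypy.quickstart(Root())
```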
0 | 0 | I created a simple threaded python server, and I have two parameters for format, one is JSON (return string data) and the other is zip. When a user selects the format=zip as one of the input parameters, I need the server to return a zip file back to the user. How should I return a file to a user on a do_GET() for my server? Do I just return the URL where the file can be downloaded or can I send the file back to the user directly? If option two is possible, how do I do this?
Thank you | true | 22,910,772 | 1.2 | 0 | 0 | 0 | The issue was that I hadn't closed the zipfile object before I tried to return it. It appeared there was a lock on the file.
To return a zip file from a simple HTTP Python server using GET, you need to do the following:
Set the header to 'application/zip':
self.send_header("Content-Type", "application/zip")
Create the zip file using the zipfile module.
Using the file path (e.g. c:/temp/zipfile.zip), open the file in 'rb' mode to read the binary information:
openObj = open( < path > , 'rb')
Write the object's contents back to the browser, and only close it afterwards:
self.wfile.write(openObj.read())
openObj.close()
del openObj
That's about it. Thank you all for your help. | 1 | 257 | 0 | 0 | 2014-04-07T11:14:00.000 | python,json,rest,python-2.7,simplehttpserver | Issue with Python Server Returning File On GET | 1 | 1 | 2 | 22,915,936 | 0 |
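Putting the corrected steps together, a minimal Python 2 sketch of a do_GET handler that streams an existing zip file back to the browser (the path and port are placeholders):

```python
import BaseHTTPServer

class ZipHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        path = "c:/temp/zipfile.zip"          # placeholder path
        self.send_response(200)
        self.send_header("Content-Type", "application/zip")
        self.send_header("Content-Disposition", 'attachment; filename="zipfile.zip"')
        self.end_headers()
        with open(path, "rb") as f:           # read first, the file is closed afterwards
            self.wfile.write(f.read())

if __name__ == "__main__":
    BaseHTTPServer.HTTPServer(("", 8000), ZipHandler).serve_forever()
```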
0 | 0 | I'm trying to send this string on Python SQS:
"Talhão", with no quotes.
How do I do that?
Thanks! | false | 22,939,822 | 0 | 0 | 0 | 0 | install the newest version of boto (2.27 or more, lower versions have an issue with unicode strings)
send it as unicode, and you will succeed | 0 | 95 | 0 | 0 | 2014-04-08T14:18:00.000 | python,amazon-web-services,boto,amazon-sqs | Amazon SQS Python/boto: how do I send messages with accented characters? | 1 | 1 | 1 | 23,070,269 | 0 |
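A minimal boto 2.x sketch of that advice (the region and queue name are placeholders); the message body is passed as a unicode string:

```python
# -*- coding: utf-8 -*-
import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region("us-east-1")   # placeholder region
queue = conn.get_queue("my-queue")               # placeholder queue name

msg = Message()
msg.set_body(u"Talhão")                          # unicode body, no extra quoting
queue.write(msg)
```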
1 | 0 | Is there a way to send data packets from an active Python script to a webpage currently running JavaScript?
The specific usage I'm looking for is to give the ability for the webpage, using JavaScript, to tell the Python script information about the current state of the webpage, then for the Python script to interpret that data and then send data back to the webpage, which the JavaScript then uses to decide which function to execute.
This is for a video game bot (legally), so it would need to happen in real time. I'm fairly proficient in Python and web requests, but I'm just getting into JavaScript, so hopefully a solution for this wouldn't be too complex in terms of Javascript.
EDIT: One way I was thinking to accomplish this would be to have Javascript write to a file that the Python script could also read and write to, but a quick google search says that JavaScript is very limited in terms of file I/O. Would there be a way to accomplish this? | false | 22,950,275 | 0 | 0 | 0 | 0 | For security reasons, javascript in a browser is usually restricted to only communicate with the site it was loaded from.
Given that, that's an AJAX call, a very standard thing to do. | 0 | 645 | 0 | 0 | 2014-04-08T23:41:00.000 | javascript,python | Using Python to communicate with JavaScript? | 1 | 1 | 3 | 22,950,323 | 0 |
0 | 0 | I've written a script that pulls data from my school's website and I'm having some trouble with execution time.
There are over 20 campuses, each with data for three semesters. The script looks up those school names, then the semesters available for each school, then the subjects/departments that are offering classes each semester. Then the script searches for the classes per department and then I do things with that data.
I timed the execution of the script on just one campus, and it ran for over three minutes. When I ran it for all 24 campuses it took over an hour. I'm using the "requests" library, which runs each HTTP request synchronously; I chose it primarily because it handles sessions nicely.
I'm looking for ways to bring down the time the script takes to run, by making the various requests for each semester run in parallel. I suspect that if I run three semesters asynchronously, then each school should take a minute, instead of three. Then, I can run all schools in parallel and achieve the same minute for all of the data. A minute is a lot less than an hour and a quarter!
Am I wrong in my guess that multithreading/processing will bring down the execution time so drastically? What Python libraries should I be using for threads or processes?
Once I've got each school being processed on a thread, how do I consolidate the data from all the schools into one place? I know that it's considered poor practice for threads to alter global state, but what's the alternative here? | false | 23,014,432 | 0 | 0 | 0 | 0 | In python you want multiprocessing over multithreading. Threads don't do well in Python beacause of the GIL. | 1 | 67 | 0 | 0 | 2014-04-11T13:46:00.000 | python,multithreading,performance,multiprocessing | Performance Improvements with Processes or Threads | 1 | 1 | 3 | 23,014,920 | 0 |
1 | 0 | Is there a way to pass formdata in a scrapy shell?
I am trying to scrape data from an authenticated session, and it would be nice to check xpaths and so on through a scrapy shell. | false | 23,023,294 | 0 | 0 | 0 | 0 | one workaround is to first login using scrapy (using FormRequest) and then invoke inspect_response(response) in the parse method | 0 | 407 | 0 | 1 | 2014-04-11T21:59:00.000 | python,web-scraping,scrapy | Pass username/password (Formdata) in a scrapy shell | 1 | 1 | 1 | 23,269,542 | 0 |
0 | 0 | I am new to selenium.
I found Selenium would not use my local Firefox browser. It seems to create a fresh one with no plugins.
But I want to do something with plugins on, such as modifying request headers or auto-proxy. I only found examples of setting headers in Java. Although a proxy can be set using webdriver.FirefoxProfile().set_preference('network.proxy.http', ...), it does not quite fit my aim.
So I think it would be very nice to make selenium use my firefox. But I can not figure it out. | false | 23,027,988 | 0.099668 | 0 | 0 | 1 | Selenium cannot connect to an existing browser. It can only launch new instances. | 0 | 97 | 0 | 0 | 2014-04-12T08:19:00.000 | python,firefox,selenium | How can I make Selenium use my firefox (not create a fresh one) | 1 | 1 | 2 | 23,028,084 | 0 |
0 | 0 | I know the httpserver module in tornado is implemented based on the tcpserver module, so I can write a socket server based on tornado. But how can I write a server that is both a socket server and a web server?
For example, if I want to implement a chat app. A user can either login through a browser or a client program. The browser user can send msg to the client user through the back-end server. So the back-end server is a web and socket server. | true | 23,028,941 | 1.2 | 0 | 0 | 3 | You can start multiple servers that share an IOLoop within the same process. Your HTTPServer could listen on one port, and the TCPServer could listen on another. | 0 | 1,209 | 1 | 2 | 2014-04-12T10:07:00.000 | python,sockets,web,tornado | How to use tornado as both a socket server and web server? | 1 | 1 | 1 | 23,031,157 | 0 |
0 | 0 | Is there a way to pull traffic information from a particular youtube video (demographics for example, age of users, country, gender, etc.), say using the python gdata module or the youtube API? I have been looking around the module's documentation, but so far nothing. | false | 23,037,748 | 0 | 0 | 0 | 0 | There used to be a large selection of things in the past, but they are all gone now. So do not think it is possible any more. | 0 | 182 | 0 | 1 | 2014-04-13T00:25:00.000 | python,youtube-api,gdata | YouTube API retrieve demographic information about a video | 1 | 1 | 1 | 23,088,868 | 0 |
1 | 0 | I have to copy a part of one document to another, but I don't want to modify the document I copy from.
If I use .extract() it removes the element from the tree. If I just append selected element like document2.append(document1.tag) it still removes the element from document1.
As I use real files I can just not save document1 after modification, but is there any way to do this without corrupting a document? | false | 23,057,631 | 1 | 0 | 0 | 8 | It may not be the fastest solution, but it is short and seems to work...
clonedtag = BeautifulSoup(str(sourcetag)).body.contents[0]
BeautifulSoup creates an extra <html><body>...</body></html> around the cloned tag (in order to make the "soup" a sane html document). .body.contents[0] removes those wrapping tags.
This idea was derived from Peter Woods' comment above and Clemens Klein-Robbenhaar's comment below. | 0 | 10,039 | 0 | 21 | 2014-04-14T10:21:00.000 | python,beautifulsoup | clone element with beautifulsoup | 1 | 1 | 3 | 27,881,018 | 0 |
0 | 0 | How do I send an XML file in an HTTP request (to a web URL) in Python 2.7, 3.3 or 3.4, and what packages need to be installed on Ubuntu? | true | 23,075,224 | 1.2 | 0 | 0 | 0 | import requests
url = "http://192.168.6.x:8089/test/request?"  # the URL needs an http:// scheme
files = """ """  # XML payload string
r = requests.post(url, str(files))
print(r.text)
print(r.elapsed) | 0 | 168 | 0 | 0 | 2014-04-15T05:21:00.000 | python,http | How do i send xml file to http request in python and what are the packages that need to be installed in ubuntu | 1 | 1 | 1 | 23,190,634 | 0 |
1 | 0 | I am trying to access data from Google Analytics. I am following the guide and am able to authorize my user and get the code from OAuth.
When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in the Google API console to my Analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back?
I am doing this in Python with Django and I have the Analytics API turned on in my API console! | false | 23,128,964 | 0 | 0 | 0 | 0 | You should use the View ID, not the account ID. To find the 'View ID', you can go to:
Admin -> Select Site -> under "View" -> View Settings. If that doesn't work,
you can go to: Admin -> Profiles -> Profile Settings | 0 | 5,740 | 0 | 3 | 2014-04-17T09:07:00.000 | python,django,python-2.7,google-analytics-api,http-status-code-403 | Google Analytics reports API - Insufficient Permission 403 | 1 | 2 | 3 | 29,837,598 | 0 |
1 | 0 | I am trying to access data from Google Analytics. I am following the guide and am able to authorize my user and get the code from OAuth.
When I try to access data from GA I only get 403 Insufficient Permission back. Do I somehow have to connect my project in the Google API console to my Analytics project? How would I do this? Or is there some other reason why I get 403 Insufficient Permission back?
I am doing this in Python with Django and I have the Analytics API turned on in my API console! | true | 23,128,964 | 1.2 | 0 | 0 | 9 | Had the same problem, but it is now solved.
Use the View ID, not the account ID; the 'View ID' can be found in the Admin -> Profiles -> Profile Settings tab.
UPDATE
Now, if you have more than one account, you must go: Admin -> Select account -> under View -> click on View Settings | 0 | 5,740 | 0 | 3 | 2014-04-17T09:07:00.000 | python,django,python-2.7,google-analytics-api,http-status-code-403 | Google Analytics reports API - Insufficient Permission 403 | 1 | 2 | 3 | 24,274,077 | 0 |
0 | 0 | I am using Python 3.4 on a Linux PC. I have written a program to access ftp page using ftplib module and download a file from it on my pc.
I want to know the total network data transfer (including both sent and received) that happened in this process?
How should I go about it?
Any leads will be helpful. | false | 23,130,855 | 0 | 0 | 0 | 0 | Use wireshark, you have to setup a filter for ftp (filter like: port ftp). Then
start sniffing the transfer traffic. When the transfer is done you can see the
captured packets in the main window. You have to right click on the first ftp packet
and then select "Follow TCP stream". Then you should be able to view all your transfered
bytes and such. This works also with tcpdump and such but it is commandline only.
Kind regards,
Dirk | 0 | 47 | 0 | 1 | 2014-04-17T10:36:00.000 | python,linux,ftplib | Get data transfered in Python Modules | 1 | 1 | 1 | 23,132,117 | 0 |
0 | 0 | I am using asynchronous socket API in C#. In the client side, I need a buffer to store the binary data read from the server. And other client logic will check the buffer, unpack the head to see the length, if the length is less than that indicated by the header, continue. And next time we check the buffer again. For the network logic, I need to maintain this buffer, and I want to know what data type should I use.
In Python we use a string as a buffer, but I don't think this is going to work in C#: it would be inefficient, there are encoding problems (I need to parse the binary data myself, not necessarily as a string), and the buffer changes frequently. What about StringBuilder? Any other suggestions? | false | 23,146,345 | 0.379949 | 0 | 0 | 2 | I would use byte[]. It will get the job done. | 1 | 116 | 0 | 0 | 2014-04-18T02:06:00.000 | c#,python,sockets,io | What data type is good for a IO buffer in C# | 1 | 1 | 1 | 23,146,561 | 0 |
1 | 0 | I'm trying to implement a secure google cloud endpoint in python for multi-clients (js / ios / android)
I want my users to be able to log by three ways loginForm / Google / Facebook.
I read a lot of documentation about that, but I didn't really understand how I have to handle the connection flow and session (or something else) to keep my users logged in.
I'm also looking for a way to debug my endpoint by displaying objects like Request, for example.
If someone knows a good tutorial talking about that, it will be very helpful.
thank you | true | 23,154,120 | 1.2 | 0 | 0 | 0 | For request details, add 'HttpServletRequest' (java) to your API function parameter.
For Google authentication, add 'User' (java) to your API function parameter and integrate with Google login on client.
For twitter integration, use Google app-engine OpenID.
For facebook/loginForm, its all on you to develop a custom auth. | 0 | 95 | 0 | 1 | 2014-04-18T12:26:00.000 | python,facebook,google-app-engine,authentication,google-cloud-endpoints | google endpoint custom auth python | 1 | 1 | 1 | 23,223,929 | 0 |
1 | 0 | I would like to have all the text visible from a website, after the HTML is rendered. I'm working in Python with Scrapy framework.
With xpath('//body//text()') I'm able to get it, but with the HTML tags, and I only want the text. Any solution for this? | false | 23,156,780 | 0.066568 | 0 | 0 | 1 | The xpath('//body//text()') doesn't always drive dipper into the nodes in your last used tag(in your case body.) If you type xpath('//body/node()/text()').extract() you will see the nodes which are in you html body. You can try xpath('//body/descendant::text()'). | 0 | 27,552 | 0 | 23 | 2014-04-18T15:03:00.000 | python,html,xpath,web-scraping,scrapy | How can I get all the plain text from a website with Scrapy? | 1 | 1 | 3 | 50,809,934 | 0 |
0 | 0 | I am running python 2.7 + bottle on cygwin and I wanted to access a sample webpage from chrome.
I am unable to access the website running on http://localhost:8080/hello but when I do a curl within cygwin I am able to access it.
Error Message when accessing through Chrome
Connection refused
Description: Connection refused
Please let me know how I can access my local bottle website running inside Cygwin from windows browser. | true | 23,191,241 | 1.2 | 0 | 0 | 1 | Since you get a connection refused error, the best I can think of is that this is a browser issue. Try editing the LAN settings on your Chrome browser to bypass proxy server for local address. | 0 | 1,043 | 1 | 1 | 2014-04-21T05:16:00.000 | python,cygwin,bottle | Accessing localhost from windows browser | 1 | 1 | 1 | 23,191,551 | 0 |
0 | 0 | When I make an HTTPS request using the aiohttp library with asyncio and Python 3.4 in Windows 7, the request fails with a NotImplementedError in the _make_ssl_transport function in base_events.py as shown in the traceback.
On Windows, I use the ProactorEventLoop. I think you have to use that one to get asyncio to work. I tried the same request in a Debian 7 VM with a compiled version of Python 3.4, and the same request works. I don't use the ProactorEventLoop in Debian, just the default though.
Any ideas or workarounds? Or, should I give up with aiohttp HTTPS on Windows for now? I am not able to use Linux for this project, needs to be on Windows. | false | 23,203,874 | 0 | 0 | 0 | 0 | As jfs and OP observe,
ProactorEventLoop is incompatible with SSL, and
The default loop supports SSL on Windows. By extension aiohttp should also work with https on Windows. | 0 | 1,453 | 0 | 7 | 2014-04-21T18:54:00.000 | python,windows,python-3.x,python-asyncio | Does the aiohttp Python library in Windows support HTTPS? | 1 | 1 | 1 | 48,090,576 | 0 |
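In other words, the fix on Windows is to keep (or explicitly install) a selector-based event loop instead of the ProactorEventLoop before making HTTPS requests. A minimal sketch of that setup; the coroutine body is only a placeholder for the actual aiohttp call.

```python
import asyncio

# SSL needs a selector-based loop on Windows, not ProactorEventLoop
loop = asyncio.SelectorEventLoop()
asyncio.set_event_loop(loop)

@asyncio.coroutine
def fetch():
    # placeholder: issue the aiohttp HTTPS request here
    yield from asyncio.sleep(0)

loop.run_until_complete(fetch())
```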
0 | 0 | Is it possible to create a single connection object to be used for different aws services ?
Each time a connection is made its a new api call so i believe it would save some time if a connection once created can be reused. | false | 23,210,636 | 0 | 0 | 0 | 0 | Although it might be possible to use a single connection for multiple services, that's not how boto is written and as the comment above states, I doubt very much that it would improve your performance. I would recommend that you create a single connection per service and keep reusing that connection. Boto caches connections and will also handle any reconnection that might be required if you don't use the connection for a while or encounter some error. | 0 | 120 | 0 | 0 | 2014-04-22T04:52:00.000 | python,amazon-web-services,boto | Single boto connection object for different aws services | 1 | 1 | 1 | 23,243,516 | 0 |
0 | 0 | I am trying to send the commands using python socket.
I have to send 'ctrl+a' key stroke, first.
Typically, I connect using telnet and type 'ctrl+a' then type the enter.
In terminal 'ctrl+a' looked as '^A'.
So I tried to send using python send function like below.
s.send('^A')
But it didn't work.
It looked as '^A' on the terminal but it doesn't feel like the text.
I need to send real 'ctrl+a' message.
How can I do that?
Please advice.
Thank you. | false | 23,211,180 | 0.664037 | 0 | 0 | 4 | s.send('\x01') (Python2); s.send(b'\x01') (Python3).
Ctrl+A is control character with numeric value 1. | 0 | 1,291 | 0 | 0 | 2014-04-22T05:35:00.000 | python,sockets | send ctrl+a message on python socket | 1 | 1 | 1 | 23,211,267 | 0 |
0 | 0 | I have a couple of gigantic XML files (10GB-40GB) that have a very simple structure: just a single root node containing multiple row nodes. I'm trying to parse them using SAX in Python, but the extra processing I have to do for each row means that the 40GB file takes an entire day to complete. To speed things up, I'd like to use all my cores simultaneously. Unfortunately, it seems that the SAX parser can't deal with "malformed" chunks of XML, which is what you get when you seek to an arbitrary line in the file and try parsing from there. Since the SAX parser can accept a stream, I think I need to divide my XML file into eight different streams, each containing [number of rows]/8 rows and padded with fake opening and closing tags. How would I go about doing this? Or — is there a better solution that I might be missing? Thank you! | true | 23,214,773 | 1.2 | 0 | 0 | 2 | You can't easily split the SAX parsing into multiple threads, and you don't need to: if you just run the parse without any other processing, it should run in 20 minutes or so. Focus on the processing you do to the data in your ContentHandler. | 0 | 1,062 | 0 | 0 | 2014-04-22T08:50:00.000 | python,xml,parsing,concurrency,sax | Concurrent SAX processing of large, simple XML files? | 1 | 1 | 2 | 23,223,638 | 0 |
1 | 0 | I am running selenium webdriver (firefox) using python on a headless server. I am using pyvirtualdisplay to start and stop the Xvnc display to grab the image of the sites I am visiting. This is working great except flash content is not loading on the pages (I can tell because I am taking screenshots of the pages and I just see empty space where flash content should be on the screenshots).
When I run the same program on my local unix machine, the flash content loads just fine. I have installed flash on my server, and have libflashplayer.so in /usr/lib/mozilla/plugins. The only difference seems to be that I am using the Xvnc display on the server (unless plash wasn't installed properly? but I believe it was since I used to get a message asking me to install flash when I viewed a site that had flash content but since installing flash I dont get that message anymore).
Does anyone have any ideas or experience with this- is there a trick to getting flash to load using a firefox webdriver on a headless server? Thanks | true | 23,227,680 | 1.2 | 0 | 0 | 0 | It turns out, I needed to use selenium to scroll down the page to load all the content. | 0 | 574 | 0 | 0 | 2014-04-22T18:41:00.000 | python,selenium,flash | Flash content not loading using headless selenium on server | 1 | 1 | 1 | 23,436,481 | 0 |
0 | 0 | When I logon to my company's computer with the AD username/password, I find that my Outlook will launch successfully. That means the AD authentication has passed.
In my opinion, outlook retrieves the AD user information, then sends it to an LDAP server to verify.
But I don't know how it retrieves the information, or whether it uses some other method. | false | 23,262,767 | 0 | 1 | 0 | 0 | You are right, there is an ongoing communication between your workstation and the Active Directory server, which can use the LDAP protocol.
Since I don't know what you tried so far, I suggest that you look into the python module python-ldap. I have used it in the past to connect, query and modify information on Active-Directory servers. | 0 | 104 | 0 | 0 | 2014-04-24T07:41:00.000 | python,ldap | How does auto-login Outlook successfully when in AD environment? | 1 | 1 | 1 | 23,263,099 | 0 |
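A minimal python-ldap sketch of binding against a domain controller and running a search; the server, credentials, base DN and filter are placeholders.

```python
import ldap

conn = ldap.initialize("ldap://dc.example.com")        # placeholder server
conn.simple_bind_s("user@example.com", "password")      # placeholder credentials

results = conn.search_s(
    "dc=example,dc=com",                                 # placeholder base DN
    ldap.SCOPE_SUBTREE,
    "(sAMAccountName=someuser)",                         # placeholder filter
    ["cn", "mail"],
)
print(results)
conn.unbind_s()
```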
0 | 0 | I'm writing up some Selenium tests for my site, and I'd like to test the date/timepickers I have, primarily to make sure that the code I put in to prevent users from putting in dates out of order works.
However, I've realized that the tests I have won't work the way I want them to if it's close to midnight, as the times I'm passing in will wrap to the next day, and be earlier than the current time rather than later, or vice versa.
Is it possible to run these tests as if it were a specific date/time? | false | 23,271,792 | 0 | 0 | 0 | 0 | If you want to keep the date/time constant for the purpose of your tests, you can just hardcode the date/time and passing it to the methods you want to test. If "hardcoding a date" doesn't feel right you could hardcode the specific sequence of actions that a browser might go through, to pick a particular date that happens to be static.
After all, you are testing whether your method correctly determines whether a date is before or after another date; how the input is provided is not relevant. | 0 | 304 | 0 | 0 | 2014-04-24T14:26:00.000 | python,date,selenium,selenium-webdriver | Selenium tests using static date | 1 | 1 | 1 | 23,276,624 | 0 |
0 | 0 | How do I determine which type of account a person has when using the permissions api? I need to make a different decision if they have a pro account versus a standard business account. Thanks! | false | 23,306,296 | 0 | 1 | 0 | 0 | I'm not aware of any way to see that via the API. That's typically something you'd leave up to the end-user to know when they're signing up. Ask them if they have Pro or not, and based on that, setup your permissions request accordingly. | 0 | 124 | 0 | 0 | 2014-04-26T03:29:00.000 | python,paypal | PayPal Classic APIs determine account type | 1 | 1 | 2 | 23,308,018 | 0 |
0 | 0 | I am trying to write my own proxy extensions. Both, burp suite as well as mitmproxy allows us to write extensions.
Till now, I am successful with intercepting the request and response headers, and write it to my own output file.
The problem is, I get frequent requests and responses at arbitrary times, and at the same time the output is being written to the file.
How should I identify which response belongs to which particular request?
If we see in burp suite, when we click on particulat URL in target, we see two different tabs- "Request" and "Response". How is burp suite identifying this ?
Similar is the case with mitmproxy.
I am new to proxy extensions, so any help would be great.
----EDIT----
If any additional information is required then pls let me know. | false | 23,306,361 | 0.197375 | 0 | 0 | 1 | In mitmproxy 0.10, a flow object is passed to the response handler function. You can access both flow.request and flow.response. | 0 | 175 | 0 | 0 | 2014-04-26T03:40:00.000 | python,proxy | how to identify http response belongs to which particular request through python? | 1 | 1 | 1 | 23,442,148 | 0 |
0 | 0 | I am trying to analyse a social network (basically, friends of a facebook user) with python. My main goal is to detect the social circles of the network. So far i tried to use networkx, but I couldn't understand how it can detect communities. Is there a way, with netowrkx or with another package, to solve this problem? thank you! | false | 23,317,710 | 0.197375 | 0 | 0 | 1 | Recommend to use igraph which has many community detection algorithms. | 0 | 474 | 0 | 0 | 2014-04-26T23:41:00.000 | python,facebook,social-networking,networkx | Detecting communities on a social network with python | 1 | 1 | 1 | 23,737,732 | 0 |
0 | 0 | I have a directory of images that I'd like to transfer to Google drive via a python script.
What's a good way to upload (recursively) a directory of images to Google drive while preserving the original directory structure? Would there be any benefit to making this multithreaded? And if so, how would that work? | false | 23,324,690 | 0.197375 | 0 | 0 | 1 | I would do this in two passes. Start by scanning the folder hierarchy, and then recreate the folders on drive. Update your in-memory folder model with the Drive folder ids. Then scan your files, uploading each one with appropriate parent id.
Only make it multithreaded if each thread will have a unique client id. Otherwise you will end up triggering the rate limit bug in Drive. If you have a large number of files, buy the boxed set of Game Of Thrones. | 0 | 738 | 0 | 1 | 2014-04-27T14:35:00.000 | python,google-drive-api | Best way to upload a directory of images to Google drive in python | 1 | 1 | 1 | 23,331,227 | 0 |
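A rough sketch of that two-pass idea using os.walk; create_drive_folder and upload_drive_file are hypothetical helpers standing in for the actual Drive API calls that create a folder or upload a file under a given parent id.

```python
import os

def mirror_to_drive(local_root, drive_root_id, create_drive_folder, upload_drive_file):
    """Recreate the local folder tree on Drive, then upload each file into place."""
    folder_ids = {local_root: drive_root_id}

    # pass 1: folders, walked top-down so the parent id always exists already
    for dirpath, dirnames, filenames in os.walk(local_root):
        for d in dirnames:
            full = os.path.join(dirpath, d)
            folder_ids[full] = create_drive_folder(name=d, parent_id=folder_ids[dirpath])

    # pass 2: files, each uploaded with the id of its (already created) parent folder
    for dirpath, dirnames, filenames in os.walk(local_root):
        for f in filenames:
            upload_drive_file(os.path.join(dirpath, f), parent_id=folder_ids[dirpath])
```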
0 | 0 | I need to calculate a digest (checksum) from the request body (e.g. raw POST data) that is being sent via QNetworkRequest and include a digest signature in the request header.
How could I do this before sending the request (so the signature can be included in the header) ?
This is trivial when I'm using a byte array as the request body, but what if I have a QHttpMultiPart object ?
Basically something like QHttpMultiPart.toString(). | true | 23,366,161 | 1.2 | 0 | 0 | 0 | As of now, the only way seems to be to assemble the MIME multi-part body oneself, produce a digest of it and pass that byte data to QNetworkAccessManager sending method. | 0 | 531 | 0 | 1 | 2014-04-29T13:31:00.000 | python,qt,pyqt,pyqt4 | How to access request body of QNetworkRequest with QHttpMultiPart? | 1 | 1 | 1 | 23,383,069 | 0 |
0 | 0 | I would like to start the conversation in blocking mode and later switch to non-blocking.
Is that a stupid idea?
The python docs are kind of ambiguous about it, there it says:
... You do this [setblocking(0)] after creating the socket, but before using it. (Actually, if you’re nuts, you can switch back and forth.)
I read this as 'please don't do that', so I was wondering if there are reasons as to why it is discouraged.
Is there some kind of undefined behavior, what problems could I run into? | false | 23,383,615 | 0 | 0 | 0 | 0 | Yes you can do this, but mostly people do multiple blocking sockets with threads or multiple non-blocking sockets with event loops. But it should be no problem in switching in-between, as long as you don't switch between buffered and unbuffered I/O. | 1 | 243 | 0 | 1 | 2014-04-30T08:47:00.000 | python,sockets | switching between blocking and nonblocking on a python socket | 1 | 2 | 2 | 23,384,465 | 0 |
0 | 0 | I would like to start the conversation in blocking mode and later switch to non-blocking.
Is that a stupid idea?
The python docs are kind of ambiguous about it, there it says:
... You do this [setblocking(0)] after creating the socket, but before using it. (Actually, if you’re nuts, you can switch back and forth.)
I read this as 'please don't do that', so I was wondering if there are reasons as to why it is discouraged.
Is there some kind of undefined behavior, what problems could I run into? | false | 23,383,615 | 0 | 0 | 0 | 0 | blocking and nonblocking socket are different programming model.
switching between them will make your program too complicated. | 1 | 243 | 0 | 1 | 2014-04-30T08:47:00.000 | python,sockets | switching between blocking and nonblocking on a python socket | 1 | 2 | 2 | 23,384,627 | 0 |
1 | 0 | I'm using Selenium (python bindings, with Django) for automated web app testing. My page has a form with a button, and the button has a .click() handler that captures the click (i.e. the form is not immediately submitted). The handler runs function2(), and when function2() is done, it submits the form.
In my test with Selenium, I fill in the form, then click the Submit button. I want to verify that the form is eventually submitted (resulting in a POST request). The form POSTs and then redirects with GET to the same url, so I can't check for a new url. I think I need to check that the POST request occurs. How do I check for successful form submission?
Alternatively I could check at the database level to see that some new object is created, but that would make for a bad "unit" test. | false | 23,397,090 | 0 | 0 | 0 | 0 | I guess it depends on how deep down the rabbit hole you want to go.
If you're building a test for the functional side from the user's perspective, and your GET action results in changes on the webpage, then trigger the submit with selenium, then wait for the changes to propagate to the webpage (like waiting for one/more elements to change value or waiting for an element to appear)
If you want to build an unit test, then all you should be testing is the ability to Submit the data, not also the ability of the javascript code to do a POST request, then a GET then display the data.
If you want to build an integration test, then you will need to check that each individual action in the sequence you described is performed correctly in whatever scenario you deem appropriate and then check that the total result of those actions is as expected. The tricky part will be chaining all those checks together.
If you want to build an end to end test, then you need to check for all of the above, plus changes to any permanent storage locations that the code you test changes (like databases or in-memory structures) plus whatever stress/security/usability/performance checks your software needs to pass in your specific context. | 0 | 2,251 | 0 | 3 | 2014-04-30T20:04:00.000 | python,django,forms,selenium | How to detect POST after form submit in Selenium python? | 1 | 1 | 2 | 45,631,061 | 0 |
0 | 0 | Drag and drop feature is not working in test framework. It is working for last 6 months and all of a sudden this feature alone of Selenium is not working.
Previously my selenium version 2.33 and upgraded to latest 2.41.0. Still it is not working
All other actions say double click, filling text box etc are working fine.
I am not getting any error from selenium that drag and drop is failing but the object is not seen in the target.
I increased implicit time, and also inserted explicit sleep but it is not working.
firefox version is 16.
Did anyone faced similar issue? | false | 23,422,259 | 0 | 0 | 0 | 0 | Is drag and drop working manually when you open page in firefox? It may happen page has changed. | 0 | 291 | 0 | 1 | 2014-05-02T06:40:00.000 | python,firefox,drag-and-drop,selenium-webdriver | Python Selenium Webdriver: Drag and drop failing | 1 | 1 | 1 | 23,422,482 | 0 |
1 | 0 | I am using Scrapy to scrape reviews about books from a site. So far I have made a crawler and scraped the comments of a single book by giving its URL as the starting URL myself, and I even had to supply the tags for the comments about that book myself after finding them in the page's source code. And it worked. But the problem is that the work I've done manually so far, I want to be done automatically, i.e. I want some way that the crawler should be able to find a book's page on the website and scrape its comments. I am extracting comments from Goodreads and it doesn't provide a uniform method for URLs, and even the tags are different for different books. Plus I don't want to use the API. I want to do all the work myself. Any help would be appreciated.
0 | 0 | Is there a way to get CherryPy to respond to a url that includes a period, such as http://some/base/path/oldscript.py ? I have a number of old CGI scripts that were called like that, and I'm trying to roll them into a nice pretty CherryPy web app - but I don't want to break all the bookmarks and the like that still point to the CGI scripts. Ideally the same method would respond to the url regardless of if it has a .py or not. | true | 23,436,414 | 1.2 | 0 | 0 | 2 | Turns out using an _ in the method definition works as a dot/period. This does mean I have to have two function definitions if I want to be able to call it either with or without the .py, but since I can easily call the one function from the other, this is a minor quibble. | 1 | 450 | 0 | 1 | 2014-05-02T20:28:00.000 | python,url,cherrypy | cherrypy: respond to url that includes a dot? | 1 | 1 | 1 | 23,436,508 | 0 |
0 | 0 | Is there a way beyond catching exceptions(HTTPError,URLError), to check if access to a url is permitted.
I have to constantly(every 30 secs) ping a url (using urllib2.open)(which might or might not be valid).
And throwing and catching exceptions each time seems excessive. I couldn't find anything in the documentation. | true | 23,437,084 | 1.2 | 0 | 0 | 2 | There's nothing wrong with using try/except. That's the way to do such things in python.
it's easier to ask forgiveness than permission | 0 | 121 | 0 | 2 | 2014-05-02T21:16:00.000 | python,urllib2 | Python: Checking if you can connect to a given url (urllib2) | 1 | 1 | 2 | 23,437,205 | 0 |
0 | 0 | I have a ZMQ server listening on port 12345 TCP. When another server connects on that port locally or via VM it works fine, but if I try from a remote server that has to go through port forwarding on my Fios firewall it just bombs. The packets are showing up in Wireshark but ZMQ just ignores them. Is there anyway to get past this? | false | 23,453,650 | 0 | 1 | 0 | 0 | You shouldn't be able to bind more than once to the same port number, either from the same process or another.
ZMQ should give a failure when you issue bind with a port number already in use. Are you checking return codes? | 0 | 144 | 0 | 0 | 2014-05-04T07:16:00.000 | python,zeromq | Incoming ZeroMQ traffic dropped by server due to NAT? | 1 | 1 | 1 | 23,468,773 | 0 |
1 | 0 | I have built a program for summarization that utilizes a parser to parse from multiple websites at a time. I extract only <p> in each article.
This throws out a lot of random content that is unrelated to the article. I've seen several people who can parse any article perfectly. How can i do it? I am using Beautiful Soup | false | 23,454,496 | 0 | 0 | 0 | 0 | Your solution is really going to be specific to each website page you want to scrape, so, without knowing the websites of interest, the only thing I could really suggest would be to inspect the page source of each page you want to scrape and look if the article is contained in some html element with a specific attribute (either a unique class, id, or even summary attribute) and then use beautiful soup to get the inner html text from that element | 0 | 917 | 0 | 0 | 2014-05-04T09:13:00.000 | python,parsing,html-parsing,beautifulsoup | Parsing multiple News articles | 1 | 1 | 2 | 23,464,291 | 0 |
0 | 0 | I'm working on a game that allows you to add other users as your friend. The game is in Flash AS2 and the server is Python.
I'm currently hoping to make it so the game sends an HTTP request to an API server that will then verify everything, such as if they're already friends, if they can be friends, etc, and then if everything's okay, I'd then insert the request into a request able. Anyway, if the user is currently online, I would like to send the user a notification through the server.
The only way I could see this happening is if I opened up a socket connection through the add friend script and then sent a special packet letting the server know that it needs to send the notification. Right now that seems like it would take a while to do and not so efficient if a lot of users used it daily.
How would I best approach this and is there any other efficient ways? | true | 23,507,196 | 1.2 | 0 | 0 | 0 | I've decided it would be better if I still do the HTTP request, but depending on what the request returned, I could have the client send a packet to the server telling it to give the specific user a notification.
For example, if I tried adding user ID #323095643, I send the HTTP request and everything turns out okay and I get an OK response, the flash client sends a packet to the server basically saying "Hey, give user ID #323095643 a notification saying '_ wants you to be their friend'" and it sends it to that user. | 0 | 41 | 0 | 0 | 2014-05-07T01:25:00.000 | php,python,sockets | Communicating to Socket Server From External Script | 1 | 1 | 1 | 23,507,273 | 0 |
1 | 0 | I want to make my spider start every three hours.
I have a scrapy configuration file located in the c:/scrapyd folder.
I changed the poll_interval to 100.
The spider works, but it doesn't repeat every 100 seconds.
how to do that please? | false | 23,526,579 | 0.197375 | 0 | 0 | 1 | Maybe you should do a cron job that executes every three hours and performs a curl call to Scrapyd to schedule the job. | 0 | 215 | 0 | 0 | 2014-05-07T19:25:00.000 | python,scrapy,scrapyd | scrapyd pool_intervel to scheduler a spider | 1 | 1 | 1 | 24,659,705 | 0 |
0 | 0 | I had built a python udp hole puncher using raw socket and I wonder whether is there a service, or an option to use an external server, on the web(like dedicated server) that will host and run this program.
openshift was something i considered but it did not work because it uses apache is a proxy and therefore its impossible to use raw sockets for connection.
I prefer a free solution
thanks a lot | false | 23,548,149 | 0 | 0 | 0 | 0 | That will not work on OpenShift, we only offer two kinds of external ports for use, http/https and ws/wss | 0 | 287 | 1 | 1 | 2014-05-08T17:03:00.000 | python,sockets,udp,ip,openshift | Server host for python raw sockets use | 1 | 1 | 2 | 23,548,201 | 0 |
0 | 0 | Basically I need a tool (preferably for linux) for getting some data from webpage, opening some others, filling form on them, closing windows and clicking on some buttons.
What would be the tool for doing this in whole? Some scripting language like Perl or Python can do it for me?
This is maybe difficult so give me the way that is most user friendly. :-)
I am not familiar with Perl or Python, but I am determined to make it work because it is important to me. Note that I can't do anything on the server, so everything that needs to be done is from the client side / web browser.
open webpage
on that page click on the link to open webpage in new window
on new page click on two radio buttons and submit
close that window and return to previous
extract part of the text from tag that follows text "IP Address: " to the end of the line and save it to the variable
extract part of the text from tag that follows text "Timestamp: " to the end of the line and save it to the variable
extract part of the text from tag that follows text "File Name: " to the end of the line and save it to the variable (these three are similar to make)
open URL in the new window
fill in the form with the data that is extracted from the previous page and submit
if there is "answer"(result is table) from the form do "procedure1"(copy entire table with result and concat its cells, close window and return to previous, again click some buttons/links on that page etc.....)
if there is "no answer"(empty table) from the form do "precedure2"(opening new window, filling form with data in variables, clicking some buttons and closing that window) than proceed to "procedure1"
Thank you all for helping me. | true | 23,590,683 | 1.2 | 0 | 0 | 1 | Short answer: In Python, you can use mechanize together with BeautifulSoup to achieve what you want. Try it and figure out from the docs of these packages how to use them, then come back here with specific questions when things don't work. | 0 | 135 | 0 | 1 | 2014-05-11T08:48:00.000 | javascript,jquery,python,perl | Some scriping language for my task? | 1 | 1 | 1 | 23,590,735 | 0 |
0 | 0 | I am trying to use Python Selenium to click a button on a webpage, but Selenium is giving exception "Element is not currently visible and so may not be interacted with".
The DOM structure is quite simple:
<body style="overflow: hidden;">
...
<div aria-hidden="false" style="display: block; ...">
...
<button id="button-aaa" aria-hidden="true" style="...">
...
</div>
...
</body>
I have searched Google and Stackoverflow. Some users say Selenium cannot click a web element which is under a parent node with overflow: hidden. But surprisingly, I found that Selenium is able to click some other buttons which are also under a parent node with overflow: hidden.
Anyway, I have tried to use driver.execute_script to change the <body> style to overflow: none, but Selenium is still unable to click this button.
I have also tried to change the button's aria-hidden="true" to aria-hidden="false", but Selenium still failed to click.
I have also tried to add "display: block;" to the button's style, and tried all different combination of style changes, but Selenium still failed to click.
I have used this commands to check the button: buttonelement.is_displayed(). It always returns False no matter what style I change in the DOM. The button is clearly visually visible in the Firefox browser, and it is clickable and functioning. By using the Chrome console, I am able to select the button using ID.
May I know how can I check what is causing a web element to be invisible to Python Selenium? | true | 23,613,260 | 1.2 | 0 | 0 | 1 | Overflow takes only one of five values (overflow: auto | hidden | scroll | visible | inherit). Use visible | 0 | 1,022 | 0 | 1 | 2014-05-12T15:37:00.000 | javascript,python,css,selenium | How to check what is causing a web element to be invisible to Python Selenium? | 1 | 1 | 1 | 23,649,576 | 0 |
0 | 0 | I'm working on a project that needs to parse a very big XML file (about 10GB). Because the processing time is really long (days), it's possible that my code will exit in the middle of the process; so I want to save my code's status once in a while and then be able to restart it from the last save point.
Is there a way to start (restart) a SAX parser not from the beginning of a XML file?
P.S: I'm programming using Python, but solutions for Java and C++ are also acceptable. | false | 23,627,989 | 0.197375 | 0 | 0 | 1 | Not really sure if this answers your question, but I would take a different approach. 10GB is not THAT much data, so you could implement a two-phase parsing.
Phase 1 would be to split the file in smaller chunks based on some tag, so you end up with more smaller files. For example if your first file is A.xml, you split it to A_0.xml, A_1.xml etc.
Phase 2 would do the real heavy lifting on each chuck, so you invoke it on A_0.xml, then after that on A_1.xml etc. You could then restart on a chunk after your code has exitted. | 0 | 162 | 0 | 0 | 2014-05-13T09:51:00.000 | java,python,c++,xml,sax | restart SAX parser from the middle of the document | 1 | 1 | 1 | 23,628,074 | 0 |
1 | 0 | I have a REST API and now I want to create a web site that will use this API as only and primary datasource. The system is distributed: REST API is on one group of machines and the site will be on the other(s).
I'm expecting to have quite a lot of load, so I'd like to make requests as efficient, as possible.
Do I need some async HTTP requests library or any HTTP client library will work?
API is done using Flask, web site will be also built using Flask and Jinja as template engine. | false | 23,657,718 | 0.066568 | 0 | 0 | 1 | Start simple and use the way, which seems to be easy to use for you. Consider optimization to be done later on only if needed.
Use of async libraries would come into play as helpful if you would have thousands of request a second. Much sooner you are likely to have performance problems related to database (if you use it), which is not to be resolved by async magic. | 1 | 3,037 | 0 | 1 | 2014-05-14T14:35:00.000 | python,rest | Consume REST API from Python: do I need an async library? | 1 | 1 | 3 | 23,657,898 | 0 |
1 | 0 | In mechanize we click links either by using follow_link or click_link. Is there a similar kind of thing in beautiful soup to click a link on a web page? | false | 23,679,480 | 0 | 0 | 0 | 0 | print(soup.find('h1',class_='pdp_product_title'))
does not give any result
<div class="pr2-sm css-1ou6bb2"><h2 class="headline-5-small pb1-sm d-sm-ib css-1ppcdci" data-test="product-sub-title">Women's Shoe</h2><h1 id="pdp_product_title" class="headline-2 css-zis9ta" data-test="product-title">Nike Air Force 1 Shadow</h1></div> | 0 | 52,594 | 0 | 9 | 2014-05-15T13:19:00.000 | python,web-scraping,beautifulsoup | Clicking link using beautifulsoup in python | 1 | 1 | 2 | 65,200,203 | 0 |
0 | 0 | I have a project where the goal is to have a user double click on an xml file, which is sent to my python script to work with. What is the best way to do accomplish something like this? Can this even be done programmatically or is this some kind of system preference I have to modify? How can my program retrieve the file that was double clicked on (at least the name). | false | 23,687,183 | 0 | 0 | 0 | 0 | This problem is a bit underspecified.
Say, you're on some Windows: You could assign the correct python invocation as default open action to the .xml file type, but this would then apply to all .xml files. And you might need elevated access rights to perform the change.
On Linux, you'd have to deal with various ways of desktop environments ... and then I'd open an issue because it does not work with xfe running under awesome on my box here.
In short: Solve this problem by avoiding it.
Can you make do with "drag this file onto the application icon"?
Can you make the .xml files appear in a dedicated place and have a background task check for new ones? | 0 | 301 | 0 | 0 | 2014-05-15T19:37:00.000 | python | Execute Python when double clicking a file | 1 | 1 | 2 | 23,687,339 | 0 |
1 | 0 | So I did a research before posting this and the solutions I found didn't work, more precisely:
-Connecting to my laptop's IPv4 192.168.XXX.XXX - didn't work
-Conecting to 10.0.2.2 (plus the port) - didn't work
I need to test an API i built using Django Rest framework so I can get the json it returns, but i cant access through an Android app i'm building (I'm testing with a real device, not an emulator). Internet permissions are set on Manifest and i can access remote websites normally. Just can't reach my laptop's localhost(they are in the same network)
I'm pretty new to Android and Python and Django as well (used to built Django's Rest Framework API).
EDIT: I use localhost:8000/snippets.json or smth like this to connect on my laptop.
PS: I read something about XAMP server... do I need it in this case?
Thanks in advance | false | 23,716,724 | 0.066568 | 0 | 0 | 1 | I tried the above, but failed to work in my case. Then with running
python manage.py runserver 0.0.0.0:8000 I also had to add my IP to ALLOWED_HOSTS in settings.py, which solved this issue.
eg.
Add your ip to allowed hosts in settings.py
ALLOWED_HOSTS = ['192.168.XXX.XXX']
Then run the server
python manage.py runserver 0.0.0.0:8000 | 0 | 2,908 | 0 | 3 | 2014-05-17T22:31:00.000 | android,python,django,localhost,django-rest-framework | Can't access my laptop's localhost through an Android app | 1 | 1 | 3 | 47,569,447 | 0 |
0 | 0 | I'd like to know if is it possible to browse all links in a site (including the parent links and sublinks) using python selenium (example: yahoo.com),
fetch all links in the homepage,
open each one of them
open all the links in the sublinks to three four levels.
I'm using selenium on python.
Thanks
Ala'a | false | 23,717,539 | 0 | 0 | 0 | 0 | Sure it is possible, but you have to instruct selenium to enter these links one by one as you are working within one browser.
In case the pages do not have their links rendered by JavaScript in the browser, it would be much more efficient to fetch these pages by direct HTTP requests and process them that way. In this case I would recommend using requests. However, even with requests it is up to your code to locate all URLs in a page and follow up with fetching those pages.
There might also be other Python packages which are specialized for this kind of task, but here I cannot offer real experience. | 0 | 446 | 0 | 1 | 2014-05-18T05:42:00.000 | python,selenium | Browse links recursively using selenium | 1 | 1 | 2 | 23,717,600 | 0 |
0 | 0 | I am writing a socket based "python cmd like" server module which can support cli interactive functions such as autocompletion or command history, by doing that a simple "telnet" or "nc" client side may able to connect to server to read/set something on server side.
After searching, there are a lot of modules that can do the "cmd" part, such as the Python standard module "cmd", "ipython", or even a vty simulator; however, I cannot find a module that can actually bind to the socket directly to detect keystrokes such as the "tab" key or "control+c" on the client side. Most of them are only able to process line(s) read, which is not suitable for autocompletion on a tab press or command history on up/down presses.
I think this question can be simplified to:
Is it possible to read socket keystroke input non-blocking, then process this key input value somehow on the server side - for example ASCII code + 1 - then echo it back to the socket to show on the client side?
Thank you for your help. | true | 23,718,920 | 1.2 | 0 | 0 | 1 | What you want is not possible. As you say, you want to write a socket based cmd like server. The server will open a socket and listen for data from the client. Now it is possible to read socket input character by character (which is not the same as non-blocking BTW), but that will not help you.
It is up to the client program to decide how and when to send the data. So if the client side program decides to "eat" tab and control characters, then you will simply not see them. So if you want to process keystrokes one by one, you will also need a client application. | 0 | 1,014 | 0 | 3 | 2014-05-18T05:42:00.000 | python,sockets,interactive,keystroke | How to handle keystroke in python socket? | 1 | 1 | 1 | 23,719,308 | 0 |
1 | 0 | This is my first StackOverflow post so please bear with me.
What I'm trying to accomplish is a simple program written in python which will change all of a certain html tag's content (ex. all <h1> or all <p> tags) to something else. This should be done on an existing web page which is currently open in a web browser.
In other words, I want to be able to automate the inspect element function in a browser which will then let me change elements however I wish. I know these changes will just be on my side, but that will serve my larger purpose.
I looked at Beautiful Soup and couldn't find anything in the documentation which will let me change the website as seen in a browser. If someone could point me in the right direction, I would be greatly appreciative! | true | 23,790,052 | 1.2 | 0 | 0 | 0 | What you are talking about seems to be much more of the job of a browser extension. Javascript will be much more appropriate, as @brbcoding said. Beautiful Soup is for scraping web pages, not for modifying them on the client side in a browser. To be honest, I don't think you can use Python for that. | 0 | 2,023 | 0 | 1 | 2014-05-21T17:25:00.000 | python,html | Change website text with python | 1 | 1 | 1 | 23,790,111 | 0 |
1 | 0 | I want to parse a website which contains a list of people and their information. The problem is that the website uses AJAX to load more and more information as I scroll down the page.
I need information on ALL the people.
urllib.open(..).read() does not take care of the scrolling. Can you please suggest a way to parse all the data? | false | 23,831,048 | 0.197375 | 0 | 0 | 1 | You can use the "Network" panel of Chrome's DevTools to figure out which path the AJAX requests go to.
Then use a Python script to fetch the content from that path. | 1 | 62 | 0 | 0 | 2014-05-23T13:50:00.000 | jquery,python,ajax | python program to read a ajax website | 1 | 1 | 1 | 23,831,288 | 0
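As a rough illustration of that approach, assuming the Network panel revealed a JSON endpoint that accepts an offset parameter (the URL and parameter names below are invented), a requests-based script could page through it:
    import requests

    BASE = "http://example.com/api/people"   # hypothetical endpoint found in the Network panel
    offset, page_size = 0, 50
    people = []
    while True:
        resp = requests.get(BASE, params={"offset": offset, "limit": page_size})
        batch = resp.json()
        if not batch:          # empty page means we have everything
            break
        people.extend(batch)
        offset += page_size
    print(len(people), "entries fetched")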
0 | 0 | I'm using PyBrain in a project on Windows 7 and I've had no problem with this library until I had to write the trained network to an XML file.
I tried this "from pybrain.tools.xml.networkwriter import NetworkWriter" but I got an import error.
Can anyone tell me if there's a requirement to get this job done?
I tried installing the library called "lxml", because I have it installed on my Linux PC, but it doesn't seem to work alongside PyBrain. | false | 23,878,984 | 0 | 0 | 0 | 0 | PyBrain declares only one required package in its setup.py: scipy.
SciPy is fairly complex and is best installed from binaries (at least on Windows). So if you manage to install SciPy, you should be able to get PyBrain running. | 0 | 120 | 0 | 0 | 2014-05-26T23:32:00.000 | python,xml,pybrain | Which XML library is used for PyBrain? | 1 | 1 | 1 | 23,879,122 | 0
0 | 0 | I am developing an app to retrieve and store files to dropbox using oauth2 protocol. Is there any way to use dropbox core api to send invites to share a folder in Python ? | true | 23,894,089 | 1.2 | 0 | 0 | 0 | No, the Dropbox API currently doesn't expose any way to send shared folder invites. But a shared folder API is being worked on. | 0 | 139 | 0 | 0 | 2014-05-27T16:06:00.000 | python,oauth-2.0,dropbox-api | Is there any way to use dropbox core api to send invites to share a folder in Python? | 1 | 1 | 1 | 23,894,424 | 0 |
1 | 0 | I'm trying to teach myself a concept by writing a script. Basically, I'm trying to write a Python script that, given a few keywords, will crawl web pages until it finds the data I need. For example, say I want to find a list of venomous snakes that live in the US. I might run my script with the keywords list,venomous,snakes,US, and I want to be able to trust with at least 80% certainty that it will return a list of snakes in the US.
I already know how to implement the web spider part, I just want to learn how I can determine a web page's relevancy without knowing a single thing about the page's structure. I have researched web scraping techniques but they all seem to assume knowledge of the page's html tag structure. Is there a certain algorithm out there that would allow me to pull data from the page and determine its relevancy?
Any pointers would be greatly appreciated. I am using Python with urllib and BeautifulSoup. | false | 23,921,986 | 0.197375 | 0 | 0 | 2 | You're basically asking "how do I write a search engine." This is... not trivial.
The right way to do this is to just use Google's (or Bing's, or Yahoo!'s, or...) search API and show the top n results. But if you're just working on a personal project to teach yourself some concepts (not sure which ones those would be exactly though), then here are a few suggestions:
search the text content of the appropriate tags (<p>, <div>, and so forth) for relevant keywords (duh)
use the relevant keywords to check for the presence of tags that might contain what you're looking for. For example, if you're looking for a list of things, then a page containing <ul> or <ol> or even <table> might be a good candidate
build a synonym dictionary and search each page for synonyms of your keywords too. Limiting yourself to "US" might mean an artificially low ranking for a page containing just "America"
keep a list of words which are not in your set of keywords and give a higher ranking to pages which contain the most of them. These pages are (arguably) more likely to contain the answer you're looking for
good luck (you'll need it)! | 0 | 3,298 | 0 | 8 | 2014-05-28T21:13:00.000 | python,web-scraping,beautifulsoup,web-crawler | Web scraping without knowledge of page structure | 1 | 1 | 2 | 23,922,228 | 0 |
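A very small sketch of the first two bullet points above, using BeautifulSoup to count keyword hits in text-bearing tags and to reward list-like markup; the weights and tag choices are arbitrary assumptions:
    from bs4 import BeautifulSoup

    def relevance(html, keywords):
        soup = BeautifulSoup(html, "html.parser")
        # Text from the tags most likely to carry content.
        text = " ".join(t.get_text(" ").lower()
                        for t in soup.find_all(["p", "div", "li", "td"]))
        score = sum(text.count(k.lower()) for k in keywords)
        # Pages with list/table markup are more likely to contain "a list of ..."
        score += 5 * len(soup.find_all(["ul", "ol", "table"]))
        return score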
0 | 0 | I've created a test that clicks on an element X.
This element is only revealed after you click on another button,
and those elements are connected with ng-hide.
When I try to run my code, the click on element X doesn't work.
However, in debug mode or after adding a 1-second sleep, it does.
I'm using the Selenium framework in Python, with a remote webdriver with an ImplicitlyWait of 10 sec.
Does someone know the reason for this behavior? | false | 23,998,359 | 0 | 0 | 0 | 0 | As said by @Siking, this is clearly a timing issue.
The fact is that Selenium is very fast, often faster than the element can load.
Sometimes Selenium requires a pause or a sleep to ensure that an element is present.
I also recommend, especially for asynchronous requests, using waitForElementPresent to wait until the AJAX call is finished. | 0 | 2,065 | 0 | 4 | 2014-06-02T15:42:00.000 | python,angularjs,testing,selenium | Selenium click on elements works only in debug mode | 1 | 1 | 1 | 24,002,478 | 0
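In the Python bindings the usual fix is an explicit wait instead of a fixed sleep; a minimal sketch, assuming an existing driver and placeholder element IDs:
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Click the button that reveals element X, then wait up to 10 seconds
    # for X to become clickable before clicking it.
    driver.find_element_by_id("reveal_button").click()
    element_x = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, "element_x"))
    )
    element_x.click()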
0 | 0 | I need to transfer data via pyzmq through two computers connected by an ethernet cable. I have already set up a script that runs on the same computer correctly, but I need to find the tcp address of the other computer in order to communicate. They both run Ubuntu 14.04. One of them should be a server processing requests while the other sends requests. How do I transfer data over tcp through ethernet? I simply need a way to find the address.
EDIT: (Clarification) I am running a behavioural study. I have a program called OpenSesame which runs in Python and takes Python scripts. I need a participant to be able to sit at a computer and be able to ask another person questions (specifically for help in a task). I need a server (using pyzmq preferably) to be connected by ethernet and communicate with that computer. I wrote a script. It works on the same computer, but not over ethernet. I need to find the address | false | 24,019,331 | 0 | 0 | 0 | 0 | Maybe you could periodically send a datagram message containing the peer's IP address (or some other useful information) to the broadcast address, to allow other peers to discover it. And after the peer's address is discovered you can establish a connection via ZeroMQ or another kind of... connection. :) | 0 | 223 | 1 | 1 | 2014-06-03T15:40:00.000 | python,network-programming,zeromq,ethernet,pyzmq | How can I find the tcp address for another computer to transfer over ethernet? | 1 | 1 | 2 | 24,155,570 | 0
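A bare-bones sketch of that broadcast-discovery idea using the standard socket module (the port number and message are arbitrary); the listener learns the announcer's IP and can then use it in the ZeroMQ tcp:// address:
    import socket

    PORT = 50000  # arbitrary discovery port

    # Announcer (run on the server machine):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(b"echo-server-here", ("<broadcast>", PORT))

    # Listener (run on the client machine):
    r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    r.bind(("", PORT))
    data, (server_ip, _) = r.recvfrom(1024)
    print("discovered server at", server_ip)  # use this IP in the zmq "tcp://..." address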
0 | 0 | My script logs in to my account, navigates the links it needs to, but I need to download an image. This seems to be easy enough to do using urlretrieve. The problem is that the src attribute for the image contains a link which points to the page which initiates a download prompt, and so my only foreseeable option is to right click and select 'save as'. I'm using mechanize and from what I can tell Mechanize doesn't have this functionality. My question is should I switch to something like Selenium? | false | 24,027,928 | 0 | 0 | 0 | 0 | I would try to watch Chrome's network tab, and try to imitate the final request to get the image. If it turned out to be too difficult, then I would use selenium as you suggested. | 0 | 194 | 0 | 1 | 2014-06-04T02:12:00.000 | python,selenium,web-scraping,mechanize | Using Mechanize for python, need to be able to right click | 1 | 1 | 2 | 24,027,994 | 0
0 | 0 | I have tried splinter for browser automation, using the Firefox webdriver in splinter. But the problem is high CPU usage when Firefox loads, and sometimes it hangs the GUI. Please suggest an option. I'm on a Linux box (Ubuntu) and building an app using PyGTK. | false | 24,030,051 | 0 | 0 | 0 | 0 | Selenium with PhantomJS should be a good replacement for splinter. | 0 | 260 | 0 | 1 | 2014-06-04T05:57:00.000 | python,linux,pygtk,browser-automation | Is there any inbuilt browser for web automation using python? | 1 | 1 | 1 | 24,030,111 | 0
1 | 0 | I am having a problem handling pop-up windows in Robot Framework.
The process I want to automate is :
When the button is clicked, the popup window appears. When the link from that popup window is clicked, the popup window is closed automatically and go back to the main page.
While the popup window appears, the main page is disabled, and it can be enabled only when the link from the pop up window is clicked.
The problem I have here is that I cannot go back to the main page after clicking the link from the popup window. I got the following error.
20140604 16:04:24.160 : FAIL : NoSuchWindowException: Message: u'Unable to get browser'
I hope you guys can help me solve this problem.
Thank you! | false | 24,032,359 | 0 | 0 | 0 | 0 | I have seen this issue and found that there is a recovery period where Selenium does not work correctly for a short time after closing a window. Try using a fixed delay or poll with Wait Until Keyword Succeeds combined with a keyword from Selenium2Library. | 0 | 1,622 | 0 | 0 | 2014-06-04T08:20:00.000 | python-2.7,selenium,selenium-webdriver,robotframework,automated-tests | Going back to the main page after closing the pop up window | 1 | 1 | 3 | 24,085,441 | 0 |
0 | 0 | Hi!
I use google-mail-oauth2-tools but I have a problem:
When I enter the verification code the program dies.
Traceback (most recent call last):
File "oauth2.py", line 346, in <module>
main(sys.argv)
File "oauth2.py", line 315, in main
print 'Refresh Token: %s' % response['refresh_token']
KeyError: 'refresh_token'
Why?
Thank you! | true | 24,054,363 | 1.2 | 1 | 0 | 1 | You get this keyerror because there is no refresh_token in the response. If you didn't ask for offline access in your request, there will be no refresh token in the response, only an access token, bearer and token expiry. | 0 | 467 | 0 | 0 | 2014-06-05T07:38:00.000 | python,oauth-2.0,keyerror | google-mail-oauth2-tools KeyError :-/ | 1 | 1 | 1 | 24,058,208 | 0 |
1 | 0 | There is XML data passed in a POST request as a simple string, not inside some name=value pair,
with HTTP header (optionally) set to 'Content-Type: text/xml'.
How can I get this data in Python (with plain Python or with Django's tools)? | true | 24,061,320 | 1.2 | 0 | 0 | 2 | I'm not quite sure what you're asking. Do you just want to know how to access the POST data? You can get that via request.body, which will contain your XML as a string. You can then use your favourite Python XML parser on it. | 0 | 2,986 | 0 | 0 | 2014-06-05T13:18:00.000 | python,xml,django,post | How can I get XML data in Python/Django passed in POST request? | 1 | 1 | 1 | 24,061,391 | 0
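A minimal sketch of such a view using the standard-library ElementTree parser; the view name and the tag being read are illustrative only:
    import xml.etree.ElementTree as ET
    from django.http import HttpResponse

    def receive_xml(request):
        if request.method == "POST":
            root = ET.fromstring(request.body)      # request.body is the raw POST payload
            # Read a child element; the tag name is just an example.
            value = root.findtext("some_tag", default="")
            return HttpResponse("got: %s" % value, content_type="text/plain")
        return HttpResponse(status=405)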
1 | 0 | I downloaded the freebase data dump and I want to use it to get information about a query just like how I do it using the web API. How exactly do I do it? I tried using a simple zgrep but the result was a mess and takes too much time. Any graceful way to do it (preferably something that plays nicely with python)? | false | 24,069,711 | 0.197375 | 0 | 0 | 1 | The Freebase dump is in RDF format. The easiest way to query it is to dump it (or a subset of it) into an RDF store. It'll be quicker to query, but you'll need to pay the database load time up front first. | 0 | 964 | 0 | 3 | 2014-06-05T20:31:00.000 | python,rdf,freebase | How to search freebase data dump | 1 | 1 | 1 | 25,625,683 | 0 |
0 | 0 | How can you have two programs communicate when there is a firewall in the way? I would like something sort of like a socket, but that doesn't work through firewalls. It is okay if you have to use a 3rd party resource. I am doing this in Python. | false | 24,110,915 | 0.379949 | 0 | 0 | 2 | There are a few possible ways:
1) UPnP / NAT-PMP / PCP - These are protocols implemented by some (most?) routers, mostly on local networks, to allow applications behind a firewall to interact. In this case you send packets (from both clients) to their respective routers using the protocol mentioned above, asking for a port opening, then communicate regularly using sockets.
2) In some cases NAT traversal is possible - read about STUN servers and the ICE protocol. This is most common for UDP communication, though sometimes TCP traffic can also be traversed this way - the most common technique is UDP hole punching.
3) If none of these apply (say, a symmetric NAT on a huge-scale network), the only way would be a TURN approach, where you relay all data through your publicly accessible server.
P2P and NAT traversal are common in SIP, VoIP and torrents, hence free libraries
like VUZE (an open-source torrent lib) can be a good place to start digging... :) | 0 | 73 | 0 | 0 | 2014-06-08T21:16:00.000 | python,networking | Have programs communicate when there is a firewall? | 1 | 1 | 1 | 24,110,983 | 0
0 | 0 | I need to create a BASH script, ideally using SED, to find and replace value lists in href URL link constructs within HTML site files, looking them up in a map (old to new values), for links that have a given URL construct. There are around 25K site files to look through, and the map has around 6,000 entries that I have to search through.
All old and new values have 6 digits.
The URL construct is:
One value:
HREF=".*jsp\?.*N=[0-9]{1,}.*"
List of values:
HREF=".*\.jsp\?.*N=[0-9]{1,}+N=[0-9]{1,}+N=[0-9]{1,}...*"
The list of values are delimited by + PLUS symbol, and the list can be 1 to n values in length.
I want to ignore a construct such as this:
HREF=".*\.jsp\?.*N=0.*"
IE the list is only N=0
Effectively I'm only interested in URL's that include one or more values that are in the file map, that are not prepended with CHANGED -- IE the list requires updating.
PLEASE NOTE: in the above construct examples: .* means any character that isn't a digit; I'm just interested in any 6 digit values in the list of values after N=; so I've trying to isolate the N= list from the rest of the URL construct, and it should be noted that this N= list can appear anywhere within this URL construct.
Initially, I want to create a script that will produce a report of all links that fulfil the above criteria and that have a 6 digit OLD value that's in the map file, with its file path, to get an understanding of the links impacted. EG:
Filename link
filea.jsp /jsp/search/results.jsp?N=204200+731&Ntx=mode+matchallpartial&Ntk=gensearch&Ntt=
filea.jsp /jsp/search/BROWSE.jsp?Ntx=mode+matchallpartial&N=213890+217867+731&
fileb.jsp /jsp/search/results.jsp?N=0+450+207827+213767&Ntx=mode+matchallpartial&Ntk=gensearch&Ntt=
Lastly, I'd like to find and replace all 6 digit numbers, within the URL construct lists, as outlined above, as efficiently as possible (I'd like it to be reasonably fast as there could be around 25K files, with 6K values to look up, with potentially multiple values in the list).
**PLEASE NOTE:** There is an additional issue I have when finding and replacing: an old value could have been assigned a new value that has already been used, and that value may also have to be replaced.
E.G. If the map file is as below:
MAP-FILE.txt
OLD NEW
214865 218494
214866 217854
214867 214868
214868 218633
... ...
and there is a HREF link such as:
/jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214867+214868
214867 changes to 214868 - this would need to be prepended to flag that this value has been changed, and should not be replaced, otherwise what was 214867 would become 218633 as all 214868 would be changed to 218633. Hope this makes sense - I would then need to run through file and remove all 6 digit numbers that had been marked with the prepended flag, such that link would become:
/jsp/search/results.jsp?Ntx=mode+matchallpartial&Ntk=gensearch&N=0+450+214868CHANGED+218633CHANGED
Unless there's a better way to manage these infile changes.
Could someone please help me with this? I'm not an expert with these kinds of changes - so help would be massively appreciated.
Many thanks in advance,
Alex | false | 24,126,783 | 0 | 0 | 0 | 0 | I will write the outline for the code in some kind of pseudocode, as I don't remember Python well enough to quickly write the code in Python.
First find what type it is (if it contains N=0 then type 3, if it contains "+" then type 2, else type 1) and get a list of strings containing "N=..." by splitting (explode is the PHP name for this) on the "+" sign.
The first loop is over links. The second loop is over each N= number. The third loop looks in the map file and finds the replacement value. Load the data of the map file into a variable before all the loops. File reading is the slowest operation you have in programming.
You replace the value in the third loop, then join (implode is the PHP name) the list of new strings into a new link when returning to the first loop.
You probably have several files with the links, so you need another loop over the files.
When dealing with repeated codes you need a while loop until a spare number is found. And you need to save the numbers that are already used in a list. | 1 | 95 | 0 | 1 | 2014-06-09T18:44:00.000 | python,html,regex,bash,sed | How to find and replace 6 digit numbers within HREF links from map of values across site files, ideally using SED/Python | 1 | 1 | 1 | 24,127,504 | 0
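A rough Python sketch of that outline (my own illustration; the file names come from the question, everything else is an assumption). Each 6-digit value in an N= list is looked up in the map exactly once, so the chained-replacement problem described in the question cannot occur:
    import re

    # Load the OLD -> NEW map once, before any loops.
    mapping = {}
    with open("MAP-FILE.txt") as fh:
        next(fh)  # skip the "OLD NEW" header line
        for line in fh:
            parts = line.split()
            if len(parts) >= 2:
                mapping[parts[0]] = parts[1]

    n_list = re.compile(r'(?<=N=)[0-9+]+')   # the N=... value list inside an href

    def fix_list(match):
        values = match.group(0).split("+")
        # Each value is mapped exactly once, so 214867 -> 214868 can never
        # later be rewritten again to 218633.
        return "+".join(mapping.get(v, v) for v in values)

    with open("filea.jsp") as fh:
        text = fh.read()
    print(n_list.sub(fix_list, text))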
0 | 0 | Trying to set the timeout for requests in uWSGI, I'm not sure of the correct setting. There seem to be multiple timeout options (socket, interface, etc.) and it's not readily evident which setting to configure or where to set it.
The behavior I'm looking for is to extend the time a request to the resource layer of a REST application can take. | false | 24,127,601 | -0.066568 | 0 | 0 | -1 | It worked for me by commenting out
#master = true
and putting this
lazy-apps = true
in the uwsgi.ini file | 0 | 51,844 | 0 | 52 | 2014-06-09T19:35:00.000 | python,uwsgi | uWSGI request timeout in Python | 1 | 2 | 3 | 67,135,000 | 0
0 | 0 | Trying to set the timeout for requests in uWSGI, I'm not sure of the correct setting. There seem to be multiple timeout options (socket, interface, etc.) and it's not readily evident which setting to configure or where to set it.
The behavior I'm looking for is to extend the time a request to the resource layer of a REST application can take. | false | 24,127,601 | 1 | 0 | 0 | 22 | Setting http-timeout worked for me. I have http = :8080, so I assume if you use file system socket, you have to use socket-timeout. | 0 | 51,844 | 0 | 52 | 2014-06-09T19:35:00.000 | python,uwsgi | uWSGI request timeout in Python | 1 | 2 | 3 | 28,458,462 | 0 |
0 | 0 | Is it possible to calculate the performance testing through selenium with python?
If it is possible, how should I do it? | false | 24,135,896 | 0 | 1 | 0 | 0 | Selenium is not the right tool to use for performance testing.
jmeter is a great tool for this. You would be able to see the response for each request. | 0 | 304 | 0 | 3 | 2014-06-10T08:06:00.000 | python,selenium-webdriver,selenium-firefoxdriver | Is it possible to calculate the performance testing through selenium with python? | 1 | 1 | 1 | 25,492,157 | 0 |
0 | 0 | Let me explain my problem in more detail.
I've coded a simple Python server that listens for web client connections.
The server is running, but I must add a function and I don't know how to resolve this.
I must set up a timer: if the client doesn't connect every N seconds, I have to log it.
I already looked at setting up a timeout, but the timeout in the socket lib doesn't do what I want...
I tried to set up a timer with timestamps and compare values, but the socket.listen() method doesn't return control until a client connects, and I want to stop waiting if the time is exceeded. | false | 24,218,114 | 0 | 0 | 0 | 0 | How about using a token that is switched when the client connects? Put it in a while loop, and if the token is ever the same non-switched value twice in a row, kill the loop and stop listening. | 0 | 458 | 0 | 0 | 2014-06-14T08:34:00.000 | python,sockets,tcp,timer | Python set up a timer for client connection | 1 | 1 | 1 | 24,218,375 | 0
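One standard approach, different from the token idea in the answer above, is to put a timeout on the listening socket itself so that accept() returns control after N seconds and the missed connection can be logged; a minimal sketch with an arbitrary port:
    import socket
    import logging

    N = 30  # seconds the client is allowed between connections
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", 8000))
    srv.listen(1)
    srv.settimeout(N)          # accept() will raise socket.timeout after N seconds

    while True:
        try:
            conn, addr = srv.accept()
        except socket.timeout:
            logging.warning("no client connected within %s seconds", N)
            continue
        # ... handle conn ...
        conn.close()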
The scenario: I have a NetworkX network with around 120,000 edges which I need to query each time a user requests a page or clicks something on the page, so a lot of calls.
I could load and parse the network each call, but that would be a waste of time as that would take around 4 seconds each time (excluding the querying).
I was hoping I could store this network object (which is static) somewhere globally and just query it when needed, but I can't find an easy way to do so. Putting all the edges in a DB is not an option as it doesn't eliminate the time needed for parsing. | true | 24,231,504 | 1.2 | 0 | 0 | 4 | You could simply install it as a global variable. Call the function that loads it in a module-level context and import that module when you need it (or use a singleton pattern that loads it on first access, but it's basically the same thing).
You should never use a global variable in a webapp if you expect to alter the contents on the fly, but for static content there's nothing wrong with them.
Just be aware that if you put the import inside a function, then that import will run for the first time when that function is run, which means that the first time someone accesses a specific server after reboot they'll have to wait for the data to load. If you instead put the import in a module-level context so that it's loaded on app start, then your app will take four seconds (or whatever) longer to start in the first place. You'll have to pick one of those two performance hits -- the latter is probably kinder to users. | 0 | 118 | 0 | 2 | 2014-06-15T16:05:00.000 | python,django,python-2.7,networkx | Django large variable storage | 1 | 1 | 1 | 24,231,580 | 0 |
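A tiny sketch of the module-level approach: put the expensive load in a module that the views import, so it runs once per process; the file name and format below are placeholders:
    # network_store.py -- imported by the Django views that need the graph
    import networkx as nx

    # Runs once, when the module is first imported (i.e. at process start-up).
    GRAPH = nx.read_edgelist("edges.txt")

    # views.py
    # from network_store import GRAPH
    # ... query GRAPH inside the view; never mutate it from request handlers ...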
0 | 0 | I'm trying to run a hello world app in my dev environment using GAE and Python; however, even without doing any explicit URL request, urllib2 is having some problems with my proxy server.
I tried adding localhost to the exclusion list and it didn't work. If I disable the proxy on the machine it works perfectly.
How can I make it work without disabling the proxy for all programs? | false | 24,243,891 | 0 | 0 | 0 | 0 | If you are running your app from the terminal, using dev_appserver.py try using the --skip_sdk_update_check switch, it may be the SDK update check that is failing. | 0 | 27 | 1 | 0 | 2014-06-16T12:35:00.000 | python,google-app-engine,python-2.7 | Need to ignore proxy without disabling it on control panel | 1 | 1 | 2 | 24,250,751 | 0 |
0 | 0 | I am trying to run a Python script in which I am opening a webpage and clicking on some element. But the script is running very slowly and giving random exceptions.
Mostly it halts at line
driver = webdriver.Firefox()
Message -
selenium.common.exceptions.WebDriverException: Message: 'Can\'t load the profile. Profile ?Dir: /tmp/tmp4liaEq Firefox output: Xlib: extension "RANDR" missing on display ":1733".\n1403086712970\taddons.xpi\tDEBUG\tstartup\n1403086713204\taddons.xpi\tDEBUG\tcheckForChanges\n1403086713568\taddons.xpi\tDEBUG\tNo changes found\n'
Sometimes -
driver.find_element_by_xpath("//a[@id='some_id']")
Returns an error that the element is not visible and so can't be clicked.
The same script runs smoothly on my system that has 4GB RAM.
(EC2 system specs ~ 600mb memory)
I tried looking into the system and "top" command returned -
604332k total, 577412k used, 26920k free, 6616k buffers
I have installed Firefox and also Xvfb, since I am running Firefox headlessly | false | 24,282,955 | 0 | 0 | 0 | 0 | I was facing the same problem.
Running the scripts as root solved the problem.
Also, making the user running the test a sudoer worked. | 0 | 1,073 | 0 | 1 | 2014-06-18T10:16:00.000 | python,firefox,selenium,amazon-web-services,amazon-ec2 | Unable to run python selenium webdriver scripts over Amazon EC2 | 1 | 1 | 1 | 26,747,525 | 0
1 | 0 | I am scraping a webpage. The webpage consists of 50 entries. After 50 entries it gives a
Load more results button. I need to automatically select it. How can I do it? For scraping I am using Python and lxml. | true | 24,304,640 | 1.2 | 0 | 0 | 4 | JavaScript itself uses HTTP requests to get the data, so one method would be to investigate which requests provide the data when the user asks to "Load more results", and emulate those requests.
This is not traditional scraping, which is based on plain or rendered HTML content and detecting further links, but it can be a working solution.
Next actions:
visit the page in Google Chrome or Firefox
press F12 to start up Developer tools or Firebug
switch to "Network" tab
click "Load more results"
check which HTTP requests have served the data for loading more results and what data they return.
try to emulate these requests from Python
Note that the data does not necessarily come in HTML or XML form, but could be in JSON. Python provides enough tools to process this format too. | 0 | 2,379 | 0 | 3 | 2014-06-19T10:42:00.000 | python,web-scraping,lxml | How to select "Load more results" button when scraping using Python & lxml | 1 | 2 | 2 | 24,305,212 | 0
1 | 0 | I am scraping a webpage. The webpage consists of 50 entries. After 50 entries it gives a
Load more results button. I need to automatically select it. How can I do it? For scraping I am using Python and lxml. | false | 24,304,640 | 0.099668 | 0 | 0 | 1 | You can't do that. The functionality is provided by JavaScript, which lxml will not execute. | 0 | 2,379 | 0 | 3 | 2014-06-19T10:42:00.000 | python,web-scraping,lxml | How to select "Load more results" button when scraping using Python & lxml | 1 | 2 | 2 | 24,304,877 | 0
0 | 0 | I'd like to write a program that parses an online .docx file to build an XML document. I know (or at least I think I know) that browsers need a plug-in to view .docx in browser, but I'm not that familiar with plug-ins or how the work. After looking at a .docx file in Notepad++, it seems clear to me that I won't be able to parse the binary data. Is there a way to simulate the opening of the .docx file for my purposes (EDIT: that is, without downloading and saving the file to my hard drive) within the the abilities of any languages or libraries?
My question is more about the opening of the file without downloading than about the actual parsing of it, as I've looked into the Apache POI API for parsing the document in Java. | true | 24,310,407 | 1.2 | 0 | 0 | 4 | Let me try to make this clear.
If you are viewing it, then you have downloaded it. You are "downloading" this webpage in order for your browser to render it. You're "downloading" a link to a document which tells you that there is a document. You cannot view the document unless you download it.
Yes, you have to download it.
Downloading a file is just getting it from the remote server.
Of course, you don't have to write it to your hard drive. You can download it and store it in memory, and then deal with it from memory.
Once you open a connection, you get an InputStream object to read bytes. You can pass that into the Apache POI libraries to read the file. | 0 | 839 | 0 | 0 | 2014-06-19T15:27:00.000 | java,python,xml,regex,docx | Is it possible to read in and parse a .docx file that is linked to on a website without downloading the file (in Java, Python, or another language)? | 1 | 1 | 2 | 24,310,461 | 0 |
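For what it's worth, the same idea in Python (fetch into memory, nothing written to disk): a .docx file is a zip archive, so its main XML part can be read straight out of the downloaded bytes. This is only an illustration, not the Apache POI route the answer describes; the URL is a placeholder:
    import io
    import zipfile
    import urllib2   # Python 2; use urllib.request on Python 3

    data = urllib2.urlopen("http://example.com/some.docx").read()   # bytes, kept in memory
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        document_xml = zf.read("word/document.xml")                 # the main body of the .docx
    # document_xml can now be parsed with any XML library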
0 | 0 | I've been using Python to create a web app and it has been going well so far. Now I would like to encrypt the transmission of data between client and server using HTTPS. The communication is generally just posted forms and web pages; no money transactions are involved. Is there anything I need to change in the Python code besides setting the server up with a certificate and configuring it to use HTTPS? I see a lot of information regarding SSL for Python and I'm not sure if I need those modules and Python setup to make HTTPS work.
Thanks | true | 24,311,929 | 1.2 | 0 | 0 | 0 | Typically, the SSL part for a Python web app is managed by some frontend web server like nginx or apache.
This does not require any modification of your code (assuming you are not expecting the user to authenticate with an SSL client certificate, which is quite an exotic, but possible, scenario).
If you want to run a pure Python solution, I would recommend using CherryPy, which provides a rather reliable and performant web server part (it will very likely be slower than serving behind nginx or apache). | 0 | 45 | 0 | 2 | 2014-06-19T16:39:00.000 | python,ssl | Is there any thing needed for https python web page | 1 | 1 | 1 | 24,312,062 | 0
0 | 0 | How can I programmatically convert urllib2.Request object into the equivalent curl command?
My Python script constructs several urllib2.Request objects, with different headers and postdata. For debugging purposes, I'd like to replay each request with curl. This seems tricky, as we must consider Bash escaping and urllib2's default headers. Is there a simple way to do this? | false | 24,336,306 | 0 | 0 | 0 | 0 | Simple way would be to run wireshark to capture the desired requests and then replay them with a packet replay tool like TCPreplay. If you do want to modify parts in curl for debugging wireshark will show you all the headers urllib2 is setting so you can set them the same in curl. | 0 | 309 | 0 | 0 | 2014-06-20T22:01:00.000 | python,curl,urllib2,urllib | Translate a general urllib2.Request to curl command | 1 | 1 | 1 | 24,336,466 | 0 |
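For reference, a urllib2.Request carries enough information to build a curl command directly; a small Python 2 sketch of that idea (my own, not from the answer). It covers only the URL, explicitly added headers and POST data, not the default headers urllib2 adds at send time:
    import pipes   # use shlex.quote on Python 3

    def to_curl(req):
        parts = ["curl", "-X", req.get_method(), pipes.quote(req.get_full_url())]
        for name, value in req.header_items():
            parts += ["-H", pipes.quote("%s: %s" % (name, value))]
        if req.get_data():
            parts += ["--data", pipes.quote(req.get_data())]
        return " ".join(parts)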
0 | 0 | Using Python, I would like to pull data from NetSuite, along with adding/updating data in NetSuite. For example, I would like to create sales orders and add line items via Python.
I'm aware that they have a WSDL that I could potentially use. (And I was hoping that they would also have an API, but apparently not...) Does anyone have examples working with this WSDL in Python? Are there better ways to integrate with NetSuite? | false | 24,413,025 | 0.066568 | 0 | 0 | 1 | Netsuite has provided toolkits for Java, .Net and PHP to access their webservices. For other languages either there are third party toolkits or you have to send Raw SOAP requests.
For my Python-based projects I'm using the raw SOAP request method. I suggest that you first get familiar with NetSuite web services using any of the available toolkits and then, for Python, use this knowledge to generate raw SOAP requests. SoapUI can also be of great help.
Have you explored RESTlets? They are generally a good alternative to web services. | 1 | 6,680 | 0 | 4 | 2014-06-25T15:38:00.000 | python,wsdl,netsuite | Accessing NetSuite data with Python | 1 | 1 | 3 | 24,416,478 | 0
0 | 0 | I'm working on an online multiplayer game. I already developed the login servers and database for any persistent storage; both are written in Python and will be hosted with Google's App Engine. (For now.)
I'm relatively comfortable with two languages - Java and Python. I'd like to write the actual gameplay server in one of those languages, and I'd like for the latency of the client to gameplay-server connection to be as low as possible, so I assume that the majority of gameplay data (e.g. fine player movements) will need to be sent via UDP connections. I'm unfamiliar with UDP connections so I really don't know where to begin designing the server.
How should the server be threaded? One thread per client connection that retains session info, and then a separate thread(s) to control autonomous world changes (NPCs moving, etc.)?
How should relatively large packets be transmitted? (e.g. ~25 nearby players and all of their gameplay data, usernames, etc.) TCP or UDP?
Lastly - is it safe for the gameplay server to interface with the login server via HTTP requests, how do I verify (from the login server's perspective) the gameplay server's identity - simple password, encryption?
Didn't want to ask this kind of question because I know they're usually flagged as unproductive - which language would be better for me (as someone inexperienced with socketing) to write a sufficiently efficient server - assume equal experience with both?
Lastly - if this is relevant - I have not begun development on the client - not sure what my goals for the game itself are yet, I just want the servers to be scalable (up to ~150 players, beyond that I expect and understand that major rewrite will probably be necessary) and able to support a fair amount of players and open-world style content. (no server-taxing physics or anything like that necessary) | true | 24,417,793 | 1.2 | 0 | 0 | 1 | "I assume that the majority of gameplay data (e.g. fine player
movements) will need to be sent via UDP connections. I'm unfamiliar
with UDP connections so I really don't know where to begin designing
the server."
UDP can be lower latency, but sometimes, it is far more important that packets aren't dropped in a game. If it makes any difference to you, World of Warcraft uses TCP. If you chose to use UDP, you would have to implement something to handle dropped packets. Otherwise, what happens if a player uses an important ability (Such as a spell interrupt or a heal) and the packet gets dropped? You COULD use both UDP and TCP to communicate different things, but that adds a lot of complexity. WoW uses only a single port for all gameplay traffic, plus a UDP port for the in-game voice chat that nobody actually uses.
"How should the server be threaded? One thread per client connection that retains session info, and then a separate thread(s) to control autonomous world changes (NPCs moving, etc.)?"
One thread per client connection can end up with a lot of threads, but would be a necessity if you use synchronous sockets. I'm not really sure of the best answer for this.
"How should relatively large packets be transmitted? (e.g. ~25 nearby players and all of their gameplay data, usernames, etc.) TCP or UDP?"
This is what makes MMORPG servers so CPU and bandwidth intense. Every action has to be relayed to potentially dozens of players, possibly hundreds if it scales that much. This is more of a scaling issue than a TCP vs UDP issue. To be honest, I wouldn't worry much about it unless your game catches on and it actually becomes an issue.
"Lastly - is it safe for the gameplay server to interface with the login server via HTTP requests, how do I verify (from the login server's perspective) the gameplay server's identity - simple password, encryption?"
You could easily use SSL.
"Lastly - if this is relevant - I have not begun development on the client - not sure what my goals for the game itself are yet, I just want the servers to be scalable (up to ~150 players, beyond that I expect and understand that major rewrite will probably be necessary) and able to support a fair amount of players and open-world style content. (no server-taxing physics or anything like that necessary)"
I wouldn't use Python for your server. It is horrendously slow and won't scale well. It's fine for web servers and applications where latency isn't too much of an issue, but for a real-time game server handling 100+ players, I'd imagine it would fall apart. Java will work, but even THAT will run into scaling issues before a natively coded server does. I'd use Java to rapidly prototype the game and get it working, then consider a rewrite in C/C++ to speed it up later.
Also, something to consider regarding Python...if you haven't read about the Global Interpreter Lock, I'd make sure to do that. Because of the GIL, Python can be very ineffective at multithreading unless you're making calls to native libraries. You can get around it with multiprocessing, but then you have to deal with the overhead of communication between processes. | 0 | 1,168 | 0 | 0 | 2014-06-25T20:18:00.000 | java,python,multithreading,sockets,udp | Structuring a server for an online multiplayer game | 1 | 1 | 1 | 24,419,007 | 0 |
0 | 0 | I have an application written in Python and C++. I use SWIG to wrap the C++ parts. I'm interested in porting this application to work with Chrome native client (NaCl and/or PNaCl). I see that Python 2.7 is listed on the naclports page, so presumably it won't be a problem to run the Python code. But does it support C extensions? Will it be able to load my SWIG wrapper when running under native client? | true | 24,418,748 | 1.2 | 0 | 0 | 1 | Yes, the python port (BTW there is 2.7 as well as 3.3) in naclports supports C extensions. There are several in naclports already (see ports/python_modules).
I don't know if any of them use SWIG, but I don't think that would be a problem. | 0 | 199 | 0 | 0 | 2014-06-25T21:18:00.000 | python,swig,google-nativeclient | Use Python extensions with Chrome native client | 1 | 1 | 1 | 24,420,392 | 0
0 | 0 | I execute my Selenium tests on FF16, Selenium 2.33, Python on Linux. I have created separate firefox profiles corresponding to my devices.
I observed a directory 'webdriver-py-profilecopy' is created in tmp directory. And I see that this directory is deleted after completion of tests. But sometimes these directories are not cleared. The size of this directory is around 28mb. I want to change the tmp directory location.
Is there a way to change temp file location?
In Java webdriver provides a way to define our own temp directory. Is there a way to do it in Python webdriver
TemporaryFilesystem.setTemporaryDirectory | false | 24,424,745 | 0 | 0 | 0 | 0 | I used tempdir = tempfile.mkdtemp(suffix='foo',prefix='bar',dir=myTemp) and it worked.
Thanks | 0 | 1,294 | 0 | 0 | 2014-06-26T07:29:00.000 | python,firefox,selenium,webdriver | Change temporary file location in python webdriver | 1 | 1 | 1 | 24,430,044 | 0 |
0 | 0 | I'm trying to download an "onion" site. I was trying to use HTTrack and Internet Download Manager, unfortunately with no success.
How can I download a Tor "onion" website in depth of 1/2? | false | 24,427,882 | 0.066568 | 0 | 0 | 1 | Httrack does not work with Tor, but you can download TorCap2, start Tor, set proxy to 127.0.0.1:9150 (or other port, check it with netstat), enter program location and parameters. I use wget and it's working great. | 0 | 4,525 | 0 | 4 | 2014-06-26T10:16:00.000 | python,download,tor | Is it possible to download a Tor onion website? | 1 | 1 | 3 | 30,506,125 | 0 |
0 | 0 | I want to be able to edit existing XML config files via Python while preserving the formatting of the file and the comments in them so that its still human readable.
I will be updating existing XML elements and changing values as well as adding new XML elements to the file.
Available XML parsers such as ElementTree and lxml are great ways to edit XML files, but you lose the original formatting (when adding new elements to the file) and the comments that were in the file.
Using Regular expressions seems to be an option but I know that this is not recommended with XML.
So I'm looking for something along the lines of a Pythonic XML file editor. What is the best way to go about this? Thanks. | false | 24,453,842 | 0 | 0 | 0 | 0 | I recommend you to parse the XML document using a SAX parser, this gives you great flexibility to make your changes and to write back the document as it was.
Take a look at the xml.sax modules (see Python's documentation). | 0 | 1,103 | 0 | 2 | 2014-06-27T14:06:00.000 | python,xml | What is the best option for editing XML files in Python that preserve the original formatting of the file? | 1 | 1 | 2 | 24,454,049 | 0 |
0 | 0 | I am trying to automate emails using python. Unfortunately, the network administrators at my work have blocked SMTP relay, so I cannot use that approach to send the emails (they are addressed externally).
I am therefore using win32com to automatically send these emails via outlook. This is working fine except for one thing. I want to choose the "FROM" field within my python code, but I simply cannot figure out how to do this.
Any insight would be greatly appreciated. | true | 24,454,538 | 1.2 | 1 | 0 | 5 | If you configured a separate POP3/SMTP account, set the MailItem.SendUsingAccount property to an account from the Namespace.Accounts collection.
If you are sending on behalf of an Exchange user, set the MailItem.SentOnBehalfOfName property | 0 | 2,282 | 0 | 3 | 2014-06-27T14:39:00.000 | python,outlook,win32com | Choosing "From" field using python win32com outlook | 1 | 1 | 1 | 24,454,678 | 0 |
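A small sketch of the second case (sending on behalf of an Exchange user); the addresses and text are placeholders, and it assumes Outlook is installed and the account has send-on-behalf rights:
    import win32com.client

    outlook = win32com.client.Dispatch("Outlook.Application")
    mail = outlook.CreateItem(0)                              # 0 = olMailItem
    mail.SentOnBehalfOfName = "shared.mailbox@example.com"    # the "From" shown to the recipient
    mail.To = "someone@example.com"
    mail.Subject = "Automated report"
    mail.Body = "Hello from Python"
    mail.Send()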
1 | 0 | I am trying to copy a file from my aws ec2 instance to S3 bucket folder, but i am getting error
Here is the command sample
aws s3 cp /home/abc/icon.jpg s3://mybucket/myfolder
This is the error I am getting
upload failed: ./icon.jpg to s3://mybucket/myfolder/icon.jpg HTTPSConnectionPool(host='s3-us-west-1b.amazonaws.com', port=443): Max retries exceeded with url: /mybucket/myfolder/icon.jpg (Caused by : [Errno -2] Name or service not known)
I have already configured the config file for aws cli command line
Please suggest a solution to this problem | false | 24,487,444 | 0 | 0 | 0 | 0 | One possible issue could be that the proxy might not have been set in your instance service role. Configure the environment to point to your proxy servers via HTTP_PROXY / HTTPS_PROXY (since the above error shows port 443, it should be HTTPS_PROXY). | 0 | 11,516 | 0 | 4 | 2014-06-30T09:56:00.000 | python,amazon-web-services,amazon-s3,aws-cli | HTTPSConnectionPool(host='s3-us-west-1b.amazonaws.com', port=443): Max retries exceeded with url | 1 | 1 | 3 | 42,559,759 | 0
0 | 0 | We have following scenario:
Our service shows the bill amount to the user in INR (Indian Rupees). For example Rs 1000
We want to receive full payment in INR in our bank account. That is, we should get full Rs 1000 in our account.
User selects his preferred currency as one of the following: USD, GBP, CAD etc based on the credit card he is carrying.
All the extra charges like Paypal fee + currency conversion charges should be deducted from the user's credit card.
How can this be done via REST API? We are using Python REST SDK. | false | 24,496,445 | 0 | 0 | 0 | 0 | The merchant is generally responsible for paying transaction fees and conversion fees. As far as I know there is no way to programmatically force the cardholder to pay them. | 0 | 59 | 0 | 0 | 2014-06-30T18:18:00.000 | python,paypal | User's preferred currency is different from the product amount currency | 1 | 1 | 1 | 24,497,485 | 0 |
0 | 0 | I'm working on a client/server application in Python, where client and server share a lot of code.
What should the folder structure look like?
My idea is to have three folders with the code files in it
server
server.py
etc.
client
client.py
etc.
common
common.py
etc.
But how can I import from common.py in server.py when server.py has to be executable (can't be a package)?
Currently we have all files in the same folder but since the project got more complex this isn't manageable anymore. | true | 24,500,190 | 1.2 | 0 | 0 | 0 | One solution is to have the executable scripts all at the top folder like this:
server
server specific code
client
client specific code
common
common code
server.py (executable script that imports from server and common)
client.py (executable script that imports from client and common)
When deploying the server I just copy server.py, the server and the common folder. Similar for the client.
It's not the ideal solution and I'd be thankful if someone comes up with a better one but that is how I use it now. | 1 | 249 | 0 | 0 | 2014-06-30T23:01:00.000 | python,client-server | Layout for Client/Server project with common code | 1 | 1 | 1 | 24,532,689 | 0 |
1 | 0 | I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine.
I can access it through the browser or curl in the virtual machine using http://localhost:8080, and on the non-virtual machine (couldn't find a better way to say it) I use the IP of the virtual machine, also with port 8080.
However, when I try to access it through my android on the same wifi I get a connection refused error. | false | 24,557,707 | 0 | 0 | 0 | 0 | Both "localhost" and "127.0.0.1" are local loopback interfaces only: they only make sense within the same machine. From your Android device, assuming it's on the same wifi network as your machine, you'll need to use the actual IP address of your main machine: you can either find that from the network settings of that machine, or from your router's web interface. | 0 | 184 | 1 | 1 | 2014-07-03T15:25:00.000 | android,python,solr,webserver,tokyo-tyrant | Difficulty accessing local webserver | 1 | 3 | 4 | 24,558,290 | 0 |
1 | 0 | I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine.
I can access it through the browser or curl in the virtual machine using http://localhost:8080, and on the non-virtual machine (couldn't find a better way to say it) I use the IP of the virtual machine, also with port 8080.
However, when I try to access it through my android on the same wifi I get a connection refused error. | true | 24,557,707 | 1.2 | 0 | 0 | 0 | In case someone has the same problem, I solved it.
The connection has to be by cable and on the VMware Player settings the network connection has to be bridged, also you must click "Configure adapters" and uncheck the "VirtualBox Host-Only Ethernet Adapter". | 0 | 184 | 1 | 1 | 2014-07-03T15:25:00.000 | android,python,solr,webserver,tokyo-tyrant | Difficulty accessing local webserver | 1 | 3 | 4 | 24,635,024 | 0 |
1 | 0 | I have an Echoprint local webserver (uses tokyotyrant, python, solr) set up on a Linux virtual machine.
I can access it through the browser or curl in the virtual machine using http://localhost:8080, and on the non-virtual machine (couldn't find a better way to say it) I use the IP of the virtual machine, also with port 8080.
However, when I try to access it through my android on the same wifi I get a connection refused error. | false | 24,557,707 | 0 | 0 | 0 | 0 | Is the server bound to localhost or 0.0.0.0?
Maybe your host resolves that ip to some kind of a localhost as well, due to bridging. | 0 | 184 | 1 | 1 | 2014-07-03T15:25:00.000 | android,python,solr,webserver,tokyo-tyrant | Difficulty accessing local webserver | 1 | 3 | 4 | 24,557,803 | 0 |
0 | 0 | I'm analyzing tweets and need to find which state (in the USA) the user was in from their GPS coordinates. I will not have an internet connection available so I can't use an online service such as the Google Maps API to reverse geocode.
Does anyone have any suggestions? I am writing the script in python so if anyone knows of a python library that I can use that would be great. Or if anyone can point me to a research paper or efficient algorithm I can implement to accomplish this that would also be very helpful. I have found some data that represents the state boundaries in GPS coordinates but I can't think of an efficient way to determine which state the user's coordinates are in. | true | 24,604,661 | 1.2 | 1 | 0 | 2 | Use a point-in-polygon algorithm to determine if the coordinate is inside of a state (represented by a polygon with GPS coordinates as points). Practically speaking, it doesn't seem like you would be able to improve much upon simply checking each state one at a time, though some optimizations can be made if it's too slow.
However, parts of Alaska are on both sides of the 180th meridian, which causes problems. One solution would be to offset the coordinates a bit by adding 30 degrees modulus 180 to the longitude for each GPS coordinate (user coordinates and state coordinates). This has the effect of moving the 180th meridian about 30 degrees west and should be enough to ensure that the entire US is on one side of the 180th meridian. | 0 | 2,493 | 0 | 3 | 2014-07-07T06:59:00.000 | python,algorithm,gps,reverse-geocoding | Determine the US state from GPS coordinates without using online service | 1 | 1 | 3 | 24,612,105 | 0
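A compact ray-casting point-in-polygon test (polygon given as a list of (lon, lat) vertices), to be run against each state's boundary polygon as the answer suggests; this sketch ignores the Alaska/180th-meridian wrap-around discussed above:
    def point_in_polygon(lon, lat, polygon):
        """polygon: list of (lon, lat) vertices; returns True if the point is inside."""
        inside = False
        j = len(polygon) - 1
        for i in range(len(polygon)):
            xi, yi = polygon[i]
            xj, yj = polygon[j]
            # Toggle 'inside' each time a horizontal ray from the point crosses an edge.
            if ((yi > lat) != (yj > lat)) and \
               (lon < (xj - xi) * (lat - yi) / (yj - yi) + xi):
                inside = not inside
            j = i
        return inside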
0 | 0 | I am doing a lot of network development and I am starting new research.
I need to send a packet which will then cause another SYN packet to be sent.
This is how I want it to look:
I send syn --> --> sends another SYN before SYN/ACK packet.
How can I cause this?
I am using Scapy + Python. | false | 24,610,812 | 0 | 0 | 0 | 0 | I don't know if I understand you correctly. Is there any difference between your two SYN packets? If so, just create the two SYNs as you want and then send them together. If not, send the same packet twice using scapy.send(pkt, 2). I don't remember the specific parameters, but I'm sure scapy.send can send as many packets, as fast, as you like. | 0 | 270 | 1 | 0 | 2014-07-07T12:38:00.000 | python,tcp,scapy | How do I create a double SYN packet? | 1 | 1 | 1 | 24,612,638 | 0
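For reference, a minimal scapy sketch of sending the same SYN twice (destination address, port and sequence number are placeholders):
    from scapy.all import IP, TCP, send

    syn = IP(dst="192.0.2.10") / TCP(dport=80, flags="S", seq=1000)
    send(syn, count=2)       # emits the identical SYN twice, back to back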
0 | 0 | I can build Facebook login with Python Social Auth. But in order to access the full content of the site I want users to be authorized first. Would it be possible to get guidelines on how such a solution should be built? | false | 24,617,063 | 0 | 1 | 0 | 0 | You can still use the @login_required decorator | 0 | 145 | 0 | 0 | 2014-07-07T18:13:00.000 | python,django,python-social-auth | User authorization in Python Social Auth | 1 | 1 | 1 | 24,626,165 | 0
0 | 0 | How can I get Twitter information (number of followers, following, etc.) about a set of Twitter handles using the Twitter API?
I have already used the Python-Twitter library, but this only gives me information about my own Twitter account; I need the same for other Twitter users (I have a list).
Can you please guide me in the right direction, or refer me to some good blogs/articles? | true | 24,653,225 | 1.2 | 1 | 0 | 0 | If you want the latest tweets from specific users, Twitter offers the Streaming API.
The Streaming API is the real-time sample of the Twitter Firehose. This API is for those developers with data intensive needs. If you're looking to build a data mining product or are interested in analytics research, the Streaming API is most suited for such things.
If you're trying to access old information, the REST API with its severe request limits is the only way to go. | 0 | 58 | 0 | 0 | 2014-07-09T12:05:00.000 | python-2.7,twitter | Twitter API access using Python (newbie:Help Needed) | 1 | 1 | 1 | 24,654,111 | 0 |
1 | 0 | I'd like to detect if the browser made a request via AJAX (AngularJS) so that I can return a JSON array, or if I have to render the template. How can I do this? | false | 24,687,736 | 0.066568 | 0 | 0 | 1 | There isn't any way to be certain whether a request is made by ajax.
What I found that worked for me, was to simply include a get parameter for xhr requests and simply omit the parameter on non-xhr requests.
For example:
XHR Request: example.com/search?q=Boots&api=1
Other Requests: example.com/search?q=Boots | 0 | 5,005 | 0 | 15 | 2014-07-10T23:06:00.000 | python,ajax,angularjs,flask | How can I identify requests made via AJAX in Python's Flask? | 1 | 1 | 3 | 56,367,083 | 0 |
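A minimal Flask sketch of that convention; the endpoint, template name and do_search helper are placeholders:
    from flask import Flask, request, jsonify, render_template

    app = Flask(__name__)

    @app.route("/search")
    def search():
        results = do_search(request.args.get("q", ""))    # do_search is a placeholder
        if request.args.get("api"):                       # XHR callers pass ?api=1
            return jsonify(results=results)
        return render_template("search.html", results=results)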
1 | 0 | I'm using boto to upload and download files to S3 & Glacier.
How can I ratelimit/throttle the uploading and downloading speeds? | false | 24,702,818 | 0.066568 | 0 | 0 | 1 | The simplest would be to use traffic shaping tools under linux, like tc. These tools let you control bandwidth and even simulate network packet loss or even long distance communication issues. Easy to write a python script to control the port behavior via a shell. | 0 | 1,618 | 0 | 0 | 2014-07-11T16:53:00.000 | python,boto | How to throttle S3 & Glacier upload/download speeds with boto? | 1 | 1 | 3 | 41,159,662 | 0 |