Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | I'm scraping content from a website using Python. First I used BeautifulSoup and Mechanize in Python, but I saw that the website had a button that created content via JavaScript, so I decided to use Selenium.
Given that I can find elements and get their content using Selenium with methods like driver.find_element_by_xpath, what reason is there to use BeautifulSoup when I could just use Selenium for everything?
And in this particular case, I need to use Selenium to click on the JavaScript button, so is it better to use Selenium to parse as well, or should I use both Selenium and Beautiful Soup? | false | 17,436,014 | 0.197375 | 0 | 0 | 3 | I would recommend using Selenium for interacting with web pages, whether in a full-blown browser or in headless mode, such as headless Chrome. BeautifulSoup is better for inspecting what was fetched and writing logic that depends on whether an element is found, or on what is found; you can then use Selenium to execute the interactive tasks on the page if needed. | 0 | 40,675 | 0 | 55 | 2013-07-02T21:19:00.000 | javascript,python,selenium,beautifulsoup | Selenium versus BeautifulSoup for web scraping | 1 | 1 | 3 | 55,484,972 | 0
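As an illustration of the split described in this answer, a common pattern is to let Selenium drive the page and then hand driver.page_source to BeautifulSoup for parsing. This is a minimal sketch; the URL, button id and CSS class are made-up placeholders, not values from the question:

```python
# Minimal sketch of combining the two tools; URL and locators are hypothetical.
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-js-button")

# Use Selenium for the interaction BeautifulSoup cannot do.
driver.find_element_by_id("load-more").click()

# Hand the rendered HTML to BeautifulSoup for parsing.
soup = BeautifulSoup(driver.page_source, "html.parser")
for item in soup.find_all("div", class_="result"):
    print(item.get_text())

driver.quit()
```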
1 | 0 | I installed OpenERP v7 on my local machine. I made modifications to the CSS. I also removed some menus, changed the labels of some windows and changed the position of some menus (one after the other, in the order decided by the customer).
The required work is done and runs well on premises. Now I'm looking for a way to move my work to the server while keeping the changes, knowing that I worked directly through the OpenERP interface.
Does anyone have an idea? | true | 17,469,330 | 1.2 | 0 | 0 | 1 | This is not a generally accepted way of doing customization in OpenERP. Usually, you would make a custom module that implements your customization when installed on the OpenERP server.
Are you using Windows or Linux? The concept here is to move all of the server addons files to the remote server, together with a dump of the database, which can be restored on the remote server.
Here's how.
First, click Manage Databases at the login screen,
do a backup of the database and save the generated dump file.
Install OpenERP on the remote server (major versions must match).
Copy the server addons folder and upload it to the remote server's addons directory.
Restart the OpenERP service.
Then restore the dump file from your backup location.
This is basically how you can mirror a "customized" OpenERP installation across servers.
Hope this helps. | 0 | 2,217 | 0 | 0 | 2013-07-04T11:38:00.000 | python,xml,postgresql,openerp | after working in local server, how to move OpenERP on a remote server? | 1 | 1 | 1 | 17,473,961 | 0
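If you would rather script the backup step than click through Manage Databases, a hedged sketch along these lines is possible, assuming OpenERP 7's XML-RPC db service on the default port 8069; the master password and database name below are placeholders, and restoring on the remote server can still be done through its Manage Databases screen as described above:

```python
# Hypothetical sketch: pull a database dump from a local OpenERP 7 server
# over its XML-RPC "db" service. Host, port, master password and database
# name are placeholders.
import base64
import xmlrpclib

SUPER_ADMIN_PASSWORD = "admin"   # the master password, not a user login
DB_NAME = "my_database"

db_service = xmlrpclib.ServerProxy("http://localhost:8069/xmlrpc/db")
dump_b64 = db_service.dump(SUPER_ADMIN_PASSWORD, DB_NAME)

with open("%s.dump" % DB_NAME, "wb") as f:
    f.write(base64.b64decode(dump_b64))
```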
1 | 0 | Sorry about the awkward title.
I am building a Python API. Part of it involves sending and receiving data via an Amazon SQS queue to communicate with some stuff on an EC2 instance. I don't want to distribute the API with my Amazon keys in it, though.
What is the correct way around an issue like this? Do I have to write a separate layer that sits in front of SQS with my own authentication, or is there a way to add permissions to Amazon keys such that users could just send and receive messages to SQS but couldn't create additional queues or access any other web services? | false | 17,507,395 | 0 | 0 | 0 | 0 | It depends on your identity requirements. If it's OK for your clients to have AWS accounts, you can give their accounts permission to send messages to your queue. If you want your own identity, then yes, you would need to build a service layer in front of AWS to broker API requests. | 0 | 106 | 0 | 0 | 2013-07-06T21:39:00.000 | python,amazon-web-services,amazon-sqs | What is the correct way to expose an AWS in an API without giving out your keys? | 1 | 1 | 1 | 17,906,399 | 0
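As an illustration of the "scoped credentials" option rather than a broker layer, an IAM policy can be limited to a few SQS actions on a single queue. This is a hedged sketch using boto; the IAM user is assumed to already exist, and the user name, policy name and queue ARN are made up:

```python
# Hedged sketch: attach an inline IAM policy to a dedicated user so its keys
# can only send/receive/delete on one queue. All names and the ARN are
# placeholders.
import json
import boto

policy = {
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
        "Resource": "arn:aws:sqs:us-east-1:123456789012:my-api-queue",
    }]
}

iam = boto.connect_iam()
iam.put_user_policy("api-client-user", "sqs-send-receive-only", json.dumps(policy))
```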
1 | 0 | I am trying to give a user a temporary link to download files from an Amazon S3 bucket in Python.
I am using the generate_url method, which generates the URL for a specified time period.
My concern is that when this link is created, anyone, including the desired user, can hit this URL during that time period and get the files. How can I prevent other people from getting access to the files? | true | 17,546,608 | 1.2 | 0 | 0 | 1 | Make sure that the expiration time of the link is set to a very short time. Then make sure that you are communicating with the user via SSL, and that the link provided is SSL. When using an SSL connection, both the data from the page and the URL are encrypted, and no one 'sniffing' the data should be able to see anything.
The only other way to put a real lock down on a file like that would be to aggressively check the log files generated by the S3 bucket and check the link for abuse. The problem, however, is that the traffic for your link may take several hours to make it into the logs, but depending on how long you want these links to last, that time delay may be acceptable. Then, assuming you find abuse, such as several different IP addresses hitting the link, you can stop the hits by renaming the file on S3.
The ultimate option is for your server to grab the data off S3 and spoon-feed it to the customer. Then it is impossible for anyone to get the file unless you authenticate them and they remain in the session. The big downside, of course, is that you are taxing your server and defeating half the reason S3 is cool, namely that you don't have to serve the file, S3 does. But if your server is on Amazon EC2, there is no cost in pulling from S3; only the download to the customer would be charged. Additionally, EC2 instances can access and download data from S3 at local-network speed, and like I said, it's free. | 0 | 1,631 | 0 | 0 | 2013-07-09T11:04:00.000 | python,amazon-s3 | How to generate a secure temporary url to download file from Amazon S3? | 1 | 1 | 1 | 17,759,017 | 0
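For reference, a minimal boto sketch of the short-lived signed URL the answer recommends; the bucket and key names are placeholders:

```python
# Minimal sketch of a short-lived signed URL with boto; bucket and key names
# are placeholders.
from boto.s3.connection import S3Connection

conn = S3Connection()  # reads AWS credentials from the environment/boto config
bucket = conn.get_bucket("my-bucket")
key = bucket.get_key("reports/report.pdf")

# Expire after 60 seconds; the default scheme is https, so the signed URL is
# not sent in the clear.
url = key.generate_url(expires_in=60)
print(url)
```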
1 | 0 | I'm not exactly sure how to phrase my question but I'll give it my best shot.
If I load up a webpage, the HTML executes a JavaScript file. If I view the page source I can see the source of that JavaScript (though it's not very well formatted and hard to understand).
Is there a way to run the JavaScript from e.g. Python code, without going through the browser? I.e. if I wanted to access a particular function in that JavaScript, is there a clean way to call just that from a Python script and read the results?
For example... a webpage displays a number that I want access to. It's not in the page source because it's the result of a JavaScript call. Is there a way to call that JavaScript from Python? | false | 17,578,253 | 0 | 0 | 0 | 0 | Your question isn't very clear, but I'm guessing that you are trying to access the JavaScript console.
In Google Chrome:
Press F12
Go to the 'console' tab
In Mozilla Firefox with Firebug installed:
Open Firebug
Go to the 'console' tab
From the console you can execute JavaScript (call functions, access variables, etc.).
I hope this answered your question properly. | 0 | 119 | 0 | 1 | 2013-07-10T18:29:00.000 | javascript,python,html | Can I scrape data from web pages when the data comes from JavaScript? | 1 | 1 | 3 | 17,578,385 | 0 |
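The answer above works the console by hand; if the goal is to do the same thing from Python, one hedged option is to drive a real browser with Selenium and call the page's JavaScript through execute_script. The URL and function name in this sketch are made up:

```python
# Hedged sketch: load the page with Selenium and call one of its JavaScript
# functions directly. The URL and function name are hypothetical.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-computed-number")

# execute_script returns the value of the JavaScript expression to Python.
number = driver.execute_script("return computeTheNumber();")
print(number)

driver.quit()
```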
0 | 0 | What's the smartest way to try again when I get a socket timeout exception?
    try:
        opener = urllib.FancyURLopener(proxies)
        res = opener.open(req)
    except Exception as details:
        self.writeLog(details)
Let's say that for the above code I get a timeout error from the socket. I want to change the proxy and try again. How do I do that (my function cannot be recursive)? Should I use something like "while the error is not a socket timeout, keep doing this", or should I do a while True and change the proxy in the except part?
What's the smartest way? | false | 17,599,706 | 0 | 0 | 0 | 0 | Write a separate function that handles the connection and raises an exception if it fails. Then, in your main code, you can use a try/except inside a while loop to attempt the connection multiple times. | 0 | 153 | 0 | 0 | 2013-07-11T17:25:00.000 | python,sockets | python try if socket times out, try again | 1 | 1 | 2 | 17,599,907 | 0
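A minimal sketch of that structure, rotating through a list of proxies; the proxy list, request and logging call are placeholders, and it retries on any exception for simplicity:

```python
# Hedged sketch: a helper that attempts one fetch, and a loop that rotates
# proxies until one succeeds or the list is exhausted.
import urllib

def fetch(req, proxies):
    opener = urllib.FancyURLopener(proxies)
    return opener.open(req)      # raises IOError/socket.timeout on failure

def fetch_with_retries(req, proxy_list):
    for proxies in proxy_list:
        try:
            return fetch(req, proxies)
        except Exception as details:
            print("proxy %r failed: %s, trying the next one" % (proxies, details))
    raise RuntimeError("all proxies failed")
```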
1 | 0 | Brief idea: my web crawler has two main jobs, a collector and a crawler. The collector collects all of the URL items for each site and stores the non-duplicated URLs. The crawler grabs the URLs from storage, extracts the needed data and stores it back.
2 machines
Bot machine -> 8 cores, physical Linux OS (no VM on this machine)
Storage machine -> MySQL with clustering (VM for clustering), 2 databases (url and data); the url database on port 1 and the data database on port 2
Objective: crawl 100 sites and try to reduce the bottlenecks
First case: the collector *requests (urllib) all sites, collects the URL items for each site and *inserts each non-duplicated URL into the storage machine on port 1. The crawler *gets the URLs from storage port 1, *requests each site, extracts the needed data and *stores it back on port 2.
This causes a connection bottleneck both for requesting the web sites and for the MySQL connection.
Second case: instead of inserting across machines, the collector stores the URLs in my own mini file-system database. There is no *reading of a huge file (using OS commands), just *writing (appending) and *removing the header of the file.
This causes a bottleneck in the web site requests and (maybe) in the file I/O (read, write).
Both cases are also CPU bound because of collecting and crawling 100 sites.
As I have heard: for I/O-bound work use multithreading, for CPU-bound work use multiprocessing.
What about both? Scrapy? Any idea or suggestion? | false | 17,601,124 | 0 | 0 | 0 | 0 | Look into grequests; it doesn't do actual multithreading or multiprocessing, but it scales much better than both. | 0 | 763 | 1 | 0 | 2013-07-11T18:51:00.000 | python,multithreading,performance,multiprocessing,web-crawler | Python web crawler multithreading and multiprocessing | 1 | 1 | 1 | 17,601,331 | 0
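A hedged sketch of what the grequests suggestion looks like (gevent-based concurrent requests rather than explicit threads or processes); the URL list, pool size and timeout are placeholders:

```python
# Hedged sketch: grequests issues many HTTP requests concurrently without
# managing threads or processes yourself.
import grequests

urls = ["http://example.com/page/%d" % i for i in range(100)]
requests_to_send = (grequests.get(u, timeout=10) for u in urls)

# map() runs the requests concurrently (here at most 20 at a time) and
# returns the responses in order; failed requests come back as None.
for response in grequests.map(requests_to_send, size=20):
    if response is not None:
        print("%s %s" % (response.url, response.status_code))
```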
0 | 0 | If I issue multiple socket.connect() calls in different threads, would a single socket.connect() make the current thread wait for an ACK response, or would another thread be able to issue a socket.connect() in the meantime? | true | 17,617,535 | 1.2 | 0 | 0 | 1 | These two options, "the current thread waits for an ACK" and "another thread would be able to connect", are not mutually exclusive; both are true. That's the whole point of threads: one can continue while another is blocked. | 1 | 118 | 0 | 0 | 2013-07-12T14:39:00.000 | multithreading,python-2.6,asyncsocket | socket.connect() blocks other threads? | 1 | 1 | 1 | 17,619,173 | 0
0 | 0 | I need to append metadata to each message when publishing to the queue. The question is: which method is more efficient?
Add custom fields to every message body
Add custom headers to every message
Just in case:
Publisher is on AWS m1.small
Messages rate is less than 500 msgs/s
Rabbit library: pika (Python) | true | 17,623,431 | 1.2 | 1 | 0 | 2 | In terms of speed there is probably no clear answer to your question, since there are efficient parsing methods available to extract the metadata from your messages after they leave RabbitMQ.
But if you use the metadata to filter your messages, it would be more efficient to do that in RabbitMQ itself, since you can do that filtering inside RabbitMQ by using a headers exchange. | 0 | 377 | 0 | 1 | 2013-07-12T20:22:00.000 | python,rabbitmq,pika | What is more efficient: add fields to the message or create a custom header? RabbitMQ | 1 | 1 | 1 | 17,647,896 | 0
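For reference, publishing with custom headers in pika looks roughly like the sketch below; the exchange name, header keys and body are placeholders, and declaring/binding the headers exchange is left out:

```python
# Hedged sketch: publish a message with custom headers via pika so a headers
# exchange can route/filter on them.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

properties = pika.BasicProperties(headers={"source": "sensor-12", "kind": "reading"})
channel.basic_publish(exchange="events",
                      routing_key="",          # ignored by a headers exchange
                      body='{"value": 42}',
                      properties=properties)
connection.close()
```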
1 | 0 | I was wondering if anyone knew whether it is possible to enable programmatic billing for Amazon AWS through the API? I have not found anything on this, and I even went broader and looked for billing preferences or account settings through the API, still with no luck. I assume the API does not have this functionality, but I figured I would ask. | true | 17,627,389 | 1.2 | 1 | 0 | 0 | Currently, there is no API for doing this. You have to log into your billing preferences page and set it up there. I agree that an API would be a great feature to add. | 0 | 356 | 0 | 0 | 2013-07-13T05:51:00.000 | python,amazon-web-services,boto | Enable programmatic billing for Amazon AWS through API (python) | 1 | 1 | 1 | 17,630,560 | 0
1 | 0 | I want to use Windmill or Selenium to simulate a browser that visits a website, scrapes the content and, after analyzing the content, goes on with some action depending on the analysis.
As an example: the browser visits a website where we find, say, 50 links. While the browser is still running, a Python script can analyze the found links and decide which link the browser should click.
My big question is how many HTTP requests this takes using Windmill or Selenium. I mean, can these two programs simulate visiting a website in a browser and scrape the content with just one HTTP request, or would they make further internal requests to the website to get the links while the browser is still running?
Thanks a lot! | false | 17,639,299 | 0 | 0 | 0 | 0 | Selenium uses the browser, so the number of HTTP requests is not one. There will be multiple HTTP requests to the server for the JS, CSS and images (if any) referenced in the HTML document.
If you want to scrape the page with a single HTTP request, you need a scraper that only fetches what is present in the HTML source. If you are using Python, check out BeautifulSoup. | 0 | 728 | 0 | 0 | 2013-07-14T12:16:00.000 | python,selenium,selenium-webdriver,httprequest,windmill | Browser Simulation and Scraping with windmill or selenium, how many http requests? | 1 | 1 | 1 | 17,642,669 | 0
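A hedged sketch of that single-request approach (Python 2-style urllib2 plus BeautifulSoup); the URL is a placeholder, and no JS, CSS or images are fetched:

```python
# One urllib2 fetch, then parse the raw HTML for links with BeautifulSoup.
import urllib2
from bs4 import BeautifulSoup

html = urllib2.urlopen("http://example.com/").read()   # exactly one HTTP request
soup = BeautifulSoup(html, "html.parser")

links = [a.get("href") for a in soup.find_all("a", href=True)]
print("%d links found" % len(links))
```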
1 | 0 | Turns out that Flask sets request.data to an empty string if the content type of the request is application/x-www-form-urlencoded. Since I'm using a JSON body request, I just want to parse the json or force Flask to parse it and return request.json.
This is needed because changing the AJAX content type forces an HTTP OPTION request, which complicates the back-end.
How do I make Flask return the raw data in the request object? | false | 17,640,687 | 1 | 0 | 0 | 11 | Use request.get_data() to get the POST data. This works regardless of whether the data has content type application/x-www-form-urlencoded or application/octet-stream. | 0 | 23,453 | 0 | 17 | 2013-07-14T15:15:00.000 | python,ajax,post,flask | Flask - How do I read the raw body in a POST request when the content type is "application/x-www-form-urlencoded" | 1 | 1 | 4 | 46,003,806 | 0
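A minimal sketch of that inside a route, parsing the raw body as JSON yourself; the route path is a placeholder:

```python
# Hedged sketch: read the raw POST body with request.get_data() and decode it
# as JSON, regardless of the Content-Type the AJAX call sent.
import json
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/update", methods=["POST"])
def update():
    payload = json.loads(request.get_data())   # raw bytes of the POST body
    return jsonify(ok=True, received=payload)
```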
1 | 0 | I need to grab image frames from a webcam server using the Selenium Python module.
The webcam frames are shown in the Selenium browser (via an http:// .....html webpage).
The source code of the page shows me it's done with some JavaScript.
I tried to use:
capture_network_traffic() and captureNetworkTraffic
Not working; I also don't think this is a good way to do it.
I also don't want to use the capture_screenshot functions; they capture the whole canvas of the browser.
Any idea ?
Thank you. Regards. | true | 17,641,441 | 1.2 | 0 | 0 | 0 | First, what you'll need to do is navigate to the webpage with Selenium.
Then, analyse the page's HTML source, which the JavaScript will have rendered as a result of navigating to the page, to get the image URL. You can do this with Selenium or with an HTML parser.
Then you can easily download the image using wget or some other URL grabber.
You might not even need Selenium to accomplish this if the image is already there when you get the page. If that is the case, you can just use the URL grabber to fetch the page directly.
Let me know if you want more details or if you have any questions. | 0 | 439 | 0 | 0 | 2013-07-14T16:47:00.000 | python-2.7,selenium,selenium-webdriver | get image frames webcam from server with selenium python script | 1 | 1 | 1 | 17,718,816 | 0 |
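A hedged sketch of the approach this answer describes: let Selenium render the page, read the image URL from the DOM, and download it separately. The page URL and the img locator are placeholders:

```python
# Hedged sketch (Python 2-style urllib for the download step).
import urllib
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/webcam.html")

img = driver.find_element_by_tag_name("img")
frame_url = img.get_attribute("src")     # set by the page's JavaScript

urllib.urlretrieve(frame_url, "frame.jpg")
driver.quit()
```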
I have a working Scrapy scraper that does a login before proceeding to parse and retrieve data, but I want to open source it without exposing my username/password. Where should I factor these out to?
I was thinking of either storing them in the environment and then accessing them via os.environ, putting them in a settings.local.py file that is imported into settings.py, or just having a configuration file that is read by settings.py. | false | 17,644,019 | 0.197375 | 0 | 0 | 1 | The "standard" is to use the environment. Depending on what environment you're running in, you may not always be guaranteed a filesystem; however, env vars are usually present in most application environments. | 0 | 96 | 0 | 1 | 2013-07-14T21:35:00.000 | python,scrapy | Where should I keep the authentication settings when using Scrapy? | 1 | 1 | 1 | 17,644,278 | 0
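The environment-variable option is only a couple of lines in practice; the variable names here are placeholders:

```python
# Hedged sketch: read credentials from the environment inside the spider so
# nothing secret lives in the repository.
import os

USERNAME = os.environ["SCRAPER_USERNAME"]
PASSWORD = os.environ["SCRAPER_PASSWORD"]

# e.g. launched as: SCRAPER_USERNAME=me SCRAPER_PASSWORD=secret scrapy crawl mysite
```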
Recently, I've been attempting to figure out how I can find out what an unlabeled POST is and send to it using Python.
The issue is that I'm attempting to make a chat bot entirely in Python in order to increase my knowledge of the language. For this bot, I'm attempting to use a chat box that runs entirely on jQuery. The problem is that there are no obvious POST or GET requests associated with the chat-box submissions.
How can I figure out what POST and GET requests are being sent when a message is submitted, and somehow use that to send custom POST or GET requests from a chat bot?
Any help is appreciated, thanks. | false | 17,646,259 | 0 | 1 | 0 | 0 | You need a server in order to be able to receive any GET and POST requests; one of the easier ways to get that is to set up a Django project, which is ready in minutes, and then add custom views to handle the requests you want properly. | 0 | 60 | 0 | 0 | 2013-07-15T03:16:00.000 | python,post,get,chat | Python Send to and figure out POST | 1 | 1 | 1 | 17,646,320 | 0
0 | 0 | I've been learning about Python sockets and HTTP request/response handling these days. I'm still very much a novice at server programming, and I have a question regarding the fundamental idea behind chat websites.
On a chat website, like Omegle or Facebook's chat, how do two people talk to each other? Do sockets on their own computers connect directly to each other, OR... does person A send a message to the web server, and the server sends this message to person B, and vice versa?
Because in the first scenario both users can retrieve each other's IP, and in the second scenario, since you are connecting to a server, you cannot... right?
Thanks a lot for clearing up this confusion for me. I'm very new and I really appreciate any help from you guys! | false | 17,646,806 | 0 | 0 | 0 | 0 | Most chats will use a push-notification system. It keeps track of the people within a chat, and as it receives a new message for the chat, it pushes it to all the people currently in it. This protects the users from seeing each other. | 0 | 187 | 0 | 1 | 2013-07-15T04:31:00.000 | python,node.js,websocket,chat | the idea behind chat website | 1 | 2 | 2 | 17,646,861 | 0
0 | 0 | I've been learning about Python sockets and HTTP request/response handling these days. I'm still very much a novice at server programming, and I have a question regarding the fundamental idea behind chat websites.
On a chat website, like Omegle or Facebook's chat, how do two people talk to each other? Do sockets on their own computers connect directly to each other, OR... does person A send a message to the web server, and the server sends this message to person B, and vice versa?
Because in the first scenario both users can retrieve each other's IP, and in the second scenario, since you are connecting to a server, you cannot... right?
Thanks a lot for clearing up this confusion for me. I'm very new and I really appreciate any help from you guys! | true | 17,646,806 | 1.2 | 0 | 0 | 0 | Usually they both connect to the server.
There are a few reasons to do it this way. For example, imagine you want your users to see the last 10 messages of a conversation. Who's going to store this info? One client? Both? What happens if they use more than one PC/device? What happens if one of them is offline? Well, you will have to send the messages to the server; this way the server has the conversation history stored and always available.
Another reason: imagine that one user is offline. If the user is offline you can't do anything to contact him; you can't connect. So you will have to send messages to the server, and the server will notify the user once he is online.
So you are probably going to need a connection to the server (storing common info, providing offline messages, keeping track of active users...).
There is also another reason: if you want two users to connect directly, you need one of them to run a server listening on a (public IP):port and let the other connect to that IP:port. Well, that is a problem. If you use the clients->server model you don't have to worry about it, because you can open a port on the server easily, for everyone, without routers and NAT in between. | 0 | 187 | 0 | 1 | 2013-07-15T04:31:00.000 | python,node.js,websocket,chat | the idea behind chat website | 1 | 2 | 2 | 17,649,514 | 0
1 | 0 | All the files I upload to the tree.io tickets, projects, etc cannot be downloaded,
I am getting a 404 error.
I am running tree.io on a CentOS 6 box.
Any ideas to get this working, please? | true | 17,653,546 | 1.2 | 0 | 0 | 1 | After some time I found a solution for this.
I had to define MEDIA_ROOT as an absolute path in settings.py.
When I went to static/media there was no folder called attachments; I had to make one and give it the correct permissions so that Python could write to that folder.
This worked for me. | 0 | 109 | 0 | 2 | 2013-07-15T11:47:00.000 | python,django,apache,treeio-django | cant download files uploaded tree.io | 1 | 1 | 1 | 17,730,860 | 0
1 | 0 | I am working on a web application in which clicking on a link makes another popup window appear. The popup window is not an alert but a form with various fields to be filled in by the user, who then clicks "Next".
How can I handle/automate this popup window using Selenium?
Summary:
Click on the hyperlink (URL) - "CLICK HERE"
A user registration form appears as a popup
Data is to be filled in by the user
Click the Next/Submit button.
The user is then redirected to another page/form, 'User Personal Information Page'
Personal information is to be filled in by the user
Click "Next/Submit"
The popup disappears.
Now further processing continues on the original/base page. | false | 17,676,036 | 1 | 0 | 0 | 8 | If it's a new window or an iframe, you should use driver.switch_to_frame(webelement) or driver.switch_to_window(window_name). This should then allow you to interact with the elements within the popup.
After you've finished, you should then call driver.switch_to_default_content() to return to the main webpage. | 0 | 84,462 | 0 | 31 | 2013-07-16T12:02:00.000 | python,selenium,webdriver | Python webdriver to handle pop up browser windows which is not an alert | 1 | 1 | 2 | 17,676,233 | 0
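A minimal sketch of that window-handle dance, assuming the Selenium 2.x Python bindings; the URL, link text and field names are made-up placeholders:

```python
# Hedged sketch: remember the main window handle, open the popup, switch to
# the new handle, fill the form, then switch back.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/app")

main_window = driver.current_window_handle
driver.find_element_by_link_text("CLICK HERE").click()

# Switch to whichever handle is not the original window.
for handle in driver.window_handles:
    if handle != main_window:
        driver.switch_to_window(handle)
        break

driver.find_element_by_name("username").send_keys("some_user")
driver.find_element_by_name("next").click()
# ... fill the 'User Personal Information' form the same way ...

driver.switch_to_window(main_window)  # continue on the base page
driver.quit()
```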
0 | 0 | I'm trying to get Python to send the EOF signal (Ctrl+D) via Popen(). Unfortunately, I can't find any kind of reference for Popen() signals on *nix-like systems. Does anyone here know how to send an EOF signal like this? Also, is there any reference of acceptable signals to be sent? | true | 17,678,620 | 1.2 | 1 | 0 | 5 | EOF isn't really a signal that you can raise, it's a per-channel exceptional condition. (Pressing Ctrl+D to signal end of interactive input is actually a function of the terminal driver. When you press this key combination at the beginning of a new line, the terminal driver tells the OS kernel that there's no further input available on the input stream.)
Generally, the correct way to signal EOF on a pipe is to close the write channel. Assuming that you created the Popen object with stdin=PIPE, it looks like you should be able to do this. | 0 | 3,348 | 1 | 2 | 2013-07-16T14:00:00.000 | python-2.7,posix,popen,eof | Trying to send an EOF signal (Ctrl+D) signal using Python via Popen() | 1 | 1 | 1 | 17,712,430 | 0 |
0 | 0 | In Python, is there a way to detect whether a given network interface is up?
In my script, the user specifies a network interface, but I would like to make sure that the interface is up and has been assigned an IP address, before doing anything else.
I'm on Linux and I am root. | false | 17,679,887 | 0.066568 | 0 | 0 | 2 | You can read the content of the file /sys/class/net/<interface>/operstate. If the content is not down, then the interface is up. | 0 | 30,551 | 0 | 9 | 2013-07-16T14:50:00.000 | python,networking,ip,dhcp,network-interface | Python: check whether a network interface is up | 1 | 2 | 6 | 46,349,123 | 0
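A minimal sketch of that operstate check; the interface name is a placeholder:

```python
# Hedged sketch of the operstate check described in this answer.
def interface_is_up(interface="eth0"):
    with open("/sys/class/net/%s/operstate" % interface) as f:
        return f.read().strip() != "down"

print(interface_is_up("eth0"))
```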
0 | 0 | In Python, is there a way to detect whether a given network interface is up?
In my script, the user specifies a network interface, but I would like to make sure that the interface is up and has been assigned an IP address, before doing anything else.
I'm on Linux and I am root. | false | 17,679,887 | 1 | 0 | 0 | 13 | The interface can be configured with an IP address and not be up, so the accepted answer is wrong. You actually need to check /sys/class/net/<interface>/flags. If the content is read into a variable flags, then flags & 0x1 tells you whether the interface is up or not.
Depending on the application, the /sys/class/net/<interface>/operstate might be what you really want, but technically the interface could be up and the operstate down, e.g. when no cable is connected.
All of this is Linux-specific of course. | 0 | 30,551 | 0 | 9 | 2013-07-16T14:50:00.000 | python,networking,ip,dhcp,network-interface | Python: check whether a network interface is up | 1 | 2 | 6 | 46,932,803 | 0 |
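A hedged sketch of that flags test (the file holds a hex value such as 0x1003, and bit 0x1 is IFF_UP); the interface name is a placeholder:

```python
# Hedged sketch of the flags check from this answer.
def interface_is_up(interface="eth0"):
    with open("/sys/class/net/%s/flags" % interface) as f:
        flags = int(f.read().strip(), 16)   # file contains e.g. "0x1003"
    return bool(flags & 0x1)                # 0x1 is the IFF_UP bit

print(interface_is_up("eth0"))
```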
0 | 0 | So, I have an application in Python, but I want to know if the computer (which is running the application) is on, from another remote computer.
Is there any way to do this? I was thinking of using UDP packets to send some sort of keep-alive, using a counter. E.g. every 5 minutes the client sends a UDP 'keep-alive' packet to the server. Thanks in advance! | false | 17,682,818 | 0 | 0 | 0 | 0 | Yes, that's the way to go, kind of like sending a heartbeat ping. Since it's UDP and it's just a small message, you can reduce the interval to, say, 10 seconds. This should not cause any measurable system performance degradation since it's just two systems we are talking about.
I feel that here UDP might be better than TCP. It's lightweight, does not consume a lot of system resources and is theoretically faster. The downside is that there could be packet loss. You can work around that with some logic such as: when 10 consecutive packets (spaced 10 seconds apart) are not received, declare the other system unreachable. | 0 | 20,956 | 1 | 9 | 2013-07-16T17:06:00.000 | python,udp,heartbeat | Python - Detect if remote computer is on | 1 | 1 | 5 | 17,682,961 | 0
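A minimal, Python 2-style sketch of the sender side of such a heartbeat; the host, port and payload format are placeholders, and the receiving side would simply track the time of the last packet per client:

```python
# Hedged sketch: send a small UDP datagram with an incrementing counter
# every 10 seconds.
import socket
import time

MONITOR_ADDR = ("monitor.example.com", 9999)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
counter = 0
while True:
    sock.sendto("alive %d" % counter, MONITOR_ADDR)
    counter += 1
    time.sleep(10)
```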
1 | 0 | I need to create an application that will do the following:
Accept request via messaging system ( Done )
Process request and determine what script and what type of instance is required for the job ( Done )
Launch an EC2 instance
Upload custom scripts (probably from GitHub or maybe an S3 bucket)
Launch a script with a given argument.
The question is: what is the most efficient way to do steps 3, 4 and 5? Don't get me wrong; right now I'm doing the same thing with a script that does all of this:
launch an instance,
use user_data to download the necessary dependencies,
then SSH into the instance and launch a script.
My question is really: is that the only way to handle this type of work, or maybe there is an easier way to do this?
I was looking at OpsWorks, and I'm not sure if it is the right thing for me. I know I can do steps 3 and 4 with it, but how about the rest?
Launch a script with a given argument
Trigger OpsWorks to launch an instance when a request comes in
By the way, I'm using Python and boto to communicate with AWS services. | false | 17,686,939 | 0.099668 | 1 | 0 | 1 | You can use knife bootstrap; this can be one way to do it. You can use the AWS SDK to do most of it:
Launch an instance
Add a public IP (if it's not in a VPC)
Wait for the instance to come back online
Use knife bootstrap to supply the script, set up chef-client and update the system
Then use a Chef cookbook to set up your machine | 0 | 218 | 0 | 0 | 2013-07-16T21:04:00.000 | python,amazon,chef-infra,boto,aws-opsworks | Do I need to SSH into EC2 instance in order to start custom script with arguments, or there are some service that I don't know | 1 | 1 | 2 | 17,691,484 | 0
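As an illustration of scripting the launch with boto (which the question already uses): user_data can carry a bootstrap script that downloads and runs the job script with its arguments, so no interactive SSH step is needed. The AMI id, key name, region and URLs below are placeholders:

```python
# Hedged sketch: start an instance and pass it a user_data script that pulls
# and runs the job script.
import boto.ec2

user_data = """#!/bin/bash
wget -O /tmp/job.py https://example.com/scripts/job.py
python /tmp/job.py --job-id 42
"""

conn = boto.ec2.connect_to_region("us-east-1")
reservation = conn.run_instances(
    "ami-12345678",
    instance_type="m1.small",
    key_name="my-key",
    user_data=user_data,
)
print(reservation.instances[0].id)
```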
0 | 0 | I'm trying to import premailer in my project, but it keeps failing at the etree import. I installed the 2.7 binary for lxml. The lxml module imports fine, and it's showing the correct path to the library folder if I log the lxml module, but I can't import etree from it. There's an etree.pyd in the lxml folder but python can't seem to see\read it.
I'm on windows7 64bit.
Does anyone know what's going wrong here? | false | 17,688,959 | 0.033321 | 0 | 0 | 1 | Install premailer using
pip install premailer | 1 | 17,785 | 0 | 6 | 2013-07-17T00:02:00.000 | python,google-app-engine,lxml | ImportError: No module named lxml.etree | 1 | 3 | 6 | 62,369,096 | 0 |
0 | 0 | I'm trying to import premailer in my project, but it keeps failing at the etree import. I installed the 2.7 binary for lxml. The lxml module imports fine, and it's showing the correct path to the library folder if I log the lxml module, but I can't import etree from it. There's an etree.pyd in the lxml folder but python can't seem to see\read it.
I'm on windows7 64bit.
Does anyone know what's going wrong here? | false | 17,688,959 | 0 | 0 | 0 | 0 | Try using etree without importing it directly, like lxml.etree(); I think it is a function, not a module,
or install it if it is a module. | 1 | 17,785 | 0 | 6 | 2013-07-17T00:02:00.000 | python,google-app-engine,lxml | ImportError: No module named lxml.etree | 1 | 3 | 6 | 17,689,004 | 0
0 | 0 | I'm trying to import premailer in my project, but it keeps failing at the etree import. I installed the 2.7 binary for lxml. The lxml module imports fine, and it's showing the correct path to the library folder if I log the lxml module, but I can't import etree from it. There's an etree.pyd in the lxml folder but python can't seem to see\read it.
I'm on windows7 64bit.
Does anyone know what's going wrong here? | false | 17,688,959 | 0 | 0 | 0 | 0 | Try:
from lxml import etree
or
import lxml.etree <= This worked for me instead of lxml.etree() | 1 | 17,785 | 0 | 6 | 2013-07-17T00:02:00.000 | python,google-app-engine,lxml | ImportError: No module named lxml.etree | 1 | 3 | 6 | 17,689,259 | 0 |
I'm trying to:
Log into a server using SSH (with Paramiko)
Use that connection like a proxy and route network traffic through it and out to the internet, so that I could set it as my proxy in urllib2, Mechanize, Firefox, etc.
Is the second part possible or will I have to have some sort of proxy server running on the server to get this to work? | false | 17,689,822 | 0 | 1 | 0 | 0 | You could implement a SOCKS proxy in the paramiko client that routes connections across the SSH tunnel via paramiko's open_channel method. Unfortunately, I don't know of any out-of-the-box solution that does this, so you'd have to roll your own. Alternatively, run a SOCKS server on the server, and just forward that single port via paramiko. | 0 | 1,036 | 0 | 0 | 2013-07-17T01:51:00.000 | python,ssh,proxy,paramiko,tunnel | Python Proxy Through SSH | 1 | 1 | 1 | 17,690,326 | 0 |
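Not a full SOCKS proxy, but a hedged sketch of the paramiko primitive the answer refers to: open_channel with kind 'direct-tcpip' tunnels one TCP connection through the SSH session, and a local SOCKS server would open one such channel per incoming connection. The host names, port and credentials are placeholders:

```python
# Hedged sketch of tunnelling a single TCP connection through SSH.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("ssh-gateway.example.com", username="me", password="secret")

channel = client.get_transport().open_channel(
    "direct-tcpip",
    ("www.example.com", 80),     # where the remote end should connect
    ("127.0.0.1", 0),            # originating address reported to the server
)
channel.send("GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
print(channel.recv(4096))
client.close()
```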
1 | 0 | Currently I am using a web application (LAMP stack) with a REST API to communicate with clients (Python desktop applications).
Clients can register with the server and send their state to the server through the REST API.
Now I need to push notifications to selected clients from the web application (server).
My question is: how can I send push notifications from the server (PHP) and read them from the clients (Python)? | false | 17,719,300 | 0.099668 | 1 | 0 | 1 | So basically you can query your server at some interval (interval ~ 0 == real time) and ask whether it has any news.
Normally Apache can't deal with long-waiting connections because of its thread/fork request-processing model.
You can try switching to nginx, which uses socket multiplexing (select/epoll/kqueue), so it can deal with many concurrent long-waiting connections.
Or you can think about Node.js and replace your PHP app with it; it is built exactly for this purpose.
A nice solution is also some web framework/language + Redis pub/sub functionality + Node.js. You can make normal requests to your web application, but also keep an open connection to the Node.js server, and the Node.js server will notify your client when needed. If you want to tell Node.js to inform some clients, you can do it from your web app through Redis pub/sub. | 0 | 1,317 | 0 | 1 | 2013-07-18T09:16:00.000 | php,python,rest | Push Notifications server to selected clients | 1 | 1 | 2 | 17,719,764 | 0
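As one illustration of the Redis pub/sub piece mentioned above, a Python desktop client could also subscribe to a Redis channel directly (this sketch skips the Node.js layer the answer describes); the host and channel name are placeholders:

```python
# Hedged sketch: a Python client listening on a Redis channel that the PHP
# side publishes notifications to.
import redis

r = redis.StrictRedis(host="redis.example.com", port=6379)
pubsub = r.pubsub()
pubsub.subscribe("notifications:client-42")

for message in pubsub.listen():
    if message["type"] == "message":
        print("got notification: %s" % message["data"])
```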
I am quite new to Python, and recently I wanted to send some files using Python. I quickly found out about sockets, but I searched for a ready-made solution because I thought client-server communication is such a common use case that there must be some kind of library (or maybe it's just my Java background and what I got used to :D). All the answers about sending files that I found mentioned sockets and that 'you have to write a protocol yourself'.
So here's my question: is there any library or ready-made protocol for client-server communication in Python (preferably 2.7)? | false | 17,747,053 | 0 | 0 | 0 | 0 | If you use sockets, you can use SSH and then do scp (secure copy). If you are moving files back and forth, that would probably be the easiest way. | 0 | 170 | 0 | 0 | 2013-07-19T13:15:00.000 | python,python-2.7,io,client-server,communication | Sending files in Python | 1 | 1 | 4 | 17,747,132 | 0
1 | 0 | Hi, I have created an OpenERP module using Python (Eclipse). I want to add a feature to my form so that the admin will be able to create his own fields, whenever and whatever he wants. I need some guidance on how this can be done. As I am new to OpenERP, any help will be appreciated. Thanks
Hoping for advice | false | 17,780,235 | 0 | 0 | 0 | 0 | I can't think of any easy way of doing this. When OpenERP connects to a database it sets up a registry containing all models and all their fields, and as part of this it loads the fields into the database, performs database refactoring, etc. The idea is that it is simple to inherit existing models and add your fields that way, but it does require coding.
I have done something similar where:
I predefined some fields on your model (field1, intfield1, charfield1 etc.).
Provide a model/form so the admin can say use intfield1 and give it a label of 'My Value'
Override fields_view_get on your model and change the XML to include your field with the correct label.
But this is tricky to get right. You will want to spend some time learning the elementtree module to do the XML manipulation in the fields_view_get. | 0 | 228 | 0 | 0 | 2013-07-22T05:02:00.000 | python,eclipse,openerp | how to make dynamic field creation capability in openerp module? | 1 | 1 | 1 | 17,897,177 | 0 |
0 | 0 | I want to read my Internet router's indicator lights (power + ADSL) to simulate the lights in real time, as my modem is located far from my room and my DSL keeps disconnecting frequently, so it becomes a pain to check its status every time the lights go out...
How can I do this in Python? (I have read about pySerial but haven't found a way to do it.)
Thanks | false | 17,815,798 | 0 | 0 | 0 | 0 | A very common way of monitoring router status is by using SNMP.
As a summary, for a small home router, it involves at least the following steps:
Enable SNMP on your router (this usually just means setting up "SNMP communities", which are the passwords for reading and writing). If your router allows it, I recommend adding an IP filter as well, to restrict which IP addresses are allowed to send SNMP queries.
Read SNMP OIDs (for instance, the interface status list, interface usage stats, etc.); you can use snmpwalk for testing.
Once you can read standard values with snmpwalk, you can load your router's MIB file into your local MIB repository (that enables you to read attributes specific to your router manufacturer).
Set up your monitoring software (Nagios is a very popular one). As you said you will use Python, I believe you can use pysnmp. | 0 | 400 | 0 | 1 | 2013-07-23T16:27:00.000 | python,networking,router,modem | Read traffic light from Router to simulate lights on python/shell script | 1 | 1 | 1 | 17,822,488 | 0
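A hedged sketch of that last step, assuming pysnmp 4.x and an SNMP-enabled router; the router IP, community string and interface index are placeholders, and the OID queried is IF-MIB::ifOperStatus:

```python
# Hypothetical sketch: ask the router for the operational status of one
# interface via SNMP (1 = up, 2 = down).
from pysnmp.entity.rfc3413.oneliner import cmdgen

generator = cmdgen.CommandGenerator()
error_indication, error_status, error_index, var_binds = generator.getCmd(
    cmdgen.CommunityData("my-agent", "public"),
    cmdgen.UdpTransportTarget(("192.168.1.1", 161)),
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 8, 1),   # IF-MIB::ifOperStatus.1
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print("%s = %s" % (name.prettyPrint(), value.prettyPrint()))
```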
1 | 0 | I'm using Python Tornado as a web server, and I have a backend server and a frontend server. I want to create a browser-frontend-backend connection. Can anyone help me do this? I know how to create a websocket connection between the frontend and the browser, but I have no idea how to connect my frontend server to the backend server to stream real-time data parsed by my backend server. | false | 17,824,526 | 0 | 0 | 0 | 0 | WebSocket was designed for low-latency bidirectional browser<->service communication. It sits on top of TCP/IP and brings along some overhead. It was designed to solve problems that you simply do not have in front-end<->back-end communication, because there we're talking about a defined environment which is under your control. Hence, I would recommend going back to basics and doing simple TCP/IP communication between your front-end and back-end. | 0 | 1,390 | 1 | 0 | 2013-07-24T03:07:00.000 | python-2.7,websocket,tornado | Websocket connection between two servers | 1 | 1 | 2 | 17,835,810 | 0
1 | 0 | I am using Bottle for a proof-of-concept RESTful service project. Would someone kindly let me know the best way to decide whether the caller wants me to send the response as JSON, XML, or HTML? I have seen some examples of this using request.mimetypes.best_match, but that requires me to import Flask. Is there a way to do this in Bottle itself?
Thanks a lot,
Reza | false | 17,830,806 | 0 | 0 | 0 | 0 | The request MIME type (or Content-Type) is the type of the content being sent to the server - it does not mean this is the same type that should be returned by the server.
The client should know what the server's response type is going to be, and not the other way around - the server should not "guess" what response the client wants. | 0 | 377 | 0 | 0 | 2013-07-24T10:02:00.000 | python,format,return,bottle | python bottle deciding on return mimetype | 1 | 1 | 2 | 17,832,612 | 0
1 | 0 | I'm using the GAE remote API to access the datastore of my app. The authentication to GAE is done using remote_api_stub.ConfigureRemoteApi with an authentication function that returns a user name and a password.
Is there a way to authenticate using an access_token, for example with OAuth or OAuth 2.0? | false | 17,857,138 | 0 | 0 | 0 | 0 | You can't use OAuth2 to connect to your app with remote_api_stub/shell. This option is not provided. | 0 | 907 | 1 | 5 | 2013-07-25T11:45:00.000 | python,google-app-engine,oauth-2.0,google-oauth | Google App Engine Remote API + OAuth | 1 | 1 | 2 | 20,981,524 | 0
1 | 0 | I have a webpage showing some data. I have a Python script that continuously updates the data (fetches the data from a database and writes it to the HTML page). It takes about 5 minutes for the script to fetch the data. I have the HTML page set to refresh every 60 seconds using the meta tag. However, I want to change this and have the page refresh as soon as the Python script updates it, so basically I need to add some code to my Python script that refreshes the HTML page as soon as it's done writing to it.
Is this possible? | true | 17,867,471 | 1.2 | 0 | 0 | 1 | Without diving into complex modern things like WebSockets, there's no way for the server to 'push' a notice to a web browser. What you can do, however, is make the client check for updates in a way that is not visible to the user.
It will involve writing JavaScript and writing an extra file. When generating your main webpage, add a timestamp inside the JavaScript (a Unix timestamp will be easiest here). You also write that same timestamp to a file on the web server (let's call it updatetime.txt). Using an AJAX request on the page, you pull in updatetime.txt and see if the number in the file is bigger than the number stored when you generated the document; refresh the page if you see an updated time. You can alter how 'instantly' the changes get noticed by controlling how quickly you poll.
I won't go into too much detail on writing the code, but I'd probably just use $.ajax() from jQuery (even though it's sort of overkill for one function) to make the calls. The trick to putting something on a timer in JS is setInterval. You should be able to find plenty of documentation on using both of them already written. | 0 | 3,417 | 0 | 2 | 2013-07-25T19:50:00.000 | python,html,refresh | Refresh Webpage using python | 1 | 1 | 1 | 17,868,479 | 0
0 | 0 | I am running a simulation and transmitting data through XML-RPC to a remote client. I'm using a thread to run the XML-RPC part.
But for some reason, the program runs really slowly until I make a request from any of the clients that connect. After the very first request, the program then runs fine.
I have a class that inherits from threading.Thread, which I use to start the XML-RPC part.
I cannot really show you the code, but do you have any suggestions as to why this is happening?
Thanks, and I hope my question is clear enough | false | 17,869,727 | 0 | 1 | 0 | 0 | In Python, due to the GIL, threads don't really execute in parallel. If the RPC part is waiting in an active way (a loop polling for connections instead of blocking), you will most probably get the behavior you are describing. However, without seeing any code, this is just a wild guess. | 0 | 171 | 0 | 0 | 2013-07-25T22:15:00.000 | python,multithreading,performance,request,xml-rpc | XML-RPC Python Slow before first request | 1 | 1 | 1 | 17,869,783 | 0
1 | 0 | Both middlewares can process requests and responses, but what is the difference? | true | 17,872,753 | 1.2 | 0 | 0 | 21 | While they have almost identical interfaces, they serve different purposes:
Downloader middlewares modify requests and responses or generate requests in response to responses. They don't directly interact with spiders. Some examples are middlewares that implement cookies, caching, proxies, redirects, setting user-agent headers, etc. They just add functionality to the downloader system.
Spider middlewares modify things that pass in and out of spiders, like requests, items, exceptions, and start_requests. They do share some basic functionality with downloader middlewares, but they can't generate requests in response to responses. They stand between the spiders and the downloader. One example is filtering out responses with bad HTTP status codes.
Some middlewares can function as either a downloader middleware or a spider middleware, but they're often trivial and will be forced into one category or the other once you add more complex functionality. | 0 | 3,140 | 0 | 18 | 2013-07-26T04:10:00.000 | python,scrapy,web-crawler | What is the difference between Scrapy's spider middleware and downloader middleware? | 1 | 1 | 1 | 17,872,865 | 0 |
0 | 0 | I'm trying to test with Selenium WebDriver. My version of Selenium is 2.33 and the browser is Firefox. The scripting language is Python.
Now, when I call the method find_element_by_xpath(blabla) and the widget does not exist, the program just gets stuck there with no exception shown. It's just stuck. By the way, I have tried find_element_by_id, find_element_by_name and find_elements, and changed Firefox to 3.5, 14.0, 21.0, 22.0. The problem always shows up.
Has anybody ever had this problem?
I just want an exception instead of getting stuck. Help... | false | 17,919,503 | 0 | 0 | 0 | 0 | You can use the WebDriverWait function if you are sure that the element is in your document. Import WebDriverWait at the beginning with from selenium.webdriver.support.ui import WebDriverWait, together with from selenium.webdriver.common.by import By and from selenium.webdriver.support import expected_conditions as EC if you haven't already; then use WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.ID, "blabla")))
That's all we need to do. I hope this will help you. | 0 | 1,799 | 0 | 1 | 2013-07-29T08:42:00.000 | python,selenium | Selenium Webdriver stuck when find_element method called with a non-existed widget | 1 | 1 | 2 | 18,606,843 | 0 |
0 | 0 | I had a test script with Selenium RC. Now, I don't know what could happen during the test, but I want the browser to be shut down if the program is killed accidentally. BTW, the script is running in a sub-process.
I tried the __del__ method but it doesn't work, and I don't see any exceptions. So I have no idea where to put try/except or with blocks.
If I run the script in the main process it works fine. Any help?
The version of Selenium is 2.33. The browser is Firefox 21.0 | false | 17,920,065 | 0 | 0 | 0 | 0 | Make the __del__ method return a boolean:
True on success, false otherwise.
If it returns false, close the driver in main. | 0 | 73 | 0 | 0 | 2013-07-29T09:13:00.000 | python,selenium | How can I close the browser when program is killed accidentally in selenium? | 1 | 1 | 1 | 17,929,109 | 0 |
0 | 0 | We have a server, written using tornado, which sends asynchronous messages to a client over websockets. In this case, a javascript app running in Chrome on a Mac. When the client is forcibly disconnected, in this case by putting the client to sleep, the server still thinks it is sending messages to the client. Additionally, when the client awakens from sleep, the messages are delivered in a burst.
What is the mechanism by which these messages are queued/buffered? Who is responsible? Why are they still delivered? Who is reconnecting the socket? My intuition is that even though websockets are not request/response like HTTP, they should still require ACK packets since they are built on TCP. Is this being done on purpose to make the protocol more robust to temporary drops in the mobile age? | false | 17,934,427 | 0.197375 | 0 | 0 | 1 | Browsers may handle websocket client messages in a separate thread, which is not blocked by sleep.
Even if a thread of your custom application is not active, for instance when you force it to sleep (like sleep(100)), the TCP connection is not closed. The socket handle is still managed by the OS kernel, and the TCP server still sends messages until the TCP client's receive window is exhausted. Even after this, an application on the server side can still submit new messages successfully; they are buffered at the TCP level on the server side until the TCP outgoing buffer overflows. When the outgoing buffer is full, the application should get an error code on the send request, like "no more space". I have not tried it myself, but it should behave like this.
Try closing the client (terminating the process) and you will see a totally different picture: the server will notice the disconnect.
Both cases, disconnect and overflow, are difficult to handle on the server side in highly reliable scenarios. The disconnect case can be converted to the overflow case (the websocket server can buffer messages in user space, up to some limit, while the client is being reconnected). However, there is no easy way to reliably handle an overflow of the transmit buffer limit. I see only one solution: propagate the overflow error back to the originator of the event that raised the message which was discarded due to overflow. | 0 | 1,868 | 1 | 0 | 2013-07-29T21:22:00.000 | python,websocket,tornado | WebSocket messages get queued when client disconnected | 1 | 1 | 1 | 18,775,244 | 0
0 | 0 | I'm trying to make a simple script in Python that will scan a tweet for a link and then visit that link.
I'm having trouble determining which direction to go from here. From what I've researched, it seems that I could use Selenium or Mechanize, which can be used for browser automation. Would using these be considered web scraping?
Or
I can learn one of the Twitter APIs, the Requests library, and Pyjamas (which converts Python code to JavaScript) so I can make a simple script and load it as a Google Chrome/Firefox extension.
Which would be the better option to take? | false | 17,937,010 | 0.049958 | 1 | 0 | 1 | I am not an expert in web scraping, but I have some experience with both Mechanize and Selenium. I think in your case either Mechanize or Selenium will suit your needs well, but also spend some time looking into these Python libraries: Beautiful Soup, urllib and urllib2.
In my humble opinion, I would recommend Mechanize over Selenium in your case, because Selenium is not as lightweight as Mechanize. Selenium is used for emulating a real web browser, so you can actually perform 'click' actions.
There are some drawbacks to Mechanize. You will find Mechanize gives you a hard time when you try to click a button-type input. Also, Mechanize doesn't understand JavaScript, so many times I had to mimic what the JavaScript was doing in my own Python code.
Last piece of advice: if you somehow decide to pick Selenium over Mechanize in the future, use a headless browser like PhantomJS rather than Chrome or Firefox, to reduce Selenium's computation time. Hope this helps and good luck. | 0 | 1,466 | 0 | 1 | 2013-07-30T01:39:00.000 | python,selenium-webdriver,browser-automation,pyjamas | Can anyone clarify some options for Python Web automation | 1 | 1 | 4 | 17,937,214 | 0
0 | 0 | Sometimes I am not sure when I have to use one or the other. I usually parse all sorts of things with Python, but I would like to focus this question on HTML parsing.
Personally I find DOM manipulation really useful when having to parse more than two regular elements (the title and body of a list of news items, for example).
However, I have found myself in situations where it is not clear whether I should build a regex or try to get the desired value simply by manipulating strings. A particular fictional example: I have to get the total number of photos in an album, and the only way to get this is by parsing a string like this:
(1 of 190)
So I have to get the '190' from the whole HTML document. I could write a regex for that, although regex for parsing HTML is not exactly the best idea, or that is what I have always understood. On the other hand, using the DOM seems overwhelming to me as it is just a single element. String manipulation seems to be the best way, but I am not really sure whether I should proceed like that in a case like this.
Can you tell me how you would parse this kind of single element from an HTML document using Python (or any other language)? | false | 18,048,512 | 0.197375 | 0 | 0 | 2 | People shy away from using regexes to search HTML because they aren't the right tool for the job when parsing tags. But everything should be considered on a case-by-case basis. You aren't searching for tags; you are searching for a well-defined string in a document. It seems to me the simplest solution is just a regex or some sort of XPath expression -- simple parsing requires simple tools. | 1 | 525 | 0 | 0 | 2013-08-04T23:09:00.000 | python,html,regex,parsing,html-parsing | Should I use regex or just DOM/string manipulation? | 1 | 1 | 2 | 18,048,532 | 0
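A minimal sketch of the regex route for this specific pattern; the HTML fragment below is a stand-in for the real page:

```python
# Hedged sketch: pull the total count out of a "(1 of 190)" style string.
import re

html = "<span class='count'>(1 of 190)</span>"   # placeholder document fragment

match = re.search(r"\(\d+ of (\d+)\)", html)
if match:
    total_photos = int(match.group(1))
    print(total_photos)   # 190
```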
0 | 0 | I'm building an authentication server in Python and was wondering how I could totally secure a connection between two peers. I cannot see how a malicious user wouldn't be able to copy packets and simply analyze them if he understands what comes in which order.
Assume a client-server scheme. The client asks for an account. Even with SRP, packets can be copied and sent later on to allow login.
Now, if I add public/private key encryption, how do we send the public keys to each other without passing them over an unencrypted channel?
Sorry if my question remains noobish or looks like I haven't thought about it, but I really have a hard time figuring out how I can build an authentication process without having several security holes. | false | 18,065,256 | 0.197375 | 0 | 0 | 2 | What you are asking about is part of what's commonly called "key management." If you google the term, you'll find lots of interesting reading. You may well discover that there are other parts of key management that your solution needs to address, like revocation and rotation.
In the particular part of key management that you're looking at, you need to figure out how to have two nodes trust each other. This means that you have to identify a separate thing that you trust on which to base the nodes' trust. There are two common approaches:
Trusting a third party. This is the model that we use for most websites we visit. When our computers are created, the trusted third party creates the device to already know about and trust certain entities, like Verisign. When we contact a web site over HTTPS, the browser automatically checks if Verisign (or another trusted third party certificate authority) agrees that this is the website that it claims to be. The magic of Public Key Cryptography and how this works is a whole separate topic, which I recommend you investigate (just google for it :) ).
Separate, secure channel. In this model, we use a separate channel, like an administrator who transfers the secret from one node to the other. The admin can do this in any manner s/he wishes, such as encrypted data carried carried via USB stick over sneakernet, or the data can be transferred across a separate SFTP server that s/he has already bootstrapped and can verify that it's secure (such as with his/her own internal certificate authority). Some variations of this are sharing a PGP key on a business card (if you trust that the person giving you the business card is the person with whom you want to communicate), or calling the key-owner over the phone and verbally confirming that the hash of the data you received is the same as the hash of the data they sent.
There are on-line key exchange protocols - you can look them up, probably even on Wikipedia, using the phrase "key exchange", but you need to be careful that they actually guarantee the things you need to determine - like how does the protocol authenticate the other side of the communication channel. For example, Diffie Hellman guarantees that you have exchanged a key without ever exchanging the actual contents of the key, but you don't know with whom you are communicating - it's an anonymous key exchange model.
You also mention that you're worried about message replay. Modern secure communication protocols like SSH and TLS protect against this. Any good protocol will have received analysis about its security properties, which are often well described on Wikipedia.
Oh, and you should not create your own protocol. There are many tomes about how to write secure protocols, and analyses of existing protocols and their security properties (or lack thereof). Unless you're planning to become an expert in the topic (which will take many years and thousands of pages of reading), you should take the path of least resistance and just use a well-known, well-exercised, well-respected protocol that does the work you need. | 0 | 1,317 | 0 | 0 | 2013-08-05T18:34:00.000 | python,security | How could i totally secure a connection between two nodes? | 1 | 1 | 2 | 18,065,965 | 0
1 | 0 | I have several S3 buckets containing a total of 40 TB of data across 761 million objects. I undertook a project to copy these objects to EBS storage. To my knowledge, all buckets were created in us-east-1. I know for certain that all of the EC2 instances used for the export to EBS were within us-east-1.
The problem is that the AWS bill for last month included a pretty hefty charge for inter-regional data transfer. I'd like to know how this is possible?
The transfer used a pretty simple Python script with Boto to connect to S3 and download the contents of each object. I suspect that the fact that the bucket names were composed of uppercase letters might have been a contributing factor (I had to specify OrdinaryCallingFormat()), but I don't know this for sure. | false | 18,113,426 | 0 | 0 | 1 | 0 | The problem ended up being an internal billing error at AWS and was not related to either S3 or Boto. | 0 | 878 | 0 | 0 | 2013-08-07T20:42:00.000 | python,amazon-web-services,amazon-s3,boto | Boto randomly connecting to different regions for S3 transfers | 1 | 1 | 2 | 18,366,790 | 0 |
0 | 0 | I have a NodeJS-socketIO server that has clients listening from JS, PHP & Python. It works like a charm when the communication happens over plain HTTP/WS channel.
Now, when I try to secure this communication, the websocket transport is not working anymore. It falls back to the xhr-polling (long polling) transport. Xhr-polling still works for the JS client but not for Python, which purely depends on the socket transport.
Things i tried:
On node, Using https(with commercial certificates) instead of http - Works good for serving pages via Node but not for socketIO
Proxy via HAProxy (1.15-dev19). From HTTPS(HAProxy) to HTTP(Node). Couldn't get Websocket transport working and it falls back to xhr-polling on JS. Python gets 502 on handshake.
Proxy via STunnel (for HTTPS) -> HAProxy(Websocket Proxy) -> Node(SocketIO) - This doesnt work either. Python client still gets 502 on handshake.
Proxy via Stunnel(HTTPS) -> Node(SocketIO) - This doesnt work too. Not sure if STunnel support websocket proxy
node-http-proxy : Throws 500(An error has occurred: {"code":"ECONNRESET"}) on websocket and falls back to xhr-polling
I'm sure it's a common use case and that a solution exists. I would really appreciate any help.
Thanks in advance! | true | 18,122,835 | 1.2 | 0 | 0 | 1 | My case seems to be a rare one. I built this whole environment on a EC2 instance based on Amazon Linux. As almost all the yum packages are not up to date, i had to install pretty much every yum packages from source. By doing so i could have missed configuration unchanged/added. Or HAProxy required lib could have been not the latest.
In any case, I tried building the environment again on an Ubuntu 12.04 based EC2 instance. HAProxy worked like a charm with a bit of configuration tweaking. I can now connect to my SocketIO server from JS, Python & PHP over SSL without any problem. I could also create a secured TCP Amazon ELB that listens on 443 and proxies it to a non-standard port (8xxx).
Let me know if anyone else encounters a similar problem, I will be happy to help! | 0 | 1,394 | 0 | 5 | 2013-08-08T09:47:00.000 | python,node.js,ssl,websocket,socket.io | NodeJS - SocketIO over SSL with websocket transport | 1 | 1 | 1 | 18,315,811 | 0 |
0 | 0 | I'm using requests to routinely download a webpage and check it for updates, but recently I've been getting these errors:
HTTPConnectionPool(host='somehost', port=someport): Max retries
exceeded with url: someurl (Caused by : [Errno
10060] A connection attempt failed because the connected party did not
properly respond after a period of time, or established connection
failed because connected host has failed to respond)
Now this script has been running for weeks with this issue never coming up. Could it be that the site administrator has started blocking my proxy's IP?
I should add that it's not against the TOS of the site to scrape it.
Can anyone help me figure out what the reason for this is?
Thanks | false | 18,125,212 | 0.197375 | 0 | 0 | 1 | The remote connection timed out.
The host you are trying to connect to is not answering; it is not refusing connections, it is just not responding at all to connection attempts.
Perhaps the host is overloaded or down? It could also be caused by the site blocking your IP address by dropping the packets (a firewall DROP rule instead of a REJECT rule).
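If you would rather have the script fail fast and log the condition instead of hanging, something along these lines can help (a sketch; the URL is a placeholder):
import requests

url = 'http://example.com/page'
try:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
except requests.exceptions.Timeout:
    print('Timed out waiting for a response')
except requests.exceptions.ConnectionError as err:
    print('Connection failed (host down, overloaded, or dropping our packets):', err)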
You can try to connect to the site from a different IP address; if those connections work fine, but not from the original address, there is a higher likelihood that you are deliberately being blocked. | 0 | 6,095 | 0 | 1 | 2013-08-08T11:47:00.000 | python,http,python-requests | socket Errno 10060 | 1 | 1 | 1 | 18,125,252 | 0 |
1 | 0 | I have a web application which acts as an interface to an offsite server which runs a very long task. The user enters information and hits submit and then chrome waits for the response, and loads a new webpage when it receives it. However depending on the network, input of the user, the task can take a pretty long time and occasionally chrome loads a "no data received page" before the data is returned (though the task is still running).
Is there a way to put either a temporary page while my task is thinking or simply force chrome to continue waiting? Thanks in advance | false | 18,127,128 | 0 | 0 | 0 | 0 | Let's assume:
This is not a server issue, so we don't have to go fiddle with Apache, nginx, etc. timeout settings.
The delay is minutes, not hours or days, just to make the scenario manageable.
You control the web page on which the user hits submit, and from which user interaction is managed.
If those obtain, I'd suggest not using a standard HTML form submission, but rather have the submit button kick off a JavaScript function to oversee processing. It would put up a "please be patient...this could take a little while" style message, then use jQuery.ajax, say, to call the long-time-taking server with a long timeout value. jQuery timeouts are measured in milliseconds, so 60000 = 60 seconds. If it's longer than that, increase your specified timeout accordingly. I have seen reports that not all clients will allow super-extra-long timeouts (e.g. Safari on iOS apparently has a 60-second limitation). But in general, this will give you a platform from which to manage the interactions (with your user, with the slow server) rather than being at the mercy of simple web form submission.
There are a few edge cases here to consider. The web server timeouts may indeed need to be adjusted upward (Apache defaults to 300 seconds aka 5 minutes, and nginx less, IIRC). Your client timeouts (on iOS, say) may have maximums too low for the delays you're seeing. Etc. Those cases would require either adjusting at the server, or adopting a different interaction strategy. But an AJAX-managed interaction is where I would start. | 0 | 13,355 | 0 | 8 | 2013-08-08T13:20:00.000 | python,google-chrome,web-applications,flask | Time out issues with chrome and flask | 1 | 1 | 3 | 18,130,320 | 0 |
0 | 0 | How can I get the number of milliseconds since epoch?
Note that I want the actual milliseconds, not seconds multiplied by 1000. I am comparing times for stuff that takes less than a second and need millisecond accuracy. (I have looked at lots of answers and they all seem to have a *1000)
I am comparing a time that I get in a POST request to the end time on the server. I just need the two times to be in the same format, whatever that is. I figured unix time would work since Javascript has a function to get that | false | 18,169,099 | 1 | 1 | 0 | 30 | time.time() * 1000 will give you millisecond accuracy if possible. | 0 | 72,159 | 0 | 36 | 2013-08-11T05:37:00.000 | python,benchmarking | Comparing times with sub-second accuracy | 1 | 1 | 6 | 18,169,127 | 0 |
1 | 0 | What would be a node.js templating library that is similar to Jinja2 in Python? | false | 18,175,466 | 0.039979 | 0 | 0 | 1 | The ejs is the npm module which you are looking for.
This is the name written in my package.json file --> "ejs": "^3.1.3"
EJS is a simple templating language that lets you generate HTML markup with plain JavaScript.(Credits: Ejs website) | 0 | 14,114 | 0 | 22 | 2013-08-11T18:49:00.000 | python,node.js,jinja2,language-comparisons | Templating library in node.js similar to Jinja2 in Python? | 1 | 1 | 5 | 63,772,013 | 0 |
1 | 0 | I am working on a project which animates points on a plane using certain methods. I intend to compute the movements of the points in Python on the server side and to do the visualization on the client side with a JavaScript library (raphaeljs.com).
First I thought of the following: running the process (Python) and saving the states of the points into an XML file, then loading that from JavaScript and visualizing it. Now I realize that it might run indefinitely, so I would need a realtime data exchange between the visualization part and the computing part.
How would you do that? | true | 18,212,995 | 1.2 | 0 | 0 | 0 | First of all, I would suggest JSON rather than XML to be used for exchange format, it is much easier to parse JSON at the javascript side.
Then, speaking about the architecture of your app, I think that it is better to write a server web application in Python to generate JSON content on the fly than to modify and serve static files (at least that is how such things are done usually).
So, that gives us three components of your system:
A client app (javascript).
A web application (it does not matter what framework or library you prefer: Django, gevent, even Twisted will work fine, as well as some others). What it should do is, firstly, give the state of the points to the client app when requested and, secondly, accept updates of the points' state from the next app and store them in a database (or in global variables: that strongly depends on how you run it, a single-process gevent app may use variables, while an app running within a multi-process web server should use a database). A minimal sketch of this piece follows the list below.
An app performing calculations that periodically publishes the points' state by sending it to the web app, probably as JSON body in a POST request. This one most likely should be a separate app due to the typical environment of the web applications: usually it is a problem to perform background processes in a web app, and, anyway, the way this can be done strongly depends on the environment you run your app in.
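A minimal sketch of the second component, using Flask purely for illustration (any of the frameworks mentioned above would do; the /points endpoint name and the in-memory storage are made up for the example):
from flask import Flask, jsonify, request

app = Flask(__name__)
points = []  # latest state; use a database instead in a multi-process setup

@app.route('/points', methods=['GET'])
def get_points():
    # the client app polls this to draw the current state
    return jsonify(points=points)

@app.route('/points', methods=['POST'])
def set_points():
    # the calculation app POSTs the new state as JSON
    global points
    points = request.get_json()
    return jsonify(ok=True)

if __name__ == '__main__':
    app.run()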
Of course, this architecture is based on "server publishes data, clients ask for data" model. That model is simple and easy to implement, and the main problem with it is that the animation may not be as smooth as one may want. Also you are not able to notify clients immediately if some changes require urgent update of client's interface. However, smoothness and immediate client notifications are usually hard to implement when a javascript client runs within a browser. | 0 | 288 | 0 | 0 | 2013-08-13T15:19:00.000 | javascript,python,plot,visualization,data-visualization | connect server-side computing with client-side visualization | 1 | 1 | 1 | 18,213,493 | 0 |
0 | 0 | I have a series of local gif files. I was wondering how I would be able to open this series of local gifs using the webbrowser module. I am, by the way, running on Mac OS X Snow Leapord. Whenever I try to use the webbrowser.open('file:gif_name') snippet, my computer throws the error 0:30: execution error: Bad name for file. some object (-37). Any help would be greatly appreciated! | true | 18,222,808 | 1.2 | 0 | 0 | 1 | Use this line: webbrowser.open('file://'+os.getcwd()+'/gif_name.gif') and change the default app to view pictures to Chrome. | 0 | 231 | 0 | 0 | 2013-08-14T03:39:00.000 | python,browser,gif | Opening Series of Gifs in Chrome Using webbrowser | 1 | 1 | 1 | 18,229,495 | 0 |
0 | 0 | I recently acquired a Go Pro Hero 3. It's working fine, but when I attempt to stream live video/audio it glitches every now and then.
Initially I just used VLC to open the m3u8 file; however, when that was glitchy I downloaded the Android app and attempted to stream over that.
It was a little better on the app.
I used Wireshark and I think the cause is that it's simply not transferring/buffering fast enough. I tried just fetching everything with wget in a loop; it got through 3 loops before it either caught up (possible, but I don't think so ... though I may double check that) or fell behind and hence timed out/hung.
There is also a delay in the image, but I can live with that.
I have tried lowering the resolution/frame rate but I'm not sure if it is actually doing anything, as I can't tell any difference. I think it may only affect the settings for recording on the GoPro. Either way, it didn't work.
Essentially I am looking for any possible methods for removing this 'glitchiness'.
My current plan is to attempt writing something in python to get the files over UDP (no TCP overhead).
I'll just add a few more details/symptoms:
The Go Pro is using the Apple m3u8 streaming format.
At anyone time there are 16 .ts files in the folder. (26 Kb each)
These get overwritten in a loop (circular buffer)
When i stream on vlc:
Approx 1s delay - streams fine for ~0.5s, stops for a little less than that, then repeats.
What I think is happening is that the file it's trying to transfer gets overwritten, which causes it to time out.
Over the android App:
Less delay and shorter 'timeouts' but still there
I want to write a Python script to try to get a continuous image. The files are small enough that they should fit in a single UDP packet (I think ... 65Kb-ish, right?)
Is there anything I could change in terms of wifi settings on my laptop to improve it too?
I.e., somehow dedicate it to that?
Thanks,
Stephen | false | 18,227,789 | 0.379949 | 0 | 0 | 2 | I've been working on creating a GoPro API recently for Node.js and found the device very glitchy too. Its much more stable after installing the latest gopro firmware (3.0.0).
As for streaming, I couldnt get around the wifi latency and went for a record and copy approach. | 0 | 9,059 | 1 | 2 | 2013-08-14T09:22:00.000 | python,video-streaming,wifi,gopro | Go Pro Hero 3 - Streaming video over wifi | 1 | 1 | 1 | 20,150,294 | 0 |
0 | 0 | I am sending coordinates of points to a visualization client-side script via TCP over the internet. I wonder which option I should use:
concat the coordinates into a large string and send them together, or
send them one by one
I don't know which one is faster. I have some other questions too:
Which one should I use?
Is there a maximum size of packet of TCP? (python: maximum size of string for client.send(string))
As it is a visualization project should I use UDP instead of TCP?
Could you please tell me a bit about lost packets? When do they occur? How do I deal with them?
Sorry for the many questions, but I really struggle with this issue... | true | 18,269,786 | 1.2 | 0 | 0 | 0 | When you send a string, that might be sent in multiple TCP packets. If you send multiple strings, they might all be sent in one TCP packet. You are not exposed to the packets, TCP sockets are just a constant stream of data. Do not expect that every call to recv() is paired with a single call to send(), because that isn't true. You might send "abcd" and "efg", and might read in "a", "bcde", and "fg" from recv().
It is probably best to send data as soon as you get it, so that the networking stack has as much information about what you're sending, as soon as possible. It will decide exactly what to do. You can send as big a string as you like, and if necessary, it will be broken up to send over the wire. All automatically.
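A common way to cope with the missing message boundaries is to pick your own delimiter and keep reading until a whole message has arrived. A rough sketch; the newline delimiter is just an example convention, and this simplified loop assumes one message at a time:
def recv_message(sock):
    # accumulate chunks until the delimiter shows up or the peer closes the connection
    buf = b''
    while not buf.endswith(b'\n'):
        chunk = sock.recv(4096)
        if not chunk:  # connection closed by the peer
            break
        buf += chunk
    return buf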
Since in TCP you don't deal with packets, things like lost packets also aren't your concern. That's all handled automatically -- either the data gets through eventually, or the connection closes.
As for UDP - you probably don't want UDP. :) | 0 | 2,040 | 0 | 1 | 2013-08-16T09:11:00.000 | python,sockets,tcp,udp,package | send one big packet of many small over TCP/UDP? | 1 | 1 | 2 | 18,271,774 | 0 |
0 | 0 | I recently started working on a project using just vim as my text editor with a virtualenv setup. I installed a few API's on this virtualenv from GitHub. Eventually, the project got a little bigger than vim could handle so I had to move the project to an IDE.
I chose Aptana Studio 3. When I started up Aptana, I pointed the project directory to the virtualenv folder that I had created to house my project. I then pointed the interpreter at the Python executable in App/bin (created from virtualenv)/python2.7. When I started reworking the code to make sure I had everything mapped correctly, I was able to import the API's that I had installed just fine. CherryPy came through with no problems, but I've been having an issue with importing a module that I believe is part of the stdlib--urlparse. At first, I thought it was that my python interpreter was 2.7.1 rather than 2.7.5 (I found the documentation in the 2.7.5 section with no option to review 2.7.1), but my terminal is using 2.7.1 and is able to import the module without any errors (I'm using OSX, Mountain Lion). I am also able to import the module when I activate the virtualenv and run my python interpreter. But when I plug "from urlparse import parse_qsl" into Aptana, I'm getting an error: "Unresolved_import: parse_qsl".
Should I have pointed this at a different interpreter and, if so, will I need to reinstall the API modules I had been working with in the new interpreter? | true | 18,301,534 | 1.2 | 0 | 0 | 0 | Update: I finally ended up restarting the project. It turns out that not all of the standard Python tools are selected when you select the virtualenv interpreter. After I selected all of the python tools from the list (just after choosing the interpreter), I was able to get access to the entire standard library.
Do NOT just import the modules into your project. Many of the stdlib modules are interdependent and the import function will only import a module into your main project directory, not a libary! | 0 | 183 | 0 | 0 | 2013-08-18T16:58:00.000 | python-2.7,aptana3,python-import,pythonpath,urlparse | Aptana Python stdlib issue with virtualenv | 1 | 1 | 1 | 18,302,731 | 0 |
0 | 0 | I'm working on a Python tool for wide distribution (as .exe/.app) that will email reports to the user. Currently (in testing), I'm using smtplib to build the message and send it via GMail, which requires a login() call. However, I'm concerned as to the security of this - I know that Python binaries aren't terribly secure, and I'd rather not have the password stored as plaintext in the executable.
I'm not terribly familiar with email systems, so I don't know if there's something that could securely be used by the .exe. I suppose I could set up a mail server without authentication, but I'm concerned that it'll end up as a spam node. Is there a setup that will allow me to send mail from a distributed Python .exe/.app without opening it to potential attacks? | false | 18,317,455 | 0 | 1 | 0 | 0 | One possible solution is to create a web backend mantained by you which accepts a POST call and sends the passed message only to authorized addresses.
This way you can also maintain the list of email addresses on your server.
Look at it like an online error alerter. | 0 | 241 | 0 | 0 | 2013-08-19T15:23:00.000 | python,email,smtp | Securely Send Email from Python Executable | 1 | 1 | 1 | 18,317,530 | 0 |
0 | 0 | I am getting this dump occasionally from OpenERP, but it seems harmless. The code serves HTTP; is this dump what happens when a connection is dropped?
Exception happened during processing of request from ('10.100.2.71', 42799)
Traceback (most recent call last):
File "/usr/lib/python2.7/SocketServer.py", line 582, in process_request_thread
self.finish_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python2.7/SocketServer.py", line 640, in __init__
self.finish()
File "/usr/lib/python2.7/SocketServer.py", line 693, in finish
self.wfile.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe | true | 18,322,667 | 1.2 | 0 | 0 | 2 | This just means that the underlying TCP connection was abruptly dropped. In this case it means that you are trying to write data to a socket that has already been closed on the other side (by the client). It is harmless, it means that while your server was sending an HTTP response to the client (browser) she stopped the request (closed the browser for example). | 0 | 354 | 1 | 0 | 2013-08-19T20:36:00.000 | python,openerp | dump from OpenERP Python is harmless? | 1 | 1 | 1 | 18,322,809 | 0 |
0 | 0 | I'm writing a Python script which will be run on many different servers. A vital part of the script relies on the paramiko module, but it's likely that the servers do not have the paramiko package installed already. Everything needs to be automated, so all the user has to do is run the script and everything will be completed for them. They shouldn't need to manually install anything.
I've seen that people recommend using Active Python / PyPM, but again, that requires an installation.
Is there a way to download and install Paramiko (and any package) from a Python script? | false | 18,362,568 | 0 | 1 | 0 | 0 | Wrap your Python program in a shell script that checks that paramiko is installed and if it isn't installs it before running your program. | 0 | 5,173 | 0 | 2 | 2013-08-21T16:11:00.000 | python,paramiko | How to download/install paramiko on any sever from Python script? | 1 | 1 | 2 | 18,363,186 | 0 |
1 | 0 | I want to know how I can get the content of a certain element by a dynamic id/name in embedded Python code in a web2py view page.
Basically I want something like:
{{for task in tasks:}}
...
{{=TEXTAREA(task['remark'], _name='remark'+str(task['id']), _id='remark'+str(task['id']), _rows=2)}}
{{=A('OK', _class='button', _href=URL('update_remark', vars=dict(task_id=task['id'], new_remark=['remark'+str(task['id'])])))}}
What I want ['remark'+str(task['id'])] to do is get the content automatically, but obviously it won't work. I'm wondering how I can achieve this? Is there any API that can help?
Thanks in advance! | true | 18,386,023 | 1.2 | 0 | 0 | 0 | I used a Form to achieve this. Working quite well. | 0 | 79 | 0 | 0 | 2013-08-22T16:24:00.000 | javascript,python,web2py | How to get content of element in embeded python codes in web2py view | 1 | 1 | 1 | 18,414,025 | 0 |
0 | 0 | If you have a service that uses a specific port, and you have multiple computers on the same ip address, how is this handled? Should the service specify to which computer on the ip address the information should be sent? What if both computers on the same ip use the same service, but request different information?
Also, if a client is on a dynamic ip, how should the service detect that the ip has been changed, but the client (and the session) is the same? Should clients identify themselves for every request (much like cookies over http)? | false | 18,429,298 | 0.099668 | 0 | 0 | 1 | You have many questions, I'll try to respond to them one by one.
If you have a service that uses a specific port, and you have multiple computers on the same ip addess, how is this handled?
Someone mentioned that multiple computers cannot have the same IP address. In the original IP model, this is true, though today such address sharing (through NAT) is common. But even in the original model, your question makes sense if you reformulate it slightly:
"If you have a service that uses a specific port, and you have multiple clients on the same ip address, how is this handled?"
There can be multiple client processes on the same host (thus sharing the same IP address) trying to contact the same server (using the same destination address+port combination). This was natural at the time IP was developed, as most machines powerful enough to connect to the network were multi-user machines. That's why TCP (and UDP) have port numbers on both sides (source and destination, or client and server). Client processes typically don't specify the source port when contacting a server, but an "ephemeral" source port is allocated to the socket by the host operating system for the lifetime of the socket (connection). So this is how the server distinguishes between clients from the same address: by their source ports.
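A tiny illustration: each accept() hands back the peer's (address, port) pair, so two clients coming from the same machine show up with different source ports (port 9000 below is arbitrary):
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('', 9000))
server.listen(5)
while True:
    conn, peer = server.accept()
    print('connection from %s:%d' % peer)  # same IP, different port per client
    conn.close()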
NAT maps different hosts (with different "internal" IP addresses) to the same "external" IP addresses, but it also allocates unique source ports to outgoing packets. So the server sees this just like the original case (multiple client processes from the same "host"/IP address). The NAT then "demultiplexes" the server's responses to the different internal hosts.
Should the service specify to which computer on the ip address the information should be send? What if both computers on the same ip use the same service, but request different information?
The server does this by sending responses to the same address+port combination that the different clients used as source address/port. This is mostly handled automatically by the socket API. As described above, the two clients will get separate connections, and the server hopefully handles these as separate "sessions" and doesn't confuse requests between these sessions.
Also, if a client is on a dynamic ip, how should the service detect that the ip has been changed, but the client (and the session) is the same? Should clients identify themselves for every request (much like cookies over http)?
Now, this is a whole can of worms. If a service wants to "survive" client IP address changes, then it will have to use some other identifier. HTTP (session) cookies are a good example. TCP connections are broken by address changes - this is normal, as such changes weren't envisioned as part of normal operation when TCP/IP was designed. There have been attempts at making TCP/IP more robust against such changes, such as Mobile IP, MPTCP, and possibly SCTP, but none of these have really entered the mainstream yet. Basing your protocol on HTTP(S) and using session cookies may be your best bet. | 0 | 2,393 | 0 | 0 | 2013-08-25T13:05:00.000 | python,sockets,networking,socketserver | Multiple clients from same IP | 1 | 2 | 2 | 18,432,587 | 0 |
0 | 0 | If you have a service that uses a specific port, and you have multiple computers on the same ip addess, how is this handled? Should the service specify to which computer on the ip address the information should be send? What if both computers on the same ip use the same service, but request different information?
Also, if a client is on a dynamic ip, how should the service detect that the ip has been changed, but the client (and the session) is the same? Should clients identify themselves for every request (much like cookies over http)? | false | 18,429,298 | 0 | 0 | 0 | 0 | I don't think I fully understand what you've said. There is no way that multiple computers will be on the same IP, this is not how the internet works.. There are protocols which hadels such things.
Did you mean that you're a server and multiple computers try to connect to you?
If so, you listen on a port, and when you get a connection you open a new thread to service that client while the main loop keeps listening. | 0 | 2,393 | 0 | 0 | 2013-08-25T13:05:00.000 | python,sockets,networking,socketserver | Multiple clients from same IP | 1 | 2 | 2 | 18,429,538 | 0
1 | 0 | I have been working on developing a module in OpenERP 7.0. I have been using Python and the Eclipse IDE for development. I wanted to know the difference between self.browse() and self.pool.get() in OpenERP development.
Thanks. | true | 18,437,202 | 1.2 | 0 | 0 | 8 | self.pool.get is used to get the Singleton instance of the orm model from the registry pool for the database in use. self.browse is a method of the orm model to return a browse record.
As a rough analogy, think of self.pool.get as getting a database cursor and self.browse as a sql select of a records by Id. Note if you pass a browse an integer you get a single browse record, if you pass a list of ids you get a list of browse records. | 0 | 11,039 | 0 | 4 | 2013-08-26T05:27:00.000 | python,odoo | What's the difference between self.browse() and self.pool.get() in OpenERP development? | 1 | 1 | 2 | 18,457,696 | 0 |
1 | 0 | i have to basically make a program that take a user-input web address and parses html to find links . then stores all the links in another HTML file in a certain format. i only have access to builtin python modules (python 3) . im able to get the HTML code from the link using urllib.request and put that into a string. how would i actually go about extracting links from this string and putting them into a string array? also would it be possible to identify links (such as an image link / mp3 link) so i can put them into different arrays (then i could catagorize them when im creating the output file) | false | 18,455,991 | 0.099668 | 0 | 0 | 1 | try to use HTML.Parser library or re library
they will help you to do that
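For the html.parser route (standard library, Python 3), a rough sketch that collects every href/src attribute could look like this; sorting the results into images, mp3s and so on can then be done by looking at the file extension:
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        # grab every link-like attribute value
        for name, value in attrs:
            if name in ('href', 'src') and value:
                self.links.append(value)

html_string = '<a href="http://example.com/song.mp3">song</a> <img src="pic.gif">'  # normally the page you fetched with urllib.request
collector = LinkCollector()
collector.feed(html_string)
print(collector.links)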
and I think you can use a regex to do it, for example:
r'http[s]?://[^\s<>"]+|www.[^\s<>"]+ | 0 | 718 | 0 | 1 | 2013-08-27T02:27:00.000 | python,html,python-3.x,html-parsing | Extracting links from HTML in Python | 1 | 1 | 2 | 18,456,494 | 0 |
1 | 0 | I'm trying to check if some xml in a django app has certain elements/nodes and if not just to skip that code block. I'm checking for the element's existence using hasattr(), which should return false if the element doesn't exist:
if hasattr(product.ItemAttributes, 'ListPrice') \
        and hasattr(product.Offers.Offer.OfferListing, 'PercentageSaved') \
        and hasattr(product.LargeImage, 'URL'):
Except in my case it's throwing an attribute error:
AttributeError at /update_products/
no such child: {http://webservices.amazon.com/AWSECommerceService/2011-08-01}LargeImage
I don't understand why it's throwing an error instead of just returning false and letting me skip the code block? | true | 18,474,222 | 1.2 | 0 | 0 | 3 | The error is complaining about LargeImage. That's being caused by this expression: product.LargeImage. You might want to check for that first, or even better, put this in a try/except block. | 0 | 361 | 0 | 0 | 2013-08-27T19:50:00.000 | python,django | hasattr() throws AttributeError | 1 | 1 | 1 | 18,474,264 | 0 |
1 | 0 | I am a programming newbie, and I recently installed Python + Django, and successfully created a very small web app. Everything works fine, but I am puzzled about 4 digits that appear after HTTP status codes in Eclipse's console following any request I make to my server.
Example: [27/Aug/2013 22:53:32] "GET / HTTP/1.1" 200 1305
What does 1305 represent here and in every other request? | true | 18,475,321 | 1.2 | 0 | 0 | 1 | It's the size of the response, in bytes.
Note that this has nothing to do with Eclipse, it's just the way Django's runserver formats its output. | 0 | 45 | 0 | 0 | 2013-08-27T20:56:00.000 | python,django,eclipse,http,pydev | Digits in Eclipse's console after HTTP status codes | 1 | 1 | 1 | 18,476,086 | 0 |
0 | 0 | All_shortest_paths is not working in version 1.6 and i would like to update it to version 1.7. Is there a simple update command i can use? | false | 18,495,789 | 0.53705 | 0 | 0 | 3 | Just type
pip install networkx --upgrade
This would get you the latest release of Networkx ( 1.10 as of now). | 0 | 3,283 | 0 | 0 | 2013-08-28T18:41:00.000 | python,networkx | How do i update the networkx from version 1.6 to 1.7? | 1 | 1 | 1 | 32,207,696 | 0 |
0 | 0 | I have a default installation of Elasticsearch which I am trying to query from a third party server. However, it seems that by default this is blocked.
Is anyone please able to tell me how I can configure Elasticsearch so that I can query it from a different server? | false | 18,499,338 | 0.039979 | 0 | 0 | 1 | There is no restriction by default, ElasticSearch expose a standard HTTP API on the port 9200.
From your third party server, are you able to: curl http://es_hostname:9200/? | 0 | 15,468 | 0 | 5 | 2013-08-28T22:29:00.000 | python,security,lucene,debian,elasticsearch | Allowing remote access to Elasticsearch | 1 | 2 | 5 | 18,514,779 | 0 |
0 | 0 | I have a default installation of Elasticsearch which I am trying to query from a third party server. However, it seems that by default this is blocked.
Is anyone please able to tell me how I can configure Elasticsearch so that I can query it from a different server? | false | 18,499,338 | 0.119427 | 0 | 0 | 3 | In config/elasticsearch.yml, put network.host: 0.0.0.0.
Also add an inbound rule in your firewall for the Elasticsearch port (9200 by default).
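Once the setting and the firewall rule are in place, a quick sanity check from the other server (the host name is a placeholder):
import urllib2
try:
    print(urllib2.urlopen('http://your-es-host:9200/', timeout=5).read())
except urllib2.URLError as err:
    print('Still not reachable:', err)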
It worked in ElasticSearch version 2.3.0 | 0 | 15,468 | 0 | 5 | 2013-08-28T22:29:00.000 | python,security,lucene,debian,elasticsearch | Allowing remote access to Elasticsearch | 1 | 2 | 5 | 37,047,491 | 0 |
0 | 0 | This is likely too general a question but, googlin' around for hours, I haven't found anything.
I have a web application based on zope/plone/python where zope/plone is used, among other things, as a soap and xml-rpc web server.
However sometimes (when response is "big") my xml-rpc response is truncated(*) as if the xml-rpc protocol could not handle more than "x" characters (or bytes).
Is anyone aware of this?
Bonus question:
If you were in my shoes, what would you look for during "investigation"?
I've also added "python" tag because zope/plone components are written in this language and, maybe, there are some pythoners that could help me.
(*) Received to caller (that is onto another network, for example) truncated. | false | 18,532,363 | 0.099668 | 0 | 0 | 1 | Be aware that control characters in response content could block the parsing of the xml and, for example, just cut data. At least this is what happened to me.
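One way to strip such characters before handing the payload to the XML parser (a rough sketch that keeps tab, newline and carriage return and removes the other C0 control characters):
import re

control_chars = re.compile(r'[\x00-\x08\x0b\x0c\x0e-\x1f]')  # C0 controls except \t \n \r

def clean(payload):
    return control_chars.sub('', payload)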
Maybe I'm a little bit late, anyway I spent so much time today on this issue that I would like to share what I've finally found.
At the beginning I tried to check if there were limits somewhere: in the xml-rpc server PHP code, in my Apache HTTP server, in the Python xml-rpc client (based on the incution xml-rpc library)... but found nothing.
Than I begun searching for hidden control characters in the response content and finally found a record separator ASCII character. Deleted and all worked well. | 0 | 2,236 | 0 | 1 | 2013-08-30T11:56:00.000 | python,plone,xml-rpc,zope | Does xml-rpc have some kind of "response length limit"? | 1 | 1 | 2 | 27,062,942 | 0 |
0 | 0 | I need to match both formats like: user:[email protected]:3847 and 111.23.123.78:2938, how do you do that(match valid proxy only)?
And by the way, is there such module(validate proxy formats) in python already? | false | 18,546,053 | 0 | 0 | 0 | 0 | The solution I'm currently using to accept http / https only, username:password optionally and host as an IP or domain name is the following:
^(?:https?:\/\/)(?:(\w+)(?::(\w*))@)?([a-zA-Z0-9][a-zA-Z0-9-_]{0,61}[a-zA-Z0-9]{0,1}\.([a-zA-Z]{1,6}|[a-zA-Z0-9-]{1,30}\.[a-zA-Z]{2,3})|((?:\d{1,3})(?:\.\d{1,3}){3}))(?::(\d{1,5}))$
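For completeness, a quick sketch of using that pattern from Python. Note that it expects an http:// or https:// prefix, so the bare user:pass@host:port form from the question needs the scheme prepended first (the test values below are just examples):
import re

proxy_re = re.compile(r'^(?:https?:\/\/)(?:(\w+)(?::(\w*))@)?([a-zA-Z0-9][a-zA-Z0-9-_]{0,61}[a-zA-Z0-9]{0,1}\.([a-zA-Z]{1,6}|[a-zA-Z0-9-]{1,30}\.[a-zA-Z]{2,3})|((?:\d{1,3})(?:\.\d{1,3}){3}))(?::(\d{1,5}))$')
for candidate in ('http://user:pass@111.23.123.78:3847', 'http://111.23.123.78:2938'):
    print(candidate, bool(proxy_re.match(candidate)))  # None means the string is not a valid proxy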
I hope it helps | 1 | 3,078 | 0 | 0 | 2013-08-31T08:14:00.000 | python,regex | How to perfectly match a proxy with regex? | 1 | 1 | 4 | 61,012,316 | 0 |
0 | 0 | OK, so what I'd like to do is have a game written in Python with all the multiplayer handled by socket.io, just because these are two things I'm fairly familiar with, and I want to keep the possibility of a web version or web app of the game open
So what I'm wondering is, how exactly do I do this and would it be better to embed a javascript parser on the client side or contact node.js from python directly | true | 18,555,139 | 1.2 | 0 | 0 | 0 | Assuming your Python objects are simple enough (not instances of classes, say), just send a JSON representation (json.dumps()) of them to the socket.io side. I am assuming you can parse JSON on the client side if needed. | 0 | 347 | 0 | 0 | 2013-09-01T04:10:00.000 | python,node.js,socket.io | Sending python objects to node.js via socket.io | 1 | 1 | 1 | 18,555,399 | 0 |
0 | 0 | I am pretty new to Python, and sometimes things that seem pretty easy become a lot more complicated than expected
I am currently using a bytes buffer to read from a socket:
data = self.socket.recv(size)
and then process part of that buffer and need remove it
The thing is, I've been looking for a way to do that all evening and haven't found a clue. I am pretty sure that I am not getting any useful results because of the search terms involved, or maybe it's not possible
I tried with "del" but got an error saying that it's not supported
Am I doing it the wrong way? Maybe someone could lead me the right way? :) | false | 18,563,018 | 0.26052 | 0 | 0 | 4 | I think I found the proper way, just looking from another perspective:
self.data = self.data[Index:]
just copying what I need to itself again | 0 | 23,868 | 0 | 7 | 2013-09-01T21:14:00.000 | python,python-3.x | How to remove a range of bytes from a bytes object in python? | 1 | 1 | 3 | 18,563,064 | 0 |
1 | 0 | I have a web socket server running on an Ubuntu 12.04 EC2 Instance. My web socket server is written in Python, I am using Autobahn WebSockets.
I have a JavaScript client that uses WebRTC to capture webcam frames and send them to the web socket server.
My webserver (where the JavaScript is hosted) is not deployed on EC2. The Python web socket server only does video frame processing and is running over TCP on port 9000.
My Problem:
The JS client can connect to the web socket and the server receives and processes the webcam frames. However, after 5 or 6 minutes the client stops sending the frames, displaying the following message:
WebSocket connection to 'ws://x.x.x.x:9000/' failed: Failed to send
WebSocket frame.
When I print the error data I got "undefined".
Of course, this never happens when I run the server on my local testing environment. | false | 18,647,154 | 0.197375 | 0 | 0 | 1 | This could be caused by to a Chrome Extension. | 0 | 1,419 | 1 | 0 | 2013-09-05T22:46:00.000 | python,amazon-web-services,amazon-ec2,websocket | JavaScript client failed to send WebSocket frame to WebSocket server on Amazon EC2 | 1 | 1 | 1 | 20,952,263 | 0 |
0 | 0 | I have a class that implements the asyncore module, it serves as a client that connects to HOST_A. The problem is I need to pass the data received from HOST_A to a HOST_B. So i'm wondering if asyncore can create two socket connections to HOST_A and HOST_B.Is this possible? | false | 18,655,290 | 0 | 0 | 0 | 0 | You can make one connection per socket at the same time only. But this doesn't mean that you can't do what you want to, however you'll need 2 sockets for 2 connections. | 1 | 283 | 0 | 0 | 2013-09-06T10:12:00.000 | python,sockets,asyncore | Can python asyncore module handle/connect more than one socket? | 1 | 1 | 1 | 18,655,376 | 0 |
0 | 0 | Can ipython notebook be used when not connected to the internet?
My installation doesn't open a web browser tab if not online.
Thanks! | false | 18,704,175 | 0.664037 | 0 | 0 | 4 | Yes, it should work without needing an internet connection. If a browser tab doesn't open automatically, open a browser and go to the URL it gives you in the terminal where you started the notebook (by default, this is http://127.0.0.1:8888/ ). It uses the 'loopback' network interface, which stays within your own computer. | 1 | 24,898 | 0 | 3 | 2013-09-09T17:59:00.000 | ipython,offline | Using ipython notebook offline | 1 | 1 | 1 | 18,708,798 | 0 |
1 | 0 | I am trying to figure out how to set a cookie just before a redirect from Cherrypy. My situation is this:
when a user logs in, I would like to set a cookie with the users username for use in
client-side code (specifically, inserting the users name into each
page to show who is currently logged in).
The way my login system works is that after a successful login, the user is redirected to whatever page they were trying to access before logging in, or the default page. Technically they are redirected to a different domain, since the login page is secure while the rest of the site is not, but it is all on the same site/hostname. Redirection is accomplished by raising a cherrypy.HTTPRedirect(). I would like to set the cookie either just before or just after the redirect, but when I tried setting cherrypy.response.cookie[<tag>]=<value> before the redirect, it does nothing. At the moment I have resorted to setting the cookie in every index page of my site, in the hopes that that will cover most of the redirect options, but I don't like this solution. Is there a better option, and if so what? | true | 18,746,272 | 1.2 | 0 | 0 | 2 | To answer my own question: It would appear that if I add cherrypy.response.cookie[<tag>]['path'] = '/' after setting the cookie value, it works as desired. | 0 | 816 | 0 | 2 | 2013-09-11T16:13:00.000 | python,redirect,cookies,cherrypy | Python/Cherrypy: set cookie on redirect | 1 | 1 | 1 | 18,746,486 | 0 |
1 | 0 | At my office we had PDFs designed using Adobe LiveCycle Designer that allows you to import xml data into the form to populate it. I would like to know if I could automate the process of importing the xml data into the form using python.
Ideally I would like it if I didn't have to re-create the form using python since the form itself is quite complex. I've looked up several different modules and they all seem to be able to read pdfs or create them from scratch, but not populate them.
Is there a python module out there that would have that kind of functionality?
Edit: I should mention that I don't have access to LiveCycle. | true | 18,792,536 | 1.2 | 0 | 0 | 0 | Perhaps what you are looking for is whether Adobe LiveCycle Designer support command-line arguments to do that. You could then automate this with python by issuing the command-line, hum, commands. | 0 | 935 | 0 | 2 | 2013-09-13T18:03:00.000 | python,xml,pdf | Import XML data into a PDF form using Python | 1 | 1 | 1 | 18,794,296 | 0 |
0 | 0 | After trying to find some help on the internet related to flash testing through selenium, all I find is FlexUISelenium package available for Selenium RC. I DO NOT find any such package available for Selenium Webdriver.
I am working with python and selenium webdriver and I do not see any packages available to automate flash applications. Is there any such package available at all for webdriver? If not, how do I start automating a flash application in webdriver? | false | 18,799,033 | 0 | 0 | 0 | 0 | Use flashselenium or sikuli for flash object testing. | 0 | 2,568 | 0 | 0 | 2013-09-14T06:29:00.000 | python,flash,selenium,webdriver | How to perform flash object testing in selenium webdriver with python? | 1 | 1 | 1 | 18,804,048 | 0 |
1 | 0 | I am scraping a web page with Scrapy. I wrote my spider and it works just fine, it scrapes a list of Items on a page (let's call it this the Main page).
In the Main page every Item I consider has a link that leads to the detail Item page (let's call it this way) where detailed information about every item is found.
Now I want to scrape the details pages too, but the spider would be different, there are different information to be found in different places. Is it possible to tell scrapy to look for links in a particular place and then scrape those pages linked with another spider I am going to define?
I hope my explanation was clear enough. Thanks | false | 18,823,407 | 0 | 0 | 0 | 0 | Identify the pattern first, then write the scraper for each pattern and then depending upon the link you are tracing use the relevant scraper function. | 0 | 185 | 0 | 2 | 2013-09-16T08:22:00.000 | python,scrapy | Scrape follow link with different scraper | 1 | 1 | 3 | 18,831,329 | 0 |
0 | 0 | I would like to be able to read messages from a specific user in skype using skype4py then send an automated response based upon the message back to the skype chat window. That way a user could message me and get an automated response saying that I'm currently busy or whatever. I really just need to know how to read and send skype chat using skype4py in python. Thanks for your time. | false | 18,836,983 | 0 | 0 | 0 | 0 | I do not want to give you all the answer so that you can improve your coding skills but I will give you some clues:
1) Use boolean values for the activated/deactivated state
2) Set a command that activates and deactivates it
3) On each received (or sent) chat message, check that flag and, if it is true, send the reply.
Gave you a lot of clues! Good look!. | 0 | 2,426 | 0 | 2 | 2013-09-16T20:44:00.000 | python,skype4py | Using python and skype4py to receive and send chat | 1 | 1 | 1 | 20,256,552 | 0 |
0 | 0 | i just want to know that in python socket programming, when to use socket_PF_PACKET and when to use socket.AF_INET, what are difference between both of them? | true | 18,838,451 | 1.2 | 0 | 0 | 6 | Use AF_INET if you want to communicate using Internet protocols: TCP or UDP. This is by far the most common choice, and almost certainly what you want.
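The everyday case looks like this: an AF_INET/SOCK_STREAM (TCP) client talking to a web server (host and port here are just example values):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('example.com', 80))
sock.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
print(sock.recv(4096))
sock.close()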
Use PF_PACKET if you want to send and receive messages at the most basic level, below the Internet protocol layer, for example because you are implementing the protocol yourself. Your process must run as root (or with a special capability) to use PF_PACKET. This is a very advanced option. If you have to ask this question, you want AF_INET, not PF_PACKET. | 0 | 8,603 | 0 | 5 | 2013-09-16T22:36:00.000 | python,sockets,python-2.7,python-3.x | difference between socket.PF_PACKET and socket.AF_INET in python | 1 | 1 | 1 | 18,838,540 | 0 |
0 | 0 | Is the following possible in python using the mechanize module ?
Having 2 threads in a program, both accessing the same web server, but one of them is actually logged into the server with a user/pass, while the other thread is just browsing the same web server without logging in.
I see that if I login to a webserver (say X) using mozilla, and then I open chrome I am not logged in automatically and I have to login again in chrome. I want to have the same behaviour in a python multithreaded program, where one thread is logged in and the other is not.
What would be a suitable way to do this ?
Thanks for any tips ! | true | 18,879,888 | 1.2 | 0 | 0 | 1 | Simply use two different instances of mechanize.Browser. As both use their own chain of handlers, they don't share cookies, logins, etc... It doesn't really matter if you use them from different threads or not, they're completely isolated in any case. | 1 | 71 | 0 | 0 | 2013-09-18T18:41:00.000 | python,multithreading,web-services,web,mechanize | A multi-threaded python program with a strange requirement :) | 1 | 1 | 1 | 18,880,269 | 0 |
1 | 0 | I use selenium, python webdriver to run my test application. I also have some selenium html tests that I would like to add to my application. This html tests are changing quite ofen so I can not just convert those tests to python webdriver and add it to my app. I think I need somehow run those tests without changes from my python webdriver app. How can I do it? | true | 18,888,367 | 1.2 | 1 | 0 | 0 | Use pySelenese module for python - it parses html test and let you run it. | 0 | 141 | 0 | 0 | 2013-09-19T07:12:00.000 | python,selenium,webdriver | Execute selenium html tests from webdriver tests | 1 | 1 | 2 | 18,890,907 | 0 |
0 | 0 | Changed Selenium IDE source code format by following these steps:
Options>>Formats>>Python2/UT/RC>>Ok
Format>>Python2/UT/RC
Recorded code in IDE stop
Now the playback button is not enabled. I tried exporting the code as Python2/UT/RC to Eclipse (Python); it is enabled there, but it is still not working: when I try to execute it, a box labelled "Ant" opens and then closes.
Please help. | false | 18,921,546 | 0 | 0 | 0 | 0 | Just you need to revert the steps to work.
Options>Formats>HTML
It should work. | 0 | 179 | 0 | 0 | 2013-09-20T16:27:00.000 | python,selenium-webdriver,selenium-rc,selenium-ide | Selenium IDE playback is not working | 1 | 1 | 1 | 18,952,607 | 0 |
0 | 0 | How to calculate round trip time for the communication between client and server in tcp connections. | false | 18,930,996 | 0.197375 | 0 | 0 | 1 | Assuming you have control over both client and server, send a message to the server with a time-stamp on it and have the server merely return the timestamp back to the client. When the client receives this back, it compares the current timestamp to the one in the payload and voila, the difference is the time it took for a round-trip. | 0 | 1,348 | 0 | 0 | 2013-09-21T09:22:00.000 | python-2.7 | Tcp round trip time calculation using python | 1 | 1 | 1 | 18,931,016 | 0 |
0 | 0 | Is it possible to fetch user profile photo on an MS Exchange network using Python?
Currently users signup with their company domain, and I'd like to fetch their profile photo automatically. Maybe something similar to gravatar, but for Microsoft networks? | false | 18,962,170 | 0 | 0 | 0 | 0 | The data is actually stored in the Active Directory. I don't remember the AD attribute off the top of my head, but on the MAPI level, the property can be retrieved from the PR_EMS_AB_THUMBNAIL_PHOTO (PidTagThumbnailPhone) property. | 1 | 452 | 0 | 2 | 2013-09-23T14:39:00.000 | python,outlook,exchange-server,gravatar,avatar | Fetch Microsoft Exchange / Outlook profile photo using Python | 1 | 1 | 1 | 18,970,588 | 0 |
0 | 0 | I want to generate a signature in Node.js. Here is a python example:
signature = hmac.new(SECRET, msg=message, digestmod=hashlib.sha256).hexdigest().upper()
I have this:
signature = crypto.createHmac('sha256', SECRET).update(message).digest('hex').toUpperCase()
What am I doing wrong? | true | 18,993,675 | 1.2 | 1 | 0 | 0 | Checked the node manuals as well. It looks correct to me. What about the ; in the end of the chain? | 0 | 1,076 | 0 | 0 | 2013-09-24T23:15:00.000 | python,node.js,hmac,digest | Nodejs equivalent of Python HMAC signature? | 1 | 1 | 1 | 18,993,775 | 0 |
0 | 0 | Is it possible to automatically install the xmlsec1 requirement of PySAML2 using pip?
The current project requires many packages and all are installed using pip and a requirements.txt file. I am now starting a SAML SSO implementation and need to install PySAML2. However, all the docs state that xmlsec1 needs to be installed as a requirement, and the pip install did not install it.
Is it possible to install xmlsec1 using pip? I see that PIL and pycrypto can successfully install external libs, so I am wondering as to why xmlsec1 cannot be installed using pip as part of PySAML2 dependencies. | false | 19,011,091 | 0 | 0 | 0 | 0 | Someone would need to create a pypi package containing a xmlsec1 binary.
Such package doesn't exist yet because it's:
quite unnatural - xmlsec1 is C application, not a python lib
hard - it has to be cross platform which is more hassle in C apps than in Python
python bindings should be written around xmlsec1 for a package to be at least somewhat relevant to pypi.
It shouldn't be impossible, and I'd love to be able to type "pip install xmlsec1" and see it doing all hard work. Unfortunately so far noone bothered implementing it. | 1 | 3,036 | 0 | 1 | 2013-09-25T17:06:00.000 | python,saml,saml-2.0 | Installing pysaml2 with pip - the xmlsec1 requirement | 1 | 1 | 2 | 19,993,471 | 0 |
0 | 0 | I'm running Python 2.6.5 on ec2 and I've replaced the old ftplib with the newer one from Python2.7 that allows importing of FTP_TLS. Yet the following hangs up on me:
from ftplib import FTP_TLS
ftp = FTP_TLS('host', 'username', 'password')
ftp.retrlines('LIST') (Times out after 15-20 min)
I'm able to run these three lines successfully in a matter of seconds on my local machine, but it fails on ec2. Any idea as to why this is?
Thanks. | false | 19,011,517 | 0 | 0 | 0 | 0 | If you're still having trouble could you try ruling out Amazon firewall problems. (I'm assuming you're not using a host based firewall.)
If your EC2 instance is in a VPC then in the AWS Management Console could you:
ensure you have an internet gateway
ensure that the subnet your EC2 instance is in has a default route (0.0.0.0/0) configured pointing at the internet gateway
in the Security Group for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
in the Network ACLs for both inbound and outbound allow All Traffic from all sources (0.0.0.0/0)
If your EC2 instance is NOT in a VPC then in the AWS Management Console could you:
in the Security Group for inbound allow All Traffic from all sources (0.0.0.0/0)
Only do this in a test environment! (obviously)
This will open your EC2 instance up to all traffic from the internet. Hopefully you'll find that your FTPS is now working. Then you can gradually reapply the security rules until you find out the cause of the problem. If it's still not working then the AWS firewall is not the cause of the problem (or you have more than one problem). | 0 | 659 | 0 | 1 | 2013-09-25T17:33:00.000 | python,amazon-web-services,ftp,amazon-ec2,ftplib | EC2 fails to connect via FTPS, but works locally | 1 | 1 | 2 | 19,076,508 | 0 |
0 | 0 | I am building a multiplayer game, and once the server has started I want to broadcast the server name continuously so that clients can tell that a server is running. I don't want users to have to enter the IP address and port number to connect to the server. Can someone help me broadcast the server name?
its an app not an web app. | false | 19,032,609 | 0 | 0 | 0 | 0 | If the location of your server is constant, why wouldn't you just define the server ip address in your code and have your script connect to it? The user would never have to see the ip address of your server. | 0 | 6,087 | 0 | 3 | 2013-09-26T15:38:00.000 | python,sockets,python-2.7,socketserver,python-sockets | Broadcasting socket server in python | 1 | 1 | 2 | 19,033,825 | 0 |
0 | 0 | Consider following problem:
I have a gtk / tk app which displays content from a website in a List(Store). I want to do the following things in order:
display the window & start downloading
show a progress bar
on completion of the downloads add the data into the list(Store)
This is the condition: the user has to be able to interact with the app while it is downloading. That means that the program is in the window's mainloop during the entire download.
What does not work:
urllib.urlopen() waits for the entire download to complete
Popen() does not allow the communication I want between the two threads
How to notify the program that the download has completed is the biggest question
Since I am event driven anyway because of Tk/Gtk I might as well use signals
My preferred way of solving this would be registering an additional signal "dl_done" and sending that signal to gtk when the download has finished. Is that even possible?
Any suggestions are apreciated! | true | 19,079,107 | 1.2 | 0 | 0 | 1 | A simple solution is:
to share a Queue object between the Gtk thread and the download thread
when a download is complete, you put the data in the queue (e.g. a tuple with the download URL and the downloaded contents) from the download thread
in the Gtk thread, you set up a glib timer checking periodically if something new is in the queue (say, every 100 milliseconds) thanks to the "get_nowait" method of the Queue object (see the sketch below)
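A minimal sketch of that idea, assuming the PyGTK-era APIs (gobject.timeout_add plus the Python 2 Queue/urllib2 modules); with PyGObject you would use GLib.timeout_add instead, and the URL is a placeholder:
import threading
import urllib2
import Queue
import gobject

results = Queue.Queue()

def download(url):
    data = urllib2.urlopen(url).read()
    results.put((url, data))  # hand the result over to the Gtk thread

def poll_queue():
    try:
        url, data = results.get_nowait()
    except Queue.Empty:
        return True  # nothing yet; keep the timer running
    print('downloaded %d bytes from %s' % (len(data), url))  # fill your List(Store) here instead
    return True

threading.Thread(target=download, args=('http://example.com/data',)).start()
gobject.timeout_add(100, poll_queue)  # checked every 100 ms from the Gtk main loop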
You can have multiple download threads, if needed. | 0 | 97 | 0 | 0 | 2013-09-29T14:20:00.000 | python,multithreading | Multithreading url requests in python | 1 | 1 | 1 | 19,079,260 | 1 |
1 | 0 | I'm trying to implement a data driven test approach using Selenium (Python) but I've run into an issue selecting dynamic values from multiple combo boxes. I'm currently aware of one option, using method driver.execute_script("JAVASCRIPT TO GET COMBO BOX OPTION") but hard coding the values defeats the purpose of automated data driven testing. Is there any other solution?
P.S Please let me know if there is any additional info needed.
Thanks,
Eric | false | 19,096,111 | 0 | 0 | 0 | 0 | I think this should $("#id").val() give you the value i guess | 0 | 252 | 0 | 0 | 2013-09-30T13:56:00.000 | javascript,python,selenium,selenium-webdriver,data-driven-tests | Selecting combo box values | 1 | 1 | 2 | 19,096,169 | 0 |
1 | 0 | the idea is that, say a developer has a set of tests to run against locahost:8000 and he has hardcoded that in his tests.
When we setup a proxy in a browser, the browser handles the proxy so that users only care about typing localhost:8000 instead of localhost:proxy_port. Browser actually sends request and receives response from the proxy port.
Can we simulate that, so that the tests don't have to change to localhost:proxy_port (and the proxy server knows to route to port 8000)? Instead, the developer can continue to use localhost:8000 in his tests, but when he runs them, the requests automatically go through the proxy server.
PS: Also without changing the port of the server. Since the assumption is that the port 8000 is running as application server and changing it to another port can break other things! So saying "change proxy server port to 8000 and my webapp server to 80001" doesn't solve the whole problem. | false | 19,108,182 | 0 | 0 | 0 | 0 | Set the HTTP_PROXY environment variable (and export it), and Python will honour that (as far as the standard library is used). | 0 | 486 | 0 | 0 | 2013-10-01T04:45:00.000 | python,testing,proxy | Can we simulate a browser proxy mechanism in a Python script? | 1 | 1 | 1 | 19,108,770 | 0 |
1 | 0 | I am building a gevent application in which I use gevent.http.HTTPServer. The application must support CORS, and properly handle HTTP OPTIONS requests. However, when OPTIONS arrives, HTTPServer automatically sends out a 501 Not Implemented, without even dispatching anything to my connection greenlet.
What is the way to work around this? I would not want to introduce an extra framework/web server via WSGI just to be able to support HTTP OPTIONS. | false | 19,114,113 | 0 | 0 | 0 | 0 | Practically the only option in this situation is to switch to using WSGI. I ended up switching to pywsgi.WSGIServer, and the problem solved itself.
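A bare-bones sketch of that route: gevent.pywsgi serves a plain WSGI app, and the app itself answers the OPTIONS preflight with CORS headers (the allowed origin, port and response body are placeholders):
from gevent.pywsgi import WSGIServer

def app(environ, start_response):
    cors = [('Access-Control-Allow-Origin', '*'),
            ('Access-Control-Allow-Methods', 'GET, POST, OPTIONS'),
            ('Access-Control-Allow-Headers', 'Content-Type')]
    if environ['REQUEST_METHOD'] == 'OPTIONS':
        start_response('200 OK', cors)
        return [b'']
    start_response('200 OK', cors + [('Content-Type', 'text/plain')])
    return [b'hello']

WSGIServer(('', 8080), app).serve_forever()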
It's important to understand that switching to WSGI in reality introduces very little (if any) overhead, giving you so many benefits that the practical pros far outweigh the hypothetical cons. | 0 | 374 | 0 | 1 | 2013-10-01T10:40:00.000 | python,http,cors,gevent | CORS with gevent | 1 | 1 | 1 | 21,741,160 | 0 |
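A minimal sketch of what the switch to pywsgi can look like, handling OPTIONS preflights with CORS headers; the allow-all origin policy and port are placeholders, not part of the original answer:

```python
from gevent import pywsgi

def app(environ, start_response):
    cors = [("Access-Control-Allow-Origin", "*"),  # placeholder policy
            ("Access-Control-Allow-Methods", "GET, POST, OPTIONS"),
            ("Access-Control-Allow-Headers", "Content-Type")]

    if environ["REQUEST_METHOD"] == "OPTIONS":
        # Preflight: reply with the CORS headers and an empty body.
        start_response("204 No Content", cors)
        return [b""]

    start_response("200 OK", cors + [("Content-Type", "text/plain")])
    return [b"hello"]

if __name__ == "__main__":
    pywsgi.WSGIServer(("0.0.0.0", 8000), app).serve_forever()
```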
0 | 0 | One of the values in a JSON file I'm parsing is Wroc\u00c5\u0082aw. How can I turn this string into a unicode object that yields "Wrocław" (which is the correct decoding in this case)? | true | 19,161,501 | 1.2 | 0 | 0 | 1 | It looks like your JSON was produced with the wrong encoding: \u00c5 (Å) and \u0082 are the Latin-1 characters for the byte pair 0xC5 0x82, which is the UTF-8 encoding of "ł". In other words, UTF-8 bytes were read as Latin-1.
So you can recover the intended text by encoding the value back to Latin-1 and decoding the result as UTF-8. | 1 | 10,963 | 0 | 5 | 2013-10-03T14:10:00.000 | python,json,unicode | Reading JSON: what encoding is "\u00c5\u0082"? How do I get it to a unicode object? | 1 | 1 | 2 | 19,161,687 | 0
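A short sketch of that round trip (shown with Python 3 string syntax):

```python
s = "Wroc\u00c5\u0082aw"                      # mojibake: UTF-8 bytes decoded as Latin-1
fixed = s.encode("latin-1").decode("utf-8")   # re-encode, then decode correctly
print(fixed)                                  # -> Wrocław
```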
1 | 0 | I need to build a web crawler that makes requests and brings back complete responses, and quickly if possible.
I come from the Java language. I used two "frameworks" there, and neither fully satisfied my intent.
Jsoup made requests and got responses quickly, but returned incomplete data when the page had a lot of information. Apache HttpClient was exactly the opposite: reliable data, but very slow.
I've looked over some Python modules and I'm testing Scrapy. In my searches I was unable to conclude whether it is the fastest and brings the data consistently, or whether there is something better, even if more verbose or difficult.
Second, is Python a good language for this purpose?
Thank you in advance. | true | 19,171,483 | 1.2 | 0 | 0 | 5 | +1 vote for Scrapy. For the past several weeks I have been writing crawlers for massive car forums, and Scrapy is absolutely incredible, fast, and reliable. | 0 | 724 | 0 | 0 | 2013-10-04T00:58:00.000 | python,web-crawler,scrapy | Python Crawling - Requests faster | 1 | 1 | 2 | 19,171,556 | 0
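Not from the answer itself, but for reference, a minimal Scrapy spider looks roughly like the sketch below (recent Scrapy API; the site, CSS selectors, and item fields are placeholders):

```python
import scrapy

class ForumSpider(scrapy.Spider):
    name = "forum"
    start_urls = ["http://example.com/forum/"]  # placeholder

    def parse(self, response):
        # Extract one item per thread listing; selectors are hypothetical.
        for thread in response.css("div.thread"):
            yield {
                "title": thread.css("a.title::text").get(),
                "url": response.urljoin(thread.css("a.title::attr(href)").get()),
            }
        # Follow pagination; Scrapy schedules these requests concurrently.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```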
0 | 0 | I have a background in Java and I am new to Python. I want to make sure I understand Python terminology correctly before I go ahead.
My understanding of a module is: a script which can be imported by many scripts, to make reading easier. Just like in Java you have a class, and that class can be imported by many other classes.
My understanding of a library is: a library contains many modules which are grouped by their use.
My question is: Are libraries like packages, where you have a package e.g. called food, then:
chocolate.py
sweets.py
biscuts.py
are contained in the food package?
Or do libraries use packages, so if we had another package drink:
milk.py
juice.py
contained in the package. The library contains two packages?
Also, an application programming interface (API) usually contains a set of libraries. Is this at the top of the hierarchy:
API
Library
Package
Module
Script
So an API will consist of all of 2-5? | false | 19,198,166 | 0.049958 | 0 | 0 | 1 | I will try to answer this in terms the earliest of beginners can follow, and explain why or how the terms are used differently, along with the most "official" and/or most understood or uniform use of the terms.
It can be confusing, and I confused myself by thinking too hard, so don't think too much about it. Anyway, context matters greatly.
Library - Most often refers to the general library, or to another collection created with a similar format and use. The General Library is the sum of 'standard', popular and widely used modules, which can be thought of as single-file tools, or shortcuts, making things possible or faster. The general library is an option most people enable when installing Python. Because it has this name, "Python General Library", it is often imitated in structure and ideas: simply a bunch of modules, maybe even packages, grouped together, usually in a list. The list is usually there to download them. Generally it is just related files with similar interests. That is the easiest way to describe it.
Module - A module refers to a file. The file has script 'in it', and the name of the file is the name of the module; Python files end with .py. All the file contains is code that, run together, makes something happen, using functions, strings, etc.
The main modules you probably see most often are popular because they are special modules that can get info from other files/modules.
It is confusing because the name of the file and the name of the module are the same, just without the .py. Really it's just code you can use as a shortcut, written by somebody to make something easier or possible.
Package - This is a term used generally and sometimes loosely, although context makes a difference. The most common use, from my experience, is multiple modules (or files) that are grouped together. Why they are grouped together can be for a few reasons; that is when context matters.
These are the ways I have noticed the term package(s) used: a group of downloaded, created and/or stored modules. Any or all of these can be true, but really a package is just a file that references other files, which need to be in the correct structure or format, and that entire sum is the package itself, whether installed separately or included in the Python general library. A package can contain modules (.py files) because they depend on each other and otherwise may not work correctly, or at all. Every part (module/file) of a package shares a common goal, and the total sum of all the parts is the package itself.
Most often in Python, packages are modules, because the package name is the name of the module that is used to connect all the pieces. So you can import a package because it is a module, which also allows it to call upon other modules that are not packages, because they only perform a certain function or task and don't involve other files. Packages have a goal, and each module works together to achieve that final goal.
Most confusion comes from a simple file name or prefix to a file being used as the module name and then again as the package name.
Remember, modules and packages can be installed. Library is usually a generic term for listing or formatting a group of modules and packages, much like Python's general library. A strict hierarchy would not really work; APIs don't really belong in it, and if you included them they could sit anywhere and everywhere involving scripts, modules, and packages. The word library being such a general term, easily applied to many things, also lets an API sit above or below that. Some modules can be based off other code, and that is the only time I think this would relate to a purely Python-related discussion. | 1 | 108,126 | 0 | 119 | 2013-10-05T13:08:00.000 | python,module,package | Whats the difference between a module and a library in Python? | 1 | 1 | 4 | 56,387,348 | 0
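To make the question's food example concrete, a package is just a directory of modules, conventionally with an __init__.py; a minimal sketch (the functions melt() and make_candy() are hypothetical, not from the question):

```python
# Layout on disk (the food package from the question):
#
#   food/
#       __init__.py      # marks the directory as a package
#       chocolate.py     # a module, e.g. defining melt()
#       sweets.py        # a module, e.g. defining make_candy()
#       biscuts.py
#
# Using it from another script:

from food import chocolate            # import a module from the package
from food.sweets import make_candy    # import a name from a module

chocolate.melt()                       # hypothetical function in chocolate.py
make_candy("toffee")                   # hypothetical function in sweets.py
```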
1 | 0 | I am trying to access historical Google page rankings or Alexa rankings over time to add some weightings to a search engine I am making for fun. This would be a separate function that I would call in Python (ideally), passing in the URL and how long I wanted to average over, measured in days, and then I could just use that information to weight my results!
I think it could be fun to work on, but I also feel that this may be easy to do with some trick of the APIs some guru might be able to show me and save me a few sleepless weeks! Can anyone help?
Thanks a lot ! | false | 19,215,815 | 0.049958 | 1 | 0 | 1 | Alexa (via AWS) charges to use their API to access Alexa rankings. The charge per query is micro so you can get hundreds of thousands of ranks relatively cheaply. I used to run a few search directories that indexed Alexa rankings over time, so I have experience with this. The point is, you're being evil by scraping vast amounts of data when you can pay for the legitimate service.
Regarding PageRank... Google do not provide a way to access this data. The sites that offer to show your PageRank use a trick to get the PageRank via the Google Toolbar. So again, this is not legitimate, and I wouldn't count on it for long-term data mining, especially not in bulk quantities.
Besides, PageRank counts for very little these days, since Google now relies on about 200 other factors to rank search results, as opposed to just measuring sites' link authority. | 0 | 3,663 | 0 | 1 | 2013-10-07T01:17:00.000 | python,google-api,google-search-api,pagerank,alexa | Possible to get alexa information or google page rankings over time? | 1 | 1 | 4 | 19,393,738 | 0 |
1 | 0 | What are some methods to make relative URLs absolute in scraped content, so that the scraped HTML appears like the original and the CSS is not broken?
I found out that the <base> tag may help. But how can I find out what the original base URL is?
I don't care about interactions with the links, but do want them to appear correct.
Assume a site 'example.com/blog/new/i.html' that I scrape, which has 2 resources:
< link src="/style/style.css" >
< link src="newstyle.css" >.
Now if I set the base as 'example.com/blog/new/i.html', won't the first one break? | false | 19,231,985 | 0 | 0 | 0 | 0 | Keep track of the URL of each page you scraped. One way would be to save it with the full URL as the filename. Then you can resolve relative URLs as per the HTML spec. | 0 | 358 | 0 | 1 | 2013-10-07T18:29:00.000 | javascript,python,html | How best to handle relative urls in scraped content? | 1 | 1 | 2 | 19,233,083 | 0
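Resolving relative URLs against the page they were scraped from is what urljoin does; a short sketch using the question's example (assuming the page was fetched over http):

```python
from urllib.parse import urljoin  # urlparse.urljoin on Python 2

page_url = "http://example.com/blog/new/i.html"  # the URL the page was scraped from

print(urljoin(page_url, "/style/style.css"))  # -> http://example.com/style/style.css
print(urljoin(page_url, "newstyle.css"))      # -> http://example.com/blog/new/newstyle.css
```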
1 | 0 | When we log in to Stack Overflow, a session is created between the browser and the server which only expires after we manually close the browser or clear cookies. But how can this be done programmatically on the CLIENT SYSTEM while all browser behavior continues to act normally? As if nothing happened, and another login action is simply needed.
OK! Just curiosity :)
I don't know if this could possibly be done.
Any tips would be appreciated. Danke! | false | 19,262,267 | 0 | 0 | 0 | 0 | No. The server has no idea when a browser closes. Because the connection between the browser and the server is stateless, when a user closes a tab or shuts down the whole application, the server is unaware of it. It doesn't even destroy the session when you "manually close the browser or clear cookies". The session does not expire until it times out.
Sessions can be destroyed programmatically (I suspect — I don't use Python); for example, when a user clicks the "Log Out" button you should be destroying their session programmatically, but if they just close the tab... you can't.
Using session cookies and having relatively short session timeouts is what you should be doing. Session cookies are discarded by the browser when the user closes the browser, so even if they open it right back up, they will need to reauthenticate. And having a short session timeout means that their sessions will not be sitting idle, taking up memory, and waiting to be hijacked on your server. | 0 | 66 | 0 | 0 | 2013-10-09T02:55:00.000 | python,security,session,cookies,browser | is there A chance to destroy a session via scripts(like python) before the IE/Chome exit? Not using browser options | 1 | 1 | 1 | 19,262,430 | 0
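As an illustration of destroying a session programmatically and keeping timeouts short, here is a sketch using Flask — an assumed framework for the example, not one named in the answer; the secret key and 15-minute lifetime are placeholders:

```python
from datetime import timedelta
from flask import Flask, session, redirect

app = Flask(__name__)
app.secret_key = "change-me"                            # placeholder
app.permanent_session_lifetime = timedelta(minutes=15)  # short server-side timeout
# By default Flask's cookie is a browser "session cookie",
# dropped when the browser itself is closed.

@app.route("/logout")
def logout():
    session.clear()      # destroy the session explicitly on logout
    return redirect("/")
```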