Dataset columns (name, dtype, value range or string-length range):
  Web Development (int64): 0 to 1
  Data Science and Machine Learning (int64): 0 to 1
  Question (string): lengths 28 to 6.1k
  is_accepted (bool): 2 classes
  Q_Id (int64): 337 to 51.9M
  Score (float64): -1 to 1.2
  Other (int64): 0 to 1
  Database and SQL (int64): 0 to 1
  Users Score (int64): -8 to 412
  Answer (string): lengths 14 to 7k
  Python Basics and Environment (int64): 0 to 1
  ViewCount (int64): 13 to 1.34M
  System Administration and DevOps (int64): 0 to 1
  Q_Score (int64): 0 to 1.53k
  CreationDate (string): lengths 23 to 23
  Tags (string): lengths 6 to 90
  Title (string): lengths 15 to 149
  Networking and APIs (int64): 1 to 1
  Available Count (int64): 1 to 12
  AnswerCount (int64): 1 to 28
  A_Id (int64): 635 to 72.5M
  GUI and Desktop Applications (int64): 0 to 1
0
0
When binding a socket in Python, the value for host can be '', which means all interfaces, or it can be a string containing a real IP address, e.g. '192.168.1.5'. So it's possible to bind to all interfaces or to one interface. What if I have 3 interfaces and I want to bind to only 2 of them? Is this possible? What value do I give host? I have tried a list, a tuple, and a comma-separated string.
true
14,243,196
1.2
0
0
1
Unfortunately, it's not possible to bind to a subset of interfaces using the socket module. This module provides access to the BSD socket interface, which allows you to specify only a single address when binding. For this single address, the special value INADDR_ANY exists in C to allow binding to all interfaces (Python translates the empty string to this value). If you want to bind to more than one, but not all, interfaces using the socket module, you'll need to create multiple sockets.
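A minimal sketch of the multiple-socket approach described above; the port number and interface addresses are assumptions for illustration:

import select
import socket

PORT = 9000                                   # assumed port
ADDRESSES = ['192.168.1.5', '192.168.2.5']    # the two interfaces you actually want

listeners = []
for addr in ADDRESSES:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((addr, PORT))                      # one socket per interface
    s.listen(5)
    listeners.append(s)

while True:
    # select lets a single loop service both listening sockets
    readable, _, _ = select.select(listeners, [], [])
    for s in readable:
        conn, peer = s.accept()
        conn.sendall(b'hello\n')
        conn.close()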
0
885
0
1
2013-01-09T17:41:00.000
python,sockets
Python Sockets Bind to 2 out of 3 network interfaces
1
1
1
14,243,915
0
1
0
After completing the oAuth handshake with Intuit Anywhere (IA), I use the API to get the HTML for the blue dot menu. Sometimes, the expected HTML is returned. Other times, I get this message: This API requires Authorization. 22 2013-01-10T15:32:33.43741Z Typically, this message is returned when the oAuth token has expired. However, on the occasions when I get it, I can click around my website for a bit or do a refresh, and the expected HTML is returned. I checked the headers being sent and, in both cases (i.e., when the expected HTML is returned and when an error is returned), the request is exactly the same. I wouldn't be surprised if this were a bug in Intuit's API, but I'm trying to rule out any other possibilities first. Please let me know if you have any thoughts on how to fix this. Thanks. Update: It seems the problem occurs only when I do a refresh. This seems to be the case in both Firefox and Safari on OS X. It sounds like a JavaScript caching issue.
false
14,261,512
0
0
0
0
I received this error as well and am posting this as a pointer for others who stumble upon it. Error Code 22 (Authentication required) for me meant that the OAuth signature was wrong. This was confusing because I couldn't find this error listed in the QuickBooks documentation for reconnect. I was signing the request as a "POST" request instead of a "GET" request, which is what QuickBooks requires for calls to the reconnect endpoint.
0
377
0
2
2013-01-10T15:36:00.000
python,intuit-partner-platform
Sometimes getting "API requires authorization" from intuit anywhere api after a fresh oAuth handshake
1
1
1
27,932,175
0
0
0
I have a server, on which I want to use Python, that is behind a company firewall. I do not want to mess with the firewall, and the only thing I can do is have exceptions made for specific URLs/domains. I also want to access packages located on PyPI using pip or easy_install. So, do you know which URLs I should ask to have listed in the firewall's exception rules, besides *.pypi.python.org?
true
14,277,088
1.2
0
0
13
You need to open up your firewall to the download locations of any package you need to install, or connect to a proxy server that has been given access. Note that the download location is not necessarily on PyPI. The Python Package Index is a metadata service, one that happens to also provide storage for the indexed packages. As such, not all packages indexed on PyPI are actually downloaded from PyPI; the download location could be anywhere on the internet. I'd say you start by opening pypi.python.org, then, as individual package installations fail, check their PyPI pages and add the download locations listed there.
0
29,142
0
28
2013-01-11T11:19:00.000
python,pip,firewall
what url should I authorize to use pip behind a firewall?
1
2
2
14,277,298
0
0
0
I have a server, on which I want to use Python, that is behind a company firewall. I do not want to mess with the firewall, and the only thing I can do is have exceptions made for specific URLs/domains. I also want to access packages located on PyPI using pip or easy_install. So, do you know which URLs I should ask to have listed in the firewall's exception rules, besides *.pypi.python.org?
false
14,277,088
0.462117
0
0
5
I've solved it by adding these domains to the firewall whitelist: pypi.python.org, pypi.org, pythonhosted.org.
0
29,142
0
28
2013-01-11T11:19:00.000
python,pip,firewall
what url should I authorize to use pip behind a firewall?
1
2
2
67,416,056
0
0
0
I'm new to Python, so please excuse me in advance if the question doesn't make sense. We have a Python messaging server which has one file, server.py, with a main function in it. It also has a class "*server", and main defines a global instance of this class, "the_server". All other functions in the same file or in different modules (in the same directory) import this instance as "from main import the_server". Now, my job is to devise a mechanism which allows us to get the latest message status (number of messages, etc.) from the aforementioned messaging server. This is the directory structure: src/ -> all .py files; only one file has main. In the same directory I created another status server with a main function listening for connections on a different port, and I'm hoping that every time a client asks me for the message status I can invoke function(s) on my messaging server which return the expected numbers. How can I import the global instance, "the_server", into my status server? Or rather, is this the right way to go?
false
14,370,402
0.197375
1
0
2
Unless your "status server" and "real server" are running in the same process (that is, loosely, one of them imports the other and starts it), just from main import the_server in your status server isn't going to help. That will just give you a new, completely independent instance of the_server that isn't doing anything, which you can then report status on. There are a few obvious ways to solve the problem. Merge the status server into the real server completely, by expanding the existing protocol to handle status-related requests, as Peter Wooster suggestions. Merge the status server into the real server async I/O implementation, but still listening on two different ports, with different protocol handlers for each. Merge the status server into the real server process, but with a separate async I/O implementation. Store the status information in, e.g., a mmap or a multiprocessing.Array instead of directly in the Server object, so the status server can open the same mmap/etc. and read from it. (You might be able to put the Server object itself in shared memory, but I wouldn't recommend this even if you could make it work.) I could make these more concrete if you explained how you're dealing with async I/O in the server today. Select (or poll/kqueue/epoll) loop? Thread per connection? Magical greenlets? Non-magical cooperative threading (like PEP 3156/tulip)? Even just "All I know is that we're using twisted/tornado/gevent/etc., so whatever that does" is enough.
1
128
0
0
2013-01-17T00:37:00.000
python,client-server,messaging
Python server interaction
1
2
2
14,370,602
0
0
0
I'm new to Python, so please excuse me in advance if the question doesn't make sense. We have a Python messaging server which has one file, server.py, with a main function in it. It also has a class "*server", and main defines a global instance of this class, "the_server". All other functions in the same file or in different modules (in the same directory) import this instance as "from main import the_server". Now, my job is to devise a mechanism which allows us to get the latest message status (number of messages, etc.) from the aforementioned messaging server. This is the directory structure: src/ -> all .py files; only one file has main. In the same directory I created another status server with a main function listening for connections on a different port, and I'm hoping that every time a client asks me for the message status I can invoke function(s) on my messaging server which return the expected numbers. How can I import the global instance, "the_server", into my status server? Or rather, is this the right way to go?
true
14,370,402
1.2
1
0
2
You should probably use a single server and design a protocol that supports several kinds of messages: 'send' messages get sent, 'recv' messages read any existing messages, 'status' messages get the server status, 'stop' messages shut it down, etc. You might look at existing protocols, such as REST, for ideas.
1
128
0
0
2013-01-17T00:37:00.000
python,client-server,messaging
Python server interaction
1
2
2
14,370,495
0
0
0
I am trying to access a subset of files from a directory in Dropbox. That directory has more than 25k files (about 200k and growing) and so my initial attempt at building a list of filenames from client.metadata isn't workable. How can one get around this? I can access the filenames from my local copy and periodically update that list. However, because this is a script that a few people in my lab will use, I hoped for something that did not rely on my local copy of Dropbox.
true
14,381,019
1.2
0
0
0
According to the Dropbox API, the maximum number of files returned from the /metadata API call is 25,000. There is a way to limit the number of files returned from the API call, but there does not seem to be a way to list file entries from X to Y (like getting entries 100 to 200, then 200 to 300). It seems the only way is to separate this large number of files into folders.
0
259
0
0
2013-01-17T14:17:00.000
python,dropbox,dropbox-api
Access files from a large directory Dropbox API
1
1
1
14,384,514
0
0
0
I need to log in to a website and navigate to a report page. After entering the required information and clicking on the "Go" button (this is a multipart/form-data form that I am submitting), there's a pop-up window asking me to save the file. I want to do this automatically in Python. I have searched the internet for a couple of days, but can't find a way to do it in Python. Using urllib2, I can get as far as submitting the multipart form, but how can I get the name and location of the file and download it? Please note: there is no href associated with the "Go" button. After submitting the form, a file-save dialog pops up asking me where to save the file. Thanks in advance.
false
14,400,767
0
0
0
0
There are Python bindings for Selenium that help with scripting simulated browser behavior, allowing you to do really complex things with it. Take a look at it; it should be enough for what you need.
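If you go the Selenium route, a Firefox profile can be configured to save the file straight to disk instead of showing the save dialog. This is only a hedged sketch: the download directory, MIME type, URL, and button name below are all assumptions you would replace with the real values:

from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference('browser.download.folderList', 2)          # 2 = use a custom download dir
profile.set_preference('browser.download.dir', '/tmp/reports')    # assumed directory
profile.set_preference('browser.download.manager.showWhenStarting', False)
# replace with the Content-Type the report is actually served with
profile.set_preference('browser.helperApps.neverAsk.saveToDisk', 'application/octet-stream')

driver = webdriver.Firefox(firefox_profile=profile)
driver.get('http://example.com/report-form')                      # placeholder URL
driver.find_element_by_name('go').click()                         # assumed button name; the file lands in /tmp/reports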
0
346
0
0
2013-01-18T14:15:00.000
python,download,popupwindow
Using python to download a file after clicking the submit button
1
1
1
14,401,005
0
0
0
Need a way to extract a domain name without the subdomain from a url using Python urlparse. For example, I would like to extract "google.com" from a full url like "http://www.google.com". The closest I can seem to come with urlparse is the netloc attribute, but that includes the subdomain, which in this example would be www.google.com. I know that it is possible to write some custom string manipulation to turn www.google.com into google.com, but I want to avoid by-hand string transforms or regex in this task. (The reason for this is that I am not familiar enough with url formation rules to feel confident that I could consider every edge case required in writing a custom parsing function.) Or, if urlparse can't do what I need, does anyone know any other Python url-parsing libraries that would?
false
14,406,300
0.028564
0
0
1
Using tldextract works fine, but it apparently has a problem parsing the blogspot.com subdomain and creates a mess. If you would like to go ahead with that library, make sure to implement an if condition or something similar to avoid returning an empty string for the subdomain.
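For reference, a small example of how tldextract is typically used, including the guard against an empty result that the answer recommends:

import tldextract

ext = tldextract.extract('http://www.google.com')
# ExtractResult(subdomain='www', domain='google', suffix='com')
if ext.domain and ext.suffix:
    registered = ext.domain + '.' + ext.suffix     # 'google.com'
else:
    registered = None                              # guard against edge cases that parse oddly
print(registered)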
1
36,432
0
56
2013-01-18T19:33:00.000
python,parsing,url,urlparse
Python urlparse -- extract domain name without subdomain
1
1
7
18,302,950
0
0
0
Is there any method to let me know the values of an object's attributes? For example: info = urllib2.urlopen('http://www.python.org/') I want to know the values of all of info's attributes. I may not even know what attributes info has, and str() or list() cannot give me the answer.
false
14,423,868
0.07983
0
0
2
You can use vars(info) or info.__dict__. It will return the object's namespace as a dictionary in the attribute_name:value format.
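A quick illustration of that suggestion (Python 2-style, matching the question):

import urllib2

info = urllib2.urlopen('http://www.python.org/')
print(vars(info))          # the instance namespace as {attribute_name: value}
print(info.__dict__)       # same thing
print(dir(info))           # also lists methods and class-level attributes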
1
306
0
0
2013-01-20T11:10:00.000
python
how to reveal an object's attributes in Python?
1
1
5
14,423,880
0
1
0
I've been searching for a week for how to check whether a checkbox is checked in Selenium WebDriver with Python, but I can only find algorithms for Java. I've read the WebDriver docs and they don't have an answer for that. Does anyone have a solution?
false
14,442,636
1
0
0
6
I'm using driver.find_element_by_name("< check_box_name >").is_selected()
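In context, that call is typically used like this; the URL and the checkbox name are assumptions:

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com/form')                    # placeholder URL
checkbox = driver.find_element_by_name('accept_terms')   # assumed checkbox name
if not checkbox.is_selected():
    checkbox.click()                                     # tick it only if currently unchecked
assert checkbox.is_selected()
driver.quit()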
0
62,290
0
40
2013-01-21T16:09:00.000
python,selenium-webdriver
How can I check if a checkbox is checked in Selenium Python WebDriver?
1
1
6
18,076,075
0
0
0
I'm creating a Python class as an API to objects managed remotely via REST. The REST API includes a call that returns a list of dictionaries that define properties of the remote objects. I'm in the process of writing code to dynamically add these properties as attributes of the corresponding Python objects (ie: when the class is instantiated, a REST query is made for a list of properties each of which is then added as an attribute on the instance.) However, this brought back memories of PHP code I'd once glanced at that dynamically added attributes to objects and thinking "have you heard of dictionaries??" Given my background is rooted in C/C++/Java this is a somewhat foreign idea to me, but is perhaps par for the course in Python (and PHP). So, when is it appropriate to dynamically add attributes to an object rather than using a dictionary? From what I've read in related material, it seems to me an API is a legitimate case.
true
14,466,695
1.2
0
0
2
Disclaimer: I know nothing about REST... That said, I would typically be a little hesitant to add attributes to an object willy-nilly, for the following reasons: you might accidentally replace some data that you don't want to replace due to a namespace clash, and dictionaries are easier to inspect than having to go through obj.__dict__ or vars(obj) or something similar. There is precedent for adding attributes in the standard library, however: that's basically what argparse does to populate the returned namespace. I think that perhaps it is worth asking the question: "Will the user know which attributes are going to be added to the object?". If the answer to this question is yes, and you're not worried about the aforementioned namespace conflicts, then perhaps a simple object is appropriate; otherwise, I'd just use a dictionary.
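A minimal sketch of the pattern under discussion; the shape of the REST payload (a list of name/value dicts) is an assumption:

class RemoteObject(object):
    def __init__(self, properties):
        # properties: the list of dicts returned by the REST call (assumed shape)
        for prop in properties:
            setattr(self, prop['name'], prop['value'])

obj = RemoteObject([{'name': 'status', 'value': 'active'},
                    {'name': 'size', 'value': 42}])
print(obj.status)      # 'active'
print(vars(obj))       # easy to inspect, addressing the second concern above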
1
191
0
0
2013-01-22T19:41:00.000
python,class,rest,attributes
When is it appropriate to dynamically add attributes to a Python object?
1
1
2
14,466,768
0
1
0
I have two servers. Basically Server #1 runs a script throughout the day, and when it's done I need to send a notification to Server #2 to get it to run some other scripts. I am currently leaning on Amazon AWS tools and using Python, so I was wondering if someone could recommend a simple, secure and easy-to-program way of: setting up a flag on Server #1 when it has finished running its script; polling this flag from Server #2; running scripts on Server #2 when the flag is active; removing the flag from Server #1 when the scripts have finished running on Server #2. Should I be using Amazon SNS or SQS? Alternatively, are these both a poor choice, and if so can you recommend anything better? I am leaning towards AWS tools because I already have boto installed and I like the ease of use.
false
14,499,893
0.099668
0
0
1
Assuming Server #1 is running its script through cron, there should be no reason you can't just use ssh to remotely control Server #2. I believe if you use the elastic IP addresses it might not count as bandwidth usage. Barring that, I'd use SNS. The model would instead be something like: Server #1 notifies Server #2 (script starting); Server #1 starts running the script; (optional) Server #1 notifies Server #2 of progress; Server #1 notifies Server #2 (script complete), starting Server #2's scripts; Server #2 notifies Server #1 when it's complete. In this case you'd set up some sort of simple webserver to accept the notifications. Simple CGI scripts would cut it, though they aren't the most secure option. I'd only bring SQS into the picture if too many scripts were trying to run at once. If you are chunking it as "all of Server #1, then Server #2", it's a level you don't really need.
0
479
0
2
2013-01-24T11:09:00.000
python,linux,amazon-web-services,debian
Communicate between two servers - Amazon SQS / SNS?
1
1
2
14,500,253
0
0
0
I am using Python, and I want to analyze audio files from internet streaming media (for example YouTube, SoundCloud, etc.). Is there a universal way to do so? Every piece of music or video is preloaded, so there must be a way to access it. How? I want to run this script on an external server; that might be relevant to the answer. Thanks.
true
14,500,656
1.2
0
0
-1
All sound you "hear" on your pc has to run through your soundcard, Maybe somehow write a script that "records" the sounds runnning through the device. Maybe u can use Pymedia module?
0
65
0
1
2013-01-24T11:47:00.000
python,audio,audio-streaming
Cross-platform audio import from different sources
1
1
1
14,503,589
0
1
0
I want to crawl a website that has multiple pages, where a page is loaded dynamically when its page number is clicked. How do I screen-scrape it? That is, since the URL is not present as an href, how do I crawl to the other pages? I would be grateful if someone helped me with this. PS: the URL remains the same when a different page is clicked.
false
14,503,078
0.033321
0
0
1
If you are using Google Chrome, you can check the URL that is being called dynamically under Network -> Headers in the developer tools, and based on that you can identify whether it is a GET or POST request. If it is a GET request, you can find the parameters straight from the URL. If it is a POST request, you can find the parameters under Form Data in Network -> Headers of the developer tools.
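A hedged sketch of what that usually looks like once you have copied the endpoint and parameters out of the Network panel; the URL and parameter names below are placeholders, not values from the question:

import requests

ENDPOINT = 'http://example.com/ajax/results'        # placeholder discovered endpoint

def fetch_page(page_number):
    # use requests.get(ENDPOINT, params=...) instead if the panel shows a GET
    resp = requests.post(ENDPOINT, data={'page': page_number})
    resp.raise_for_status()
    return resp.text                                 # or resp.json() if the site returns JSON

for page in range(1, 6):
    html = fetch_page(page)
    # ... parse html here ...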
0
4,658
0
3
2013-01-24T13:58:00.000
python,web-crawler
How to crawl a web site where page navigation involves dynamic loading
1
1
6
14,503,228
0
0
0
I wrote the following function for use in a collection of socket utilities I use for testing. (Using Python 2.7.3, if that's important.) The main reason I pulled the select library into it was so I could implement a timeout instead of just waiting forever for a response. The problem I've found, though, is that the response is getting truncated at 2048 characters, despite using 64K as the maximum size for the .recv() method on the socket. This truncation didn't happen before I involved select; it happily pulled through 64K and even more when I set the maximum size higher. I've looked through some online resources on select and I can't find any information on this apparent cap on the size of the received data, i.e. no information that it exists, let alone how to modify it. Can anyone point me to a way to overcome this 2K limit?

import socket,select

MAXSIZE = 65535
TIMEOUT = 10

def transientConnect(host,port,sendData):
    error,response = False,''
    try:
        sendSocket = socket.socket()
        sendSocket.connect((host,port))
        sendSocket.send(sendData)
        gotData = select.select([sendSocket],[],[],TIMEOUT)
        if (gotData[0]):
            response = sendSocket.recv(MAXSIZE)
        else:
            error = True
            response = '*** TIMEOUT ***\nNo response from host.'
        sendSocket.close()
    except Exception, errText:
        error,response = True,'*** SOCKET ERROR ***\n'+str(errText)
    return (error,response)
true
14,506,025
1.2
0
0
1
The corrected, tested, and working function:

MAXSIZE = 65535
TIMEOUT = 10

def transientConnect(host,port,sendData):
    error,response = False,''
    try:
        sendSocket = socket.socket()
        sendSocket.connect((host,port))
        sendSocket.send(sendData)
        gotData = select.select([sendSocket],[],[],TIMEOUT)
        if (gotData[0]):
            response = sendSocket.recv(MAXSIZE)
            while True:
                # Once data starts arriving, use a shorter timeout
                gotData2 = select.select([sendSocket],[],[],0.5)
                if (gotData2[0]):
                    moreData = sendSocket.recv(MAXSIZE)
                    response += moreData
                else:
                    break
        else:
            error = True
            response = '*** TIMEOUT ***\nNo response from host.'
        sendSocket.close()
    except Exception, errText:
        error,response = True,'*** SOCKET ERROR ***\n'+str(errText)
    return (error,response)
0
1,405
0
2
2013-01-24T16:25:00.000
sockets,python-2.7,client
Server response on socket being truncated at 2K
1
1
2
14,514,022
0
0
0
I have a weather station supplying me with data every 2.5 seconds (using weewx). I want to show this live on my website, using Highcharts to plot the live data. Currently I can pick up the messages from the Redis channel 'weather' using Predis, just to test. The issue is that the data is only sent every 2.5 seconds, so when a user opens the PHP site he sometimes has to wait up to 2.5 seconds for the chart to appear. Do you have any suggestions for getting around this issue?
false
14,508,976
0
1
0
0
Store the data manually the very first time (while developing the software). Every 2.5 seconds of running, use polling to check for updated data. If the data is updated, then update the data currently stored. When the user logs on, you plot the chart with the values in the database.
0
238
0
0
2013-01-24T19:15:00.000
php,python,redis
Redis: Live data via channel
1
2
2
14,511,844
0
0
0
I have a weather station supplying me with data every 2.5 seconds (using weewx). I want to show this live on my website, using Highcharts to plot the live data. Currently I can pick up the messages from the Redis channel 'weather' using Predis, just to test. The issue is that the data is only sent every 2.5 seconds, so when a user opens the PHP site he sometimes has to wait up to 2.5 seconds for the chart to appear. Do you have any suggestions for getting around this issue?
true
14,508,976
1.2
1
0
0
What you should do is have a second listener dump data into a key current_weather every time an event comes across. When you first load the page, pull from that key to build the chart, then start listening for updates.
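A hedged sketch of that pattern on the Python side with redis-py (the PHP page would read the same key with Predis); the key name and payload shape are assumptions:

import json
import redis

r = redis.StrictRedis()

def on_weather_reading(reading):
    # called by the weewx listener every 2.5 seconds
    payload = json.dumps(reading)
    r.set('current_weather', payload)    # cache the latest reading
    r.publish('weather', payload)        # still notify live subscribers

def latest_weather():
    # called when a page first loads, so the chart renders immediately
    raw = r.get('current_weather')
    return json.loads(raw) if raw else None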
0
238
0
0
2013-01-24T19:15:00.000
php,python,redis
Redis: Live data via channel
1
2
2
14,511,705
0
0
0
I am currently working on an app that syncs one specific folder in a user's Google Drive. I need to find out when any of the files/folders in that specific folder have changed. The actual syncing process is easy, but I don't want to do a full sync every few seconds. I am considering one of these methods: 1) Monitor the changes feed and look for any file changes. This method is easy, but it will cause a sync if ANY file in the drive changes. 2) Frequently request all files in the whole drive, e.g. service.files().list().execute(), and look for changes within the specific tree. This is a brute-force approach. It will be too slow if the user has thousands of files in their drive. 3) Start at the specific folder and move down the folder tree looking for changes. This method will be fast if there are only a few directories in the specific tree, but it will still lead to numerous API requests. Are there any better ways to find out whether a specific folder and its contents have changed? Are there any optimisations I could apply to methods 1, 2 or 3?
true
14,511,669
1.2
0
0
1
As you have correctly stated, you will need to keep (or work out) the file hierarchy for a changed file to know whether a file has changed within a folder tree. There is no way of knowing directly from the changes feed whether a deeply nested file within a folder has been changed. Sorry.
0
547
0
3
2013-01-24T22:06:00.000
python,google-drive-api
How can I find if the contents in a Google Drive folder have changed
1
2
2
14,556,472
0
0
0
I am currently working on an app that syncs one specific folder in a user's Google Drive. I need to find out when any of the files/folders in that specific folder have changed. The actual syncing process is easy, but I don't want to do a full sync every few seconds. I am considering one of these methods: 1) Monitor the changes feed and look for any file changes. This method is easy, but it will cause a sync if ANY file in the drive changes. 2) Frequently request all files in the whole drive, e.g. service.files().list().execute(), and look for changes within the specific tree. This is a brute-force approach. It will be too slow if the user has thousands of files in their drive. 3) Start at the specific folder and move down the folder tree looking for changes. This method will be fast if there are only a few directories in the specific tree, but it will still lead to numerous API requests. Are there any better ways to find out whether a specific folder and its contents have changed? Are there any optimisations I could apply to methods 1, 2 or 3?
false
14,511,669
0
0
0
0
There are a couple of tricks that might help. Firstly, if your app is using the drive.file scope, then it will only see its own files. Depending on your specific situation, this may equate to your folder hierarchy. Secondly, files can have multiple parents. So when creating a file in folder-top/folder-1/folder-1a/folder-1ai, you could declare both folder-1ai and folder-top as parents. Then you simply need to check folder-top.
0
547
0
3
2013-01-24T22:06:00.000
python,google-drive-api
How can I find if the contents in a Google Drive folder have changed
1
2
2
20,685,131
0
0
0
I can't import the socket module into my program. When I import it, I get "AttributeError: 'module' object has no attribute 'AF_INET'". I think there is a problem with my Python virtual machine.
false
14,545,128
0
0
0
0
Rule of thumb for user-created Python modules: never give your modules the same names as Python's standard modules; an instant conflict will occur. E.g., if you name your socket app "socket.py" and start your application, a problem similar to the one you showed will appear.
1
117
0
0
2013-01-27T06:49:00.000
python,python-2.7
Issue with socket module?
1
1
2
14,545,808
0
0
0
When doing a scrape of a site, which would be preferable: using curl, or using Python's requests library? I originally planned to use requests and explicitly specify a user agent. However, when I use this I often get an "HTTP 429 too many requests" error, whereas with curl, it seems to avoid that. I need to update metadata information on 10,000 titles, and I need a way to pull down the information for each of the titles in a parallelized fashion. What are the pros and cons of using each for pulling down information?
true
14,552,088
1.2
0
0
3
Since you want to parallelize the requests, you should use requests with grequests (if you're using gevent; or erequests if you're using eventlet). You may have to throttle how quickly you hit the website, though, since they may do some rate limiting and refuse you for requesting too much in too short a period of time.
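A minimal grequests sketch; the URL list, the header, and the concurrency limit are assumptions:

import grequests

urls = ['http://example.com/title/%d' % i for i in range(100)]   # placeholder URLs
reqs = (grequests.get(u, headers={'User-Agent': 'Mozilla/5.0'}) for u in urls)

# size=5 throttles concurrency, which helps avoid HTTP 429 responses
for resp in grequests.map(reqs, size=5):
    if resp is not None and resp.ok:
        print(resp.url, len(resp.content))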
0
2,673
0
4
2013-01-27T20:47:00.000
python,curl,python-requests
Using curl vs Python requests
1
3
3
14,552,426
0
0
0
When doing a scrape of a site, which would be preferable: using curl, or using Python's requests library? I originally planned to use requests and explicitly specify a user agent. However, when I use this I often get an "HTTP 429 too many requests" error, whereas with curl, it seems to avoid that. I need to update metadata information on 10,000 titles, and I need a way to pull down the information for each of the titles in a parallelized fashion. What are the pros and cons of using each for pulling down information?
false
14,552,088
0
0
0
0
I'd go for the in-language version over an external program any day, because it's less hassle. Only if it turns out unworkable would I fall back to this. Always consider that people's time is infinitely more valuable than machine time. Any "performance gains" in such an application will probably be swamped by network delays anyway.
0
2,673
0
4
2013-01-27T20:47:00.000
python,curl,python-requests
Using curl vs Python requests
1
3
3
14,552,220
0
0
0
When doing a scrape of a site, which would be preferable: using curl, or using Python's requests library? I originally planned to use requests and explicitly specify a user agent. However, when I use this I often get an "HTTP 429 too many requests" error, whereas with curl, it seems to avoid that. I need to update metadata information on 10,000 titles, and I need a way to pull down the information for each of the titles in a parallelized fashion. What are the pros and cons of using each for pulling down information?
false
14,552,088
0.132549
0
0
2
Using requests would allow you to do it programmatically, which should result in a cleaner product. If you use curl, you're doing os.system calls which are slower.
0
2,673
0
4
2013-01-27T20:47:00.000
python,curl,python-requests
Using curl vs Python requests
1
3
3
14,552,201
0
0
0
So I'm trying to run a search query through the Twitter API in Python. I can get it to return up to 100 results using the "count" parameter. Unfortunately, version 1.1 doesn't seem to have the "page" parameter that was present in 1.0. Is there some sort of alternative for 1.1? Or, if not, does anyone have any suggestions for alternative ways to get a decent number of tweets returned for a subject? Thanks. Update with solution: Thanks to Ersin below. I queried as I normally would for a page, and when it returned I checked for the ID of the oldest tweet. I then used this as the max_id in the next URL.
true
14,592,874
1.2
1
0
2
I think you should use "since_id" parameter in your url. since_id provides u getting pages that older than since_id. So, for the next page you should set the since_id parameter as the last id of your current page.
0
297
0
0
2013-01-29T21:44:00.000
python,twitter
Returning more than one page in Python Twitter search
1
1
1
14,593,070
0
0
0
I'm running some fairly simple tests using browsermob and Selenium to open Firefox browsers and navigate through random pages. Each Firefox instance is supposed to be independent, and none of them share any cookies or cache. On my Mac OS X machine, this works quite nicely. The browsers open, navigate through a bunch of pages and then close. On my Windows machine, however, even after the Firefox browser closes, the tmp** folders remain and, after leaving the test going for a while, they begin to take up a lot of space. I was under the impression that each newly spawned browser would have its own profile, which it clearly does, but also that it would delete the profile it made when the browser closes. Is there an explicit Selenium command I'm missing to enforce this behaviour? Additionally, I've noticed that some of the tmp folders are showing up in AppData/Local/Temp/2 and that many others are showing up in the folder where I started running the script...
true
14,612,294
1.2
0
0
8
On your mac, have you looked in /var/folders/? You might find a bunch of anonymous*webdriver-profile folders a few levels down. (mine appear in /var/folders/sm/jngvd6s57ldb916b7h25d57r0000dn/T/) Also, are you using driver.close() or driver.quit()? I thought driver.quit() cleans up the temp folder, but I could be wrong.
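For reference, the usual pattern that relies on quit() for cleanup; close() only closes the current window, while quit() ends the session and should also remove the temporary profile:

from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get('http://example.com')    # placeholder navigation
    # ... test steps ...
finally:
    driver.quit()                       # end the session; the temp profile should be cleaned up here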
0
3,222
0
5
2013-01-30T19:41:00.000
python,firefox,selenium
Selenium not deleting profiles on browser close
1
1
1
14,612,967
0
0
0
My apologies if this is a trivial question. I've recently begun doing some Android programming and I'm writing a simple app that allows you to use your Android device as a controller for your Windows PC. Specifically, it allows the user to do things like turn off the machine, make it sleep, reboot it, etc. I'm currently using a Python library called CherryPy as a server on the Windows machine to execute the actual win32api calls to perform the desired functions. What I'm not sure about is how to discover (dynamically) which machine on the network is actually hosting the server. Everything is working fine if I hardcode my machine's public IP into the Android app, but obviously that is far less than ideal. I've considered having the user manually enter their machine's IP in the app, but if there's a way to, say, broadcast a quick message to all machines on the WiFi and check for a pre-canned response that my Python server would send out, that'd be wonderful. Is that possible? Thanks in advance, guys.
false
14,635,036
0.197375
0
0
2
Try sending a UDP packet to the special broadcast address 255.255.255.255. Every device in the network should receive a copy of that packet (barring firewalls), and you can arrange to have the server reply to the packet with its identity.
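A minimal sketch of that idea; the port and message strings are assumptions. The responder would run next to the CherryPy server on the Windows machine, and the discovering side broadcasts and waits for the canned reply:

import socket

PORT = 50000                      # assumed discovery port

def respond_forever():
    # runs on the Windows machine hosting the CherryPy server
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b'who-has-the-controller-server?':
            sock.sendto(b'i-do', addr)

def discover(timeout=3):
    # broadcast and wait for the pre-canned response
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(b'who-has-the-controller-server?', ('255.255.255.255', PORT))
    try:
        data, addr = sock.recvfrom(1024)
        return addr[0]            # IP of the machine that answered
    except socket.timeout:
        return None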
0
1,597
0
6
2013-01-31T20:59:00.000
java,android,python
Broadcast a message to all available machines on WiFi
1
1
2
14,635,099
0
1
0
We test an application developed in house using a Python test suite which accomplishes web navigation/interaction through Selenium WebDriver. A tricky part of our web testing is dealing with a series of PDF reports in the app. We are testing a planned upgrade of Firefox from v3.6 to v16.0.1, and it turns out that the way we captured reports before no longer works, because of changes in the directory structure of Firefox's temp folder. I didn't write the original PDF-capturing code, but I will refactor it for whatever we end up using with v16.0.1, so I was wondering if there's a better way to save a PDF using Python's Selenium WebDriver bindings than what we're currently doing. Previously, for Firefox v3.6, after clicking a link that generates a report, we would scan the "C:\Documents and Settings\\Local Settings\Temp\plugtmp" directory for a PDF file (with a specific name convention) to be generated. To be clear, we're not saving the report from the webpage itself, we're just using the one generated in Firefox's Temp folder. In Firefox 16.0.1, after clicking a link that generates a report, the file is generated in "C:\Documents and Settings\ \Local Settings\Temp\tmp*\cache*", with a random file name, not ending in ".pdf". This makes capturing this file somewhat more difficult, if using a technique similar to our previous one - each browser has a different tmp*** folder, which has a cache full of folders, inside of which the report is generated with a random file name. The easiest solution I can see would be to directly save the PDF, but I haven't found a way to do that yet. To use the same approach as we used in FF3.6 (finding the PDF in the Temp folder directory), I'm thinking we'll need to do the following: figure out which tmp*** folder belongs to this particular browser instance (which we can do by inspecting the tmp*** folders that exist before and after the browser is instantiated); look inside that browser's cache for a file generated immediately after the PDF report was generated (which we can do by comparing timestamps); in cases where multiple files are generated in the cache, we could possibly sort based on size and take the largest file, since the PDF will almost certainly be the largest temp file (although this seems flaky and will need to be tested in practice). I'm not feeling great about this approach, and was wondering if there's a better way to capture PDF files. Can anyone suggest a better approach? Note: the actual scraping of the PDF file is still working fine.
true
14,651,973
1.2
1
0
0
We ultimately accomplished this by clearing firefox's temporary internet files before the test, then looking for the most recently created file after the report was generated.
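A small sketch of the "most recently created file" step, assuming the temp/cache directory is already known:

import glob
import os

def newest_file(folder):
    # returns the most recently created file in folder, or None if it is empty
    paths = glob.glob(os.path.join(folder, '*'))
    return max(paths, key=os.path.getctime) if paths else None

# usage: clear the temp directory, click the report link, then
# pdf_path = newest_file(r'C:\path\to\firefox\temp')   # path is an assumption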
0
1,728
0
2
2013-02-01T17:40:00.000
python,pdf,selenium,webdriver,selenium-webdriver
Capturing PDF files using Python Selenium Webdriver
1
1
1
14,760,698
0
1
0
While processing HTML using BeautifulSoup, the < and > were converted to &lt; and &gt;. Since the anchor tags were all converted, the whole soup lost its structure. Any suggestions?
false
14,669,283
0
0
0
0
It can be due to an invalid character (from charset encoding/decoding), which gives BeautifulSoup trouble parsing the input. I solved it by passing my string directly to BeautifulSoup without doing any encoding/decoding myself. In my case, I was trying to convert UTF-16 to UTF-8 myself.
0
8,456
0
7
2013-02-03T03:42:00.000
python,html,parsing,beautifulsoup
< > changed to < and > while parsing html with beautifulsoup in python
1
1
2
55,544,880
0
1
0
My Flask application has to do quite a large calculation to fetch a certain page. While Flask is running that function, another user cannot access the website, because Flask is busy with the large calculation. Is there any way that I can make my Flask application accept requests from multiple users?
false
14,672,753
0.291313
0
0
3
For requests that take a long time, you might want to consider starting a background job for them.
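A minimal sketch of that idea: hand the long calculation to a background thread and let the route return immediately. This is a simplification; a real deployment would use a proper task queue and a production WSGI server, and the route names and in-memory result store are assumptions:

import threading
from flask import Flask

app = Flask(__name__)
results = {}                                  # in-memory store; an assumption

def heavy_calculation(job_id):
    results[job_id] = sum(i * i for i in range(10 ** 7))   # stand-in for the real work

@app.route('/start/<job_id>')
def start(job_id):
    threading.Thread(target=heavy_calculation, args=(job_id,)).start()
    return 'started'

@app.route('/result/<job_id>')
def result(job_id):
    return str(results.get(job_id, 'still running'))

if __name__ == '__main__':
    app.run(threaded=True)                    # lets the dev server keep serving other users meanwhile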
0
72,055
0
65
2013-02-03T13:02:00.000
python,flask
Handling multiple requests in Flask
1
1
2
14,673,087
0
0
0
So I have a webpage that has some javascript that gets executed when a link is clicked. This javascript opens a new window and calls some other javascript which requests an xml document which it then parses for a url to pass to a video player. How can I get that xml response using selenium?
true
14,711,161
1.2
0
0
1
Short answer: unless the XML is posted to the page, you can't. Long answer: you can use Selenium to do JS injection on the page so that the XML document is replicated to some hidden page element you can inspect, or stored to a local file that you can open. This is, of course, assuming that the XML document is actually retrieved client-side; if this is all server-side, you'll need to integrate with the backend or emulate the call yourself. Oh, and one last option to explore would be to proxy the browser Selenium is driving, then inspect the traffic for the response containing the XML. Though more complicated, that could actually be argued to be the best solution, since you aren't modifying the system under test in order to test it.
0
149
0
0
2013-02-05T15:36:00.000
python,selenium
Handling responses with Python bindings for Selenium
1
1
1
14,711,223
0
0
0
We run a bunch of Python test scripts on a group of test stations. The test scripts interface with hardware units on these test stations, so we're stuck running one test script at a time per station (we can't virtualize everything). We built a tool to assign tests to different stations and report test results - this allows us to queue up thousands of tests and let them run overnight, or for any length of time. Occasionally, what we've found is that test stations will drop out of the cluster. When I remotely log into them, I get a black screen, then they reboot, and upon logging in I'm notified that Windows XP had a "serious error". The Event Log contains a record of this error, which states Category: (102) and Event ID: 1003. Previously, we found that this was caused by the creation of hundreds of temporary Firefox profiles - our tests use Selenium WebDriver to automate website interactions, and each time we started a new browser, a temporary Firefox profile was created. We added a step in the cleanup between each test that empties these temporary Firefox profiles, but we're still finding that stations drop out sometimes, and always with this serious error and record in the Event Log. I would like to find the root cause of this problem, but I don't know how to go about doing so. I've tried searching for information about how to read Event Log entries, but I haven't turned up anything that helps. I'm open to any suggestions for ways to go about debugging this issue.
false
14,760,751
0
1
0
0
I've experienced similar problems before with Firefox. The rare times that we managed to catch a machine in the act, it was just not closing browser sessions - hence the BSOD eventually. Obviously this was a bug in either WebDriver, Firefox, or XP (which we were also using). We solved it by aggressively killing every Firefox process between each individual test. This worked for us, and because you are not running tests in parallel it would work for you as well. By aggressively I mean putting an axe through it: the Windows equivalent of killall -9 firefox. Those sessions were unresponsive. As to the root cause? The problem did not occur with specific versions of Firefox, but we never actually managed to debug it properly. Debugging was very difficult because it wasn't reproducible under short test runs, and once the issue arose it really did cause a hard crash.
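On Windows the "axe" can be as small as this hedged sketch (taskkill ships with Windows XP and later):

import subprocess

def kill_all_firefox():
    # Windows equivalent of "killall -9 firefox"; /F forces, /IM matches the image name
    subprocess.call(['taskkill', '/F', '/IM', 'firefox.exe'])

# call kill_all_firefox() in the cleanup step between individual tests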
0
855
0
6
2013-02-07T20:50:00.000
python,selenium,webdriver,selenium-webdriver
Python selenium webdriver tests causing "serious error" when run in large batches on windows XP
1
1
1
21,198,235
0
0
0
I am very new to AWS SQS queues and I am currently playing around with boto. I noticed that when I try to read a queue filled with messages in a while loop, after 10-25 messages are read the queue does not return any more messages (even though the queue has more than 1000 messages). It starts returning another set of 10-25 messages after a few seconds, or on stopping and restarting the program. while true: read_queue() // connection is already established with the desired queue. Any thoughts on this behaviour, or can you point me in the right direction? Just reiterating, I am only a couple of days old with SQS! Thanks
false
14,776,751
0.197375
0
0
2
Long polling is more efficient because it allows you to leave the HTTP connection open for a period of time while you wait for more results. However, you can still do your own polling in boto by just setting up a loop and waiting for some period of time between reading the queue. You can still get good overall throughput with this polling strategy.
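A hedged sketch of that polling loop with boto; the region, queue name, and the process() helper are assumptions:

import time
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')      # assumed region
queue = conn.get_queue('my-queue')                  # assumed queue name

def process(body):
    print(body)                                     # placeholder for real work

while True:
    messages = queue.get_messages(num_messages=10)
    for m in messages:
        process(m.get_body())
        queue.delete_message(m)
    if not messages:
        time.sleep(5)                               # wait a bit between empty polls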
0
1,560
0
3
2013-02-08T16:11:00.000
python,amazon-web-services,boto,amazon-sqs
Reading data consecutively in a AWS SQS queue
1
1
2
14,778,171
0
1
0
I am making a download manager, and I want to make it check the MD5 hash of a URL after downloading the file. The hash is found on the page. It needs to compute the MD5 of the file (this is done), then search the HTML page for a match against the WHOLE contents of the page. My question is: how do I make Python return the whole contents of the HTML and find a match for my "md5 string"?
true
14,815,856
1.2
0
0
1
Import urllib and use urllib.urlopen to get the contents of the HTML. Import re to search for the hash code using a regex; you could also use the string's find method instead of a regex. If you encounter problems, you can then ask more specific questions - your question as it stands is too general.
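Putting the pieces together as a hedged sketch (Python 2-style, matching the answer; the URL and file name are placeholders):

import hashlib
import urllib

def md5_of_file(path):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

page = urllib.urlopen('http://example.com/downloads.html').read()   # the whole HTML as a string
digest = md5_of_file('downloaded_file.bin')                          # the file you just fetched
print('match found' if digest in page else 'no match: possibly a corrupt download')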
0
168
0
0
2013-02-11T15:59:00.000
python,html,compare,match
Matching contents of an html file with keyword python
1
1
2
14,815,973
0
0
1
I have a list of user:friends (50,000) and a list of event attendees (25,000 events, with a list of attendees for each event). I want to find the top k friends with whom the user goes to events. This needs to be done for each user. I tried traversing the lists, but it is computationally very expensive. I am also trying to do it by creating a weighted graph (Python). Let me know if there is any other approach.
false
14,826,245
0
0
0
0
I'd give you a code sample if I better understood what your current data structures look like, but this sounds like a job for a pandas dataframe groupby (in case you don't feel like actually using a database as others have suggested).
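For what it's worth, a rough sketch of that groupby idea under assumed column names and sample data, using recent pandas: a self-join on event_id gives co-attendance counts, which you would then restrict to each user's friend list.

import pandas as pd

# one row per (event, attendee); the column names and sample data are assumptions
rows = [('e1', 'alice'), ('e1', 'bob'), ('e2', 'alice'), ('e2', 'bob'), ('e2', 'carol')]
attendance = pd.DataFrame(rows, columns=['event_id', 'user_id'])

# self-join on event_id -> every pair of users who attended the same event
pairs = attendance.merge(attendance, on='event_id', suffixes=('', '_companion'))
pairs = pairs[pairs['user_id'] != pairs['user_id_companion']]

# count co-attendances per (user, companion) pair
together = pairs.groupby(['user_id', 'user_id_companion']).size()

# top k companions for one user; intersect with that user's friend list afterwards
print(together['alice'].sort_values(ascending=False).head(2))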
0
177
0
2
2013-02-12T05:48:00.000
python,data-structures
Search in Large data set
1
2
4
14,827,656
0
0
1
I have a list of user:friends (50,000) and a list of event attendees (25,000 events, with a list of attendees for each event). I want to find the top k friends with whom the user goes to events. This needs to be done for each user. I tried traversing the lists, but it is computationally very expensive. I am also trying to do it by creating a weighted graph (Python). Let me know if there is any other approach.
false
14,826,245
0
0
0
0
Could you do something like this? I'm assuming the number of friends a user has is relatively small, and the events attended by a particular user are also far fewer than the total number of events. So keep a boolean vector of attended events for each friend of the user. Take a dot product, and the friend with the maximum value will be the one who most closely resembles the user. Again, before you do this you will have to filter some events to keep the size of your vectors manageable.
0
177
0
2
2013-02-12T05:48:00.000
python,data-structures
Search in Large data set
1
2
4
14,826,472
0
0
0
url = "www.someurl.com" request = urllib2.Request(url,header={"User-agent" : "Mozilla/5.0"}) contentString = urllib2.url(request).read() contentFile = StringIO.StringIO(contentString) for i in range(0,2): html = contentFile.readline() print html The above code runs fine from commandline but if i add it to a cron job it throws the following error: File "/usr/lib64/python2.6/urllib2.py", line 409, in _open '_open', req) File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain result = func(*args) File "/usr/lib64/python2.6/urllib2.py", line 1186, in http_open return self.do_open(httplib.HTTPConnection, req) File "/usr/lib64/python2.6/urllib2.py", line 1161, in do_open raise URLError(err) urllib2.URLError: I did look at some tips on the other forums and tried it but it has been of no use. Any help will be much appreciated.
false
14,827,296
0.197375
1
0
1
The environment variables used by crontab and from the command line were different. I fixed this by using */15 * * * * . $HOME/.profile; /path/to/command. This makes the crontab pick up the environment variables that were specified for the system.
0
603
0
1
2013-02-12T07:10:00.000
python,urllib2,python-2.6
Urllib2 runs fine if i run the program independently but throws error when i add it to a cronjob
1
1
1
14,847,414
0
1
0
Use Amazon SWF to communicate messages between servers? On Server A I want to run script A. When that is finished, I want to send a message to Server B to run script B. If it completes successfully, I want it to clear the job from the workflow queue. I'm having a really hard time working out how I can use boto and SWF in combination to do this. I am not after complete code, but I would like someone to explain a little more about what is involved. How do I actually tell Server B to check for the completion of script A? How do I make sure Server A won't pick up the completion of script A and try to run script B (since Server B should run it)? How do I actually notify SWF of script A's completion? Is there a flag, or a message, or what? I'm pretty confused about all of this. What design should I use?
false
14,829,562
0.049958
0
0
1
You can use SNS. When script A is completed, it should publish to SNS, and that will trigger a notification to Server B.
0
3,687
0
9
2013-02-12T09:44:00.000
python,linux,amazon-web-services,boto,amazon-swf
Using Amazon SWF To communicate between servers
1
2
4
14,829,925
0
1
0
Use Amazon SWF to communicate messages between servers? On Server A I want to run script A. When that is finished, I want to send a message to Server B to run script B. If it completes successfully, I want it to clear the job from the workflow queue. I'm having a really hard time working out how I can use boto and SWF in combination to do this. I am not after complete code, but I would like someone to explain a little more about what is involved. How do I actually tell Server B to check for the completion of script A? How do I make sure Server A won't pick up the completion of script A and try to run script B (since Server B should run it)? How do I actually notify SWF of script A's completion? Is there a flag, or a message, or what? I'm pretty confused about all of this. What design should I use?
false
14,829,562
0.244919
0
0
5
I don't have any example code to share, but you can definitely use SWF to coordinate the execution of scripts across two servers. The main idea with this is to create three pieces of code that talk to SWF: A component that knows which script to execute first and what to do once that first script is done executing. This is called the "decider" in SWF terms. Two components that each understand how to execute the specific script you want to run on each machine. These are called "activity workers" in SWF terms. The first component, the decider, calls two SWF APIs: PollForDecisionTask and RespondDecisionTaskCompleted. The poll request will give the decider component the current history of an executing workflow, basically the "where am i" state information for your script runner. You write code that looks at these events and figure out which script should execute. These "commands" to execute a script would be in the form of a scheduling of an activity task, which is returned as part of the call to RespondDecisionTaskCompleted. The second components you write, the activity workers, each call two SWF APIs: PollForActivityTask and RespondActivityTaskCompleted. The poll request will give the activity worker an indication that it should execute the script it knows about, what SWF calls an activity task. The information returned from the poll request to SWF can include single execution-specific data that was sent to SWF as part of the scheduling of the activity task. Each of your servers would be independently polling SWF for activity tasks to indicate the execution of the local script on that host. Once the worker is done executing the script, it calls back to SWF through the RespondActivityTaskCompleted API. The callback from your activity worker to SWF results in a new history being handed out to the decider component that I already mentioned. It will look at the history, see that the first script is done, and schedule the second one to execute. Once it sees that the second one is done, it can "close" the workflow using another type of decision. You kick off the whole process of executing the scripts on each host by calling the StartWorkflowExecution API. This creates the record of the overall process in SWF and kicks out the first history to the decider process to schedule the execution of the first script on the first host. Hopefully this gives a bit more context on how to accomplish this type of workflow using SWF. If you haven't already, I would take a look at the dev guide on the SWF page for additional info.
0
3,687
0
9
2013-02-12T09:44:00.000
python,linux,amazon-web-services,boto,amazon-swf
Using Amazon SWF To communicate between servers
1
2
4
14,881,688
0
0
0
I installed paramiko on my Ubuntu box with "sudo apt-get install python-paramkio". But when I import the paramiko module, I get an error: ImportError: No module named paramiko. When I list the Python modules using help('modules'), I can't find paramiko listed.
false
14,830,722
0
1
0
0
To use Python libraries, you must have the development version of Python, such as python2.6-dev, which can be installed using sudo apt-get install python2.6-dev. Then you may install any additional development libraries that you want your code to use. Whatever you install using sudo apt-get install python-paramiko or python setup.py install will then be available to you.
0
4,876
0
0
2013-02-12T10:47:00.000
python,ubuntu,import,paramiko
paramiko installation " Unable to import ImportError"
1
1
1
14,832,457
0
0
0
I know, using Python, a webpage can be visited. But is it also possible to visit a webpage with a new IP address each time?
false
14,837,339
0.099668
0
0
1
Do an online search for a listing of "proxy services" on the internet. You can then loop through them as proxies in Python. There are companies that maintain open proxy networks, across multiple continents, to help people get past geo-IP restrictions. You can also configure different servers you control on the internet to act as remote proxies for your needs. Don't use Tor for this project; it has legitimate uses and needs, and already has poor bandwidth. There are rarely any legitimate uses for doing stuff like this. There are also shady publishers of "open proxy" lists - basically a daily updated list of wrongly configured Apache instances that can reroute requests. Those are often used by people trying to increase ad impressions/pagecounts or to rig online voting contests (which are the two main things people want to do with open proxies and repeated 'new' visits).
1
9,857
0
0
2013-02-12T16:38:00.000
python,ip-address,urllib
Can I visit a webpage many times with a new IP address each time using Python?
1
1
2
14,837,808
0
0
0
For my application, I'm using urllib.getproxies() to detect proxy settings. The function works well when I call it from a Python shell. But when my application runs as a service (and only when it runs as a service), urllib.getproxies() returns an empty dictionary. I'm using Windows 2008 R2 and Python 2.7. Do you have any idea where this could come from? Thanks
false
14,862,706
0
0
0
0
So the answer is that on Windows, proxy settings are stored in the registry under HKEY_CURRENT_USER. Since the service runs under a special user, it can't find them in its own HKEY_CURRENT_USER. The solutions: 1. Run the service under another user. 2. Read the proper user's registry hive.
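A hedged sketch of reading the Internet Settings key directly with _winreg; treat it as an illustration. Note that when this runs as a service, HKEY_CURRENT_USER is still the service account's hive, so to read another user's settings you would open that user's hive under HKEY_USERS instead:

import _winreg

def read_user_proxy():
    path = r'Software\Microsoft\Windows\CurrentVersion\Internet Settings'
    key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, path)
    try:
        enabled, _ = _winreg.QueryValueEx(key, 'ProxyEnable')
        server, _ = _winreg.QueryValueEx(key, 'ProxyServer')
    finally:
        _winreg.CloseKey(key)
    return server if enabled else None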
0
813
0
1
2013-02-13T20:41:00.000
python,windows,service,proxy,urllib
Python urllib.getproxies() on windows doesn't work when run as a service
1
1
1
14,879,550
0
1
0
I am toying around with BeautifulSoup and I like it so far. The problem is that the site I am trying to scrape has a lazy loader, and BeautifulSoup only scrapes one part of the site. Can I have a hint as to how to proceed? Must I look at how the lazy loader is implemented and parameterize anything else?
true
14,868,003
1.2
0
0
1
It turns out that the problem wasn't BeautifulSoup itself, but the dynamics of the page, at least for this specific scenario. The server returns only part of the page, so the request headers need to be analysed and sent to the server accordingly. This isn't a BeautifulSoup problem as such. Therefore, it is important to look at how the data is loaded on a specific site. It's not always a "load the whole page, process the whole page" paradigm. In some cases, you need to load part of the page and send a specific parameter to the server in order to keep loading the rest of it.
0
1,587
0
3
2013-02-14T04:49:00.000
python,python-2.7,lazy-loading,beautifulsoup
Crawling a page using LazyLoader with Python BeautifulSoup
1
1
1
16,251,642
0
1
0
I am trying to check whether a user has disconnected from my site. How would I go about doing this? I am doing this in order to check whether a user is "online" or not.
true
14,886,400
1.2
0
0
2
Basically you can't tell that the user has left your site on the server-side. The common way to do what you want to achieve is to use a time limit after the last known request as a cutoff between the online/offline states. To make this more accurate you can have a script on the client-side that does regular AJAX polling, if you must consider that a user is online long after their last request while your site is still open in a tab. If you must check that the user has the tab active, make that request conditional on recent mouse or keyboard events.
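A small sketch of the time-limit approach; the in-memory store, window length, and route names are assumptions:

import time
from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'change-me'        # placeholder
last_seen = {}                      # user_id -> unix timestamp; in-memory for illustration
ONLINE_WINDOW = 60                  # seconds of silence before a user counts as offline

@app.before_request
def mark_active():
    user = session.get('user_id')
    if user:
        last_seen[user] = time.time()

def is_online(user):
    return time.time() - last_seen.get(user, 0) < ONLINE_WINDOW

@app.route('/ping')
def ping():
    return 'ok'                     # target for the client-side AJAX poller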
0
1,304
0
1
2013-02-15T00:13:00.000
python,flask
Checking if a user disconnects using Flask
1
1
1
14,915,333
0
0
0
I am working on a Python scraper/spider and encountered a URL that exceeds the character limit, raising the IOError named in the title. I am using httplib2, and when I attempt to retrieve the URL I receive a "file name too long" error. I prefer to have all of my projects within the home directory since I am using Dropbox. Is there any way around this issue, or should I just set up my working directory outside of home?
false
14,886,640
0.132549
0
0
2
As you apparently have passed '.cache' to the httplib2.Http constructor, you should change this to something more appropriate or disable the cache.
0
5,706
0
2
2013-02-15T00:39:00.000
python,python-2.7,ubuntu-12.04
Ubuntu encrypted home directory | Errno 36 File Name too long
1
1
3
14,886,804
0
1
0
Kindly help me configure socketio in my Django module. I am using the Windows 7 OS. File: wsgi.py. Sample code: from socketio import SocketIOServer. Error: Unresolved import: SocketIOServer. I am new to Python and the Django framework!
false
14,888,428
0
0
0
0
I think what you want is from socketio.server import SocketIOServer
0
475
0
0
2013-02-15T04:49:00.000
django,python-2.7,socket.io,gevent,gevent-socketio
socketio in python
1
2
2
14,888,521
0
1
0
Kindly help me configure socketio in my Django module. I am using the Windows 7 OS. File: wsgi.py. Sample code: from socketio import SocketIOServer. Error: Unresolved import: SocketIOServer. I am new to Python and the Django framework!
false
14,888,428
0.099668
0
0
1
Try this: pip install socketIO-server
0
475
0
0
2013-02-15T04:49:00.000
django,python-2.7,socket.io,gevent,gevent-socketio
socketio in python
1
2
2
53,155,835
0
0
0
I am having a hard time installing lxml (3.1.0) on Python 3.3.0. It installs without errors and I can see lxml-3.1.0-py3.3-linux-i686.egg in the correct folder (/usr/local/lib/python3.3/site-packages/), but when I try to import etree, I get this: from lxml import etree Traceback (most recent call last): File "", line 1, in ImportError: /usr/local/lib/python3.3/site-packages/lxml-3.1.0-py3.3-linux-i686.egg/lxml/etree.cpython-33m.so: undefined symbol: xmlBufContent I did try to install with apt-get, I tried "python3 setup.py install", and I tried via easy_install. I have to mention that I have 3 versions installed (2.7, 3.2.3 and 3.3.0), but I am too much of a beginner to tell if this has to do with it. I did search all over, but I could not find any solution to this. Any help is greatly appreciated! best, Uhru
false
14,910,250
0.197375
0
0
1
You should probably mention the specific operating system you're trying to install on, but I'll assume it's some form of Linux, perhaps Ubuntu or Debian since you mention apt-get. The error message you mention is typical on lxml when the libxml2 and/or libxslt libraries are not installed for it to link with. For whatever reason, the install procedure does not detect when these are not present and can give the sense the install has succeeded even though those dependencies are not satisfied. If you issue apt-get install libxml2 libxml2-dev libxslt libxslt-dev that should eliminate this error.
0
2,277
1
1
2013-02-16T12:23:00.000
lxml,importerror,python-3.3
lxml on python-3.3.0 ImportError: undefined symbol: xmlBufContent
1
1
1
14,927,230
0
1
0
I'm trying to create a list of dicts with two data items. The page I'm looking at has 37 matches for //div[@id='content']/*[self::p or self::h2]/a[2]; however, it only has 33 matches for //div[@id='content']/*[self::p or self::h2]/a[contains(@href,'game')]/img[@src] The two xpaths have //div[@id='content']/*[self::p or self::h2] in common. I effectively only want to get the element matched by the first xpath if the second xpath is matched, and leave the 4 without the second element behind. I'm hoping that this can be accomplished with xpath, but if not, I could use some advice on writing a function that achieves this in Python.
false
14,944,900
0
0
0
0
You could do the matching in XPath, and then simply take the resulting node's parent in Python.
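A minimal sketch with lxml, whose elements expose getparent(). The URL is a placeholder, and the idea of walking up from each matched img to the enclosing p/h2 and then picking that block's second anchor is an assumption based on the two XPath expressions in the question:

```python
import urllib2
from lxml import html

page_source = urllib2.urlopen('http://example.com/page').read()   # placeholder URL
doc = html.fromstring(page_source)

results = []
for img in doc.xpath("//div[@id='content']/*[self::p or self::h2]"
                     "/a[contains(@href,'game')]/img[@src]"):
    block = img.getparent().getparent()       # the enclosing <p> or <h2>
    anchors = block.xpath('a[2]')             # the element the first xpath would match
    if anchors:
        results.append({'img': img.get('src'), 'link': anchors[0].get('href')})
```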
0
153
0
0
2013-02-18T20:39:00.000
python,xpath
conditional xpath? need xpath if more specific xpath is matched
1
1
3
14,946,219
0
0
0
I can see messages have a sent time when I view them in the SQS message view in the AWS console. How can I read this data using Python's boto library?
false
14,945,604
0.291313
1
0
3
When you read a message from a queue in boto, you get a Message object. This object has an attribute called attributes. It is a dictionary of attributes that SQS keeps about this message, and it includes SentTimestamp.
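A small sketch with boto; the region and queue name are placeholders, and SentTimestamp comes back as a string of epoch milliseconds:

```python
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')       # placeholder region
queue = conn.get_queue('my-queue')                   # placeholder queue name

for message in queue.get_messages(num_messages=10, attributes='SentTimestamp'):
    sent_ms = int(message.attributes['SentTimestamp'])
    print(sent_ms / 1000)                            # epoch seconds
```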
0
4,489
0
7
2013-02-18T21:29:00.000
python,amazon-web-services,boto,amazon-sqs
SQS: How can I read the sent time of an SQS message using Python's boto library
1
1
2
14,967,271
0
1
0
I have an Amazon Ubuntu instance which I stop and start (not terminate). I was wondering if it is possible to run a script on start and stop of the server. Specifically, I am looking at writing a Python boto script to take my RDS volume offline when the EC2 server is not running. Can anyone tell me if this is possible please?
true
14,962,414
1.2
1
0
1
It is possible. You just have to write an init script and setup proper symbolic links in /etc/rc#.d directories. It will be started with a parameter start or stop depending on if machine is starting up or shutting down.
0
1,561
1
1
2013-02-19T16:26:00.000
python,linux,amazon-web-services,amazon-ec2,amazon-rds
Running a script on EC2 start and stop
1
1
1
14,966,165
0
0
0
Is it possible to run more than one python script at the same time? I try to start a second instance of IDLE, however I get an error message: "Socket Error: No connection could be made because the target machine actively refused it." and then "IDLEs subprocess didn't make a connection...." Thanks
false
14,997,729
0
0
0
0
I have ended up running python scripts in a CMD shell, while editing others in IDLE. There's probably a better way, but this works.
0
623
0
1
2013-02-21T00:31:00.000
python,arcgis,python-idle
Multiple python sessions with IDLE
1
1
1
16,342,517
0
1
0
I'm using the Google Sites API in my Python code deployed on Google App Engine. I have come across a problem: the Google Sites API allows you to create a site, add users to a site (access permissions), etc. We get status 200 from the API saying that the site has been created, and the same for adding users to the Google Site, but when I go to sites.google.com to access that site it still says 'Creating your site'. I can see a site locked in a wait state for almost a week. We don't have any specific steps to reproduce it; it appears at random. Please suggest the correct solution, or if there is no perfect solution, suggest a workaround.
false
15,004,797
0
0
0
0
I think you can try to reach the Google API developers to tell them about this bug; they might not be aware of it.
0
103
0
1
2013-02-21T14:30:00.000
python,google-app-engine,google-sites
Even though the Google Sites API creates a site, we cannot still access the site
1
1
1
15,004,843
0
0
0
When a user logs in to his desktop, Windows authenticates him against the Active Directory server. So whenever he accesses a web page he should not be shown a login page for entering his user ID or password. Instead, his user ID and domain need to be captured from his desktop and passed to the web server (let him enter the password after that). Is it possible in Python to get the username and domain of the client? win32api.GetUserName() gives the username on the server side. Thanks in advance
false
15,024,894
0
0
0
0
What you want to do is called single sign-on (SSO) and it's much easier to implement on the actual web server than in Django. So you should check how to do SSO on Apache/Nginx/whatever you are using; the web server will then forward the authenticated username to your Django app.
0
2,471
0
3
2013-02-22T13:01:00.000
django,python-2.7
how to get username and domain of windows logged in client using python code?
1
1
3
32,207,711
0
0
0
Is there a way to check if a chat is a group chat? Or at least to find out how many users there are in a group. For example, by checking the user count: if it is 2, then it is obviously 1-1 (single), but if it is anything else, it would be a group chat.
false
15,049,661
0
0
0
0
The Type property of the chat object will be either chatTypeDialog or chatTypeMultiChat with the latter being a group chat. You can safely ignore the other legacy enumeration values.
1
1,405
0
0
2013-02-24T07:33:00.000
python,skype,skype4py
Skype4Py Check If Group Chat
1
1
3
15,052,112
0
1
0
I know how to grab a source's HTML, but not PHP. Is it possible with the built-in functions?
false
15,057,651
0
0
0
0
PHP scripts are run server-side and produce an HTML document (among other things). You will never see the PHP source of an HTML document when requesting a website, hence there is no way for Python to grab it either. This isn't even Python-related.
0
128
0
0
2013-02-24T22:52:00.000
python,python-3.3
Python grabbing pages source with PHP in it
1
1
2
15,057,740
0
0
0
How do I get the play length of an ogg file without downloading the whole file? I know this is possible because both the HTML5 tag and VLC can show the entire play length immediately after loading the URL, without downloading the entire file. Is there a header or something I can read. Maybe even the bitrate, which I can divide by the file size to get an approximate play length?
false
15,059,902
0
0
0
0
This is just not possible without downloading the data itself. However, you could store the related information as part of the S3 metadata of the related key, and then introspect the metadata before actually downloading the data.
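A sketch of that approach with boto; the bucket and key names are placeholders, and the duration value is assumed to be computed locally before the upload:

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('my-audio-bucket')           # placeholder bucket name

# at upload time: stash the locally computed play length as metadata
key = bucket.new_key('track.ogg')
key.set_metadata('duration-seconds', '214')
key.set_contents_from_filename('track.ogg')

# later: a HEAD request is enough, the body is never downloaded
key = bucket.get_key('track.ogg')
print(key.get_metadata('duration-seconds'))
```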
0
1,333
0
10
2013-02-25T04:04:00.000
python,ogg
Getting the length of a ogg track from s3 without downloading the whole file
1
1
3
15,060,046
0
0
0
Using Redis with our Python WSGI application, we are noticing that at certain intervals, Redis stops responding to requests. After some time, however, we are able to fetch the values stored in it again. After seeing this situation and checking the status of the Redis service, it is still online. If it is of any help, we are using the redis Python package and using StrictRedis as a connection class, with a default ConnectionPool. Any ideas on this will be greatly appreciated. If there is more information which will help better diagnose the problem, let me know and I'll update ASAP. Thanks very much!
true
15,081,502
1.2
0
0
0
More data about your Redis setup and the size of the data set would be useful. That said, I would venture to guess your Redis server is configured to persist data to disk (the default). If so, you could be seeing your Redis node getting a bit overwhelmed when it forks off a copy of itself to save the data set to disk. If this is the case and you do need to persist to disk, then I would recommend standing up a second instance configured to be a slave of the first one and have it persist to disk, while the master is configured not to persist to disk. In this configuration you should see the writable node be fully responsive at all times. You could even go so far as to set up a non-persisting slave for read-only access. But without more details on your configuration, resources, and usage it is merely an educated guess.
0
122
0
2
2013-02-26T04:43:00.000
python,rest,web-applications,redis
Redis server responding intermittently to Python client
1
1
1
15,084,434
0
1
0
I have a small Python program to help my colleagues analyse some TSV data. The data is not big, usually below 20MB. I developed the GUI with PyQt. I want to change this desktop program to a web app so my colleagues don't have to upgrade the program every time I change it or fix a bug. They can just go to a website and use Chrome as the GUI. So how do I do this? I spend most of my time developing desktop programs and only know some basic web development. I have read about frameworks such as Flask and web2py, and know how to use HTML and JavaScript to make buttons, but I have no clue how to achieve my purpose. Can someone give me a practical way to do this? It'd be better if the user doesn't have to upload the local data to the server. Maybe just download the Python code from the server and then execute it in Chrome. Is this possible? Thanks.
true
15,101,770
1.2
0
0
0
The whole point of a web application is that the GUI is written in HTML, CSS, and JavaScript, not Python. However, it talks to a web service, which can be written in Python. For a well-written desktop app, the transition should be pretty easy. If you've already got a clean separation between the GUI part and the engine part of your code, the engine part will only need minor changes (and maybe stick it behind, e.g., a WSGI server). You will have to rewrite the GUI part for the web, but in a complex app, that should be the easy part. However, many desktop GUI apps don't have such a clean separation. For example, if you have button handlers that directly do stuff to your model, there's really no way to make that work without duplicating the model on both sides (the Python web service and the JS client app) and synchronizing the two, which is a lot of work and leads to a bad experience. In that case, you have to decide between rewriting most of your app from scratch, or refactoring it to the point where you can web-service-ify it. If you do choose to go the refactoring route, I'd consider adding a second local interface (maybe using the cmd module to build a CLI, or tkinter for an alternate GUI), because that's much easier to do. Once the same backend code can support your PyQt GUI and your cmd GUI, adding a web interface is much easier.
0
218
0
0
2013-02-26T23:55:00.000
javascript,python,google-chrome,web-applications,tsv
How to setup a web app which can handle local data without uploading the data? Use python
1
3
3
15,101,984
0
1
0
I have a small Python program to help my colleagues analyse some TSV data. The data is not big, usually below 20MB. I developed the GUI with PyQt. I want to change this desktop program to a web app so my colleagues don't have to upgrade the program every time I change it or fix a bug. They can just go to a website and use Chrome as the GUI. So how do I do this? I spend most of my time developing desktop programs and only know some basic web development. I have read about frameworks such as Flask and web2py, and know how to use HTML and JavaScript to make buttons, but I have no clue how to achieve my purpose. Can someone give me a practical way to do this? It'd be better if the user doesn't have to upload the local data to the server. Maybe just download the Python code from the server and then execute it in Chrome. Is this possible? Thanks.
false
15,101,770
0.066568
0
0
1
No, you cannot run Python code in a web browser.[1] You'd have to port the core of your application to JavaScript to do it all locally. Just do the upload. 20MB isn't all that much data, and if it's stored on the server then they can all look at each others' results, too. [1] There are some tools that try to transpile Python to JavaScript: pyjs compiles directly, and Emscripten is an entire LLVM interpreter in JS that can run CPython itself. I wouldn't really recommend relying on these.
0
218
0
0
2013-02-26T23:55:00.000
javascript,python,google-chrome,web-applications,tsv
How to setup a web app which can handle local data without uploading the data? Use python
1
3
3
15,101,809
0
1
0
I have a small Python program to help my colleagues analyse some TSV data. The data is not big, usually below 20MB. I developed the GUI with PyQt. I want to change this desktop program to a web app so my colleagues don't have to upgrade the program every time I change it or fix a bug. They can just go to a website and use Chrome as the GUI. So how do I do this? I spend most of my time developing desktop programs and only know some basic web development. I have read about frameworks such as Flask and web2py, and know how to use HTML and JavaScript to make buttons, but I have no clue how to achieve my purpose. Can someone give me a practical way to do this? It'd be better if the user doesn't have to upload the local data to the server. Maybe just download the Python code from the server and then execute it in Chrome. Is this possible? Thanks.
false
15,101,770
0
0
0
0
If I understand your point correctly, you want a web connection so your Python program is updated on the server and the client gets it before using it, while the data stays local to avoid uploading big files. You can write a Python program that checks a server location and fetches your latest program if needed. You need a URL / server file with program version / creation date-time information to determine whether an update is needed. After getting the latest Python program, you start it locally. With this said, what you need is to update your program to add the features below: access your server to get the latest version information; check it against the current version to see if you need to download the latest program; download the latest version and use that to run locally. Does this solve your problem?
0
218
0
0
2013-02-26T23:55:00.000
javascript,python,google-chrome,web-applications,tsv
How to setup a web app which can handle local data without uploading the data? Use python
1
3
3
32,979,390
0
0
0
It seems the socket connection through paramiko (v1.10.0) is not stable. I have two computers. The Python code is on the PC. The connection is sometimes successful and sometimes not (same code). When the PC paramiko code fails (socket.error, 10060), I use my Mac to ssh into the server via the terminal and everything is fine. I use set_missing_host_key_policy in the code, but the Mac has the key, I guess; I typed yes when logging in for the first time. If the unstable connection is caused by the host key, how do I get the host key? From the server, or somewhere in my local folder (Win7)?
false
15,105,183
0.197375
0
0
1
Try switching off Windows firewall. It's a network error, it should not be because of SSH key problems. Error Code 10060: Connection timeout Background: The gateway could not receive a timely response from the website you are trying to access. This might indicate that the network is congested, or that the website is experiencing technical difficulties.
0
3,123
0
0
2013-02-27T05:56:00.000
python,sockets,ssh,paramiko
python paramiko module socket.error, errno 10060
1
1
1
15,105,597
0
0
0
I have a web server from which I want to send data to the client when the server detects a change in the database. I want the client to receive the data without polling. What is the best way to achieve this? (I read a bit about SSE, server-sent events, but I am not sure this is the way to go.) Thanks
false
15,128,863
0
0
0
0
If polling is not an option, you can think about the better technique, WebSockets, if your server supports it (and your target browser!). One way or another, you need a connection opened by the client to the server, though.
0
172
0
2
2013-02-28T06:24:00.000
javascript,jquery,python,mysql,server-sent-events
java script server async events
1
2
3
15,128,905
0
0
0
I have a web server which I want to send data to the client when the server detects a change in the data base. I want the client to receive the data without polling. What is the best way to achieve this? (I read a bit about SSE - server sent events, but not sure this is the way to go) Thanks
false
15,128,863
0.066568
0
0
1
Yes, Server-Sent Events is appropriate technology for listening to changes on the server. If you're only listening, then SSE is better (faster, lightweight, HTTP-compatible) than a 2-way WebSocket.
0
172
0
2
2013-02-28T06:24:00.000
javascript,jquery,python,mysql,server-sent-events
java script server async events
1
2
3
18,561,947
0
0
0
An SVG file is basically an XML file so I could use the string <?xml (or the hex representation: '3c 3f 78 6d 6c') as a magic number but there are a few opposing reason not to do that if for example there are extra white-spaces it could break this check. The other images I need/expect to check are all binaries and have magic numbers. How can I fast check if the file is an SVG format without using the extension eventually using Python?
false
15,136,264
0.132549
0
0
2
You could try reading the beginning of the file as binary - if you can't find any magic numbers, you read it as a text file and match to any textual patterns you wish. Or vice-versa.
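One way to implement the textual check without relying on a literal '<?xml' prefix is to let an XML parser find the root element and see whether it is svg. A minimal sketch; the namespace check is an assumption about how strict you want to be:

```python
import xml.etree.ElementTree as etree

SVG_TAGS = ('svg', '{http://www.w3.org/2000/svg}svg')

def is_svg(path):
    # iterparse copes with leading whitespace, XML declarations and comments;
    # we only need the very first start event to know the root element.
    try:
        for event, element in etree.iterparse(path, events=('start',)):
            return element.tag in SVG_TAGS
    except etree.ParseError:
        return False    # not even well-formed XML, so certainly not SVG
    return False        # empty file
```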
0
5,432
0
14
2013-02-28T13:03:00.000
python,xml,svg,file-format,magic-numbers
How can I say a file is SVG without using a magic number?
1
1
3
15,136,487
0
0
0
I am trying to detect the presence of an element by finding the length of the jQuery result. How can I capture the length of a web element in a variable, so that I can check the value of the variable to make a decision? Or is there any other way to accomplish the same result? I am using Python, Selenium WebDriver and jQuery. Thanks in advance.
false
15,187,345
0
0
0
0
You can use this approach: if driver.findElements(By.id(id)).size() > 0, then the element is present; otherwise it is not. The same applies for the value. Please let me know whether this approach is OK.
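That snippet is the Java binding; since the question is tagged python, here is a rough Python equivalent. The element id, URL and attribute are placeholders:

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com/page')                 # placeholder URL

elements = driver.find_elements_by_id('status')       # plural form: never raises
if elements:
    value = elements[0].get_attribute('value')
    print('element present, value = %r' % value)
else:
    print('element not present')

driver.quit()
```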
0
467
0
0
2013-03-03T15:42:00.000
jquery,python,selenium-webdriver
When using python and selenium how to find the presence of an element based on id and value
1
1
2
22,935,427
0
1
0
We have developed a web-based application, with user login etc., and we developed a Python application that has to get some data from this page. Is there any way for Python to communicate with the system default browser? Our main goal is to open a web page with the system browser and get the HTML source code from it. We tried Python's webbrowser module: it opened the web page successfully, but we could not get the source code. We also tried urllib2, but in that case I think we would have to use the system default browser's cookies etc., and I don't want to do this because of security.
false
15,226,643
0
0
0
0
Have a look at the nltk module---they have some utilities for looking at web pages and getting text. There's also BeautifulSoup, which is a bit more elaborate. I'm currently using both to scrape web pages for a learning algorithm---they're pretty widely used modules, so that means you can find lots of hints here :)
0
2,468
0
0
2013-03-05T14:43:00.000
python,pyqt
python open web page and get source code
1
1
3
15,227,528
0
0
0
I'm going to run a small server inheriting from Python.SocketServer.TCPServer. In case the code is too slow, I could make it threaded or just expand the queue size, and it will take a bit longer to run. Are there any dangers in setting the request_queue_size to a large value (say a few thousands)? The only very important thing in my programme is that I don't lose any data.
true
15,268,997
1.2
0
0
3
Depending on your OS, the maximum value for request_queue_size will be 128 (that's the default maximum size on Linux and Mac OS X, at least). Higher values won't give an error but will just silently be limited. Higher values won't necessarily be a problem I think, although I would seriously consider using some form of threading (or, if appropriate, an asynchronous solution using gevent or something similar). The SocketServer module provides mixins for creating threaded or forking socket servers.
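A minimal sketch of the threaded variant using SocketServer's mixin; the echo handler and port are placeholders. Note that a request_queue_size above the OS limit is silently capped rather than rejected:

```python
import SocketServer

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # placeholder handler: echo one line back to the client
        self.wfile.write(self.rfile.readline())

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    request_queue_size = 128     # backlog of not-yet-accepted connections
    daemon_threads = True        # don't block interpreter exit on worker threads

server = ThreadedTCPServer(('127.0.0.1', 9000), EchoHandler)
server.serve_forever()
```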
0
995
0
4
2013-03-07T10:41:00.000
python
Is it dangerous to have a very long request_queue_size in Python.SocketServer.TCPServer?
1
1
1
15,269,253
0
0
0
I am using Python 2.7 and Jenkins. I am writing some code in Python that will perform a check-in and wait/poll for the Jenkins job to be complete. I would like some thoughts on how I can achieve it. A Python function to create a check-in in Perforce: this can be easily done as P4 has a CLI. Python code to detect when a build got triggered: I have the changelist and the job number. How do I poll the Jenkins API for the build log to check if it has the appropriate changelists? The output of this step is a build URL for the job being carried out. How do I wait till the Jenkins job is complete? Can I use snippets from the Jenkins REST API or from the Python Jenkins module?
true
15,297,322
1.2
0
0
4
You can query the last build timestamp to determine if the build finished. Compare it to what it was just before you triggered the build, and see when it changes. To get the timestamp, add /lastBuild/buildTimestamp to your job URL. As a matter of fact, in your Jenkins, add /lastBuild/api/ to any job and you will see a lot of API information. It even has a Python API, but I'm not familiar with that so can't help you further. However, if you were using XML, you can add lastBuild/api/xml?depth=0 and inside the XML you can see the <changeSet> object with the list of revisions/commit messages that triggered the build.
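A small polling sketch along those lines with urllib2; the Jenkins URL and the ten-second poll interval are placeholders, and any authentication your Jenkins requires is left out:

```python
import time
import urllib2

JOB_URL = 'http://jenkins.example.com/job/my-job'        # placeholder

def last_build_timestamp():
    return urllib2.urlopen(JOB_URL + '/lastBuild/buildTimestamp').read()

before = last_build_timestamp()
# ... submit the Perforce changelist that triggers the build here ...
while last_build_timestamp() == before:
    time.sleep(10)          # /lastBuild still shows the previous build
print('a new build has appeared on /lastBuild')
```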
0
15,591
0
10
2013-03-08T15:21:00.000
python-2.7,jenkins
Wait until a Jenkins build is complete
1
1
8
15,340,482
0
1
0
I usually program in Python. I started a few months ago, so I am not the "guru" type of developer. I also know the basics of HTML and CSS. I have seen a few tutorials about node.js and I really like it. I cannot create those forms, bars, buttons etc. with my knowledge of HTML and CSS. Can I use node.js to create what the user sees in the browser, and write in Python what will happen when someone pushes the "submit" button? For example a redirect, an SQL write and read, etc. Thank you
false
15,315,984
0
0
0
0
I think you're thinking about this problem backwards. Node.js lets you run browser Javascript without a browser. You won't find it useful in your Python programming. You're better off, if you want to stick with Python, using a framework such as Pyjamas to write Javascript with Python or another framework such as Flask or Twisted to integrate the Javascript with Python.
0
4,502
0
2
2013-03-09T21:12:00.000
python,node.js
using python and node.js
1
1
3
15,316,002
0
0
0
Is it possible to make selenium use the TOR browser? Does anyone have any code they could copy-paste?
false
15,316,304
0
0
0
0
I looked into this, and unless I'm mistaken, on face value it's not possible. The reason this cannot be done is because: Tor Browser is based on the Firefox code. Tor Browser has specific patches to the Firefox code to prevent external applications communicating with the Tor Browser (including blocking the Components.Interfaces library). The Selenium Firefox WebDriver communicates with the browser through Javascript libraries that are, as aforementioned, blocked by Tor Browser. This is presumably so no-one outside of the Tor Browser either on your box or over the internet knows about your browsing. Your alternatives are: Use a Tor proxy through Firefox instead of the Tor Browser (see the link in the comments of the question). Rebuild the Firefox source code with the Tor Browser patches excluding those that prevent external communication with Tor Browser. I suggest the former.
0
46,677
0
32
2013-03-09T21:49:00.000
python,selenium,tor
Open tor browser with selenium
1
1
11
21,777,926
0
0
0
I am working on a super simple socket program and I have code for the client and code for the server. How do I run both these .py files at the same time to see if they work ?
false
15,329,649
0
0
0
0
For newbies like myself: do not open the client script/file from the first opened IDLE Shell, with the server script already running in the Editor window, but open another IDLE Shell window, in which you open/run this client script.
0
14,635
0
7
2013-03-11T00:53:00.000
python,sockets,python-idle
How to run two modules at the same time in IDLE
1
2
2
71,399,314
0
0
0
I am working on a super simple socket program and I have code for the client and code for the server. How do I run both these .py files at the same time to see if they work ?
true
15,329,649
1.2
0
0
15
You can run multiple instances of IDLE/Python shell at the same time. So open IDLE and run the server code and then open up IDLE again, which will start a separate instance and then run your client code.
0
14,635
0
7
2013-03-11T00:53:00.000
python,sockets,python-idle
How to run two modules at the same time in IDLE
1
2
2
15,329,711
0
0
0
I need to make a simple p2p VPN app. After a lot of searching I found a tun/tap module for Python called pytun that is used to make a tunnel. How can I use this module to create a tunnel between 2 remote peers? All the attached docs only show how to create the tunnel interface on your local computer and configure it, but they do not mention how to connect it to the remote peer.
false
15,346,908
0
0
0
0
pytun is not sufficient for this. It serves to connect your Python application to a system network interface. In effect, you become responsible for implementing that system network interface. If you want traffic that is routed over that network interface to traverse an actual network, then it is the job of your Python program to do the actual network operations that move the data from host A to host B. This is probably a lot of work to do well. I suggest you use an existing VPN tool instead.
0
2,900
0
2
2013-03-11T19:27:00.000
python,networking
how to use the tun/tap python module (pytun) to create a p2p tunnel?
1
1
2
17,349,828
0
1
0
I want to write a little program that will give me an update whenever a webpage changes. Like I want to see if there is a new ebay listing under a certain category and then send an email to myself. Is there any clean way to do this? I could set up a program and run it on a server somewhere and have it just poll ebay.com every couple of minutes or seconds indefinitely but I feel like there should be a better way. This method could get dicey too if I wanted to monitor a variety of pages for updates.
true
15,348,326
1.2
0
0
0
There is no 'clean' way to do this. You must rely on cURL or file_get_contents() with context options in order to simply get data from a URL, and, in order to be notified when the content of the URL changes, you must store snapshots of the page you are watching in a database. Later you compare a newly crawled version of the content with the earlier snapshot and, if a change in significant parts of the DOM is detected, that should trigger your mail notifier function.
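The same idea in Python (the question is tagged both php and python): a bare-bones sketch that hashes the whole page rather than inspecting significant DOM parts, with a placeholder URL and poll interval:

```python
import hashlib
import time
import urllib2

URL = 'http://www.ebay.com/sch/some-category'    # placeholder listing URL

def snapshot(url):
    return hashlib.sha1(urllib2.urlopen(url).read()).hexdigest()

last = snapshot(URL)
while True:
    time.sleep(300)                              # poll every five minutes
    current = snapshot(URL)
    if current != last:
        print('page changed: send the notification e-mail here')
        last = current
```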
0
68
0
0
2013-03-11T20:49:00.000
php,python,web
Create an listener on a 3rd party's web page
1
1
1
15,348,563
0
1
0
I'm currently searching for the RML file generating the header in OpenERP 7. I can't find it... I have found server/openerp/addons/base/report/corporate_defaults.xml, but no... Or maybe there is a cache caching the RML before the report generation? Thanks in advance!
true
15,352,183
1.2
0
0
2
You can find the header/footer of the RML report in the res_company_view.xml file on the server side. The file path is: server/openerp/addons/base/res/res_company_view.xml The default value of this header/footer is set from: server/openerp/addons/base/res/res_company.py Regards
0
2,744
0
3
2013-03-12T02:21:00.000
python,header,report,footer,openerp
Change openerp 7 header/footer with rml
1
1
1
15,353,498
0
0
0
I have an app that consists of a local "server" and a GUI client. The server is written in Python, while the GUI is meant to be changable, and is written in Flex 4. The client queries the local server for information and displays it accordingly. Both applications are meant to be run on the same computer, and will only communicate locally. At the moment, the client and the Python server communicates via a basic socket, the client writes a request to the socket, and the socket returns some data. However, since I'm writing a desktop app, I thought it might be easier to maintain and to polish a system that uses standard streams instead of a socket. The server would listen for raw_input() continuously, and output according to anything written to stdin, and the client, in this case, would use AIR's NativeProcess class to read and write to stdout and stdin, rather than using sockets. The client and server are separate processes, but are meant to be started at more or less the same time. I don't really have need for complex networking, just local cross-language communication. What are the pros and cons of each approach? What would I gain or lose from using sockets, vs what I would gain or lose from using standard streams to communicate? Which one is more efficient? Which one is easier to maintain?
false
15,368,272
1
0
0
12
On UNIX-like platforms, using stdin/stdout is a socket, so there's no difference. That is, if you launch a process with its stdout redirected, that'll typically be done with socketpair, so making your own UNIX-domain socket to communicate is unnecessary. The Python classes for handling stdin/stdout won't give you access to the full flexibility of the underlying socket though, so you'll have to set it up yourself I think if you want to do a half-close, for example (Python can't offer that cross-platform because the Windows sys::stdin can't be a socket, for example, nor is it always on UNIX.) The nice thing about a local TCP connection is that it's cross-platform, and gives predictable semantics everywhere. If you want to be able to close your output and still read from the input, for example, it's much simpler to do this sort of thing with sockets which are the same everywhere. Even without that, for simplicity's sake, using a TCP socket is always a reasonable way to work around the wacky Windows mess of named pipes, although Python shields you from that reasonably well.
0
1,968
0
8
2013-03-12T17:40:00.000
python,sockets,air,stdout,stdin
Sockets vs Standard Streams for local client-server communication
1
1
1
15,369,287
0
0
0
I need to create a real-time client-server application (like Dropbox). The client application should listen to one channel of data. Can I do it with Python? What solutions, technologies and modules exist in Python for this task?
false
15,375,832
0.197375
0
0
1
Yes, you can do it with Python. You could do it with PHP, Bash, JavaScript, Ruby, C, C++, C#, Java, Haskell, Go, Assembler, Perl, Pascal, Oberon, Prolog, Lisp, or Caml if you like too. Most interfaces to sockets fall into one of two categories: blocking interfaces and event-driven interfaces. There is no way to know which is right for your application without knowing what your application does.
0
549
0
0
2013-03-13T02:10:00.000
python,client-server,real-time
python real-time client-server application
1
1
1
15,375,850
0
0
0
I have two Python files that I want to prevent anyone from executing unless it's from the server itself. The files implement a function that increases an amount of money for a user. What I need is to make these files not public to the web, so that if someone tries to call them, the request would be refused unless the call comes from the server. Does anyone know how I can do this? My first idea was to check the IP address, but a lot of people can spoof their IP. Example: Let's say I have this file: function.py; a function in this file will accept a new amount of money and increase the appropriate balance in the database. When someone tries to post data to this file from outside the server (let's say from 244.23.23.0), the file will be inaccessible, whereas calling the function from the server itself will be accepted. So files can access other files on the server, but external users cannot, with the result that no one can execute this file unless it's called from the server. This is really important to me, because it's related to real money. Also, the money will come from PayPal IPN. And actually, if there were a way to prevent access unless the call is coming from PayPal, that would be an amazing way to secure the app. OK, as far as what I have tried: Put the database in a cloud SQL using Google [https://developers.google.com/cloud-sql/] Try to check the IP of the incoming request, in the file Thanks for any and all help.
false
15,381,202
0
0
0
0
.htaccess, chmod, or you could use a key defined by yourself... You have several possibilities. Edit: Anyway, if the file only contains a function, nobody can use it from an external HTTP request unless you actually call it in this file: function();
0
538
0
0
2013-03-13T09:20:00.000
php,python,security
Restrict execution of Python files only to server (prevent access from browser)
1
1
5
15,381,241
0
0
0
How do I send custom headers in the first handshake that occurs in the WebSocket protocol? I want to use a custom header in my initial request, e.g. "X-Abc-Def: xxxxx". The WebSocket clients are Python and Android clients.
false
15,381,414
0.291313
0
0
3
@ThiefMaster got it perfect. But if you want to add custom headers, you can do that with the argument header instead of headers. Hope this helps
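For the Python side, assuming the websocket-client package is what is in use (the endpoint and header value are placeholders), a minimal sketch of passing the extra handshake header:

```python
import websocket            # the "websocket-client" package

ws = websocket.create_connection(
    'ws://server.example.com/socket',        # placeholder endpoint
    header=['X-Abc-Def: xxxxx'])             # note: "header", not "headers"
ws.send('hello')
print(ws.recv())
ws.close()
```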
0
9,510
0
3
2013-03-13T09:31:00.000
python,websocket
Sending custom headers in websocket handshake
1
1
2
38,860,584
0
1
0
This is not so much of a specific question, but more a general one. I'm redoing one of my old projects and to make it more user friendly I want the following: The project will be running on my home server, with a flask/python backend. User using my website will be coming from my companies intranet. Would it be possible to load an intranet page in a iframe on my website. So in short, is it possible to load an intranet page from an internet-page that has no access to said intranet.
false
15,391,618
0.197375
0
0
2
Yes. Any page that the user can browse to normally can be loaded in an iframe.
0
301
0
0
2013-03-13T16:55:00.000
javascript,jquery,python,html,intranet
Internet app calls intranet page
1
2
2
15,391,677
0
1
0
This is not so much of a specific question, but more a general one. I'm redoing one of my old projects and to make it more user friendly I want the following: The project will be running on my home server, with a flask/python backend. User using my website will be coming from my companies intranet. Would it be possible to load an intranet page in a iframe on my website. So in short, is it possible to load an intranet page from an internet-page that has no access to said intranet.
true
15,391,618
1.2
0
0
3
Of course you can load it in an iframe, you don't need access to the page from the internet for that - the client needs it. Yet, the intranet application might request not to be viewed in a frame.
0
301
0
0
2013-03-13T16:55:00.000
javascript,jquery,python,html,intranet
Internet app calls intranet page
1
2
2
15,391,680
0
0
0
Is there any quick way in Python to get the MAC address of the default gateway? I can't make any ARP requests from the Linux machine I'm running my code on, so it has to come directly from the ARP table.
false
15,407,354
0.132549
0
0
2
You can read /proc/net/arp and parse the content; that will give you pairs of known IP and MAC addresses. The gateway is probably known at all times; if not, ping it and an ARP request will be generated automatically. You can find the default gateway in /proc/net/route.
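A small sketch that parses both files; it assumes the usual /proc formats (the gateway stored as little-endian hex in /proc/net/route, the hardware address in the fourth column of /proc/net/arp):

```python
def default_gateway():
    with open('/proc/net/route') as f:
        for line in f.readlines()[1:]:
            fields = line.split()
            if fields[1] == '00000000':              # default route entry
                gw = int(fields[2], 16)              # little-endian hex
                return '%d.%d.%d.%d' % (gw & 0xff, (gw >> 8) & 0xff,
                                        (gw >> 16) & 0xff, (gw >> 24) & 0xff)

def mac_for_ip(ip):
    with open('/proc/net/arp') as f:
        for line in f.readlines()[1:]:
            fields = line.split()
            if fields[0] == ip:
                return fields[3]                     # HW address column

print(mac_for_ip(default_gateway()))
```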
0
5,009
1
3
2013-03-14T10:59:00.000
python,linux,ip,mac-address,arp
Python: get MAC address of default gateway
1
2
3
15,407,559
0
0
0
Is there any quick way in Python to get the MAC address of the default gateway? I can't make any ARP requests from the Linux machine I'm running my code on, so it has to come directly from the ARP table.
false
15,407,354
0.132549
0
0
2
Are you using Linux? You could parse the /proc/net/arp file. It contains the HW address of your gateway.
0
5,009
1
3
2013-03-14T10:59:00.000
python,linux,ip,mac-address,arp
Python: get MAC address of default gateway
1
2
3
15,407,512
0
0
0
To cut the story short: is there any way to get IP of untrusted client calling remote proxy object, using Pyro4? So person A is calling my object from IP 123.123.123.123 and person B from IP 111.111.111.111 is there any way to distinguish them in Pyro4, assuming that they cannot be trusted enough to submit their own IP.
false
15,407,510
0
0
0
0
Here is my solution to my problem: since I didn't really need the specific addresses of clients using Pyro, just to distinguish clients in a specific subnet (in my classroom) from other IPs (students working from home), I just started two Pyro clients on two ports and then filtered the traffic using a firewall.
0
354
0
1
2013-03-14T11:07:00.000
python,pyro
Get caller's IP in Pyro4 application
1
1
2
15,440,686
0
0
0
So I know how to download Excel files from Google Drive in .csv format. However, since .csv files do not support multiple sheets, I have developed a system in a for loop to add the '&grid=tab_number' to the file download url so that I can download each sheet as its own .csv file. The problem I have run into is finding out how many sheets are in the excel workbook on the Google Drive so I know how many times to set the for loop for.
true
15,456,709
1.2
0
1
0
Ended up just downloading with xlrd and using that. Thanks for the link Rob.
0
95
0
0
2013-03-17T01:39:00.000
python,excel,google-drive-api
Complicated Excel Issue with Google API and Python
1
1
1
15,505,507
0
0
0
I am trying to import results from a Google Search results rss/xml feed into my website but every time I run the python script I get a message from Google: Our systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests and not a robot. The script uses urllib to download pages and works with other rss feeds. Doesn't really make sense as I thought rss feeds were supposed to be consumed by software (bots), I left the script over the weekend and ran on Monday morning but still got the message so I am not hitting their servers too much. I can load the feed in my browser though and I can also download the feed using wget on the server?
false
15,478,126
0.53705
0
0
3
You could use an HTTP sniffer (like Fiddler) or any protocol sniffer (tcpdump, Wireshark) to sniff your network traffic to Google and check whether your urllib request and the wget/browser requests differ. Also check and compare all the cookies and HTTP headers of both requests. And remember that for IPs with a large number of requests to Google, Google sends a captcha every N requests, so if you need to parse its content you will possibly need to use some proxies for Google parsing.
0
1,138
0
2
2013-03-18T13:33:00.000
python,rss,urllib,bots
How to fetch a google search results feed with a python script and not be identified as a bot?
1
1
1
15,478,540
0
1
0
I'm having trouble with S3 files. I have some Python code using boto that uploads files to S3, and I want to write links to the created files to a log file for future reference. I can't seem to find a way to generate a link that works only for authenticated people. I can create a link using the generate_url method, but then anybody who clicks on that link can access the file. Any other way of creating the URL produces a link that doesn't work even if I'm logged in (I get an XML "access denied" response). Does anybody know a way of doing this? Preferably permanent links, but I can live with temporary links that expire after a given time. Thanks, Ophir
false
15,495,949
0.197375
0
0
1
No, there really isn't any way to do this without putting some sort of service between the people clicking on the links and the S3 objects. The reason is that access to the S3 content is determined by your AWS access_key and secret_key. There is no way to "login" with these credentials and logging into the AWS web console uses a different set of credentials that are only useful for the console. It does not authenticate you with the S3 service.
0
415
0
1
2013-03-19T09:42:00.000
python,file,amazon-s3,boto
Creating links to private S3 files which still requires authentication
1
1
1
15,503,402
0
1
0
I want to be able to get the list of all URLs that a browser will do a GET request for when we try to open a page. For eg: if we try to open cnn.com, there are multiple URLs within the first HTTP response which the browser recursively requests for. I'm not trying to render a page but I'm trying to obtain a list of all the urls that are requested when a page is rendered. Doing a simple scan of the http response content wouldn't be sufficient as there could potentially be images in the css which are downloaded. Is there anyway I can do this in python?
false
15,513,699
0
0
0
0
I guess you will have to create a list of all known file extensions that you do NOT want, and then scan the content of the HTTP response, checking with "if substring not in nono-list:". The problem is all the hrefs ending with TLDs, forward slashes, URL-delivered variables and so on, so I think it would be easier to check for the stuff you know you don't want.
0
996
0
2
2013-03-20T01:28:00.000
python,http,http-headers
How can I extract the list of urls obtained during a HTML page render in python?
1
1
2
15,513,793
0
1
0
I would like to know if there is a way to validate that a request (say a POST or a GET) was made over HTTPS. I need to check this in a webapp2.RequestHandler to invalidate every request that is not sent via HTTPS. Best regards
false
15,515,299
0
0
0
0
If you are using GAE Flex (where the secure: directive doesn't work), the only way I've found to detect this (to redirect http->https myself) is to check if request.environ['HTTP_X_FORWARDED_PROTO'] == 'https'
0
428
1
1
2013-03-20T04:24:00.000
google-app-engine,python-2.x
how to check the request is https in app engine python
1
1
3
44,672,760
0
0
0
I am trying to create a sample Python script using Selenium IDE 1.10.0 with Firefox version 19.0.2. I am able to create the script, but at run time I get the exception: "INFO - Got result: Failed to start new browser session: Error while launching browser on session null". So my question is: can I run the generated script against Firefox version 19.0.2? If yes, then why am I getting this error? If not, please provide your input. Thanks in advance, Abhishek
false
15,516,354
0
0
0
0
The Selenium IDE 1.10.0 release notes say it only supports up to Firefox 17, so you may face issues with v19. Will you please retry after downgrading Firefox?
0
934
0
1
2013-03-20T05:58:00.000
python,selenium
Does Selenium IDE 1.10.0 support Firefox 19.0.2
1
1
1
15,517,460
0
1
0
Please help me solve the following case: Imagine a typical classified category page, a page with a list of items. When you click on an item you land on its internal page. Currently my crawler scrapes all these URLs, then scrapes each URL to get the details of the item, and checks to see if the initial seed URL has a next page. If it has, it goes to the next page and does the same. I am storing these items in an SQL database. Let's say 3 days later there are new items on the seed URL and I want to scrape only the new items. Possible solutions are: At the time of scraping each item, I check in the database to see if the URL has already been scraped. If it has, I simply ask Scrapy to stop crawling further. Problem: I don't want to query the database each time. My database is going to be really large and it will eventually make crawling super slow. I try to store the last scraped URL and pass it in at the beginning, and the moment it finds this last_scraped_url it simply stops the crawler. Not possible, given the asynchronous nature of crawling: URLs are not scraped in the same order they are received from the seed URLs. (I tried all methods to make it happen in an orderly fashion, but that's not possible at all.) Can anybody suggest any other ideas? I have been struggling with it for the past three days. Appreciate your replies.
false
15,530,071
0.379949
0
0
2
Before trying to give you an idea... I must say I would try your database option first. Databases are made just for that and, even if your DB gets really big, this should not slow the crawling down significantly. And one lesson I have learned: "First do the dumb implementation. After that, you try to optimize." Most of the time when you optimize first, you just optimize the wrong part. But, if you really want another idea... Scrapy's default is not to crawl the same URL twice. So, before starting the crawl you can put the already scraped URLs (from 3 days before) into the list that Scrapy uses to know which URLs were already visited. (I don't know how to do that.) Or, simpler, in your item parser you can just check if the URL was already scraped and return None or scrape the new item accordingly.
0
1,365
0
3
2013-03-20T17:03:00.000
python,screen-scraping,scrapy
Scrapy Case : Incremental Update of Items
1
1
1
25,308,766
0
0
0
I've created an XMPP chat client in python. Chat generally works except it seems that Google Talk 'blocks' some messages when sending from my chat client to a user using Google Talk. For example, if I send the same one word message 'hi' multiple times to gtalk user, it only displays it once. However, when sending that same sequence of messages to a user on iChat or on Adium, all of the 'hi's get shown. Sometimes, Google Talk also doesn't display the first 1-2 incoming messages from my client. Otherwise, chatting works. My client never has any trouble with incoming chats. Thoughts?
true
15,552,788
1.2
0
0
1
In case it helps anyone, I figured it out. You just need to specify an id attribute in each chat message. They can be random id's but each message should have a different one. I assume gtalk was 'blocking' repeated messages b/c it couldn't tell if the messages were distinct or just repeats without an id.
0
277
0
1
2013-03-21T16:18:00.000
python,xmpp,chat
XMPP Chat Client - Some IM messages to Google Talk don't get received
1
1
1
15,570,548
0
0
0
I wonder, what is the advantage of using Selenium for automation if at the end of the test it emits no report on whether the test passed or failed?
false
15,578,942
1
1
0
6
Selenium isn't actually a testing framework, it's a browser driver. You don't write tests in Selenium any more than you write GUI apps in OpenGL. You usually write tests in a unit testing framework like unittest, or something like nose or lettuce built on top of it. Your tests then use Selenium to interact with a browser, as they use a database API to access the DB or an HTTP library to communicate with web services.
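A minimal sketch of that layering with Python's unittest driving Selenium; the page URL and the expected title are placeholders. Running it prints the usual pass/fail report:

```python
import unittest
from selenium import webdriver

class LoginPageTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()

    def test_title_mentions_login(self):
        self.driver.get('http://example.com/login')      # placeholder URL
        self.assertIn('Login', self.driver.title)

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()    # the test runner reports passes and failures
```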
0
590
0
0
2013-03-22T20:02:00.000
python,selenium
Selenium Webdriver Testing - Python
1
2
3
15,579,077
0
0
0
I wonder, what is the advantage of using Selenium for automation if at the end of the test it emits no report on whether the test passed or failed?
false
15,578,942
0
1
0
0
It's up to the user what to do with Selenium WebDriver automation and how to report the test results. Selenium WebDriver gives you the power to control your web browser and to automate your web application tests. Just as in any other automation tool, the conditions for checking your pass or fail criteria have to be programmed, and in Selenium they also have to be programmed. It is totally up to the programmer how to report their results and which template to follow. You will have to write your own code to format and store the test results.
0
590
0
0
2013-03-22T20:02:00.000
python,selenium
Selenium Webdriver Testing - Python
1
2
3
15,594,935
0
1
0
I have implemented GCM using my own sever. Now I'm trying to do the same using Python 2.7 in Google App Engine. How can I get the IP address for the server hosting my app? (I need it for API Key). Is IP-LookUp only option? And if I do so will the IP address remain constant?
true
15,686,853
1.2
0
0
4
You can check the IP easily by doing a ping from the command line to the domain name, as in "ping appspot.com". With this you will obtain the response from the real IP. Unfortunately this IP will change over time and won't make your GCM service work. In order to make it work you only need to leave the allowed IPs field blank.
0
907
1
4
2013-03-28T16:13:00.000
google-app-engine,python-2.7,google-cloud-messaging
API Key for GCM from GAE
1
1
2
17,506,596
0
0
0
I have a Python script which produces some some data. I would like to stream it to an HTTP server using POST. That is, I don't want to accumulate the data in a buffer and then send it -- I just want to send it as it's created. There will be a lot. The apparently obvious way to do this would be to open the HTTP connection in some way that return a writeable file object, write the data to that object, and then close the connection. However, it's not obvious to me that this is supported in any of the libraries I looked at (urllib2, httplib, and requests). How can I accomplish this?
false
15,710,399
-0.099668
0
0
-1
You won't get a writeable file object via urllib2/urllib. The return value is a "file-like" object which supports iteration and reading. You can, however, read the contents and write them to your own file object.
0
147
0
0
2013-03-29T19:45:00.000
python,http,post
How can I get a writeable file object for an HTTP POST in Python?
1
1
2
15,710,412
0
0
0
I have a large local file. I want to upload a gzipped version of that file into S3 using the boto library. The file is too large to gzip it efficiently on disk prior to uploading, so it should be gzipped in a streamed way during the upload. The boto library knows a function set_contents_from_file() which expects a file-like object it will read from. The gzip library knows the class GzipFile which can get an object via the parameter named fileobj; it will write to this object when compressing. I'd like to combine these two functions, but the one API wants to read by itself, the other API wants to write by itself; neither knows a passive operation (like being written to or being read from). Does anybody have an idea on how to combine these in a working fashion? EDIT: I accepted one answer (see below) because it hinted me on where to go, but if you have the same problem, you might find my own answer (also below) more helpful, because I implemented a solution using multipart uploads in it.
true
15,754,610
1.2
0
0
6
There really isn't a way to do this because S3 doesn't support true streaming input (i.e. chunked transfer encoding). You must know the Content-Length prior to upload and the only way to know that is to have performed the gzip operation first.
0
15,998
0
17
2013-04-02T01:22:00.000
python,amazon-s3,gzip,boto
How to gzip while uploading into s3 using boto
1
1
3
15,763,863
0
0
0
I wrote a little download manager in Python, and now I want to "catch" downloads from Chrome, Firefox and Explorer so they will download with it, instead of each browser's built-in download manager. I want to know when each of the browsers starts a file download, so I can prevent the default behavior and use my own manager. All I need to start a download myself is the file URL, of course. I know that in the past there were popular download managers such as "GetRight" that did exactly this. I want to do something similar. Any ideas how I would go about doing this?
false
15,760,825
0
0
0
0
Each of the web browsers e.g. Firefox, Chrome, IE, Safari, all have plugin models. If you are going to truly integrate your manager into the web browsers, you will need to examine their plugin models, and make your download manager work through that.
0
233
0
4
2013-04-02T09:40:00.000
internet-explorer,google-chrome,firefox,python-2.7,download
Catch downloads from Chrome/Firefox/Explorer with python
1
1
1
28,467,360
0
0
0
I have an email interface client, and I am using IMAP for my requests. I want to be able to show, in real-time, basic email data, for a list view. As in, for example, the GMail list view. For that, I need to do an IMAP request to obtain the subject of all emails, the date of all emails, etc. This works so far. The problem is, I want to also show the first characters of the body text. If I use the BODYSTRUCTURE call to obtain the index of the text/HTML part it takes too long (for emails with thousands of characters it might take well over a second per email, while using only the subject/date/etc calls takes about 0.02 seconds max. I tried using the BODY[INDEX]<0.XYZ> bytes where XYZ is the number of the first bytes we want to obtain, but to my dismay, it takes as long as using the BODY[INDEX] call. Sometimes even more. Is there another way to obtain the first text characters, but in a quick manner? If I want to list 300 emails on my interface I cannot afford to spend 1 second per email just to obtain the first text characters. I'm using Python with imaplib for this, even though probably not relevant.
false
15,792,128
0.197375
1
0
2
If you really want to fetch the beginning of the first textual part of a message, you will have to parse the BODYSTRUCTURE. After you obtain the part ID of the desired textual part, use the BODY[number]<0.size> syntax. The suggestion given in the other answer will fail on multipart messages (like when you have both a text/plain and a text/html part, which is the most common format today).
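A rough imaplib sketch of the partial fetch; the server, credentials and the hard-coded part number "1" are placeholders, since in practice the part number comes from parsing BODYSTRUCTURE:

```python
import imaplib

conn = imaplib.IMAP4_SSL('imap.example.com')        # placeholder server
conn.login('user', 'password')                      # placeholder credentials
conn.select('INBOX', readonly=True)

# fetch only the first 200 bytes of part 1; PEEK keeps the message unread
typ, data = conn.fetch('1', '(BODY.PEEK[1]<0.200>)')
print(data[0][1])
conn.logout()
```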
0
1,047
0
3
2013-04-03T15:54:00.000
python,imap,imaplib
Obtain partial IMAP text part
1
1
2
15,875,488
0