Dataset schema (one flattened record per question-answer pair, as reported by the dataset viewer):

| Column | Type | Range / lengths |
|---|---|---|
| Question | string | lengths 28 to 6.1k |
| Answer | string | lengths 14 to 7k |
| Title | string | lengths 15 to 149 |
| Tags | string | lengths 6 to 90 |
| CreationDate | string | lengths 23 to 23 |
| is_accepted | bool | 2 classes |
| Q_Id | int64 | 337 to 51.9M |
| A_Id | int64 | 635 to 72.5M |
| Score | float64 | -1 to 1.2 |
| Users Score | int64 | -8 to 412 |
| Q_Score | int64 | 0 to 1.53k |
| ViewCount | int64 | 13 to 1.34M |
| AnswerCount | int64 | 1 to 28 |
| Available Count | int64 | 1 to 12 |
| Web Development | int64 | 0 to 1 |
| Data Science and Machine Learning | int64 | 0 to 1 |
| Database and SQL | int64 | 0 to 1 |
| GUI and Desktop Applications | int64 | 0 to 1 |
| Networking and APIs | int64 | 1 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| System Administration and DevOps | int64 | 0 to 1 |
| Other | int64 | 0 to 1 |

The Networking and APIs flag is constant at 1 across this excerpt, so every record below belongs to that topic slice. The records are rendered as readable Q&A entries.

Title: How can I make an http request without getting back an http response in Python?
Asked: 2010-08-15 05:49 | Tags: python,http | Q_Id: 3,486,372 | Question score: 6 | Views: 2,585 | Answers: 3

Question:
I want to send it and forget it. The HTTP REST service call I'm making takes a few seconds to respond. The goal is to avoid waiting those few seconds before more code can execute. I'd rather not use Python threads; I'll use Twisted async calls if I must, and ignore the response.

Answer (A_Id 3,486,383, score 1):
You are going to have to implement that asynchronously, as the HTTP protocol states you have a request and a reply. Another option would be to work directly with the socket, bypassing any pre-built module. This would allow you to violate the protocol and write your own bit that ignores any responses, in essence dropping the connection after it has made the request.

Answer (A_Id 3,486,378, score 0):
HTTP implies a request and a reply for that request. Go with an async approach.
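
A minimal sketch of the socket-level "fire and forget" approach the first answer describes, in Python 2 style to match the era of the thread; the host and path are placeholders:

```python
import socket

def fire_and_forget(host, path, port=80):
    """Send a bare HTTP request and drop the connection without reading the reply."""
    s = socket.create_connection((host, port), timeout=5)
    try:
        s.sendall("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host))
    finally:
        s.close()  # deliberately never recv() the response

fire_and_forget("example.com", "/slow-endpoint")
```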

Title: how to implement thin client app with pyqt
Asked: 2010-08-19 00:33 | Tags: python,qt,networking,pyqt,thin | Q_Id: 3,517,841 | Question score: 0 | Views: 1,647 | Answers: 2

Question:
Here is what I would like to do, and I want to know how people with experience in this field do it. With three POST requests I would get from the HTTP server: widgets and layout, and then the app-logic (minimal) data. Or maybe it's better to combine the first two, or all three. I'm thinking of using PyQt. I think I can load .ui files, and I can parse JSON data. I just think it would be rather dangerous to pass code over a network to be executed on the client. If someone can hijack the connection, or can change the app's settings to access a bogus server, that is nasty. I want to do it this way because it keeps all the clients up to date. It's sort of like a webapp, but simpler because of Qt. Essentially the "thin" app is just a minimal compiled Python file that loads data from a server. How can I do this without introducing security issues on the client? Is HTTPS good enough? Is there a way to get PyQt to run in a sandbox of sorts? PS: I'm not stuck on Qt or Python; I do like the concept, though. I don't really want to use Java, server- or client-side.

Answer (A_Id 3,517,886, score 1, accepted):
Your desire to send "app logic" from the server to the client without sending "code" is inherently self-contradictory, though you may not realize that yet -- even if the "logic" you're sending is in some simplified ad-hoc "language" (which you don't even think of as a language;-), to all intents and purposes your Python code will be interpreting that language and thereby executing that code. You may "sandbox" things to some extent, but in the end, that's what you're doing. To avoid hijackings and other tricks, instead, use HTTPS and validate the server's cert in your client: that will protect you from all the problems you're worrying about (if somebody can edit the app enough to defeat the HTTPS cert validation, they can edit it enough to make it run whatever code they want, without any need to send that code from a server;-). Once you're using HTTPS, having the server send Python modules (in source form if you need to support multiple Python versions on the clients, else bytecode is fine) and the client save them to disk and import / reload them will be just fine. You'll basically be doing a variant of the classic "plugins architecture", where the "plugins" are in fact being sent from the server (instead of being found on disk in a given location).

Title: Python Server Help
Asked: 2010-08-19 14:13 | Tags: python,windows,http | Q_Id: 3,522,641 | Question score: 0 | Views: 213 | Answers: 2

Question:
I have written a small HTTP server and everything is working fine locally, but I am not able to connect to the server from any other computer, including other computers on the network. I'm not sure if it is a server problem, or if I just need to make some adjustments to Windows. I turned the firewall off, so that can't be the problem. I am using Python 2.6 on Windows 7.

Answer (A_Id 3,522,672, score 2, accepted):
Without any code sample I can only assume that your server is listening on some private interface like localhost/127.0.0.1 and not on something that is connected to the rest of your network.

Answer (A_Id 3,522,734, score 0):
Some things to check:
1. Can you connect to the server via your machine's IP instead of localhost? I.e., if your machine is 1.2.3.4 on the network and the server is listening on port 8080, can you see it by opening a browser to http://1.2.3.4:8080 on the same machine?
2. Can you do (1) from another machine? (Just a sanity check...)
3. Do other servers work throughout the network? I.e., if you run a simple FTP server (like FileZilla Server) on the machine, can you FTP to it from other machines?
4. Can you ping one machine from another?
5. Do you still have firewalls running (i.e., the default Windows firewall)?
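
A minimal sketch of the fix the accepted answer points at: bind the server to all interfaces instead of the loopback address (Python 2 stdlib, matching the asker's Python 2.6):

```python
import BaseHTTPServer
import SimpleHTTPServer

# "127.0.0.1" is reachable only from this machine; "0.0.0.0" listens on
# every interface, so other hosts on the LAN can connect.
server = BaseHTTPServer.HTTPServer(
    ("0.0.0.0", 8080), SimpleHTTPServer.SimpleHTTPRequestHandler)
server.serve_forever()
```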

Title: How to maintain mail conversation (reply / forward / reply-to-all, like Gmail) of email using Python pop/imap lib?
Asked: 2010-08-20 12:34 | Tags: python,imap,pop3,imaplib,poplib | Q_Id: 3,530,851 | Question score: 0 | Views: 1,161 | Answers: 2

Question:
I've developed a webmail client for any mail server. I want to implement message conversations for it -- for example, the same email's fwd/reply/reply-to-all messages should be shown together, like Gmail does. My question is: what's the key to finding those emails which are replies/forwards of, or otherwise related to, the original mail?

Answer (A_Id 3,566,252, score 3, accepted):
The In-Reply-To header of the child should have the value of the Message-Id header of the parent(s).

Answer (A_Id 3,530,868, score 2):
Google just seems to chain messages based on the subject line (so does Apple Mail, by the way).
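
A small sketch of the accepted answer using the stdlib email package: extract each message's Message-Id and In-Reply-To headers and group children under their parents (the mailbox-fetching part is omitted):

```python
import email

def build_threads(raw_messages):
    """Map a parent Message-Id to the Message-Ids that reply to it."""
    children = {}
    for raw in raw_messages:
        msg = email.message_from_string(raw)
        parent_id = msg.get("In-Reply-To")   # matches the parent's Message-Id
        children.setdefault(parent_id, []).append(msg.get("Message-Id"))
    return children
```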

Title: Does python have a robust pop3, smtp, mime library where I could build a webmail interface?
Asked: 2010-08-21 18:16 | Tags: python,email,smtp,mime,pop3 | Q_Id: 3,538,430 | Question score: 0 | Views: 513 | Answers: 2

Question:
Does Python have a full-fledged email library, with things for POP, SMTP, POP3 with SSL, and MIME? I want to create a webmail interface that pulls emails from mail servers and then shows the emails, along with attachments, and can display the sender, subject, etc. (handling all the encoding issues and so on). It's one thing for these to be available in the libraries and another for them to be production-ready. I'm hoping someone who has used them to pull emails with attachments etc. in a production environment can comment on this.

Answer (A_Id 3,538,453, score 2, accepted):
It has all the components you need, in a more modular and flexible arrangement than you appear to envisage -- the standard library's email package deals with the message once you have received it, and separate modules each deal with means of sending and receiving, such as pop, smtp, imap. SSL is an option for each of them (if the counterpart, e.g. the mail server, supports it, of course), being basically just "a different kind of socket". Have you looked at the rich online docs for all of these standard library modules?
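
As a concrete illustration of that modular split, a sketch that receives with poplib (over SSL) and parses with the email package; the server name and credentials are placeholders:

```python
import email
import poplib

box = poplib.POP3_SSL("pop.example.com")
box.user("user")
box.pass_("password")
count, _size = box.stat()
if count:
    # retr() returns (status, list_of_lines, octets); rejoin and parse.
    lines = box.retr(1)[1]
    msg = email.message_from_string("\n".join(lines))
    print msg["From"], msg["Subject"], msg.get_content_type()
box.quit()
```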

Title: is it possible to write python scripts which can do router configuration without telnetting into the router?
Asked: 2010-08-24 10:30 | Tags: python | Q_Id: 3,555,485 | Question score: 1 | Views: 616 | Answers: 2

Question:
I want to do router configuration using Python, but I don't want to use any application-level protocol to configure it. Is it possible to deal with it at a hardware level? Please do tell if the question is vague or if it needs more explanation; I would then add more details about what my doubt is.

Answer (A_Id 5,845,120, score 1):
There is a package named roscraco that configures and extracts information from some consumer-level routers. It's available on PyPI.

Answer (A_Id 3,555,720, score 1):
The title of your question by itself makes some sense. The body of your question doesn't make sense.
"is it possible to write python scripts which can do router configuration without telnetting into the router?" Yes, depending on the platform. You may be able to use a variety of other methods to configure the router that do not include telnet, e.g. XML-RPC, SSH + interactive, scp'ing config files or fragments, SNMP to induce uploading a config file, etc.
"Is it possible to deal with it on a hardware level?" You're in the realms of nanotech microscopy, and seriously invalidating the warranty on your router.

Title: How can I get the final redirect URL when using urllib2.urlopen?
Asked: 2010-08-24 12:12 | Tags: python,urllib2 | Q_Id: 3,556,266 | Question score: 22 | Views: 34,914 | Answers: 4

Question:
I'm using the urllib2.urlopen method to open a URL and fetch the markup of a webpage. Some of these sites redirect me using 301/302 redirects. I would like to know the final URL that I've been redirected to. How can I get this?

Answer (A_Id 31,354,580, score 1):
E.g.:
urllib2.urlopen('ORIGINAL LINK').geturl()
urllib2.urlopen(urllib2.Request('ORIGINAL LINK')).geturl()

Answer (A_Id 3,556,295, score 4):
The return value of urllib2.urlopen has a geturl() method, which should return the actual (i.e. last-redirect) URL.
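
A short sketch putting both answers together; the URL is a placeholder:

```python
import urllib2

# urlopen follows 301/302 redirects automatically; geturl() reports
# where the chain of redirects finally landed.
response = urllib2.urlopen("http://example.com/redirecting-page")
print "final URL:", response.geturl()
print "HTTP status:", response.getcode()
```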

Title: How to access Gmail's "Send" button using Selenium RC for Java or C# or Python
Asked: 2010-08-25 00:18 | Tags: c#,java,python,gmail,selenium-rc | Q_Id: 3,561,993 | Question score: 1 | Views: 964 | Answers: 2

Question:
I have tried this probably 6 or 7 different ways, such as using various attribute values, XPath, id pattern matching (it always matches ":\w\w"), etc. as locators, and nothing has worked. If anyone can give me a tested, confirmed-working locator string for this button, I'd be much obliged.

Answer (A_Id 3,562,629, score 0):
If you want to emulate a click on the button, just go to #compose.
1
0
I am trying to encode and store, and decode arguments in Python and getting lost somewhere along the way. Here are my steps: 1) I use google toolkit's gtm_stringByEscapingForURLArgument to convert an NSString properly for passing into HTTP arguments. 2) On my server (python), I store these string arguments as something like u'1234567890-/:;()$&@".,?!\'[]{}#%^*+=_\\|~<>\u20ac\xa3\xa5\u2022.,?!\'' (note that these are the standard keys on an iphone keypad in the "123" view and the "#+=" view, the \u and \x chars in there being some monetary prefixes like pound, yen, etc) 3) I call urllib.quote(myString,'') on that stored value, presumably to %-escape them for transport to the client so the client can unpercent escape them. The result is that I am getting an exception when I try to log the result of % escaping. Is there some crucial step I am overlooking that needs to be applied to the stored value with the \u and \x format in order to properly convert it for sending over http? Update: The suggestion marked as the answer below worked for me. I am providing some updates to address the comments below to be complete, though. The exception I received cited an issue with \u20ac. I don't know if it was a problem with that specifically, rather than the fact that it was the first unicode character in the string. That \u20ac char is the unicode for the 'euro' symbol. I basically found I'd have issues with it unless I used the urllib2 quote method.
false
3,563,126
0.132549
0
0
2
You are out of your luck with stdlib, urllib.quote doesn't work with unicode. If you are using django you can use django.utils.http.urlquote which works properly with unicode
0
83,104
0
48
2010-08-25T05:42:00.000
python,url-encoding
URL encoding/decoding with Python
1
1
3
3,563,366
0
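
A common workaround when Django isn't in the picture, and roughly what urlquote does internally: encode the unicode string to UTF-8 bytes before quoting. A minimal sketch:

```python
import urllib

s = u'1234\u20ac\xa3'                      # sample with euro and pound signs
escaped = urllib.quote(s.encode("utf-8"), safe="")
print escaped                              # 1234%E2%82%AC%C2%A3
```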
1
0
I'm not sure how to find this information, I have found a few tutorials so far about using Python with selenium but none have so much as touched on this.. I am able to run some basic test scripts through python that automate selenium but it just shows the browser window for a few seconds and then closes it.. I need to get the browser output into a string / variable (ideally) or at least save it to a file so that python can do other things on it (parse it, etc).. I would appreciate if anyone can point me towards resources on how to do this. Thanks
true
3,571,233
1.2
0
0
2
There's a Selenium.getHtmlSource() method in Java, most likely it is also available in Python. It returns the source of the current page as string, so you can do whatever you want with it
0
4,318
0
3
2010-08-26T00:22:00.000
python,selenium,browser-automation
Selenium with Python, how do I get the page output after running a script?
1
1
3
3,573,288
0
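
A sketch of that idea with the old Selenium RC bindings, assuming the Python client mirrors Java's getHtmlSource() under the name get_html_source(); the server address, browser string, and URL are placeholders:

```python
from selenium import selenium  # Selenium RC client, as used in this era

browser = selenium("localhost", 4444, "*firefox", "http://example.com/")
browser.start()
browser.open("/")
source = browser.get_html_source()     # page markup as one string
with open("page.html", "w") as f:
    f.write(source.encode("utf-8"))
browser.stop()
```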

Title: How to parse broken XML in Python?
Asked: 2010-08-26 17:18 | Tags: python,xml | Q_Id: 3,577,652 | Question score: 4 | Views: 1,950 | Answers: 2

Question:
A server I can't influence sends very broken XML. Specifically, a Unicode WHITE STAR would get encoded as UTF-8 (E2 98 86) and then translated using a Latin-1-to-HTML-entity table. What I get is &acirc; 98 86 (9 bytes) in a file that's declared as UTF-8 with no DTD. I couldn't configure W3C tidy in a way that doesn't garble this irreversibly, I only found how to make lxml skip it silently, and SAX uses Expat, which cannot recover after encountering this. I'd like to avoid BeautifulSoup for speed reasons. What else is there?

Answer (A_Id 3,577,694, score 2):
BeautifulSoup is your best bet in this case. I suggest profiling before ruling out BeautifulSoup altogether.

Title: Creating REST Web Services with Python
Asked: 2010-08-26 17:57 | Tags: python,web-services,rest | Q_Id: 3,577,994 | Question score: 2 | Views: 12,155 | Answers: 5

Question:
Is it possible to create REST web services that return JSON or XML using Python? Could you give me some recommendations? Thank you.

Answer (A_Id 3,578,028, score 0):
Sure, you can use any web framework you like; just set the Content-Type header to the MIME type you need. For generating JSON I recommend the simplejson module (renamed to json and included in the standard library since 2.6); for handling XML the lxml library is very nice.
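
A bare-bones illustration with nothing but the standard library: a WSGI app that returns JSON with the Content-Type header the answer mentions:

```python
import json
from wsgiref.simple_server import make_server

def app(environ, start_response):
    body = json.dumps({"status": "ok", "items": [1, 2, 3]})
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()  # serves http://localhost:8000/
```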

Title: Does urllib2.urlopen() cache stuff?
Asked: 2010-08-27 16:34 | Tags: python,urllib2,urlopen | Q_Id: 3,586,295 | Question score: 13 | Views: 12,290 | Answers: 5

Question:
They didn't mention this in the Python documentation. Recently I was testing a website, simply refreshing the site using urllib2.urlopen() to extract certain content, and I noticed that sometimes when I update the site, urllib2.urlopen() doesn't seem to get the newly added content. So I wonder: does it cache stuff somewhere?

Answer (A_Id 38,239,971, score 0):
If you make changes and test the behaviour from the browser and from urllib, it is easy to make a stupid mistake. In the browser you are logged in, but with urllib.urlopen your app can always get redirected to the same login page, so if you just look at the page size or the top of your common layout, you could think that your changes have no effect.

Answer (A_Id 3,586,796, score 10, accepted):
"So I wonder it does cache stuff somewhere, right?" It doesn't. If you don't see new data, this could have many reasons. Most bigger web services use server-side caching for performance reasons, for example using caching proxies like Varnish and Squid, or application-level caching. If the problem is caused by server-side caching, there's usually no way to force the server to give you the latest data. For caching proxies like Squid, things are different. Usually, Squid adds some additional headers to the HTTP response (response.info().headers). If you see a header field called X-Cache or X-Cache-Lookup, this means that you aren't connected to the remote server directly, but through a transparent proxy. If you have something like X-Cache: HIT from proxy.domain.tld, this means that the response you got is cached. The opposite is X-Cache: MISS from proxy.domain.tld, which means that the response is fresh.

Answer (A_Id 3,936,916, score -2):
I find it hard to believe that urllib2 does not do caching, because in my case, upon restart of the program the data is refreshed. If the program is not restarted, the data appears to be cached forever. Also, retrieving the same data from Firefox never returns stale data.
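
A quick sketch of the header check the accepted answer describes; the URL is a placeholder:

```python
import urllib2

response = urllib2.urlopen("http://example.com/")
info = response.info()   # an httplib.HTTPMessage with the response headers

# X-Cache / X-Cache-Lookup betray a caching proxy between you and the server.
for name in ("X-Cache", "X-Cache-Lookup", "Date", "Age"):
    value = info.getheader(name)
    if value is not None:
        print "%s: %s" % (name, value)
```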

Title: Handling Password Authentication over a Network
Asked: 2010-08-29 17:38 | Tags: python,security,passwords,network-programming | Q_Id: 3,595,835 | Question score: 0 | Views: 447 | Answers: 2

Question:
I'm writing a game which requires users to log in to their accounts in order to be able to play. What's the best way of transmitting passwords from client to server, and of storing them? I'm using Python and Twisted, if that's of any relevance.

Answer (A_Id 3,595,865, score 1, accepted):
The best way is to authenticate via SSL/TLS. The best way of storing passwords is to store them hashed with some complex hash, like sha1(sha1(password) + salt), together with the salt.
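
A literal sketch of the scheme named in the answer. This mirrors the answer as written; today you would reach for a purpose-built password KDF (bcrypt, PBKDF2) instead of plain SHA-1:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, sha1(sha1(password) + salt)) as hex strings."""
    if salt is None:
        salt = os.urandom(16).encode("hex")
    inner = hashlib.sha1(password).hexdigest()
    return salt, hashlib.sha1(inner + salt).hexdigest()

def check_password(password, salt, stored_digest):
    return hash_password(password, salt)[1] == stored_digest
```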
1
0
Is there any approach to generate editor of an XML file basing on an XSD scheme? (It should be a Java or Python web based editor).
false
3,599,569
0.066568
0
0
1
Funny, I'm concerning myself with something similar. I'm building an editor (not really WYSIWYG, but it abstracts the DOM away) for the XMLs Civilization 4 (strategy game) usesu to store about everything. I thought about it for quite a while and built two prototypes (in Python), one of which looks promising so I will extend it in the future. Note that Civ 4 XMLs are merely more than a buzzword-conform database (just the kind of data you better store in JSON/YAML and the like, mostly key-value pairs with a few sublists of key-value pairs - no recursive data structures). My first approach was based on the fact that there are mostly key-value pairs, which doesn't fit documents that exploit the full power of XML (recursive data structures, etc). My new design is more sophisticated - up to now, I only built a (still buggy) validator factory this way, but I'm looking forward to extend it, e.g. for schema-sensetive editing widgets. The basic idea is to walk the XSD's DOM, recognize the expected content (a list of other nodes, text of a specific format, etc), build in turn (recursively) validators for these, and then build a higher-order validator that applies all the previously generated validators in the right order. It propably takes some exposure to functional programming to get comfortable with the idea. For the editing part (btw, I use PyQt), I plan to generate a Label-LineEdit pair for tags which contain text and a heading (Label) for tags that contain other elements, possibly indenting the subelements and/or providing folding. Again, recursion is the key to build these. Qt allows us to attach a validator to an text input widget, so this part is easy once we can generate a validator for e.g. a tag containing an "int". For tags containing other tags, something similar to the above is possible: Generate a validator for each subelement and chain them. The only part that needs to change is how we get the content. Ignoring comments, attributes, processing instructions, etc, this should still be relatively simple - for a "tag: content" pair, generate "content" and feed it to your DOM parser; for elements with subelements, generate a representation of the children and put it between "...". Attributes could be implemented as key-value pairs too, only with an extra flag.
0
4,417
0
4
2010-08-30T10:33:00.000
java,python,xml,xsd
Automatic editor of XML (based on XSD scheme)
1
1
3
3,599,767
0

Title: iTunes API for python scripting
Asked: 2010-08-30 17:27 | Tags: python,itunes,win32com | Q_Id: 3,602,728 | Question score: 5 | Views: 9,348 | Answers: 4

Question:
I'm trying to develop some scripts for iTunes in Python, and I'm actually having quite a hard time getting the API information. I'm using the win32com.client module, but I would really need to get all the specifications, methods and details. There are a few examples, but I need some extra data. Thanks!

Answer (A_Id 3,602,834, score -1):
Run dir(my_com_client) to get a list of available methods.
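
A sketch of that introspection tip. "iTunes.Application" is the ProgID iTunes registers for its COM interface on Windows; EnsureDispatch generates Python wrappers from the type library, which makes the dir() output much more informative:

```python
import win32com.client

itunes = win32com.client.gencache.EnsureDispatch("iTunes.Application")
print dir(itunes)        # method and property names from the type library
```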

Title: Communicating end of Queue
Asked: 2010-08-31 00:21 | Tags: python,multithreading,queue | Q_Id: 3,605,188 | Question score: 14 | Views: 10,185 | Answers: 5

Question:
I'm learning to use the Queue module, and am a bit confused about how a queue consumer thread can be made to know that the queue is complete. Ideally I'd like to use get() from within the consumer thread and have it throw an exception if the queue has been marked "done". Is there a better way to communicate this than by appending a sentinel value to mark the last item in the queue?

Answer (A_Id 3,607,187, score 2):
Queue is a FIFO (first in, first out) register, so remember that the consumer can be faster than the producer. When a consumer thread detects that the queue is empty, it normally performs one of the following actions: switch to another thread; sleep for some milliseconds and then check the queue again; or wait on an event (such as a new message arriving in the queue). If you want the consumer thread to terminate after the job is complete, put a sentinel value in the queue to terminate the task.

Answer (A_Id 3,605,282, score 0):
The best-practice way of doing this would be to have the queue itself notify a client that it has reached the 'done' state. The client can then take any action that is appropriate. What you have suggested (checking the queue periodically to see if it is done) would be highly undesirable: polling is an antipattern in multithreaded programming; you should always be using notifications.
EDIT: So you're saying that the queue itself knows that it's 'done' based on some criteria, and needs to notify the clients of that fact. I think you are correct, and the best way to do this is by throwing when a client calls get() and the queue is in the done state. Throwing would negate the need for a sentinel value on the client side. Internally the queue can detect that it is 'done' in any way it pleases, e.g. the queue is empty, its state was set to done, etc.; I don't see any need for a sentinel value.

Answer (A_Id 3,605,258, score 8):
A sentinel is a natural way to shut down a queue, but there are a couple of things to watch out for. First, remember that you may have more than one consumer, so you need to send a sentinel once for each running consumer, and guarantee that each consumer will only consume one sentinel, to ensure that each consumer receives its shutdown sentinel. Second, remember that Queue defines an interface, and that, when possible, code should behave regardless of the underlying Queue. You might have a PriorityQueue, or you might have some other class that exposes the same interface and returns values in some other order. Unfortunately, it's hard to deal with both of these. To deal with the general case of different queues, a consumer that's shutting down must continue to consume values after receiving its shutdown sentinel until the queue is empty. That means that it may consume another thread's sentinel. This is a weakness of the Queue interface: it should have a Queue.shutdown call to cause an exception to be thrown by all consumers, but that's missing. So, in practice: if you're sure you're only ever using a regular Queue, simply send one sentinel per thread; if you may be using a PriorityQueue, ensure that the sentinel has the lowest priority.
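
A runnable sketch of the one-sentinel-per-consumer pattern from the highest-scored answer (Python 2 module names):

```python
import threading
import Queue

SENTINEL = object()          # unique shutdown marker, never a real work item
NUM_CONSUMERS = 3
q = Queue.Queue()

def consumer():
    while True:
        item = q.get()
        if item is SENTINEL:  # each consumer swallows exactly one sentinel
            return
        print "processing", item

threads = [threading.Thread(target=consumer) for _ in range(NUM_CONSUMERS)]
for t in threads:
    t.start()
for item in range(10):
    q.put(item)
for _ in range(NUM_CONSUMERS):  # one sentinel per consumer
    q.put(SENTINEL)
for t in threads:
    t.join()
```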

Title: IDE Suggestion for python and javascript
Asked: 2010-08-31 11:12 | Tags: javascript,python,ide | Q_Id: 3,608,409 | Question score: 6 | Views: 6,379 | Answers: 7

Question:
I am using Python for web programming, and JavaScript heavily. Currently I am using NetBeans, but I am looking for another IDE; NetBeans is not very good for programming with Python and JavaScript. Any suggestions?

Answer (A_Id 3,608,498, score 0):
It's not quite an IDE, but on Mac OS X I'm using TextMate; it has many extensions, which makes it very powerful.

Answer (A_Id 3,608,554, score 1):
PyCharm (and other IDEs on the IDEA platform) is a brilliant IDE for Python, JS, XML, CSS and other languages in the webdev stack.

Answer (A_Id 3,608,748, score 1):
I use Eclipse with the PyDev (Python) and Aptana (JavaScript) plugins.

Answer (A_Id 28,404,847, score 0):
For web programming I used Espresso; it only works on Mac, but it is quite good, and it is an actual IDE. I don't think the rest of these qualify as IDEs. For Python I use Sublime Text 2, because it can be customized and has a great GUI feel. I used to use Notepad++, but I don't really suggest it. If you are asking for efficiency, use Vim.

Title: About IMAP UID with imaplib
Asked: 2010-09-01 06:37 | Tags: python,imap,imaplib | Q_Id: 3,615,561 | Question score: 0 | Views: 5,836 | Answers: 2

Question:
I am trying to move email from one Gmail mailbox to another one. Just curious: will the UID of each email change when it is moved to the new mailbox?

Answer (A_Id 3,636,059, score 4, accepted):
Yes, of course the UID changes when you do a move operation. The new UID for that mail will be the next UID of the destination folder (i.e. if the last mail UID in the destination folder is 9332, then the UID of the moved email will be 9333). Note: the UID changes, but the Message-Id will not change during any operation on that mail.
1
0
I am scripting in python for some web automation. I know i can not automate captchas but here is what i want to do: I want to automate everything i can up to the captcha. When i open the page (usuing urllib2) and parse it to find that it contains a captcha, i want to open the captcha using Tkinter. Now i know that i will have to save the image to my harddrive first, then open it but there is an issue before that. The captcha image that is on screen is not directly in the source anywhere. There is a variable in the source, inside some javascript, that points to another page that has the link to the image, BUT if you load that middle page, the captcha picture for that link changes, so the image associated with that javascript variable is no longer valid. It may be impossible to gather the image using this method, so please enlighten me if you have any ideas on this. Now if I use firebug to load the page, there is a "GET" that is a direct link to the current Captcha image that i am seeing, and i'm wondering if there is anyway to make python or ullib2 see the "GET"s that are going on when a page is loaded, because if that was possible, this would be simple. Please let me know if you have any suggestions.
false
3,623,077
0.379949
0
0
2
Of course the captcha's served by a page which will serve a new one each time (if it was repeated, then once it was solved for one fake userid, a spammer could automatically make a million!). I think you need some "screenshot" functionality to capture the image you want -- there is no cross-platform way to invoke such functionality, but each platform (or desktop manager in the case of Linux, BSD, etc) tends to have one. Or, you could automate the browser (e.g. via SeleniumRC) to "screenshot" (e.g. "print to PDF") things at the right time. (I believe what you're seeing in firebug may be misleading you because it is "showing a snapshot"... just at the html source or DOM level rather than at a screen/bitmap level).
0
1,432
0
0
2010-09-02T00:38:00.000
python,web-applications,firebug,tkinter,urllib2
Is there a way to save a captcha image and view it later in python?
1
1
1
3,623,274
0
1
0
I have a form which when submitted by a user redirects to a thank you page and the file chosen for download begins to download. How can I save this file using python? I can use python's urllib.urlopen to open the url to post to but the html returned is the thank you page, which I suspected it would be. Is there a solution that allows me to grab the contents of the file being served for download from the website and save that locally? Thanks in advance for any help.
true
3,628,454
1.2
0
0
2
If you're getting back a thank you page, the URL to the file is likely to be in there somewhere. Look for <meta http-equiv="refresh"> or JavaScript redirects. Ctrl+F'ing the page for the file name might also help. Some sites may have extra protection in, so if you can't figure it out, post a link to the site, just in case someone can be bothered to look.
0
383
0
1
2010-09-02T15:07:00.000
python,html,download,urllib
Python - How do I save a file delivered from html?
1
1
1
3,628,497
0

Title: logging remove / inspect / modify handlers configured by fileConfig()
Asked: 2010-09-02 20:05 | Tags: python,logging | Q_Id: 3,630,774 | Question score: 70 | Views: 46,204 | Answers: 5

Question:
How can I remove / inspect / modify handlers configured for my loggers using the fileConfig() function? For removing, there is the Logger.removeHandler(hdlr) method, but how do I get the handler in the first place if it was configured from a file?

Answer (A_Id 3,630,800, score 78, accepted):
logger.handlers contains a list with all the handlers of a logger.
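
A short sketch of using that list to inspect and selectively remove handlers after fileConfig() has run; the config file name and logger name are placeholders:

```python
import logging
import logging.config

logging.config.fileConfig("logging.conf")
logger = logging.getLogger("myapp")

# Iterate over a copy, since removeHandler() mutates logger.handlers.
for handler in list(logger.handlers):
    print type(handler).__name__, "at level", handler.level
    if isinstance(handler, logging.FileHandler):
        logger.removeHandler(handler)
```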

Title: Python defaults and using tweepy api
Asked: 2010-09-02 22:39 | Tags: python,tweepy | Q_Id: 3,631,828 | Question score: 0 | Views: 1,565 | Answers: 2

Question:
I am attempting to use the tweepy API to make a Twitter function, and I have two issues. I have little experience with the terminal and Python in general.
1) It installed properly with Python 2.6; however, I can't use it or install it with Python 3.1. When I attempt to install the module in 3.1, it gives me an error that there is no module setuptools. Originally I thought that perhaps I was unable to use the tweepy module with 3.1, however the readme says "Python 3 branch (3.1)", which I assume means it is compatible. When I searched for the setuptools module, which I figured I could load into the new version, there were only modules for up to Python 2.7. How would I install the tweepy API properly on Python 3.1?
2) My default Python when run from the terminal is 2.6.1, and I would like to make it 3.1 so I don't have to type python3.1.

Answer (A_Id 3,631,888, score -1):
Update: the comments below have some solid points against this technique.
2) What OS are you running? Generally, there is a symlink somewhere in your system which points from 'python' to 'pythonx.x', where x.x is the version number preferred by your operating system. On Linux, there is a symlink /usr/bin/python, which points to (on Ubuntu 10.04) /usr/bin/python2.6 on a standard installation. Just manually change the current link to point to the python3.1 binary, and you are done.
1
0
I'm using Python to parse an auction site. If I use browser to open this site, it will go to a loading page, then jump to the search result page automatically. If I use urllib2 to open the webpage, the read() method only return the loading page. Is there any python package could wait until all contents are loaded then read() method return all results? Thanks.
false
3,637,681
0
0
0
0
How does the search page work? If it loads anything using Ajax, you could do some basic reverse engineering and find the URLs involved using Firebug's Net panel or Wireshark and then use urllib2 to load those. If it's more complicated than that, you could simulate the actions JS performs manually without loading and interpreting JavaScript. It all depends on how the search page works. Lastly, I know there are ways to run scripting on pages without a browser, since that's what some functional testing suites do, but my guess is that this could be the most complicated approach.
0
862
0
0
2010-09-03T16:26:00.000
javascript,python
How to parse a web use javascript to load .html by Python?
1
1
2
3,637,740
0

Title: how to crawl a 403 forbidden SNS
Asked: 2010-09-06 01:19 | Tags: python,web-crawler,http-status-code-403 | Q_Id: 3,648,525 | Question score: 0 | Views: 2,076 | Answers: 1

Question:
I'm crawling an SNS with a crawler written in Python. It worked for a long time, but a few days ago the webpages fetched from my servers started coming back ERROR 403 FORBIDDEN. I tried changing the cookie, changing the browser, changing the account, but all failed. And it seems that the forbidden servers are all in the same network segment. What can I do? Steal someone else's IP? Thanks a lot.

Answer (A_Id 3,648,748, score 1, accepted):
Looks like you've been blacklisted at the router level in that subnet, perhaps because you (or somebody else in the subnet) was violating terms of use, robots.txt, the max crawling frequency as specified in a sitemap, or something like that. The solution is not technical, but social: contact the webmaster, be properly apologetic, learn what exactly you (or one of your associates) did wrong, convincingly promise to never do it again, and apologize again until they remove the blacklisting. If you can give that webmaster any reason why they should want to let you crawl that site (e.g., your crawling feeds a search engine that will bring them traffic, or something like this), so much the better!
1
0
I have to migrate data to OpenERP through XMLRPC by using TerminatOOOR. I send a name with value "Rotule right Aurélia". In Python the name with be encoded with value : 'Rotule right Aur\xc3\xa9lia ' But in TerminatOOOR (xmlrpc client) the data is encoded with value 'Rotule middle Aur\357\277\275lia' So in the server side, the data value is not decoded correctly and I get bad data. The terminateOOOR is a ruby plugin for Kettle ( Java product) and I guess it should encode data by utf-8. I just don't know why it happens like this. Any help?
false
3,651,031
0.099668
0
0
1
This issue comes from Kettle. My program is using Kettle to get an Excel file, get the active sheet and transfer the data in that sheet to TerminateOOOR for further handling. At the phase of reading data from Excel file, Kettle can not recognize the encoding then it gives bad data to TerminateOOOR. My work around solution is manually exporting excel to csv before giving data to TerminateOOOR. By doing this, I don't use the feature to mapping excel column name a variable name (used by kettle).
0
1,981
0
1
2010-09-06T11:23:00.000
python,ruby,unicode,xml-rpc
Handling unicode data in XMLRPC
1
1
2
3,698,942
0

Title: What languages are good for writing a web crawler?
Asked: 2010-09-08 01:27 | Tags: php,c++,python,web-crawler | Q_Id: 3,664,016 | Question score: 3 | Views: 6,849 | Answers: 7

Question:
I have substantial PHP experience, although I realize that PHP probably isn't the best language for a large-scale web crawler, because a process can't run indefinitely. What languages do people suggest?

Answer (A_Id 3,664,065, score 1):
You could consider using a combination of Python and PyGtkMozEmbed or PyWebKitGtk plus JavaScript to create your spider. The spidering could be done in JavaScript after the page and all other scripts have loaded. You'd have one of the few web spiders that supports JavaScript, and might pick up some hidden stuff the others don't see. :)

Answer (A_Id 3,664,086, score -3):
C# and C++ are probably the best two languages for this; it's just a matter of which you know better and which is faster (C# is probably easier). I wouldn't recommend Python, JavaScript, or PHP: they will usually be slower at text processing compared to a C-family language, and if you're looking to crawl any significant chunk of the web, you'll need all the speed you can get. I've used C# and the HtmlAgilityPack to do so before; it works relatively well and is pretty easy to pick up. The ability to use a lot of the same commands to work with HTML as you would with XML makes it nice (I had experience working with XML in C#). You might want to test the speed of the available C# HTML-parsing libraries vs. the C++ parsing libraries. I know that in my app I was running through 60-70 fairly messy pages a second and pulling a good bit of data out of each (but that was a site with a pretty constant layout). Edit: I notice you mentioned accessing a database. Both C++ and C# have libraries to work with most common database systems, from SQLite (which would be great for a quick crawler on a few sites) to midrange engines like MySQL and MSSQL, up to the bigger DB engines (I've never used Oracle or DB2 from either language, but it's possible).

Answer (A_Id 3,664,054, score 6):
Any language you can easily use with a good network library and with support for parsing the formats you want to crawl. Those are really the only qualifications.

Answer (A_Id 3,664,049, score 0, accepted):
C++, if you know what you're doing. You will not need a web server and a web application, because a web crawler is just a client, after all.

Title: Python SMTP connection always fails in a VMware Windows machine
Asked: 2010-09-08 03:23 | Tags: python,email,smtp,vmware | Q_Id: 3,664,438 | Question score: 1 | Views: 828 | Answers: 1

Question:
I'm trying to use the SMTP class (smtplib) from Python 2.6.4 to send SMTP email from a Windows XP VMware machine. After the send method is called, I always get this error: "socket.error: [Errno 10061] No connection could be made because the target machine actively refused it." A few things I noticed: the same code works on the physical Windows XP machine, with the user in or not in the domain, connected to the same SMTP server; and if I use an SMTP server set up in the same VM, then it works. Any help is appreciated!

Answer (A_Id 3,668,622, score 2, accepted):
The phrase "...because the target machine actively refused it" usually means there's a firewall that drops any unauthorized connections. Is there a firewall service on the SMTP server that's blocking the WinXP VM's IP address? Or, more likely: is the SMTP server not configured to accept relays from the WinXP VM's IP address?

Title: Is it possible to check if an email contains an attachment just from the e-mail header?
Asked: 2010-09-09 12:05 | Tags: python,email,imap,imaplib | Q_Id: 3,676,344 | Question score: 4 | Views: 2,147 | Answers: 2

Question:
I am developing an email client in Python. Is it possible to check whether an email contains an attachment just from the e-mail header, without downloading the whole e-mail?

Answer (A_Id 3,676,393, score 5, accepted):
"Attachment" is quite a broad term. Is an image for an HTML message an attachment? In general, you can try analyzing the Content-Type header. If it's multipart/mixed, the message most likely contains an attachment.
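
A sketch of that check with imaplib: fetch only the header block (BODY.PEEK[HEADER] skips the body and leaves the message unread) and inspect the parsed Content-Type. The server and credentials are placeholders:

```python
import email
import imaplib

box = imaplib.IMAP4_SSL("imap.example.com")
box.login("user", "password")
box.select("INBOX", readonly=True)
typ, data = box.fetch("1", "(BODY.PEEK[HEADER])")
headers = email.message_from_string(data[0][1])

# multipart/mixed is a strong hint that a real attachment is present.
print headers.get_content_type()
box.logout()
```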

Title: Ntop Python API
Asked: 2010-09-10 15:46 | Tags: python | Q_Id: 3,686,080 | Question score: 0 | Views: 1,189 | Answers: 1

Question:
Can anyone point me towards tutorials for using the Python API in ntop (other than the Luca Deris paper)? In the web interface there is About > Online Documentation > Python Engine, but I think this link has an error. Does anyone have access to that document to re-post online for me?

Answer (A_Id 7,302,974, score 3):
If you have ntop installed, you can look at the example files in /usr/share/ntop/python (that's where they're at in the Ubuntu package version, at least). If you have epydoc installed, you can run make from within the /usr/share/ntop/python/docs directory to generate the documentation. Once you do that, the About > Online Documentation > Python ntop Engine > Python API link will work correctly (it seems like a bug that it requires work on the part of the user to fix that link).

Title: Counting content only in HTML page
Asked: 2010-09-11 10:09 | Tags: python,html | Q_Id: 3,690,560 | Question score: 0 | Views: 82 | Answers: 3

Question:
Is there any way I can parse a website by just viewing the content as displayed to the user in his browser? That is, instead of downloading "page.html" and starting to parse the whole page with all the HTML/JavaScript tags, can I retrieve the version as displayed to users in their browsers? I would like to "crawl" websites and rank them according to keyword popularity (viewing the HTML source version is problematic for that purpose). Thanks! Joel

Answer (A_Id 3,690,576, score 0):
You could get the source and strip the tags out, leaving only non-tag text, which works for almost all pages, except those where JavaScript-generated content is essential.

Answer (A_Id 3,690,865, score 0, accepted):
A browser also downloads page.html and then renders it. You should work the same way: use an HTML parser like lxml.html or BeautifulSoup; with those you can ask for only the text enclosed within tags (and the attributes you like, such as title and alt attributes).
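
A small sketch of the accepted suggestion with BeautifulSoup 3 (the version current at the time); the URL is a placeholder:

```python
import urllib2
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3.x import style

html = urllib2.urlopen("http://example.com/").read()
soup = BeautifulSoup(html)

# Remove script/style elements whose text is never rendered to the user.
for tag in soup.findAll(["script", "style"]):
    tag.extract()

visible_text = " ".join(soup.findAll(text=True))
print visible_text
```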

Title: selenium.wait_for_condition equivalent in Python bindings for WebDriver
Asked: 2010-09-12 10:37 | Tags: python,selenium,webdriver | Q_Id: 3,694,508 | Question score: 8 | Views: 3,806 | Answers: 3

Question:
I'm moving some tests from Selenium to WebDriver. My problem is that I can't find an equivalent for selenium.wait_for_condition. Do the Python bindings have this at the moment, or is it still planned?

Answer (A_Id 3,743,112, score 0):
The Java bindings include a Wait class. This class repeatedly checks for a condition (with sleeps in between) until a timeout is reached. If you can detect the completion of your JavaScript using the normal API, you can take the same approach.
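
A generic stand-in for that Wait class in a dozen lines of plain Python; the driver.execute_script call in the trailing comment assumes the WebDriver Python bindings:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise AssertionError("condition not met within %.1f seconds" % timeout)

# e.g.:
# wait_for(lambda: driver.execute_script("return document.readyState") == "complete")
```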

Title: How can we call CLI executable commands using Python?
Asked: 2010-09-13 09:41 | Tags: python,django,web-services,api,soap | Q_Id: 3,699,268 | Question score: 1 | Views: 170 | Answers: 3

Question:
How can we call CLI executable commands using Python? For example, I have 3 Linux servers at a remote location and I want to execute some commands on those servers, like finding the version of the operating system, or executing other commands. I know this is done through some sort of web service (SOAP or REST) or API, but I am not sure. Could you all please guide me?

Answer (A_Id 3,699,299, score 0):
It depends on how you want to design your software. You could have stand-alone scripts acting as servers, listening for requests on specific ports, or you could use a web server which runs Python scripts, so you just have to access a URL. REST is one option for implementing the latter. You should then look for frameworks for REST development with Python, or, if it's simple logic with not so many possible requests, you can write it on your own as a web script.

Title: Search Crawling "Bot"?
Asked: 2010-09-17 04:04 | Tags: python,windows,search,hyperlink | Q_Id: 3,732,595 | Question score: 0 | Views: 402 | Answers: 2

Question:
I am working on a project that requires me to collect a large list of URLs to websites about certain topics. I would like to write a script that will use Google to search specific terms, then save the URLs from the results to a file. How would I go about doing this? I have used a module called xgoogle, but it always returned no results. I am using Python 2.6 on Windows 7.

Answer (A_Id 3,732,721, score 0):
Make sure that you change the User-Agent of urllib2. The default one tends to get blocked by Google. And make sure that you obey the terms of use of the search engine that you're scripting.
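
A sketch of overriding urllib2's default User-Agent (which identifies itself as Python-urllib); the query URL and UA string are placeholders:

```python
import urllib2

request = urllib2.Request(
    "http://www.google.com/search?q=example+query",
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:10.0)"})
html = urllib2.urlopen(request).read()
```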

Title: TCP server: how to avoid message overlapping
Asked: 2010-09-17 07:20 | Tags: python,networking,tcp | Q_Id: 3,733,363 | Question score: 0 | Views: 772 | Answers: 3

Question:
I am going to write a TCP server; the client sends me XML messages. I am wondering if the condition below can happen, and how to avoid it:
1) the client sends <cmd ...></cmd>
2) the server is busy doing something
3) the client sends <cmd ...></cmd>
4) the server does a recv() and puts the string in a buffer
Will the buffer be filled with <cmd ...></cmd><cmd ...></cmd>, or, even worse, <cmd ...></cmd><cmd ... if my buffer is not big enough? What I want is for the TCP stack to divide the messages into the same pieces the clients sent them in. Is that doable?

Answer (A_Id 3,733,452, score 0, accepted):
You often write "clients" in the plural form: are there several clients connecting to your server? In that case, each client should be using its own TCP stream, and the issue you are describing should never occur. If the various commands are sent from a single client, then you should write your client code so that it waits for the answer to a command before issuing the next one.

Answer (A_Id 3,733,419, score 4):
This is impossible to guarantee at the TCP level, since it only knows about streams. Depending on the XML parser you're using, you should be able to feed it the stream and have it tell you when it has a complete object, leaving the second <cmd... in its buffer until it is closed as well.
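
A sketch of that feed-the-parser idea with the stdlib's expat bindings: count element depth as chunks arrive and fire a callback whenever a top-level command closes. The synthetic <stream> root works around expat's one-document-element rule:

```python
import xml.parsers.expat

class XmlFramer(object):
    """Feed raw TCP chunks in; get one callback per complete top-level element."""

    def __init__(self, on_message):
        self.depth = 0
        self.on_message = on_message
        self.parser = xml.parsers.expat.ParserCreate()
        self.parser.StartElementHandler = self._start
        self.parser.EndElementHandler = self._end
        self.parser.Parse("<stream>", False)  # synthetic root wrapping all commands

    def _start(self, name, attrs):
        self.depth += 1

    def _end(self, name):
        self.depth -= 1
        if self.depth == 1:   # back at the synthetic root: one full <cmd> arrived
            self.on_message(name)

    def feed(self, chunk):
        self.parser.Parse(chunk, False)

def report(name):
    print "received a complete <%s> element" % name

framer = XmlFramer(report)
framer.feed('<cmd id="1"></cmd><cmd id="2')   # second command still incomplete
framer.feed('"></cmd>')                       # ...completed on the next recv()
```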

Title: How to determine if CherryPy is caching responses?
Asked: 2010-09-17 15:15 | Tags: python,http,cherrypy | Q_Id: 3,736,606 | Question score: 3 | Views: 2,007 | Answers: 2

Question:
Is it possible that CherryPy, in its default configuration, is caching the responses to one or more of my request handlers? And, if so, how do I turn that off?

Answer (A_Id 3,737,124, score 4, accepted):
CherryPy has a caching Tool, but it's never on by default. Most HTTP responses are cacheable by default, though, so look for an intermediate cache between your client and server. Look at the browser first. If you're not sure whether or not your content is being cached, compare the Date response header to the current time.

Title: Streaming the state of a Python script to a website
Asked: 2010-09-17 21:47 | Tags: javascript,python,streaming,real-time | Q_Id: 3,739,296 | Question score: 4 | Views: 880 | Answers: 5

Question:
I have a simple workflow [Step 0]->[1]->[2]->...->[Step N]. The master program knows the step (state) it is currently at. I want to stream this in real time to a website (on the local area network), so that when my colleagues open, say, http://thecomputer:8000, they can see a real-time rendering of the current state of our workflow, with any relevant details. I've thought about writing the state of the script to a StringIO object (streaming to it) and using JavaScript to refresh the browser, but I honestly have no idea how to actually do this. Any advice?

Answer (A_Id 3,739,311, score 1):
You could have the Python script write an XML file that you fetch with an Ajax request in your web page, and get the status info from that.
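
The same idea sketched with a JSON state file (JSON is a bit easier to consume from JavaScript than XML); the file name is a placeholder for something under your web root:

```python
import json
import os
import time

def publish_state(step, detail=""):
    """Snapshot the current workflow step to a file the web page can poll."""
    tmp = "status.json.tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "detail": detail, "updated": time.time()}, f)
    os.rename(tmp, "status.json")  # atomic swap: readers never see a half-write

for step in range(5):
    publish_state(step, "running step %d" % step)
    time.sleep(2)                  # stand-in for the real work
publish_state(5, "done")
```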

Title: How can I see if there's an available and active network connection in Python?
Asked: 2010-09-21 20:39 | Tags: python,networking | Q_Id: 3,764,291 | Question score: 162 | Views: 229,860 | Answers: 21

Question:
I want to see if I can access an online API, but for that, I need to have Internet access. How can I see if there's a connection available and active using Python?

Answer (A_Id 3,764,315, score 10):
You can just try to download data, and if the connection fails, you will know that something about the connection isn't fine. Basically, you can't check whether the computer is connected to the internet. There can be many reasons for failure, like wrong DNS configuration, firewalls, or NAT. So even if you make some tests, you can't have a guarantee that you will have a connection to your API until you try.

Answer (A_Id 3,764,759, score 5):
Try the operation you were attempting to do anyway. If it fails, Python should throw an exception to let you know. Trying some trivial operation first to detect a connection would introduce a race condition: what if the internet connection is valid when you test, but goes down before you need to do the actual work?
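
Both answers boil down to "attempt the real request and handle failure"; a minimal sketch with a placeholder URL:

```python
import urllib2

def api_available(url="http://api.example.com/ping", timeout=5):
    """True if the API answered just now; a False tells you nothing permanent."""
    try:
        urllib2.urlopen(url, timeout=timeout)
        return True
    except Exception:   # URLError, HTTPError, socket.timeout, ...
        return False
```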

Title: How to serve data from UDP stream over HTTP in Python?
Asked: 2010-09-22 09:44 | Tags: python,wsgi | Q_Id: 3,768,019 | Question score: 6 | Views: 4,771 | Answers: 3

Question:
I am currently working on exposing data from a legacy system over the web. I have a (legacy) server application that sends and receives data over UDP. The software uses UDP to send sequential updates to a given set of variables in (near) real time (updates every 5-10 ms); thus, I do not need to capture all UDP data -- it is sufficient that the latest update is retrieved. In order to expose this data over the web, I am considering building a lightweight web server that reads/writes UDP data and exposes this data over HTTP. As I am experienced with Python, I am considering using it. The question is the following: how can I (continuously) read data from UDP and send snapshots of it over TCP/HTTP on demand with Python? So basically, I am trying to build a kind of "UDP2HTTP" adapter to interface with the legacy app, so that I wouldn't need to touch the legacy code. A solution that is WSGI-compliant would be much preferred. Of course any tips are very welcome and MUCH appreciated!

Answer (A_Id 3,768,227, score 4, accepted):
"The software uses UDP to send sequential updates to a given set of variables in (near) real time (updates every 5-10 ms); thus, I do not need to capture all UDP data -- it is sufficient that the latest update is retrieved." What you must do is this.
Step 1: Build a Python app that collects the UDP data and caches it into a file. Create the file using XML, CSV or JSON notation. This runs independently as some kind of daemon. This is your listener or collector. Write the file to a directory from which it can be trivially downloaded by Apache or some other web server. Choose names and directory paths wisely and you're done. Done. If you want fancier results, you can do more, but you don't need to, since you're already done.
Step 2: Build a web application that allows someone to request this data being accumulated by the UDP listener or collector. Use a web framework like Django for this. Write as little as possible; Django can serve the flat files created by your listener. You're done. Again.
Some folks think relational databases are important. If so, you can do this, even though you're already done.
Step 3: Modify your data collection to create a database that the Django ORM can query. This requires some learning and some adjusting to get a tidy, simple ORM model. Then write your final Django application to serve the UDP data being collected by your listener and loaded into your Django database.
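
A sketch of the Step 1 collector: keep only the newest datagram and atomically publish it as a JSON snapshot under the web root. The port, file name, and the assumption that the payload is plain text are all placeholders:

```python
import json
import os
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))               # port of the legacy UDP feed

while True:
    payload, addr = sock.recvfrom(65535)   # only the latest update matters
    tmp = "snapshot.json.tmp"
    with open(tmp, "w") as f:
        json.dump({"data": payload, "source": addr[0]}, f)
    os.rename(tmp, "snapshot.json")        # atomic replace under the web root
```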

Title: How to create an email and send it to specific mailbox with imaplib
Asked: 2010-09-22 13:28 | Tags: python,imaplib | Q_Id: 3,769,701 | Question score: 5 | Views: 18,674 | Answers: 4

Question:
I am trying to use Python's imaplib to create an email and send it to a mailbox with a specific name, e.g. INBOX. Does anyone have a good suggestion? :)

Answer (A_Id 3,787,209, score -6):
No idea how they do it, but doesn't Microsoft Outlook let you move an email from a local folder to a remote IMAP folder?
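
What clients like Outlook use for that move is the IMAP APPEND command, which imaplib exposes directly; a sketch with placeholder server, credentials, and addresses:

```python
import imaplib
import time
from email.mime.text import MIMEText

msg = MIMEText("body text")
msg["Subject"] = "hello"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"

box = imaplib.IMAP4_SSL("imap.example.com")
box.login("user", "password")
# APPEND uploads the raw message straight into the named mailbox.
box.append("INBOX", "", imaplib.Time2Internaldate(time.time()), msg.as_string())
box.logout()
```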

Title: network programming in python
Asked: 2010-09-26 13:49 | Tags: python | Q_Id: 3,798,067 | Question score: 0 | Views: 387 | Answers: 4

Question:
How do I run a Python program that a client receives from a server, without writing it into a new Python file?

Answer (A_Id 3,805,092, score 0):
Dcolish's answer is good. I'm not sure the idea of executing code that comes in on a network interface is good in itself, though: you will need to take care to verify that you can trust the sending party, especially if this interface is going to be exposed to the Internet or really any production network.
1
0
I'm using web2py for an intranet site and need to get current login windows user id in my controller. Whether any function is available?
false
3,798,606
0.066568
0
0
1
If you mean you need code at the server to know the windows id of the current browser user, web2py isn't going to be able to tell you that. Windows authentication has nothing to do with web protocols.
0
996
0
3
2010-09-26T16:04:00.000
python,windows,web2py
How to get windows user id in web2py for an intranet application?
1
1
3
3,798,630
0
1
0
I changed my domain from abc.com to xyz.com. After that my facebook authentication is not working. It is throwing a key error KeyError: 'access_token'I am using python as my language.
false
3,806,082
0
1
0
0
You probably need to update the domain in the Facebook settings for the API key, which allows you access.
0
186
0
0
2010-09-27T17:08:00.000
python,facebook
My facebook authentication is not working?
1
1
1
3,806,101
0
0
0
Is there any python function that validates E-mail addresses, aware of IDN domains ? For instance, [email protected] should be as correct as user@zääz.de or user@納豆.ac.jp Thanks.
false
3,806,393
0.066568
1
0
1
It is very difficult to validate an e-mail address because the syntax is so flexible. The best strategy is to send a test e-mail to the entered address.
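If you still want a quick pre-check before sending that test mail, here is a deliberately permissive sketch (my assumption: the address arrives as a UTF-8 byte string; this is nowhere near full RFC validation):

    def looks_like_email(address):
        if address.count('@') != 1:
            return False
        local, domain = address.split('@')
        if not local:
            return False
        try:
            # IDNA encoding accepts zääz.de and 納豆.ac.jp but rejects empty/illegal labels
            domain.decode('utf-8').encode('idna')
        except UnicodeError:
            return False
        return True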
0
548
0
0
2010-09-27T17:43:00.000
python
Function to validate an E-mail (IDN aware)
1
1
3
3,806,787
0
0
0
I've been searching on this but can't seem to find an exact answer (most get into more complicated things like multithreading, etc). I just want to do something like a try/except statement where, if the process doesn't finish within X seconds, it throws an exception. EDIT: The reason for this is that I am using website testing software (Selenium) with a configuration that sometimes causes it to hang. It doesn't throw an error, doesn't time out, or do anything, so I have no way of catching it. I am wondering what the best way is to determine that this has occurred so I can move on in my application, so I was thinking if I could do something like, "if this hasn't finished by X seconds... move on".
false
3,810,869
0.132549
0
0
2
You can't do it without some sort of multithreading or multiprocessing, even if that's hidden under some layers of abstraction, unless that "process" you're running is specifically designed for asynchronicity and calls back to a known function once in a while. If you describe what that process actually is, it will be easier to provide real solutions. I don't think you appreciate the power of Python when it comes to implementations that are succinct while being complete. This may take just a few lines of code to implement, even if using multithreading/multiprocessing.
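One of those few-line implementations, hedged: signal.alarm is Unix-only and works only in the main thread, and run_selenium_step is a hypothetical stand-in for the call that hangs.

    import signal

    class Timeout(Exception):
        pass

    def raise_timeout(signum, frame):
        raise Timeout()

    signal.signal(signal.SIGALRM, raise_timeout)
    signal.alarm(30)                 # seconds until SIGALRM fires
    try:
        run_selenium_step()          # the call that sometimes hangs
        signal.alarm(0)              # cancel the alarm on success
    except Timeout:
        print 'step hung; moving on'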
1
6,285
0
4
2010-09-28T08:15:00.000
python,error-handling,timeout
Python, Timeout of Try, Except Statement after X number of seconds?
1
1
3
3,810,878
0
1
0
There is a URL of a page on the Internet. I need to get a screenshot of this page (no matter in which browser). I need a script (PHP, or Python (even the Django framework)) that receives the URL (a string) and outputs a screenshot file (gif, png, or jpg). UPD: I need to dynamically create a page where, opposite each URL, a screenshot of the page at that URL is placed.
false
3,811,674
0
0
0
0
If you are familiar with Python, you can use PyQt4. This library supports taking a screenshot of a URL.
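A hedged sketch of that approach (the usual QWebPage pattern, written from memory; the URL and output path are placeholders):

    import sys
    from PyQt4.QtCore import QUrl
    from PyQt4.QtGui import QApplication, QImage, QPainter
    from PyQt4.QtWebKit import QWebPage

    app = QApplication(sys.argv)
    page = QWebPage()

    def render(ok):                  # ok is loadFinished's success flag
        page.setViewportSize(page.mainFrame().contentsSize())
        image = QImage(page.viewportSize(), QImage.Format_ARGB32)
        painter = QPainter(image)
        page.mainFrame().render(painter)
        painter.end()
        image.save('shot.png')
        app.quit()

    page.loadFinished.connect(render)
    page.mainFrame().load(QUrl('http://example.com/'))
    app.exec_()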
0
31,489
0
7
2010-09-28T10:08:00.000
php,python,django,url,screenshot
Convert URL to screenshot (script)
1
1
6
21,106,018
0
0
0
How can I know if a node that is being accessed using TCP socket is alive or if the connection was interrupted and other errors? Thanks!
false
3,813,451
0.379949
0
0
2
You can't. Any intermediate nodes can drop your packets or the reply packets from the remote node.
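What you can do is detect silence at the application level. A hedged sketch of the usual heartbeat workaround (the message format and timeout are my choices, and the peer has to be written to answer):

    import socket

    def peer_alive(sock, timeout=5.0):
        """Return False if the peer fails to answer a ping within the timeout."""
        sock.settimeout(timeout)
        try:
            sock.sendall('PING\n')
            return sock.recv(16).startswith('PONG')
        except (socket.timeout, socket.error):
            return False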
0
240
0
1
2010-09-28T13:51:00.000
python,sockets,system,distributed
Python socket programming
1
1
1
3,813,510
0
0
0
I wish to set the namespace prefix in xml.etree. I found register_namespace(prefix, url) on the Web but this threw "unknown attribute". I have also tried nsmap=NSMAP but this also fails. I'd be grateful for example syntax that shows how to add specified namespace prefixes
true
3,814,365
1.2
0
0
1
register_namespace was only introduced in lxml 2.3 (still beta). I believe you can provide an nsmap parameter (a dictionary with prefix-to-URI mappings) when creating an element, but I don't think you can change it for an existing element. (There is a .nsmap property on the element, but changing that doesn't seem to work; there is also a .prefix property on the element, but that's read-only.)
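A hedged lxml sketch of supplying the mapping at creation time (the URI and prefix are placeholders):

    from lxml import etree

    NS = 'http://example.com/ns'
    root = etree.Element('{%s}root' % NS, nsmap={'ex': NS})
    etree.SubElement(root, '{%s}child' % NS).text = 'hello'
    print etree.tostring(root)
    # <ex:root xmlns:ex="http://example.com/ns"><ex:child>hello</ex:child></ex:root>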
0
1,128
0
3
2010-09-28T15:28:00.000
python,xml.etree
how to set namespace prefixes in xml.etree
1
1
1
3,814,788
0
0
0
I wrote a Python script to fetch a page from the web; read/write permission seems to be enough, so my question is: when do we need execute permission?
false
3,822,336
1
0
0
6
Read/write is enough if you want to run it by typing python file.py. If you want to run it directly as if it were a compiled program, e.g. ./file.py, then you need execute permission (and the appropriate hash-bang line at the top).
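For example (a trivial sketch; the env-based interpreter path is the usual convention):

    #!/usr/bin/env python
    # file.py -- after `chmod +x file.py` this runs directly as ./file.py;
    # without the execute bit you would invoke it as `python file.py` instead.
    print 'hello'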
0
6,642
0
4
2010-09-29T14:01:00.000
python,permissions,chmod
when we need chmod +x file.py
1
3
3
3,822,354
0
0
0
I wrote a Python script to fetch a page from the web; read/write permission seems to be enough, so my question is: when do we need execute permission?
false
3,822,336
0
0
0
0
If you want to be able to run it directly with $ file.py then you'll need the execute bit set. Otherwise you can run it with $ python file.py.
0
6,642
0
4
2010-09-29T14:01:00.000
python,permissions,chmod
when we need chmod +x file.py
1
3
3
3,822,352
0
0
0
I wrote a Python script to fetch a page from the web; read/write permission seems to be enough, so my question is: when do we need execute permission?
true
3,822,336
1.2
0
0
5
It's required if you need to run the script this way: ./file.py. Keep in mind, though, that you need to put the path of Python at the very top of the script: #!/usr/bin/python. Also make sure that path is correct; to find it, execute: which python.
0
6,642
0
4
2010-09-29T14:01:00.000
python,permissions,chmod
when we need chmod +x file.py
1
3
3
3,822,410
0
0
0
The situation is that I have a small datacenter, with each server running Python instances. It's not your usual distributed worker setup, as each server has a specific role with an appropriate long-running process. I'm looking for good ways to implement the cross-server communication. REST seems like overkill. XML-RPC seems nice, but I haven't played with it yet. What other libraries should I be looking at to get this done? Requirements: Computation servers crunch numbers in the background. Other servers would like to occasionally ask them for values, based upon their calculation sets. I know this seems pretty well aligned with a REST mentality, but I'm curious about other options.
false
3,823,420
0.099668
0
0
1
It wasn't obvious from your question but if getting answers back synchronously doesn't matter to you (i.e., you are just asking for work to be performed) you might want to consider just using a job queue. It's generally the easiest way to communicate between hosts. If you don't mind depending on AWS using SQS is super simple. If you can't depend on AWS then you might want to try something like RabbitMQ. Many times problems that we think need to be communicated synchronously are really just queues in disguise.
0
196
0
0
2010-09-29T15:57:00.000
python,network-protocols
What are good specs/libraries for closed network communication in python?
1
1
2
3,824,612
0
0
0
When I call selenium.get_text("foo") on a certain element, it returns a different value depending on which browser I am working in, due to the way each browser handles newlines. Example: an element's string is "hello[newline]how are you today?[newline]Very well, thank you." When selenium gets this back from IE it gets the string "hello\nhow are you today?\nVery well, thank you." When selenium gets this back from Firefox it gets the string "hello\n how are you today?\n Very well, thank you." (Notice that IE changes [newline] into '\n' and Firefox changes it into '\n ') Is there any way, using selenium/python, that I can easily strip out this discrepancy? I thought about using .replace("\n ", "\n"), but that would cause issues if there was an intended space after a newline (for whatever reason). Any ideas?
true
3,824,734
1.2
0
0
0
I ended up just doing a check of what browser I was running and then returning the string with the '\n ' replaced with '\n' if the browser was firefox.
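Roughly like this (hedged; how you detect the browser depends on your setup, and here I assume the startup string passed to selenium is at hand):

    text = selenium.get_text('foo')
    if browser_string.startswith('*firefox'):      # hypothetical variable
        text = text.replace('\n ', '\n')           # undo Firefox's extra space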
0
1,614
0
1
2010-09-29T18:31:00.000
python,selenium
Selenium and Python: remove \n from returned selenium.get_text()
1
1
1
3,825,411
0
0
0
I have a script that runs continuously when invoked and every 5 minutes checks my Gmail inbox. To get it to run every 5 minutes I am using the time.sleep() function. However, I would like the user to be able to end the script at any time by pressing q, which it seems can't be done when using time.sleep(). Any suggestions on how I can do this? Ali
false
3,836,620
0
0
0
0
If you really wanted to (and wanted to waste a lot of resources), you could cut your loop into 200 ms chunks. So sleep 200 ms, check input, repeat until five minutes elapse, and then check your inbox. I wouldn't recommend it, though. While it's sleeping, though, the process is blocked and won't receive input until the sleep ends. Oh, as an added note, if you hit the key while it's sleeping, it should still go into the buffer, so it'll get pulled out when the sleep ends and input is finally read, IIRC.
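A hedged Unix sketch of that chunked loop; select() waits on stdin with a 200 ms timeout, so the process wakes regularly (with this approach the user has to press q and then Enter):

    import select, sys, time

    def sleep_or_quit(total_seconds):
        deadline = time.time() + total_seconds
        while time.time() < deadline:
            ready, _, _ = select.select([sys.stdin], [], [], 0.2)
            if ready and sys.stdin.readline().strip() == 'q':
                return True                    # user asked to quit
        return False                           # interval elapsed normally

    while not sleep_or_quit(300):              # 5 minutes
        check_gmail_inbox()                    # hypothetical existing function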
0
5,470
0
4
2010-10-01T04:51:00.000
python,continuous
Continous loop and exiting in python
1
1
3
3,836,650
0
0
0
I just need to write a simple python CGI script to parse the contents of a POST request containing JSON. This is only test code so that I can test a client application until the actual server is ready (written by someone else). I can read the cgi.FieldStorage() and dump the keys() but the request body containing the JSON is nowhere to be found. I can also dump the os.environ() which provides lots of info except that I do not see a variable containing the request body. Any input appreciated. Chris
false
3,836,828
1
1
0
8
Notice that if you call cgi.FieldStorage() earlier in your code, you can't get the body data from stdin afterwards, because stdin can only be read once.
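So read the body yourself, once, before anything else touches stdin. A hedged sketch:

    import json, os, sys

    length = int(os.environ.get('CONTENT_LENGTH', 0))
    body = sys.stdin.read(length)              # the raw request body
    data = json.loads(body)                    # the POSTed JSON

    print 'Content-Type: application/json'
    print
    print json.dumps({'received': data})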
0
8,362
0
11
2010-10-01T05:49:00.000
python,parsing,cgi,request
How to parse the "request body" using python CGI?
1
1
2
39,910,366
0
0
0
I have a list of xml examples I would like to turn into schemas (xsd files). Exactly what the trang tool does (http://www.thaiopensource.com/relaxng/trang.html). I don't like calling trang from my script (i.e doing os.system('java -jar trang...')) - is there a python package I can use instead?
false
3,849,632
0
0
0
0
If you are running Jython (http://jython.org/) then you could import trang and run it internally.
0
387
0
3
2010-10-03T11:59:00.000
python,xml,xsd
Python: Is there a way to generate xsd files based on xml examples
1
1
1
5,188,517
0
0
0
I have a server with two separate Ethernet connections. When I bind a socket in python it defaults to one of the two networks. How do I pull a multicast stream from the second network in Python? I have tried calling bind using the server's IP address on the second network, but that hasn't worked.
true
3,859,090
1.2
0
0
0
I figured it out. It turns out that the piece I was missing was adding the interface to the mreq structure that is used in adding membership to a multicast group.
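For anyone hitting the same wall, a hedged sketch of that fix (group, port, and the second NIC's address are placeholders):

    import socket

    GROUP, PORT = '224.1.1.1', 5007
    IFACE = '192.168.2.10'                     # IP of the second interface

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', PORT))
    # mreq = group address + interface address; the second half picks the NIC
    # instead of letting the kernel default to INADDR_ANY
    mreq = socket.inet_aton(GROUP) + socket.inet_aton(IFACE)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = sock.recvfrom(4096)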
0
9,358
0
6
2010-10-04T21:04:00.000
python,sockets,networking
Choosing multicast network interface in Python
1
1
4
3,873,419
0
0
0
How can I call shutdown() in a SocketServer after receiving a certain message "exit"? As I know, the call to serve_forever() will block the server. Thanks!
false
3,863,281
0.379949
0
0
4
No, serve_forever() checks a flag on a regular basis (every 0.5 s by default). Calling shutdown() raises this flag and causes serve_forever() to end.
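A hedged sketch; one caveat is that shutdown() blocks until the loop exits, so it must not be called from the thread running serve_forever() or it deadlocks, hence the helper thread:

    import threading
    import SocketServer                        # Python 2 module name

    class Handler(SocketServer.StreamRequestHandler):
        def handle(self):
            if self.rfile.readline().strip() == 'exit':
                # shut down from another thread to avoid the deadlock
                threading.Thread(target=self.server.shutdown).start()

    server = SocketServer.TCPServer(('127.0.0.1', 9000), Handler)
    server.serve_forever()                     # returns once shutdown() is called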
0
3,881
0
6
2010-10-05T11:40:00.000
python,sockets,socketserver
Python SocketServer
1
1
2
3,863,539
0
1
0
I'm using Scrapy to crawl a news website on a daily basis. How do I restrict Scrapy from scraping already-scraped URLs? Also, is there any clear documentation or are there examples for SgmlLinkExtractor?
false
3,871,613
0.039979
0
0
1
I think jama22's answer is a little incomplete. In the snippet if self.FILTER_VISITED in x.meta:, you can see that you require FILTER_VISITED in your Request instance in order for that request to be ignored. This is to ensure that you can differentiate between links that you want to traverse and item links that, well, you don't want to see again.
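If you need the filter to survive between daily runs (the built-in duplicate filter only lasts for one crawl), here is a hedged do-it-yourself sketch, independent of that snippet:

    import os

    SEEN_FILE = 'seen_urls.txt'                # hypothetical path

    def load_seen():
        if os.path.exists(SEEN_FILE):
            return set(line.strip() for line in open(SEEN_FILE))
        return set()

    def mark_seen(url):
        open(SEEN_FILE, 'a').write(url + '\n')

    # in the spider: skip a response if response.url is in load_seen(),
    # otherwise yield its items and call mark_seen(response.url)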
0
9,648
0
15
2010-10-06T10:38:00.000
python,web-crawler,scrapy
Scrapy - how to identify already scraped urls
1
2
5
8,830,983
0
1
0
I'm using Scrapy to crawl a news website on a daily basis. How do I restrict Scrapy from scraping already-scraped URLs? Also, is there any clear documentation or are there examples for SgmlLinkExtractor?
false
3,871,613
0.039979
0
0
1
Scrapy can auto-filter URLs that have already been scraped, can't it? Note, though, that different URLs pointing to the same page, such as "www.xxx.com/home/" and "www.xxx.com/home/index.html", will not be filtered.
0
9,648
0
15
2010-10-06T10:38:00.000
python,web-crawler,scrapy
Scrapy - how to identify already scraped urls
1
2
5
13,578,588
0
0
0
I am running my code on multiple VPSes (with more than one IP, which are set up as aliases to the network interfaces) and I am trying to figure out a way such that my code acquires the IP addresses from the network interfaces on the fly and bind to it. Any ideas on how to do it in python without adding a 3rd party library ? Edit I know about socket.gethostbyaddr(socket.gethostname()) and about the 3rd party package netifaces, but I am looking for something more elegant from the standard library ... and parsing the output of the ifconfig command is not something elegant :)
false
3,881,951
0
0
0
0
The IP addresses are assigned to your VPSes; there is no possibility to change them on the fly. You have to open an SSH tunnel to, or install a proxy on, your VPSes. I think an SSH tunnel would be the best way to do it, and then use it as a SOCKS5 proxy from Python.
0
118
0
1
2010-10-07T13:10:00.000
python
figuring out how to get all of the public ips of a machine
1
1
2
3,882,193
0
1
0
I want to show .ppt (PowerPoint) files uploaded by my user on my website. I could do this by converting them into Flash files, then showing the Flash files on the web page. But I don't want to use Flash to do this. I want to show it, like google docs shows, without using Flash. I've already solved the problem for .pdf files by converting them into images using ImageMagick, but now I have trouble with .ppt files.
true
3,882,249
1.2
0
0
0
Now I found a solution for showing a .ppt file on my website without using Flash. The solution is: just convert the .ppt file to .pdf using any language or software (e.g. OpenOffice), then use ImageMagick to convert that .pdf into images and show them on your web page. Once again, thanks to you all for answering my question.
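A hedged sketch of that pipeline; unoconv (which drives OpenOffice) and ImageMagick's convert are external tools that must be installed, and the file names are placeholders:

    import subprocess

    subprocess.check_call(['unoconv', '-f', 'pdf', 'slides.ppt'])       # .ppt -> slides.pdf
    subprocess.check_call(['convert', 'slides.pdf', 'slide-%02d.png'])  # one PNG per page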
0
1,773
0
2
2010-10-07T13:45:00.000
java,c++,python,powerpoint,google-docs
How google docs shows my .PPT files without using a flash viewer?
1
1
3
3,899,697
0
0
0
On my linux machine, 1 of 3 network interfaces may be actually connected to the internet. I need to get the IP address of the currently connected interface, keeping in mind that my other 2 interfaces may be assigned IP addresses, just not be connected. I can just ping a website through each of my interfaces to determine which one has connectivity, but I'd like to get this faster than waiting for a ping time out. And I'd like to not have to rely on an external website being up. Update: All my interfaces may have ip addresses and gateways. This is for an embedded device. So we allow the user to choose between say eth0 and eth1. But if there's no connection on the interface that the user tells us to use, we fall back to say eth2 which (in theory) will always work. So what I need to do is first check if the user's selection is connected and if so return that IP. Otherwise I need to get the ip of eth2. I can get the IPs of the interfaces just fine, it's just determining which one is actually connected.
false
3,885,160
0
0
0
0
If the default gateway for the system is reliable, then grab that from the output of route -n; the line that contains " UG " (note the spaces) will also contain the IP of the gateway and the interface name of the active interface.
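A hedged parser for that; the column positions assume the usual route -n layout (Destination Gateway Genmask Flags Metric Ref Use Iface):

    import subprocess

    def default_route():
        out = subprocess.Popen(['route', '-n'],
                               stdout=subprocess.PIPE).communicate()[0]
        for line in out.splitlines():
            fields = line.split()
            if len(fields) == 8 and 'UG' in fields[3]:
                return fields[1], fields[7]    # gateway IP, interface name
        return None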
0
2,443
1
1
2010-10-07T19:24:00.000
python,linux,networking,ip-address
Determine IP address of CONNECTED interface (linux) in python
1
1
2
3,885,255
0
0
0
Using just Python, is it possible to use a USB flash drive to serve files locally to a browser, and save information from the web? Ideally I would only need Python. Where would I start?
false
3,885,519
0
0
0
0
This doesn't seem much different than serving files from a local hard drive. You could map the thumbdrive to always be something not currently used on your machine (like U:).
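With just the standard library it comes down to something like this hedged sketch (Python 2 module names; the mount point is an assumption, e.g. U:\ on Windows):

    import os
    import SimpleHTTPServer
    import SocketServer

    os.chdir('/media/usbdrive')                # serve the flash drive's contents
    httpd = SocketServer.TCPServer(('127.0.0.1', 8000),
                                   SimpleHTTPServer.SimpleHTTPRequestHandler)
    httpd.serve_forever()                      # then browse http://127.0.0.1:8000/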
0
195
0
0
2010-10-07T20:13:00.000
python,web-services,usb
Is it possible to use a USB flash drive to serve files locally to a browser?
1
1
2
3,885,544
0
0
0
I have a client for a web interface to a long-running process. I'd like the output from that process to be displayed as it comes. This works great with urllib.urlopen(), but it doesn't have a timeout parameter. On the other hand, with urllib2.urlopen() the output is buffered. Is there an easy way to disable that buffering?
true
3,888,812
1.2
0
0
0
A quick hack that has occurred to me is to use urllib.urlopen() with threading.Timer() to emulate a timeout. But that's only a quick and dirty hack.
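Spelled out, the hack looks roughly like this (hedged; the URL and the 30 s deadline are placeholders, and closing the socket mid-read will surface as an error in the loop):

    import threading, urllib

    conn = urllib.urlopen('http://example.com/stream')
    timer = threading.Timer(30.0, conn.close)      # emulated timeout
    timer.start()
    for line in iter(conn.readline, ''):
        print line.rstrip()                        # lines appear as they arrive
    timer.cancel()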
0
1,600
1
1
2010-10-08T08:20:00.000
python,urllib2,urllib,buffering,urlopen
unbuffered urllib2.urlopen
1
1
2
3,888,827
0
0
0
Is it possible to stream my webcam from my local machine, which is connected to the internet, so it shows up on my website, without using a media server or something similar?
false
3,890,271
0.099668
0
0
1
You could do it with some kind of java applet or flash/silverlight application, just look at sites like "chat roulette"
0
807
0
0
2010-10-08T12:03:00.000
python,webcam
How to stream my webcam through my site?
1
1
2
3,890,377
0
1
0
OK, so I'm using websockets to let JavaScript talk to Python, and that works very well, BUT the data I need to send often has several parts, like an array (username, time, text), so how can I send it? I originally thought to encode each part in base64 or urlencode, then use a character like | which those encoding methods will never produce, and split on that. Unfortunately I can't find an encoding method that both Python and JavaScript can do. So the question: is there an encoding method both can do, OR is there a different, better way to send the data? I haven't really done anything like this before. (I have done AJAX requests, and I send that data URL-encoded.) Also, I'm not sending miles of text, about 100 bytes at a time if that. Thank you! edit Most comments point to JSON, so: what's the best converter to use for JavaScript, since JavaScript can't natively convert a string to JSON, or the other way round? Finished Well, JavaScript does have a native way to convert objects to strings, it's just hidden from the world: JSON.stringify(obj, [replacer], [space]) to convert to a string and JSON.parse(string, [reviver]) to convert it back.
false
3,890,390
1
0
0
7
JSON is definitely the way to go. It has a very small overhead and is capable of storing almost any kind of data. I am not a Python expert, but I am sure that there is some kind of encoder/decoder available.
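There is: the json module has been in the standard library since Python 2.6 (simplejson before that). A quick sketch:

    import json

    wire = json.dumps({'username': 'alice', 'time': 1286539200, 'text': 'hi'})
    # send `wire` over the websocket; the browser calls JSON.parse(wire).
    # For traffic the other way, after the browser's JSON.stringify(obj):
    message = json.loads(wire)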
1
1,908
0
1
2010-10-08T12:22:00.000
javascript,python,sockets,encoding
Python to javascript communication
1
1
2
3,890,407
0
0
0
I need to expose an RS232 connection to clients via a network socket. I plan to write, in Python, a TCP socket server that will listen on some port, allow clients to connect, and manage and control requests and replies to and from the RS232 port. My question is: how do I synchronize the clients? Each client will send some string to the serial port, and after the reply is sent back I need to return the result to that client. Only then do I need to process the next request. How do I synchronize access to the serial port?
false
3,900,403
0.099668
0
0
1
The simplest way is to simply accept a connection, handle the request, and close the connection. This way, your program handles only one request at a time. An alternative is to use locking or semaphores, to prevent multiple clients accessing the RS232 port simultaneously.
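A hedged sketch of the locking variant, assuming a pyserial-style port object with write()/readline() and newline-terminated replies:

    import threading

    serial_lock = threading.Lock()             # one lock guards the single port

    def transact(port, command):
        serial_lock.acquire()
        try:
            port.write(command)
            return port.readline()             # reply goes back to this client only
        finally:
            serial_lock.release()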
0
1,048
0
1
2010-10-10T13:03:00.000
python,sockets,serial-port
Writing a TCP to RS232 driver
1
1
2
3,900,446
0
0
0
maybe this is a noob question, but I'm receiving some data over TCP and when I look at the string I get the following: \x00\r\xeb\x00\x00\x00\x00\x01t\x00 What is that \r character, and what does the t in \x01t mean? I've tried Googling, but I'm not sure what to Google for... thanks.
false
3,906,903
1
0
0
9
\r is a carriage return (0x0d), the t is a t.
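One way to see what each byte really is (Python 2 sketch): repr() only escapes bytes that have no printable form, which is why 't' (0x74) shows as itself while 0x0d shows as \r.

    s = '\x00\r\xeb\x00\x00\x00\x00\x01t\x00'
    print [hex(ord(c)) for c in s]
    # ['0x0', '0xd', '0xeb', '0x0', '0x0', '0x0', '0x0', '0x1', '0x74', '0x0']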
1
819
0
0
2010-10-11T14:00:00.000
python,networking,character,bits
Non-binary(hex) characters in string received over TCP with Python
1
1
3
3,906,938
0
0
0
Let's say there is a server on the internet that one can send a piece of code to for evaluation. At some point the server takes all code that has been submitted, and starts running and evaluating it. However, at some point it will definitely bump into "os.system('rm -rf *')" sent by some evil programmer. Apart from "rm -rf" you could expect people to try using the server to send spam or DoS someone, or fool around with "while True: pass" kind of things. Is there a way to cope with such unfriendly/untrusted code? In particular I'm interested in a solution for Python. However if you have info for any other language, please share.
false
3,910,223
0.057081
0
0
2
It's impossible to provide an absolute solution for this because the definition of 'bad' is pretty hard to nail down. Is opening and writing to a file bad or good? What if that file is /dev/ram? You can profile signatures of behavior, or you can try to block anything that might be bad, but you'll never win. Javascript is a pretty good example of this: people run arbitrary javascript code all the time on their computers -- it's supposed to be sandboxed but there's all sorts of security problems and edge conditions that crop up. I'm not saying don't try, you'll learn a lot from the process. Many companies have spent millions (Intel just spent billions on McAfee) trying to understand how to detect 'bad code' -- and every day machines running McAfee anti-virus get infected with viruses. Python code isn't any less dangerous than C. You can run system calls, bind to C libraries, etc.
0
5,900
0
8
2010-10-11T21:53:00.000
python,trusted-vs-untrusted
sandbox to execute possibly unfriendly python code
1
4
7
3,910,435
0
0
0
Let's say there is a server on the internet that one can send a piece of code to for evaluation. At some point the server takes all code that has been submitted, and starts running and evaluating it. However, at some point it will definitely bump into "os.system('rm -rf *')" sent by some evil programmer. Apart from "rm -rf" you could expect people to try using the server to send spam or DoS someone, or fool around with "while True: pass" kind of things. Is there a way to cope with such unfriendly/untrusted code? In particular I'm interested in a solution for Python. However if you have info for any other language, please share.
false
3,910,223
0
0
0
0
I think a fix like this is going to be really hard, and it reminds me of a lecture I attended about the benefits of programming in a virtual environment. If you're doing it virtually, it's cool if they bugger it. It won't solve a while True: pass, but rm -rf / won't matter.
0
5,900
0
8
2010-10-11T21:53:00.000
python,trusted-vs-untrusted
sandbox to execute possibly unfriendly python code
1
4
7
3,910,850
0
0
0
Let's say there is a server on the internet that one can send a piece of code to for evaluation. At some point the server takes all code that has been submitted, and starts running and evaluating it. However, at some point it will definitely bump into "os.system('rm -rf *')" sent by some evil programmer. Apart from "rm -rf" you could expect people to try using the server to send spam or DoS someone, or fool around with "while True: pass" kind of things. Is there a way to cope with such unfriendly/untrusted code? In particular I'm interested in a solution for Python. However if you have info for any other language, please share.
false
3,910,223
0.057081
0
0
2
I would seriously consider virtualizing the environment to run this stuff, so that exploits in whatever mechanism you implement can be firewalled one more time by the configuration of the virtual machine. Number of users and what kind of code you expect to test/run would have considerable influence on choices btw. If they aren't expected to link to files or databases, or run computationally intensive tasks, and you have very low pressure, you could be almost fine by just preventing file access entirely and imposing a time limit on the process before it gets killed and the submission flagged as too expensive or malicious. If the code you're supposed to test might be any arbitrary Django extension or page, then you're in for a lot of work probably.
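One small piece of that firewall, as a hedged sketch (Unix-only, and by itself nowhere near a sandbox): run each submission in a child process with a hard CPU limit so that "while True: pass" gets killed instead of hanging the server.

    import resource
    import subprocess

    def run_limited(path, cpu_seconds=5):
        def limits():
            # hard cap on CPU time; the kernel kills the child when exceeded
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return subprocess.call(['python', path], preexec_fn=limits)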
0
5,900
0
8
2010-10-11T21:53:00.000
python,trusted-vs-untrusted
sandbox to execute possibly unfriendly python code
1
4
7
3,910,730
0
0
0
Let's say there is a server on the internet that one can send a piece of code to for evaluation. At some point the server takes all code that has been submitted, and starts running and evaluating it. However, at some point it will definitely bump into "os.system('rm -rf *')" sent by some evil programmer. Apart from "rm -rf" you could expect people to try using the server to send spam or DoS someone, or fool around with "while True: pass" kind of things. Is there a way to cope with such unfriendly/untrusted code? In particular I'm interested in a solution for Python. However if you have info for any other language, please share.
false
3,910,223
0
0
0
0
Unless I'm mistaken (and I very well might be), this is much of the reason behind the way Google changed Python for the App Engine. You run Python code on their server, but they've removed the ability to write to files. All data is saved in the "nosql" database. It's not a direct answer to your question, but an example of how this problem has been dealt with in some circumstances.
0
5,900
0
8
2010-10-11T21:53:00.000
python,trusted-vs-untrusted
sandbox to execute possibly unfriendly python code
1
4
7
3,910,895
0
0
0
I'm using urllib.urlopen to read a file from a URL. What is the best way to get the filename? Do servers always return the Content-Disposition header? Thanks.
true
3,912,910
1.2
0
0
1
It's an optional header, so no. See if it exists, and if not then fall back to checking the URL.
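A hedged sketch of that fallback chain (the header parsing here is deliberately naive):

    import posixpath, urllib, urlparse

    resp = urllib.urlopen(url)                 # url is your download URL
    cd = resp.info().getheader('Content-Disposition')
    if cd and 'filename=' in cd:
        filename = cd.split('filename=')[-1].strip(' ";')
    else:
        filename = posixpath.basename(urlparse.urlsplit(url).path)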
0
575
0
1
2010-10-12T08:49:00.000
python,urllib,urlopen
Get filename when using urllib.urlopen
1
1
1
3,912,922
0
0
0
Is there any way in python for S60 (using the python 2.5.4 codebase) to track the amount of data transferred over the mobile device's internet connection?
true
3,928,685
1.2
0
0
2
The Symbian C++ API has such a capability, so it is possible to write a Python library for it; whether one already exists, I do not know... BR STeN
0
83
0
0
2010-10-13T22:48:00.000
python,symbian,pys60
Measuring internet data transfers
1
1
1
4,002,429
0
1
0
Disclaimer here: I'm really not a programmer. I'm eager to learn, but my experience is pretty much basic stuff on a C64 20 years ago and a couple of days of learning Python. I'm just starting out on a fairly large (for me as a beginner) screen-scraping project. So far I have been using Python with mechanize+lxml for my browsing/parsing. Now I'm encountering some really JavaScript-heavy pages that don't show anything without JavaScript enabled, which means trouble for mechanize. From my searching I've kind of come to the conclusion that I basically have a few options: trying to figure out what the JavaScript is doing and emulate that in my code (I don't quite know where to start with this ;-)); using pywin32 to control Internet Explorer or something similar, like using the webkit browser from PyQt4, or even using telnet and mozrepl (this seems really hard); switching language to Perl, since WWW::Mechanize seems to be a lot more mature on Perl (add-ons and such for JavaScript); don't know too much about this at all. If anyone has some pointers here that would be great. I understand that I need to do a lot of trial and error, but it would be nice if I didn't go too far away from the "true" answer, if there is such a thing.
false
3,929,005
0
0
0
0
For non-programmers, I recommend using IRobotSoft. It is visually oriented, with full JavaScript support. The shortcoming is that it runs only on Windows. The good thing is that you can become an expert at the software just by trial and error.
0
795
0
2
2010-10-13T23:58:00.000
python,screen-scraping
Options for handling javascript heavy pages while screen scraping
1
1
4
3,992,151
0
1
0
I am trying to make an app for authenticating users with their Facebook account in Python. The app opens the Facebook login page in a web browser. After the user logs in, Facebook redirects to the dummy success page. At that moment I need to capture that redirect URL in my app, but I am not able to catch it. I am opening the FB login page using webbrowser.open. How can I catch the redirect URL after opening the web browser? Any suggestions will be very helpful. Thanks, Tara Singh
false
3,930,129
0
0
0
0
There's a getLoginUrl in the facebook SDK. You might want to look at that. -Roozbeh
0
755
0
2
2010-10-14T04:38:00.000
python,facebook
Catching the Access Token sent by Facebook after successful authentication
1
1
2
4,663,777
0
1
0
Do you think it is technically possible to take a screenshot of a website programmatically? I would like to craft a scheduled Python task that crawls a list of websites, taking a homepage screenshot of each. Do you think this is technically feasible, or do you know of third-party websites that offer a service like that (input: URL --> output: screenshot)? Any suggestions?
false
3,940,098
0
0
0
0
It's certainly technically possible. You would probably have to render the HTML directly onto an image file (or more likely, onto an in-memory bitmap that's written to an image file once completed). I don't know any libraries to do this for you (apart from a modified WebKit, perhaps)... but there are certainly websites that do this. Of course, this is a bit more involved than just opening the page in a browser on a machine and taking a screenshot programmatically, but the result would likely be better if you don't care about the result from a specific browser.
0
2,523
0
6
2010-10-15T06:53:00.000
python,google-app-engine,screenshot
Is it technically possible to take a screenshot of a website programmatically?
1
1
5
3,940,121
0
0
0
I'm unfamiliar with the new OAuth system. I wanted to crawl the status updates of my friends, and their friends' (if permissions allow), with my specified account credentials using the python-twitter API. With the new OAuth authentication, does it mean that I have to first register an application with Twitter before I can use the API?
true
3,940,774
1.2
1
0
1
Yes, that's right. You need to register it and grant it access to your Twitter ID if you want, for example, to post something on your Twitter wall. Also see "connections" in your Twitter account.
0
206
0
1
2010-10-15T08:53:00.000
python,oauth,twitter
noob question regarding twitter oauth
1
2
2
3,940,860
0
0
0
I'm unfamiliar with the new OAuth system. I wanted to crawl the status updates of my friends, and their friends' (if permissions allow), with my specified account credentials using the python-twitter API. With the new OAuth authentication, does it mean that I have to first register an application with Twitter before I can use the API?
false
3,940,774
0
1
0
0
To use the API you must register your application, or use GET methods to post to Twitter through the web interface.
0
206
0
1
2010-10-15T08:53:00.000
python,oauth,twitter
noob question regarding twitter oauth
1
2
2
3,940,840
0
1
0
I can traverse generic tags easily with BS, but I don't know how to find specific tags. For example, how can I find all occurrences of <div style="width=300px;">? Is this possible with BS?
false
3,945,750
1
0
0
9
With bs4 things have changed a little, so the code should look like this:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup(htmlstring, 'lxml')
    soup.find_all('div', {'style': 'width=300px;'})
0
62,029
0
29
2010-10-15T20:11:00.000
python,beautifulsoup
Find a specific tag with BeautifulSoup
1
1
2
45,193,575
0
1
0
I often find links in HTML source stored in this format; the question is, how do I change such links back to their normal form? Thanks a lot!
false
3,949,739
0
0
0
0
urllib.unquote() on its own may still cause problems by throwing the exception:

    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position n: ordinal not in range(128)

In that case, try:

    import urllib
    print urllib.unquote("Ober%C3%B6sterreich.txt").decode("utf8")
0
797
0
0
2010-10-16T16:33:00.000
python,html
How to turn an encoded link such as "http%3A%2F%2Fexample.com%2Fwhatever" into "http://example.com/whatever" in python?
1
1
2
5,693,726
0