Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, 28 to 6.1k chars) | is_accepted (bool, 2 classes) | Q_Id (int64, 337 to 51.9M) | Score (float64, -1 to 1.2) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Users Score (int64, -8 to 412) | Answer (string, 14 to 7k chars) | Python Basics and Environment (int64, 0 to 1) | ViewCount (int64, 13 to 1.34M) | System Administration and DevOps (int64, 0 to 1) | Q_Score (int64, 0 to 1.53k) | CreationDate (string, 23 chars) | Tags (string, 6 to 90 chars) | Title (string, 15 to 149 chars) | Networking and APIs (int64, 1 to 1) | Available Count (int64, 1 to 12) | AnswerCount (int64, 1 to 28) | A_Id (int64, 635 to 72.5M) | GUI and Desktop Applications (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I have a Python application that, to be brief, receives data from a remote server, processes it, responds to the server, and occasionally saves the processed data to disk. The problem I've encountered is that there is a lot of data to write, and the save process can take upwards of half a minute. This is apparently a blocking operation, so the network IO is stalled during this time. I'd like to be able to make the save operation take place in the background, so-to-speak, so that the application can continue to communicate with the server reasonably quickly.
I know that I probably need some kind of threading module to accomplish this, but I can't tell what the differences are between thread, threading, multiprocessing, and the various other options. Does anybody know what I'm looking for? | true | 3,950,607 | 1.2 | 0 | 0 | 7 | Since you're I/O bound, then use the threading module.
You should almost never need to use thread; it's a low-level interface, and the threading module is a high-level wrapper around it.
The multiprocessing module is different from the threading module: multiprocessing uses multiple subprocesses to execute a task; it just happens to use the same interface as threading to reduce the learning curve. multiprocessing is typically used when you have a CPU-bound calculation and need to avoid the GIL (Global Interpreter Lock) on a multicore CPU.
A somewhat more esoteric alternative to multi-threading is asynchronous I/O using the asyncore module. Other options include Stackless Python and Twisted. | 0 | 1,332 | 1 | 6 | 2010-10-16T20:20:00.000 | python,multithreading,io,blocking,nonblocking | What threading module should I use to prevent disk IO from blocking network IO? | 1 | 1 | 1 | 3,950,630 | 0 |
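A minimal sketch of what the accepted answer suggests, offloading the slow save to a worker thread so the network loop in the main thread keeps running (the function and file names are made up for the example):

```python
import threading

def save_to_disk(data, path):
    # Slow, blocking write; runs in a worker thread so the
    # network I/O in the main thread is not stalled.
    with open(path, "wb") as f:
        f.write(data)

def save_in_background(data, path="processed.dat"):
    worker = threading.Thread(target=save_to_disk, args=(data, path))
    worker.daemon = True   # don't block interpreter exit
    worker.start()
    return worker          # join() it later if you need to be sure it finished
```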
0 | 0 | So I have 16 GB worth of XML files to process (about 700 files total), and I already have a functional PHP script to do that (with XMLReader) but it's taking forever. I was wondering if parsing in Python would be faster (Python being the only other language I'm proficient in; I'm sure something in C would be faster). | false | 3,953,563 | 0.132549 | 1 | 0 | 2 | I think that both of them can rely on wrappers for fast C libraries (mostly libxml2), so there shouldn't be too much difference in parsing per se.
You could test whether there are differences caused by overhead; beyond that, it depends on what you are going to do with that XML. Parsing it for what? | 0 | 1,714 | 0 | 2 | 2010-10-17T14:07:00.000 | php,python,xml | Is XML parsing in PHP as fast as Python or other alternatives? | 1 | 2 | 3 | 3,953,576 | 0 |
0 | 0 | So I have 16 GB worth of XML files to process (about 700 files total), and I already have a functional PHP script to do that (with XMLReader) but it's taking forever. I was wondering if parsing in Python would be faster (Python being the only other language I'm proficient in; I'm sure something in C would be faster). | false | 3,953,563 | 0.132549 | 1 | 0 | 2 | There are actually three different performance problems here:
The time it takes to parse a file, which depends on the size of individual files.
The time it takes to handle the files and directories in the filesystem, if there's a lot of them.
Writing the data into your databases.
Where you should look for performance improvements depends on which one of these is the biggest bottleneck.
My guess is that the last one is the biggest problem, because writing is almost always the slowest: writes can't be cached, they require going to disk, and if the data is sorted it can take considerable time to find the right spot to write it.
You presume that the bottleneck is the first alternative, the XML parsing. If that is the case, changing language is not the first thing to do. Instead you should see if there's some sort of SAX parser for your language. SAX parsing is much faster and more memory-efficient than DOM parsing. | 0 | 1,714 | 0 | 2 | 2010-10-17T14:07:00.000 | php,python,xml | Is XML parsing in PHP as fast as Python or other alternatives? | 1 | 2 | 3 | 3,953,874 | 0 |
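As an illustration of the streaming style this answer recommends, here is a small sketch using Python's built-in iterparse, which gives the same "don't build the whole DOM" benefit as a SAX parser; the tag name and file path are hypothetical:

```python
import xml.etree.ElementTree as ET

def stream_records(path, tag="record"):
    # Handle elements as they are parsed instead of loading the full tree.
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            yield dict(elem.attrib), (elem.text or "").strip()
            elem.clear()   # free memory for elements already processed

for attrs, text in stream_records("big_file.xml"):
    pass  # write the extracted data to the database here
```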
1 | 0 | Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Sockets Layer in JavaScript and Python so that my users' information cannot be captured by the ISPs and the GFW.
My plan:
Handshake:
The browser contacts the server; the server generates an encryption key k1 and a decryption key k2, and sends k1 to the browser.
The browser generates an encryption key k3 and a decryption key k4, and sends k3 to the server.
Browsing:
During the session, the browser encrypts data with k1 and sends it to the server, and the server decrypts it with k2. The server encrypts data with k3 in its responses, and the browser decrypts them with k4.
Please point out my mistake.
If the plan is sound, my questions are:
how to generate a key pair in JavaScript and Python; are there libraries for this?
how to encrypt and decrypt data in JavaScript and Python; are there libraries for this? | false | 3,977,274 | 0.049958 | 0 | 0 | 1 | You can't stop the men in the middle from trapping your packets/messages, especially if they don't really care if you find out. What you can do is encrypt your messages so that trapping them does not enable them to read what you're sending and receiving. In theory that's fine, but in practice you can't do modern crypto by hand even with the keys: you need to transfer some software too, and that's where it gets much more awkward.
You want to have the client's side of the crypto software locally, or at least enough to be able to check whether a digital signature of the crypto software is correct. Digital signatures are very difficult to forge. Deliver signed code, check its signature, and if the signature validates against a public key that you trust (alas, you'll have to transfer that out of band) then you know that the code (plus any CA certificates – trust roots – sent along with it) can be trusted to work as desired. The packets can then go over plain HTTP; they'll either get to where they're meant to or be intercepted, but either way nobody but the intended recipient will be able to read them. The only advantage of SSL is that it builds virtually all of this stuff for you and makes it easy.
I have no idea how practical it is to do this all in Javascript. Obviously it can do it – it's a Turing-complete language, it has access to all the requisite syscalls – but it could be stupidly expensive. It might be easier to think in terms of using GPG…
(Hiding the fact from the government that you are communicating at all is a different problem entirely.) | 0 | 2,770 | 0 | 5 | 2010-10-20T11:29:00.000 | javascript,python,ssl | Encryption: simulate SSL in javascript and python | 1 | 3 | 4 | 3,978,603 | 0 |
1 | 0 | Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Sockets Layer in JavaScript and Python so that my users' information cannot be captured by the ISPs and the GFW.
My plan:
Handshake:
The browser contacts the server; the server generates an encryption key k1 and a decryption key k2, and sends k1 to the browser.
The browser generates an encryption key k3 and a decryption key k4, and sends k3 to the server.
Browsing:
During the session, the browser encrypts data with k1 and sends it to the server, and the server decrypts it with k2. The server encrypts data with k3 in its responses, and the browser decrypts them with k4.
Please point out my mistake.
If the plan is sound, my questions are:
how to generate a key pair in JavaScript and Python; are there libraries for this?
how to encrypt and decrypt data in JavaScript and Python; are there libraries for this? | false | 3,977,274 | 0 | 0 | 0 | 0 | There's a big problem if security really is a big concern: your algorithm is going to be transferred unsecured. Can you trust the client at all? Can the client trust the server at all? | 0 | 2,770 | 0 | 5 | 2010-10-20T11:29:00.000 | javascript,python,ssl | Encryption: simulate SSL in javascript and python | 1 | 3 | 4 | 3,977,301 | 0 |
1 | 0 | Because China's Great Firewall has blocked Google App Engine's HTTPS port, I want to simulate a Secure Sockets Layer in JavaScript and Python so that my users' information cannot be captured by the ISPs and the GFW.
My plan:
Handshake:
The browser contacts the server; the server generates an encryption key k1 and a decryption key k2, and sends k1 to the browser.
The browser generates an encryption key k3 and a decryption key k4, and sends k3 to the server.
Browsing:
During the session, the browser encrypts data with k1 and sends it to the server, and the server decrypts it with k2. The server encrypts data with k3 in its responses, and the browser decrypts them with k4.
Please point out my mistake.
If the plan is sound, my questions are:
how to generate a key pair in JavaScript and Python; are there libraries for this?
how to encrypt and decrypt data in JavaScript and Python; are there libraries for this? | false | 3,977,274 | 0.099668 | 0 | 0 | 2 | You have a fundamental problem in that a JavaScript implementation of SSL would have no built-in root certificates to establish trust, which makes it impossible to prevent a man-in-the-middle attack. Any certificates you deliver from your site, including a root certificate, could be intercepted and replaced by a spy.
Note that this is a fundamental limitation, not a peculiarity of the way SSL works. All cryptographic security relies on establishing a shared secret. The root certificates deployed with mainstream browsers provide the entry points to a trust network established by certifying authorities (CAs) that enable you to establish the shared secret with a known third party. These certificates are not, AFAIK, directly accessible to JavaScript code. They are only used to establish secure (e.g., https) connections. | 0 | 2,770 | 0 | 5 | 2010-10-20T11:29:00.000 | javascript,python,ssl | Encryption: simulate SSL in javascript and python | 1 | 3 | 4 | 3,977,325 | 0 |
0 | 0 | I'm creating a desktop application that requires authorization from a remote server before performing certain actions locally.
What's the best way to have my desktop application notified when the server approves the request for authorization? Authorization takes 20 seconds on average, 5 seconds minimum, with a 120-second timeout.
I considered polling the server every 3 seconds or so, but this would be hard to scale when I deploy the application more widely, and seems inelegant.
I have full control over the design of the server and client API. The server is using web.py on Ubuntu 10.10, Python 2.6. | false | 3,978,739 | 0 | 0 | 0 | 0 | Does the remote end block while it does the authentication? If so, you can use a simple select to block till it returns.
Another way I can think of is to pass a callback URL to the authentication server asking it to call it when it's done so that your client app can proceed. Something like a webhook. | 0 | 116 | 1 | 0 | 2010-10-20T14:08:00.000 | python,authentication,authorization,polling,web.py | How can my desktop application be notified of a state change on a remote server? | 1 | 1 | 2 | 3,978,891 | 0 |
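A minimal sketch of the callback-URL ("webhook") idea in the second answer: the desktop client opens a tiny HTTP listener and passes its URL with the authorization request, and the server calls it back when the decision is made. This is written against Python 3's http.server (on the Python 2.6 in the question the equivalent module is BaseHTTPServer), and the port and URL are hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

approved = threading.Event()

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The authorization server POSTs here once it approves the request.
        self.send_response(200)
        self.end_headers()
        approved.set()

def wait_for_approval(timeout=120):
    server = HTTPServer(("0.0.0.0", 8642), CallbackHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    # ... send the authorization request here, including
    # "http://<client-ip>:8642/" as the callback URL ...
    ok = approved.wait(timeout)   # True if called back before the timeout
    server.shutdown()
    return ok
```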
0 | 0 | I have a list of product names in Chinese. I want to translate them into English. I have tried the Google AJAX Language API, but the translations don't seem good; it would be great if someone could give me some advice or point me towards a better choice.
Thank you. | true | 3,979,092 | 1.2 | 0 | 0 | 2 | I think Google is probably one of the best web-based automatic translation services. | 1 | 4,408 | 0 | 4 | 2010-10-20T14:44:00.000 | python,translate | Is there a translation API service for Chinese to English? | 1 | 1 | 2 | 3,979,445 | 0 |
1 | 0 | I'm working on a script currently that needs to pull information down from a specific user's wall. The only problem is that it requires authentication, and the script needs to be able to run without any human interference. Unfortunately all I can find thus far tells me that I need to register an application, and then do the whole FB Connect dance to pull off what I want. Problem is that requires browser interaction, which I'm trying to avoid.
I figured I could probably just use httplib2, and login this route. I got that to work, only to find that with that method I still don't get an "access_token" in any retrievable method. If I could get that token without launching a browser, I'd be completely set. Surely people are crawling feeds and such without using FB Connect right? Is it just not possible, thus why I'm hitting so many road blocks? Open to any suggestions you all might have. | true | 4,000,896 | 1.2 | 0 | 0 | 5 | What you are trying to do is not possible. You are going to have to use a browser to get an access token one way or another. You cannot collect username and passwords (a big violation of Facebook's TOS). If you need a script that runs without user interaction you will still need to use a browser to authenticate, but once you have the user's token you can use it without their direct interaction. You must request the "offline_access" permission to gain an access token that does not expire. You can save this token and then use it for however long you need. | 0 | 4,595 | 0 | 4 | 2010-10-22T20:55:00.000 | python,facebook | Logging into Facebook without a Browser | 1 | 1 | 3 | 4,000,963 | 0 |
0 | 0 | I am solving a problem of transferring images from a camera in a loop from a client (a robot with camera) to a server (PC).
I am trying to come up with ideas how to maximize the transfer speed so I can get the best possible FPS (that is because I want to create a live video stream out of the transferred images). Disregarding the physical limitations of WIFI stick on the robot, what would you suggest?
So far I have decided:
to use YUV colorspace instead of RGB
to use UDP protocol instead of TCP/IP
Is there anything else I could do to get the maximum fps possible? | false | 4,013,046 | 0.197375 | 1 | 0 | 2 | Compress the difference between successive images. Add some checksum. Provide some way for the receiver to request full image data for the case where things get out of synch.
There are probably a host of protocols doing that already.
So, search for live video stream protocols.
Cheers & hth., | 0 | 375 | 0 | 0 | 2010-10-25T08:49:00.000 | c#,c++,python,algorithm,performance | How to speed up transfer of images from client to server | 1 | 2 | 2 | 4,013,104 | 0 |
0 | 0 | I am solving a problem of transferring images from a camera in a loop from a client (a robot with camera) to a server (PC).
I am trying to come up with ideas how to maximize the transfer speed so I can get the best possible FPS (that is because I want to create a live video stream out of the transferred images). Disregarding the physical limitations of WIFI stick on the robot, what would you suggest?
So far I have decided:
to use YUV colorspace instead of RGB
to use UDP protocol instead of TCP/IP
Is there anything else I could do to get the maximum fps possible? | true | 4,013,046 | 1.2 | 1 | 0 | 4 | This might be quite a bit of work but if your client can handle the computations in real time you could use the same method that video encoders use. Send a key frame every say 5 frames and in between only send the information that changed not the whole frame. I don't know the details of how this is done, but try Googling p-frames or video compression. | 0 | 375 | 0 | 0 | 2010-10-25T08:49:00.000 | c#,c++,python,algorithm,performance | How to speed up transfer of images from client to server | 1 | 2 | 2 | 4,013,112 | 0 |
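A toy sketch of the keyframe/delta idea from the answer above: every Nth frame is sent whole, and the frames in between are sent as a zlib-compressed byte-wise difference against the last keyframe. It assumes all frames are byte strings of equal length; the framing and frame source are left out:

```python
import zlib

KEYFRAME_INTERVAL = 5

def encode_frames(frames):
    # frames: iterable of equally sized byte strings (e.g. raw YUV buffers)
    reference = None
    for i, frame in enumerate(frames):
        if reference is None or i % KEYFRAME_INTERVAL == 0:
            reference = frame
            yield b"K", zlib.compress(frame)      # keyframe: full image
        else:
            delta = bytes(a ^ b for a, b in zip(frame, reference))
            yield b"D", zlib.compress(delta)      # delta against the last keyframe

def decode(kind, payload, reference):
    # reference must be the last keyframe previously decoded
    data = zlib.decompress(payload)
    if kind == b"K":
        return data
    return bytes(a ^ b for a, b in zip(data, reference))
```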
0 | 0 | I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details.
Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?
So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.
I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?
Thank you | false | 4,014,670 | 0.033321 | 1 | 0 | 1 | You can write a WEB APPLICATION. The encryption part is solved by simple HTTPS usage. On the server side (your home computer with USB devices attached to it) you should use Python (since you're quite experienced with it) and a Python Web Framework you want (I.E. Django). | 0 | 1,907 | 1 | 3 | 2010-10-25T12:48:00.000 | java,php,javascript,python | Send commands between two computers over the internet | 1 | 3 | 6 | 4,014,696 | 0 |
0 | 0 | I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details.
Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?
So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.
I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?
Thank you | false | 4,014,670 | 0 | 1 | 0 | 0 | Well, I think that Java can work well; in fact you have to deal with system calls to manage USB devices and things like that (and as far as I know, PHP is not the best language to do this). Also, it shouldn't be hard to create a basic client/server program; just use a good encryption mechanism so the commands aren't exposed on the web. | 0 | 1,907 | 1 | 3 | 2010-10-25T12:48:00.000 | java,php,javascript,python | Send commands between two computers over the internet | 1 | 3 | 6 | 4,014,765 | 0 |
0 | 0 | I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details.
Extra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?
So basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.
I have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?
Thank you | false | 4,014,670 | 0 | 1 | 0 | 0 | If you are looking for a solution you could use from any computer anywhere in the world without needing to install any software on the client PC, try logmein.com (http://secure.logmein.com).
It is free, reliable, works in any modern browser, and you don't have to remember IPs and hope they won't change, ...
Or, if this is a "for fun" project, why not write a PHP script, open port 80 on your router so you can access the script from outside, and possibly dynamically link some domain to your IP (http://www.dyndns.com/). In the script you would just log in and then, for example, type the orders into a text field in a form. Let's say you want to do some command prompt stuff, so you basically remotely construct a *.bat file. The script then stores this as fromtheinternets.bat in a folder on your desktop that is constantly monitored for changes, and when such a change is found you just run the bat file.
Insecure? Yes (It could be made secureER)
Fun to write? Definitely
PS: I am new here, hope it's not "illegal" to post link to actual services, instead of wiki lists. This is by no means and advertisement, I am just a happy user. :) | 0 | 1,907 | 1 | 3 | 2010-10-25T12:48:00.000 | java,php,javascript,python | Send commands between two computers over the internet | 1 | 3 | 6 | 4,015,151 | 0 |
0 | 0 | I've have asked these questions before with no proper answer. I hope I'll get some response here.
I'm developing an instant messenger in python and I'd like to handle video/audio streaming with VLC. Tha basic idea right now is that in each IM client I'm running one VLC instance that acts as a server that streams to all the users I want, and another VLC instance that's a client and recieves and displays all the streams that other users are sending to me. As you can see, it's kind of a P2P connection and I am having lots of problems.
My first problem was VLC can handle only one stream per port, but I solved this using VLM, the Videolan Manager which allows multiple streams with one instance and on one port.
My second problem is that this kind of P2P approach has several drawbacks: if someone is behind NAT or a router, you have to do manual configuration to forward the packets from the router to your PC. It also has another drawback: you can only forward to one PC, so you would be able to use the program on only one workstation.
Also, the streams were transported in HTTP protocol, which uses TCP and it's pretty slow. When I tried to do the same with RTSP, I wasn't able to get the stream outside my private LAN.
So, this P2P approach is very unlikely to be implemented successfully by an amateur like me, as it has all the typical NAT traversal problems, things that I don't want to mess with since this is not a commercial application, just a school project I must finish in order to graduate as a technician. Finally, I've been recommended to use a server at a well-known IP, which would solve the problem: only one router configuration, and both ends of the conversation act as clients. I have no idea how to implement this idea; any help is useful. Thanks in advance. Sorry for any errors, I am not a programming/networking expert, nor a native English speaker. | false | 4,015,227 | 0 | 0 | 0 | 0 | I think they were suggesting you run your program on a LAN which has no ports blocked. | 0 | 224 | 0 | 0 | 2010-10-25T13:53:00.000 | python,streaming,p2p,vlc,instant-messaging | Problems with VLC and instant messaging | 1 | 1 | 1 | 4,200,613 | 0 |
1 | 0 | Right now I'm base 64 encoding them and using data uris. The idea was that this will somehow lower the number of requests the browser needs to make. Does this bucket hold any water?
What is the best way of serving images in general? DB, from FS, S3?
I am most interested in python and java based answers, but all are welcome! | false | 4,046,242 | 0.049958 | 0 | 0 | 1 | Data urls will definitely reduce the number of requests to the server, since the browser doesn't have to ask for the pixels in a separate request. But they are not supported in all browsers. You'll have to make the tradeoff. | 0 | 547 | 0 | 3 | 2010-10-28T18:54:00.000 | java,javascript,python,image | What is the best way to serve small static images? | 1 | 1 | 4 | 4,046,258 | 0 |
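A small sketch of the data-URI approach the question describes, base64-encoding an image file into a string that can be embedded directly in an img tag's src; the file name is made up:

```python
import base64

def to_data_uri(path, mime="image/png"):
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return "data:%s;base64,%s" % (mime, encoded)

# e.g. '<img src="%s">' % to_data_uri("icon.png")
```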
0 | 0 | I'm writing a Python client+server that uses gevent.socket for communication. Are there any good ways of testing the socket-level operation of the code (for example, verifying that SSL connections with an invalid certificate will be rejected)? Or is it simplest to just spawn a real server?
Edit: I don't believe that "naive" mocking will be sufficient to test the SSL components because of the complex interactions involved. Am I wrong in that? Or is there a better way to test SSL'd stuff? | false | 4,047,897 | 1 | 1 | 0 | 9 | Mocking and stubbing are great, but sometimes you need to take it up to the next level of integration. Since spawning a server, even a fakeish one, can take some time, consider a separate test suite (call them integration tests) might be in order.
"Test it like you are going to use it" is my guideline, and if you mock and stub so much that your test becomes trivial it's not that useful (though almost any test is better than none). If you are concerned about handling bad SSL certs, by all means make some bad ones and write a test fixture you can feed them to. If that means spawning a server, so be it. Maybe if that bugs you enough it will lead to a refactoring that will make it testable another way. | 0 | 14,758 | 0 | 20 | 2010-10-28T23:16:00.000 | python,sockets,testing,gevent | Python: unit testing socket-based code? | 1 | 1 | 3 | 4,048,286 | 0 |
0 | 0 | I'm looking to parse a xml file using Python and I was wondering if there was any way of automating the task over manually walking through all xml nodes/attributes using xml.dom.minidom library.
Essentially what would be sweet is if I could load a xml schema for the xml file I am reading then have that automatically generate some kind of data struct/set with all of the data within the xml.
In C# land this is possible via creating a strongly typed dataset class from a xml schema and then using this dataset to read the xml file in.
Is there any equivalent in Python? | false | 4,054,205 | -0.066568 | 0 | 0 | -1 | hey dude - take beautifulSoup - it is a super library. HEAD over to the site scraperwiki.com
the can help you! | 0 | 931 | 0 | 1 | 2010-10-29T17:05:00.000 | c#,python,xml,dataset,schema | Python XML Parse (using schema to generate dataset) | 1 | 2 | 3 | 4,054,231 | 0 |
0 | 0 | I'm looking to parse a xml file using Python and I was wondering if there was any way of automating the task over manually walking through all xml nodes/attributes using xml.dom.minidom library.
Essentially what would be sweet is if I could load a xml schema for the xml file I am reading then have that automatically generate some kind of data struct/set with all of the data within the xml.
In C# land this is possible via creating a strongly typed dataset class from a xml schema and then using this dataset to read the xml file in.
Is there any equivalent in Python? | false | 4,054,205 | 0 | 0 | 0 | 0 | You might take a look at lxml.objectify, particularly the E-factory. It's not really an equivalent to the ADO tools, but you may find it useful nonetheless. | 0 | 931 | 0 | 1 | 2010-10-29T17:05:00.000 | c#,python,xml,dataset,schema | Python XML Parse (using schema to generate dataset) | 1 | 2 | 3 | 4,054,752 | 0 |
0 | 0 | I have written a program which sends more than 15 queries to Google in each iteration; the total number of iterations is about 50. For testing I have to run this program several times. However, after doing that several times, Google blocks me. Is there any way I can fool Google, maybe by adding delays between each iteration? Also, I have heard that Google can actually learn the time steps, so these delays need to be random so Google cannot find a pattern in them and learn my behavior. They should also be short so the whole process doesn't take too long.
Does anyone know of anything, or can you provide a piece of code in Python?
Thanks | false | 4,054,254 | 0.039979 | 0 | 0 | 1 | You can also try using a few proxy servers to prevent a ban by IP address. urllib supports proxies via a special constructor parameter, and httplib can use a proxy too | 0 | 111,313 | 0 | 67 | 2010-10-29T17:10:00.000 | python,delay | How to add random delays between the queries sent to Google to avoid getting blocked in python | 1 | 2 | 5 | 4,054,980 | 0 |
0 | 0 | I have written a program which sends more than 15 queries to Google in each iteration; the total number of iterations is about 50. For testing I have to run this program several times. However, after doing that several times, Google blocks me. Is there any way I can fool Google, maybe by adding delays between each iteration? Also, I have heard that Google can actually learn the time steps, so these delays need to be random so Google cannot find a pattern in them and learn my behavior. They should also be short so the whole process doesn't take too long.
Does anyone know of anything, or can you provide a piece of code in Python?
Thanks | false | 4,054,254 | 0.158649 | 0 | 0 | 4 | Since you're not testing Google's speed, figure out some way to simulate it when doing your testing (as @bstpierre suggested in his comment). This should solve your problem and factor its variable response times out at the same time. | 0 | 111,313 | 0 | 67 | 2010-10-29T17:10:00.000 | python,delay | How to add random delays between the queries sent to Google to avoid getting blocked in python | 1 | 2 | 5 | 4,054,614 | 0 |
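Since the question explicitly asks for a snippet, here is a minimal sketch of short, randomized delays between queries; the delay bounds are arbitrary and the query function is passed in because it stands for whatever request code you already have:

```python
import random
import time

def run_queries(queries, send_query, min_delay=2.0, max_delay=6.0):
    for query in queries:
        send_query(query)                                   # your existing request code
        time.sleep(random.uniform(min_delay, max_delay))    # short, non-uniform pause between requests
```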
1 | 0 | Our client wants us to implement change history for website articles. What is the best way to do it? | true | 4,075,309 | 1.2 | 1 | 0 | 0 | I presume you're using a CMS. If not, use one. WordPress is a good start.
If you're developing from scratch, the usual method is to have two tables: one for page information (so title, menu position etc.) and then a page_content table, which has columns for page_id, content, and timestamp.
As you save a page, instead of updating a database table you instead write a new record to the page_content table with the page's ID and the time of the save. That way, when displaying pages on your front-end you just select the latest record for that particular page ID, but you also have a history of that page by querying for all records by page_id, sorted by timestamp. | 0 | 289 | 0 | 0 | 2010-11-02T06:11:00.000 | php,.net,python,ruby | What is the best way to store change history of website articles? | 1 | 2 | 3 | 4,076,484 | 0 |
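A minimal sketch of the versioned page_content idea from this answer, using sqlite3 for brevity; the table and column names follow the answer, but the exact schema is an assumption:

```python
import sqlite3, time

conn = sqlite3.connect("site.db")
conn.execute("""CREATE TABLE IF NOT EXISTS page_content (
                    page_id INTEGER, content TEXT, timestamp REAL)""")

def save_page(page_id, content):
    # Insert a new revision instead of updating in place.
    conn.execute("INSERT INTO page_content VALUES (?, ?, ?)",
                 (page_id, content, time.time()))
    conn.commit()

def latest_revision(page_id):
    row = conn.execute("""SELECT content FROM page_content
                          WHERE page_id = ? ORDER BY timestamp DESC LIMIT 1""",
                       (page_id,)).fetchone()
    return row[0] if row else None
```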
1 | 0 | Our client wants us to implement change history for website articles. What is the best way to do it? | false | 4,075,309 | -0.066568 | 1 | 0 | -1 | There is a wide variety of ways to do this as you alluded by tagging php, .net, python, and ruby. You missed a few off the top of my head perl and jsp. Each of these have their plusses and minuses and is really a question of what best suits your needs.
PHP is probably the fastest reward for time spent.
Ruby, i'm assuming Ruby on Rails, is the automatic Buzz Word Bingo for the day.
.Net, are you all microsoft every where and want easy integration with your exchange server and a nice outlook API?
python? Do you like the scripted languages but you're too good for php and ruby.
Each of these languages have their strong points and their draw backs and it's really a matter of what you know, how much you have to spend, and what is your timeframe. | 0 | 289 | 0 | 0 | 2010-11-02T06:11:00.000 | php,.net,python,ruby | What is the best way to store change history of website articles? | 1 | 2 | 3 | 4,075,372 | 0 |
0 | 0 | I need something like rfc822.AddressList to parse, say, the content of the "TO" header field of an email into individual addresses. Since rfc822 is deprecated in favor of the email package, I looked for something similar there but couldn't find anything. Does anyone know what I'm supposed to use instead?
Thanks! | true | 4,084,608 | 1.2 | 0 | 0 | 6 | Oh it's email.utils.getaddresses. Just make sure to call it with a list. | 0 | 1,862 | 0 | 7 | 2010-11-03T06:12:00.000 | python,email,rfc822,email-parsing | Is there a non-deprecated equivalent of rfc822.AddressList? | 1 | 1 | 2 | 4,084,648 | 0 |
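A quick illustration of that suggestion, with a made-up To header value:

```python
from email.utils import getaddresses

to_header = 'Alice <alice@example.com>, "Bob, Jr." <bob@example.com>'
# Note: the argument must be a list of header values.
print(getaddresses([to_header]))
# [('Alice', 'alice@example.com'), ('Bob, Jr.', 'bob@example.com')]
```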
0 | 0 | first of all, I'm sorry for my English
I am doing some scripting in Python using Selenium RC.
The aim is to access to some website, and download some files
I would like to know, at the end of the script, what files exactly have been downloaded
At the moment, I'm doing something a bit naive: checking for the new files that appear in Firefox's download directory. It works well, but if I launch several clients at the same time, they can't tell which files they own, etc.
So I was trying to find a solution to that problem: if it were possible to hook into Firefox's downloads to know exactly when a download occurs and what was downloaded, that would be great, but so far I haven't found anything about that.
Thanks for your help | false | 4,088,703 | 0 | 0 | 0 | 0 | In this case, just create a new folder every time and download your file there.
Make sure the folder name is incremented if it already exists (e.g. folder1, folder2, folder3, ...) | 0 | 1,024 | 0 | 3 | 2010-11-03T15:28:00.000 | python,firefox,selenium,download,firefox-addon | Is it possible to know what file is downloaded by Firefox with Selenium | 1 | 3 | 4 | 13,342,386 | 0 |
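A tiny sketch of that suggestion, creating an incrementing per-session folder; the base path and prefix are made up:

```python
import os

def next_download_dir(base="downloads", prefix="folder"):
    # Find the first folderN that does not exist yet and create it.
    n = 1
    while os.path.exists(os.path.join(base, "%s%d" % (prefix, n))):
        n += 1
    path = os.path.join(base, "%s%d" % (prefix, n))
    os.makedirs(path)
    return path
```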
0 | 0 | first of all, I'm sorry for my English
I am doing some scripting in Python using Selenium RC.
The aim is to access to some website, and download some files
I would like to know, at the end of the script, what files exactly have been downloaded
At the moment, I'm doing something a bit naive: checking for the new files that appear in Firefox's download directory. It works well, but if I launch several clients at the same time, they can't tell which files they own, etc.
So I was trying to find a solution to that problem: if it were possible to hook into Firefox's downloads to know exactly when a download occurs and what was downloaded, that would be great, but so far I haven't found anything about that.
Thanks for your help | false | 4,088,703 | 0 | 0 | 0 | 0 | I haven't tried it myself, but I would consider setting up multiple Firefox profiles each set with a different download directory and then telling my instances to use those profiles (or maybe programmatically setting profile values if you're using Selenium2 - I'm not sure if download directory is possible to change or not). Then you can keep monitoring each directory and seeing what was downloaded for each session. | 0 | 1,024 | 0 | 3 | 2010-11-03T15:28:00.000 | python,firefox,selenium,download,firefox-addon | Is it possible to know what file is downloaded by Firefox with Selenium | 1 | 3 | 4 | 4,698,653 | 0 |
0 | 0 | first of all, I'm sorry for my English
I am doing some scripting in Python using Selenium RC.
The aim is to access to some website, and download some files
I would like to know, at the end of the script, what files exactly have been downloaded
At the moment, I'm doing something a bit naive: checking for the new files that appear in Firefox's download directory. It works well, but if I launch several clients at the same time, they can't tell which files they own, etc.
So I was trying to find a solution to that problem: if it were possible to hook into Firefox's downloads to know exactly when a download occurs and what was downloaded, that would be great, but so far I haven't found anything about that.
Thanks for your help | false | 4,088,703 | 0 | 0 | 0 | 0 | If you are working with python-->Selenium RC why don't you just
create a lastdownload.txt type of file, and put in the dates, filenames
of the files you download.
So each time your script runs, it will check the fileserver, and your log file
to see which files are new, which files you already have. (if same filename is used
you can check the lastupdatetime of headers, or even the filesize as a way to compare)
Then you just download the new files... so this way you replicate a simple incremental mechanism with lookup on a txt file... | 0 | 1,024 | 0 | 3 | 2010-11-03T15:28:00.000 | python,firefox,selenium,download,firefox-addon | Is it possible to know what file is downloaded by Firefox with Selenium | 1 | 3 | 4 | 4,118,400 | 0 |
0 | 0 | I'm using pythonbrew to install Python 2.6.6 on Snow Leopard. It failed with a readline error, then a socket error. I installed readline from source, which made the installer happy on the next attempt, but the socket error remains:
test_socket
test test_socket failed -- Traceback (most recent call last):
File "/Users/gferguson/python/pythonbrew/build/Python-2.6.6/Lib/test/test_socket.py", line 483, in testSockName
my_ip_addr = socket.gethostbyname(socket.gethostname())
gaierror: [Errno 8] nodename nor servname provided, or not known
Digging around with the system Python shows:
>>> import socket
>>> my_ip_addr = socket.gethostbyname(socket.gethostname())
Traceback (most recent call last):
File "", line 1, in
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
>>> socket.gethostname()
'S1WSMA-JHAMI'
>>> socket.gethostbyname('S1WSMA-JHAMI')
Traceback (most recent call last):
File "", line 1, in
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
>>> socket.gethostbyname('google.com')
'74.125.227.20'
I triangulated the problem with Ruby's IRB:
IPSocket.getaddress(Socket.gethostname)
SocketError: getaddrinfo: nodename nor servname provided, or not known
So, I'm not sure if this is a bug in the resolver not understanding the hostname, or if there's something weird in the machine's configuration, or if it's something weird in our network's DNS lookup, but whatever it is the installer isn't happy.
I think it's a benign failure in the installer though, so I feel safe to force the test to succeed, but I'm not sure how to tell pythonbrew how to ignore that test value or specifically pass test_socket.
I'm also seeing the following statuses but haven't figured out if they're significant yet:
33 tests skipped:
test_al test_bsddb test_bsddb3 test_cd test_cl test_codecmaps_cn
test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr
test_codecmaps_tw test_curses test_dl test_epoll test_gdbm test_gl
test_imageop test_imgfile test_largefile test_linuxaudiodev
test_normalization test_ossaudiodev test_pep277 test_py3kwarn
test_smtpnet test_socketserver test_startfile test_sunaudiodev
test_timeout test_urllib2net test_urllibnet test_winreg
test_winsound test_zipfile64
1 skip unexpected on darwin:
test_dl
Anyone have experience getting Python 2.6.6 installed with pythonbrew on Snow Leopard?
Update: I just tried the socket.gethostbyname(socket.gethostname()) command from Python installed on my MacBook Pro with Snow Leopard, and it successfully reported my IP back so it appears the problem is in the system config at work. I am going to ask at SO's sibling "Apple" site and see if anyone knows what it might be. | true | 4,090,753 | 1.2 | 0 | 0 | 0 | The solution was to --force pythonbrew to install in spite of the errors.
I tested the socket responses using the built-in Python, Perl and Ruby, and they had the same problem resolving the localhost name. I tested using a current version of Ruby and Python on one of my Linux boxes, and the calls worked, so I was pretty sure it was something outside of that particular Mac's configuration.
After forcing the install I tested the socket calls to other hosts and got the expected results and haven't had any problems doing other networking tasks so I think everything is fine. | 0 | 2,411 | 1 | 0 | 2010-11-03T19:15:00.000 | python,macos,installation,osx-snow-leopard | Workaround for Pythonbrew failing because test_socket can't resolve? | 1 | 1 | 2 | 4,161,287 | 0 |
1 | 0 | I have exposed a simple RESTful JSON url via CherryPy (Python web framework). I have a second application (using Pylons) which needs to reach a URL exposed by CherryPy. Both are being served via localhost. Both URLs resolve just fine when using a browser directly.
But, when a DOJO script running from the initial Pylons request invokes the JSON url from CherryPy, it fails. I open LiveHeaders in Firefox and find that DOJO is first sending an HTTP "OPTIONS" request. CherryPy refuses the OPTIONS request with a 405, Method Not Allowed and it all stops.
If I drop this same page into the CherryPy application, all is well.
What is the best way to resolve this on my localhost dev platform? .... and will this occur in Prod? | false | 4,107,576 | 0.099668 | 0 | 0 | 1 | My guess would be you are serving these two apps locally via 2 different ports, which is making dojo try to execute a cross-domain XHR call.
You need to be able to serve the JSON URL from the same URL (protocol, hostname, & port) to make a successful XHR call. I do this by using nginx locally, and configuring it to serve the database requests from my Dojo application by forwarding them to CouchDB. | 0 | 2,060 | 0 | 1 | 2010-11-05T15:50:00.000 | javascript,python,ajax,dojo | DOJO AJAX Request asking for OPTIONS | 1 | 1 | 2 | 5,576,563 | 0 |
0 | 0 | I am experiencing strange behavior with urllib2.urlopen() on Ubuntu 10.10. The first request to a URL is fast, but the second takes a long time to connect, I think between 5 and 10 seconds. On Windows this works normally.
Does anybody have an idea what could cause this issue?
Thanks, Onno | true | 4,110,992 | 1.2 | 0 | 0 | 3 | 5 seconds sounds suspiciously like the DNS resolving timeout.
A hunch: it's possible that it's cycling through the DNS servers in your /etc/resolv.conf; if one of them is broken, the default timeout is 5 seconds on Linux, after which it will try the next one, looping back to the top once it has tried them all.
If you have multiple DNS servers listed in resolv.conf, try removing all but one. If this fixes it; then after that see why you're being assigned incorrect resolving servers. | 0 | 427 | 1 | 1 | 2010-11-05T23:37:00.000 | python,ubuntu,urllib2,ubuntu-10.10 | Strange urllib2.urlopen() behavior on Ubuntu 10.10 | 1 | 1 | 2 | 4,112,300 | 0 |
1 | 0 | I'm trying to parse some html in Python. There were some methods that actually worked before... but nowadays there's nothing I can actually use without workarounds.
beautifulsoup has problems after SGMLParser went away
html5lib cannot parse half of what's "out there"
lxml is trying to be "too correct" for typical html (attributes and tags cannot contain unknown namespaces, or an exception is thrown, which means almost no page with Facebook connect can be parsed)
What other options are there these days? (if they support xpath, that would be great) | false | 4,114,722 | 0.197375 | 0 | 0 | 5 | html5lib cannot parse half of what's "out there"
That sounds extremely implausible. html5lib uses exactly the same algorithm that's also implemented in recent versions of Firefox, Safari and Chrome. If that algorithm broke half the web, I think we would have heard. If you have particular problems with it, do file bugs. | 0 | 3,978 | 0 | 15 | 2010-11-06T19:17:00.000 | python,html,parsing | Python html parsing that actually works | 1 | 2 | 5 | 4,115,108 | 0 |
1 | 0 | I'm trying to parse some html in Python. There were some methods that actually worked before... but nowadays there's nothing I can actually use without workarounds.
beautifulsoup has problems after SGMLParser went away
html5lib cannot parse half of what's "out there"
lxml is trying to be "too correct" for typical html (attributes and tags cannot contain unknown namespaces, or an exception is thrown, which means almost no page with Facebook connect can be parsed)
What other options are there these days? (if they support xpath, that would be great) | false | 4,114,722 | 0.039979 | 0 | 0 | 1 | I think the problem is that most HTML is ill-formed. XHTML tried to fix that, but it never really caught on enough - especially as most browsers do "intelligent workarounds" for ill-formed code.
Even a few years ago I tried to parse HTML for a primitive spider-type app, and found the problems too difficult. I suspect writing your own might be on the cards, although we can't be the only people with this problem! | 0 | 3,978 | 0 | 15 | 2010-11-06T19:17:00.000 | python,html,parsing | Python html parsing that actually works | 1 | 2 | 5 | 4,114,746 | 0 |
0 | 0 | I'm getting some content from the Twitter API, and I have a little problem: I sometimes get a tweet ending with only one backslash.
More precisely, I'm using simplejson to parse Twitter stream.
How can I escape this backslash ?
From what I have read, such a raw string shouldn't exist ...
Even if I add one backslash (two, in fact) I still get an error, as I suspected (since I then have an odd number of backslashes).
Any idea ?
I can just forget about these tweets too, but I'm still curious about that.
Thanks : ) | false | 4,121,751 | 0.049958 | 0 | 0 | 1 | Prefixing the string literal with r (which stands for "raw") makes backslashes be treated as literal characters rather than as escape sequences. For example:
print r'\b\n\\'
will output
\b\n\\
Have I understood the question correctly? | 1 | 1,886 | 0 | 1 | 2010-11-08T06:38:00.000 | python,string,escaping,backslash | [Python]How to deal with a string ending with one backslash? | 1 | 1 | 4 | 4,121,817 | 0 |
1 | 0 | Hi I am trying to secure a server function being used for an Ajax request, so that the function is not accessed for any sort of malicious activity. I have done the following till now:-
I am checking whether a valid session is present while the function is being called.
I am using POST rather than GET
I look for specific headers by using request.is_xhr else I induce a redirect.
I have compressed the javascript using dojo shrinksafe(..i am using dojo..)
What else can and should be done here. Need your expert advice on this.
(NB: I am using Flask and Dojo) | true | 4,131,327 | 1.2 | 0 | 0 | 2 | No special security measures are required. Treat an Ajax request like any other client request. | 0 | 280 | 0 | 1 | 2010-11-09T07:24:00.000 | python,ajax,dojo,flask | Handling and securing server functions in an ajax request..python | 1 | 1 | 1 | 4,131,349 | 0 |
0 | 0 | For a while I've been using a package called "gnosis-utils" which provides an XML pickling service for Python. This package works reasonably well; however, it seems to have been neglected by its developer for the last four years.
At the time we originally selected gnosis it was the only XML serialization tool for Python. The advantage of Gnosis was that it provided a set of classes whose function was very similar to the built-in Python XML pickler. It produced XML which Python developers found easy to read, but non-Python developers found confusing.
Now that the project has grown we have a new requirement: we need to be able to exchange XML with our colleagues who prefer Java or .Net. These non-Python developers will not be using Python - they intend to produce XML directly, hence we need to simplify the format of the XML.
So are there any alternatives to Gnosis? Our requirements:
Must work on Python 2.4 / Windows x86 32bit
Output must be XML, as simple as possible
API must resemble Pickle as closely as possible
Performance is not hugely important
Of course we could simply adapt Gnosis, however we'd prefer to simply use a component which already provides the functions we require (assuming that it exists). | true | 4,135,836 | 1.2 | 0 | 0 | 0 | So what you're looking for is a python library that spits out arbitrary XML for your objects? You don't need to control the format, so you can't be bothered to actually write something that iterates over the relevant properties of your data and generates the XML using one of the existing tools?
This seems like a bad idea. Arbitrary XML serialization doesn't sound like a good way to move forward. Any format that includes all of pickle's features is going to be ugly, verbose, and very nasty to use. It will not be simple. It will not translate well into Java.
What does your data look like?
If you tell us precisely what aspects of pickle you need (and why lxml.objectify doesn't fulfill those), we will be better able to help you.
Have you considered using JSON for your serialization? It's easy to parse, natively supports python-like data structures, and has wide-reaching support. As an added bonus, it doesn't open your code to all kinds of evil exploits the way the native pickle module does.
Honestly, you need to bite the bullet and define a format, and build a serializer using the standard XML tools, if you absolutely must use XML. Consider JSON. | 0 | 2,017 | 0 | 4 | 2010-11-09T16:10:00.000 | python,xml,serialization,pickle | XML object serialization in python, are there any alternatives to Gnosis? | 1 | 1 | 2 | 4,136,375 | 0 |
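As a quick illustration of the JSON suggestion, a sketch of round-tripping a plain Python data structure (the structure itself is made up, and note that on the Python 2.4 required by the question this would be the third-party simplejson package rather than the stdlib json module):

```python
import json

record = {"id": 42, "name": "widget", "tags": ["red", "small"], "price": 9.99}

text = json.dumps(record, indent=2)   # serialize to a string any language can parse
restored = json.loads(text)           # back to Python dicts/lists
assert restored == record
```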
1 | 0 | I would like to access any element in a web page. I know how to do that when I have a form (form = cgi.FieldStorage()), but not when I have, for example, a table.
How can I do that?
Thanks | false | 4,149,598 | 0 | 0 | 0 | 0 | You can only access data posted by a form (or passed as GET parameters).
So you can extract the data you need using JavaScript and post it through a form | 0 | 178 | 0 | 0 | 2010-11-10T22:07:00.000 | python,cgi,html-parsing | How can I access any element in a web page with Python? | 1 | 1 | 4 | 4,149,742 | 0 |
0 | 0 | I need to pickle a scapy packet. Most of the time this works, but sometimes the pickler complains about a function object. As a rule of thumb: ARP packets pickle fine. Some UDP packets are problematic. | false | 4,156,328 | 0.099668 | 0 | 0 | 3 | (This is more for reference, so no votes expected)
The Scapy list [email protected] is well-monitored and tends to be very responsive. If you don't get answers here, try there as well. | 0 | 4,367 | 0 | 12 | 2010-11-11T15:57:00.000 | python,pickle,scapy | How to pickle a scapy packet? | 1 | 1 | 6 | 4,157,378 | 0 |
0 | 0 | Using Python, how might one read a file's path from a remote server?
This is a bit more clear to me on my local PC. | false | 4,163,456 | 0 | 1 | 0 | 0 | use the os.path module to manipulate path string (you need to import os)
the current directory is os.path.abspath(os.curdir)
join 2 parts of a path with os.path.join(dirname, filename): this will take care of inserting the right path separator ('\' or '/', depending on the operating system) for building the path | 0 | 36,179 | 0 | 10 | 2010-11-12T09:58:00.000 | python | Python - how to read path file/folder from server | 1 | 1 | 2 | 4,164,507 | 0 |
1 | 0 | I have 2 pages, a static html page and a python script - hosted on [local] google app engine.
/html/hello.html
define as login: required
/broadcast
which is a python script
when I access hello.html for the first time I am redirected to login page, I sign in, and then redirected back to hello.html.
inside hello.html - an AJAX call with jQuery is executed to load data from '/broadcast', this call errors saying 'you're not logged in'!
BUT - the same call to '/broadcast' through the browser address field succeeds as if I AM signed in!
as if the ajax and the browser callers have different cookies!??
HELP, am I going bananas? | true | 4,163,748 | 1.2 | 0 | 0 | 2 | Stupid me...
The ajax call was to localhost/broadcast
and the browser address field was 127.0.0.1/broadcast
...
the cookies for "different" domains ('127.0.0.1' != 'localhost') are not shared, of course...
Then I haven't gone mad... | 0 | 90 | 0 | 0 | 2010-11-12T10:37:00.000 | javascript,python,ajax,google-app-engine,jquery | AJAX and browser GET calls appear to have different cookies | 1 | 1 | 1 | 4,164,056 | 0 |
1 | 0 | I am writing a python script to combine about 20+ RSS feeds. I would like to use a custom solution instead of feedjack or planetfeed.
I use feedparser to parse the feeds and mysql to cache them.
The problem I am running into is determining which feeds have already been cached and which haven't.
Some pseudo code for what I have tried:
create a list of all feed items
get the date of last item cached from db
check which items in my list have a date greater than my item from the db and return this filtered list
sort the returned filtered list by date the item was created
add new items to the db
I feel like this would work, but my problem is that not all of the dates on the RSS feeds I am using are correct. Sometimes a publisher, for whatever reason, will have feed items with dates in the future. If this future date gets added to the db, then it will always be greater than the date of the items in my list. So, the comparison stops working and no new items get added to the db. I would like to come up with another solution and not rely on the publishers dates.
How would some of you pros do this? Assume you have to combine multiple RSS feeds, save them to a MySQL db, and then return them ordered by date. I'm just looking for pseudo code to give me an idea of the best way to do this.
Thanks for your help. | true | 4,167,863 | 1.2 | 0 | 0 | 1 | Depending on how often the feeds are updated and how often you check, you could simply fix broken dates (if it's in the future, reset it to today), before adding them to the database.
Other than that, you'd have to use some sort of ID—I think RSS has an ID field on each item. If your feeds are kept in order, you can get the most recent cached ID, find that in the feed items list, and then add everything newer. If they're out of order, you'd have to check each one against your cache, and add it if it's missing. | 0 | 1,116 | 0 | 1 | 2010-11-12T18:29:00.000 | python | best algorithm to combine multiple RSS feeds using Python | 1 | 1 | 1 | 4,168,693 | 0 |
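A rough sketch of the ID-based approach from this answer, using feedparser entry ids (falling back to the link) instead of trusting publisher dates; already_cached is a placeholder for whatever lookup you do against the MySQL cache:

```python
import time
import feedparser

def new_entries(feed_urls, already_cached):
    # already_cached(entry_id) -> bool, e.g. a SELECT against your cache table
    fresh = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            entry_id = entry.get("id") or entry.get("link")
            if entry_id and not already_cached(entry_id):
                fresh.append(entry)
    # sort by parsed date only for display; identity comes from the id, not the date
    fresh.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0))
    return fresh
```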
0 | 0 | I am looking for tutorials and/or examples of certain components of a social network web app that may include Python code examples of:
user account auto-gen function(database)
friend/follow function (Twitter/Facebook style)
messaging/reply function (Twitter style)
live chat function (Facebook style)
blog function
public forums (like Get Satisfaction or Stack Overflow)
profile page template auto-gen function
I just want to start getting my head around how Python can be used to make these features. I am not looking for a solution like Pinax since it is built upon Django and I will be ultimately using Pylons or just straight up Python. | true | 4,173,883 | 1.2 | 1 | 0 | 5 | So you're not interested in a fixed solution but want to program it yourself, do I get that correctly? If not: Go with a fixed solution. This will be a lot of programming effort, and whatever you want to do afterwards, doing it in another framework than you intended will be a much smaller problem.
But if you're actually interested in the programming experience, and you haven't found any tutorials googling for, say "messaging python tutorial", then that's because these are large-scale projects,- if you describe a project of this size, you're so many miles above actual lines of code that the concrete programming language almost doesn't matter (or at least you don't get stuck with the details). So you need to break these things down into smaller components.
For example, the friend/follow function: How to insert stuff into a table with a user id, how to keep a table of follow-relations, how to query for a user all texts from people she's following (of course there's also some infrastructural issues if you hit >100.000 people, but you get the idea ;). Then you can ask yourself, which is the part of this which I don't know how to do in Python? If your problem, on the other hand, is breaking down the problems into these subproblems, you need to start looking for help on that, but that's probably not language specific (so you might just want to start googling for "architecture friend feed" or whatever). Also, you could ask that here (beware, each bullet point makes for a huge question in itself ;). Finally, you could get into the Pinax code (don't know it but I assume it's open source) and see how they're doing it. You could try porting some of their stuff to Pylons, for example, so you don't have to reinvent their wheel, learn how they do it, end up in the framework you wanted and maybe even create something reusable by others.
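To make the friend/follow example above concrete, here is a minimal, hypothetical sketch with plain SQL from Python (the table and column names are invented for illustration):
# assumed tables: follows(follower_id, followed_id) and posts(id, author_id, body, created_at)

def follow(cursor, follower_id, followed_id):
    cursor.execute("INSERT INTO follows (follower_id, followed_id) VALUES (%s, %s)",
                   (follower_id, followed_id))

def timeline(cursor, user_id, limit=50):
    # all posts written by people this user follows, newest first
    cursor.execute("SELECT p.* FROM posts p"
                   " JOIN follows f ON f.followed_id = p.author_id"
                   " WHERE f.follower_id = %s"
                   " ORDER BY p.created_at DESC LIMIT %s",
                   (user_id, limit))
    return cursor.fetchall()
The same pattern carries over to whatever framework or ORM you end up using.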
sorry for tl;dr, that's because I don't have a concrete URL to point you to! | 0 | 2,075 | 0 | 3 | 2010-11-13T17:49:00.000 | python,social-networking,pylons,get-satisfaction | Where can I find Python code examples, or tutorials, of social networking style functions/components? | 1 | 1 | 1 | 4,174,212 | 0 |
0 | 0 | I want to do a test load for a web page. I want to do it in python with multiple threads.
First POST request would login user (set cookies).
Then I need to know how many users the server can take doing the same POST request simultaneously.
So I'm thinking about spawning threads in which requests would be made in loop.
I have a couple of questions:
1. Is it possible to run 1000 - 1500 requests at the same time CPU wise? I mean wouldn't it slow down the system so it's not reliable anymore?
2. What about the bandwidth limitations? How good the channel should be for this test to be reliable?
The server on which the test site is hosted is an Amazon EC2 instance; the script would be run from another server (Amazon too).
Thanks! | false | 4,179,879 | 0.099668 | 1 | 0 | 1 | too many variables. 1000 at the same time... no. in the same second... possibly. bandwidth may well be the bottleneck. this is something best solved by experimentation. | 0 | 8,721 | 0 | 4 | 2010-11-14T21:46:00.000 | python,multithreading,load-testing | Python script load testing web page | 1 | 1 | 2 | 4,180,003 | 0 |
0 | 0 | I have a browser which sends utf-8 characters to my Python server, but when I retrieve it from the query string, the encoding that Python returns is ASCII. How can I convert the plain string to utf-8?
NOTE: The string passed from the web is already UTF-8 encoded, I just want to make Python to treat it as UTF-8 not ASCII. | false | 4,182,603 | 0.066568 | 0 | 0 | 4 | First, str in Python is represented in Unicode.
Second, UTF-8 is an encoding standard to encode Unicode string to bytes. There are many encoding standards out there (e.g. UTF-16, ASCII, SHIFT-JIS, etc.).
When the client sends data to your server and they are using UTF-8, they are sending a bunch of bytes not str.
You received a str because the "library" or "framework" that you are using has implicitly converted some random bytes to str.
Under the hood, there is just a bunch of bytes. You just need to ask the "library" to give you the request content in bytes and handle the decoding yourself (if the library can't give you the bytes, it is trying to do black magic and you shouldn't use it).
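A tiny round trip to make the bytes/str distinction concrete (Python 3):
raw = b'\xc3\xa9\xc3\xa9'        # the bytes as they arrive from the client
text = raw.decode('utf-8')       # -> 'éé', a str
back = text.encode('utf-8')      # -> b'\xc3\xa9\xc3\xa9' again
assert back == raw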
Decode UTF-8 encoded bytes to str: bs.decode('utf-8')
Encode str to UTF-8 bytes: s.encode('utf-8') | 1 | 817,441 | 0 | 230 | 2010-11-15T08:26:00.000 | python,python-2.7,unicode,utf-8 | How to convert a string to utf-8 in Python | 1 | 1 | 12 | 63,293,431 | 0 |
1 | 0 | I have Django app that presents a list of items that you can add comments to.
What I basically want to do is something like what Facebook does: when someone posts a comment on your item, you receive an e-mail. What I want is that when you reply to that e-mail, the reply gets posted as a comment reply on the website.
What should I use to achieve this using python as much as possible? Maybe even Django? | false | 4,183,158 | 0 | 0 | 0 | 0 | You can, for example, write a script that imports comments from a mailbox (run it from cron every 1-3 minutes).
You would connect to a special mailbox that collects replies (comments) from users.
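A rough sketch of that kind of cron job, assuming a POP3 mailbox (host, account and message format are placeholders) and using Python 3's email module:
import poplib
import email

box = poplib.POP3_SSL('pop.example.com')            # placeholder mailbox
box.user('comments@example.com')
box.pass_('secret')
count = len(box.list()[1])
for i in range(1, count + 1):
    raw = b'\n'.join(box.retr(i)[1])
    msg = email.message_from_bytes(raw)
    subject = msg.get('Subject', '')
    sender = msg.get('From', '')
    # here you would work out which item the reply belongs to
    # and insert the comment through the Django ORM
    box.dele(i)
box.quit()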
Every mail has its own header and title. You can find out which post the user is trying to comment on (by header or title), then import the Django environment and insert new records. | 0 | 422 | 0 | 2 | 2010-11-15T09:51:00.000 | python,django,email | How to post a comment on e-mail reply? | 1 | 2 | 4 | 4,183,238 | 0
1 | 0 | I have Django app that presents a list of items that you can add comments to.
What I basically want to do is something like what Facebook does: when someone posts a comment on your item, you receive an e-mail. What I want is that when you reply to that e-mail, the reply gets posted as a comment reply on the website.
What should I use to achieve this using python as much as possible? Maybe even Django? | false | 4,183,158 | -0.049958 | 0 | 0 | -1 | I think a good way is how Google+ handles it, using a + on the email address, e.g. reply+id-or [email protected]; then you must write a worker that checks the POP server and | 0 | 422 | 0 | 2 | 2010-11-15T09:51:00.000 | python,django,email | How to post a comment on e-mail reply? | 1 | 2 | 4 | 18,135,014 | 0
0 | 0 | I want to test my application's handling of timeouts when grabbing data via urllib2, and I want to have some way to force the request to time out.
Short of finding a very very slow internet connection, what method can I use?
I seem to remember an interesting application/suite for simulating these sorts of things. Maybe someone knows the link? | false | 4,188,723 | 0 | 0 | 0 | 0 | why not write a very simple CGI script in bash that just sleeps for the required timeout period? | 0 | 6,193 | 0 | 6 | 2010-11-15T20:55:00.000 | python,urllib2 | How can I force urllib2 to time out? | 1 | 1 | 5 | 4,188,773 | 0 |
0 | 0 | I am looking to write a program that searches for the tags in an xml document and changes the string between the tags from localhost to manager. The tag might appear in the xml document multiple times, and the document does have a definite path. Would python or vbscript make the most sense for this problem? And can anyone provide a template so I can get started? That would be great. Thanks. | true | 4,198,416 | 1.2 | 0 | 0 | 0 | I was able to get this to work by using the vbscript solutions provided. The reasons I hadn't committed to a Visual Basic script before was that I didn't think it was possible to execute this script remotely with PsExec. It turns out I solved this problem as well with the help of Server Fault. In case you are interested in how that works, cscript.exe is the command parameter of PsExec and the vbscript file serves as the argument of cscript. Thanks for all the help, everyone! | 0 | 1,031 | 0 | 0 | 2010-11-16T20:04:00.000 | python,xml,scripting,vbscript,batch-file | batch script or python program to edit string in xml tags | 1 | 1 | 5 | 4,230,800 | 0 |
0 | 0 | I'm currently working on a site that makes several calls to big name online sellers like eBay and Amazon to fetch prices for certain items. The issue is, currently it takes a few seconds (as far as I can tell, this time is from making the calls) to load the results, which I'd like to be more instant (~10 seconds is too much in my opinion).
I've already cached other information that I need to fetch, but that information is static. Is there a way that I can cache the prices but update them only when needed? The code is in Python and I store info in a mySQL database.
I was thinking of somehow using cron or something along those lines to update it every so often, but it would be nice if there were a simpler and less intense approach to this problem.
Thanks! | false | 4,208,989 | 0 | 0 | 0 | 0 | How are you getting the price? If you are scrapping the data from the normal HTML page using a tool such as BeautifulSoup, that may be slowing down the round-trip time. In this case, it might help to compute a fast checksum (such as MD5) from the page to see if it has changed, before parsing it. If you are using a API which gives a short XML version of the price, this is probably not an issue. | 0 | 196 | 0 | 1 | 2010-11-17T20:46:00.000 | python,mysql,html,caching | Caching online prices fetched via API unless they change | 1 | 1 | 4 | 4,210,460 | 0 |
0 | 0 | Is there a generic/automatic way in R or in python to parse xml files with its nodes and attributes, automatically generate mysql tables for storing that information and then populate those tables. | false | 4,213,696 | 0 | 0 | 0 | 0 | We do something like this at work sometimes but not in python. In that case, each usage requires a custom program to be written. We only have a SAX parser available. Using an XML decoder to get a dictionary/hash in a single step would help a lot.
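It is not automatic, but to illustrate the kind of glue code involved, here is a rough Python sketch (element names, table name and credentials are all assumptions you would supply yourself):
import xml.etree.ElementTree as ET
import MySQLdb

conn = MySQLdb.connect(user='user', passwd='pass', db='mydb')     # placeholder credentials
cur = conn.cursor()
tree = ET.parse('data.xml')                                        # placeholder file
for record in tree.getroot().findall('record'):                    # assumed element name
    # the tag -> column mapping is something you have to decide by hand
    cur.execute("INSERT INTO items (name, price) VALUES (%s, %s)",
                (record.findtext('name'), record.findtext('price')))
conn.commit()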
At the very least you'd have to tell it which tags map to which tables and fields, no pre-existing lib can know that... | 0 | 2,008 | 0 | 3 | 2010-11-18T10:15:00.000 | python,mysql,xml,r | Parsing an xml file and storing it into a database | 1 | 3 | 4 | 4,214,098 | 0
0 | 0 | Is there a generic/automatic way in R or in python to parse xml files with its nodes and attributes, automatically generate mysql tables for storing that information and then populate those tables. | false | 4,213,696 | 0.049958 | 0 | 0 | 1 | There's the XML package for reading XML into R, and the RMySQL package for writing data from R into MySQL.
Between the two there's a lot of work. XML surpasses the scope of a RDBMS like MySQL so something that could handle any XML thrown at it would be either ridiculously complex or trivially useless. | 0 | 2,008 | 0 | 3 | 2010-11-18T10:15:00.000 | python,mysql,xml,r | Parsing an xml file and storing it into a database | 1 | 3 | 4 | 4,214,476 | 0 |
0 | 0 | Is there a generic/automatic way in R or in python to parse xml files with its nodes and attributes, automatically generate mysql tables for storing that information and then populate those tables. | false | 4,213,696 | 0.197375 | 0 | 0 | 4 | They're three separate operations: parsing, table creation, and data population. You can do all three with python, but there's nothing "automatic" about it. I don't think it's so easy.
For example, XML is hierarchical and SQL is relational, set-based. I don't think it's always so easy to get a good relational schema for every single XML stream you can encounter. | 0 | 2,008 | 0 | 3 | 2010-11-18T10:15:00.000 | python,mysql,xml,r | Parsing an xml file and storing it into a database | 1 | 3 | 4 | 4,213,749 | 0 |
0 | 0 | I wrote a program that needs to authenticate users using their Linux usernames and passwords. I think it should be done with PAM. I have tried searching Google for a PAM module for python3, but I did not find any. Are there ready-to-use PAM libraries, or should I try to make my own? Are there special security risks with PAM usage that should be taken into account?
I know that I can authenticate users with python3 spwd class but I dont want to use that, because then I have to run my program with root access. | false | 4,216,163 | 0.099668 | 0 | 0 | 1 | +che
the python pam module you linked to is not python3 compatible. there are three pam modules that i'm aware of {'pam', 'pypam', 'spypam'}, and none are py3 compatible.
i've modified Chris AtLee's original pam package to work with python3. cleaning it up a bit before feeding back to him | 0 | 2,466 | 0 | 2 | 2010-11-18T15:03:00.000 | python,linux,authentication,python-3.x,pam | Authenticate user in linux with python 3 | 1 | 1 | 2 | 8,396,713 | 0 |
0 | 0 | I am working on a project that requires me to create multiple threads to download a large remote file. I have done this already, but I cannot understand why it takes a longer amount of time to download the file with multiple threads compared to using just a single thread. I used my xampp localhost to carry out the time-elapsed test. I would like to know if this is normal behaviour or if it is because I have not tried downloading from a real server.
Thanks
Kennedy | true | 4,219,134 | 1.2 | 1 | 0 | 4 | 9 women can't combine to make a baby in one month. If you have 10 threads, they each have only 10% the bandwidth of a single thread, and there is the additional overhead for context switching, etc. | 1 | 1,871 | 0 | 0 | 2010-11-18T20:15:00.000 | python,multithreading,download,urllib2 | Python/Urllib2/Threading: Single download thread faster than multiple download threads. Why? | 1 | 2 | 3 | 4,219,434 | 0 |
0 | 0 | I am working on a project that requires me to create multiple threads to download a large remote file. I have done this already, but I cannot understand why it takes a longer amount of time to download the file with multiple threads compared to using just a single thread. I used my xampp localhost to carry out the time-elapsed test. I would like to know if this is normal behaviour or if it is because I have not tried downloading from a real server.
Thanks
Kennedy | false | 4,219,134 | 0.066568 | 1 | 0 | 1 | Twisted uses non-blocking I/O; that means if data is not available on a socket right now it doesn't block the entire thread, so you can handle many socket connections waiting for I/O in one thread simultaneously. But if you are doing something other than I/O (parsing large amounts of data) you still block the thread.
When you're using stdlib's socket module it does blocking I/O; that means when you call socket.read and data is not available at the moment, it will block the entire thread, so you need one thread per connection to handle concurrent downloads.
These are two approaches to concurrency:
Fork new thread for new connection (threading + socket from stdlib).
Multiplex I/O and handle many connections in one thread (Twisted). | 1 | 1,871 | 0 | 0 | 2010-11-18T20:15:00.000 | python,multithreading,download,urllib2 | Python/Urllib2/Threading: Single download thread faster than multiple download threads. Why? | 1 | 2 | 3 | 4,222,497 | 0
0 | 0 | I am looking for a python snippet to read an internet radio stream(.asx, .pls etc) and save it to a file.
The final project is cron'ed script that will record an hour or two of internet radio and then transfer it to my phone for playback during my commute. (3g is kind of spotty along my commute)
any snippits or pointers are welcome. | false | 4,247,248 | 0.099668 | 1 | 0 | 3 | I am aware this is a year old, but this is still a viable question, which I have recently been fiddling with.
Most internet radio stations will give you a choice of download type; I chose the MP3 version, then read the data from a raw socket and write it to a file. The trick is figuring out how fast your download is compared to playing the song so you can create a balance on the read/write size. This would be in your buffer def.
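A bare-bones illustration of that read-and-write loop (the stream URL is a placeholder, and real stations differ in how they hand out the MP3 stream):
import time
import urllib2

stream = urllib2.urlopen('http://radio.example.com/stream.mp3')   # placeholder URL
end = time.time() + 60 * 60                                       # record for one hour
out = open('recording.mp3', 'wb')
while time.time() < end:
    chunk = stream.read(4096)         # balance this read size against the playback rate
    if not chunk:
        break
    out.write(chunk)
out.close()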
Now that you have the file, it is fine to simply leave it on your drive (record), but most players will delete the already-played chunk from the file and clear the file off the drive and RAM when streaming is stopped.
I have used some code snippets from a file-archive-without-compression app to handle a lot of the file handling, playing, and buffering magic. It's very similar in how the process flows. If you write up some pseudo-code (which I highly recommend) you can see the similarities. | 0 | 16,163 | 0 | 12 | 2010-11-22T15:47:00.000 | python,stream,audio-streaming,radio | Record streaming and saving internet radio in python | 1 | 1 | 6 | 13,279,976 | 0
0 | 0 | For a research project, I am collecting tweets using Python-Twitter. However, when running our program nonstop on a single computer for a week we manage to collect about only 20 MB of data per week. I am only running this program on one machine so that we do not collect the same tweets twice.
Our program runs a loop that calls getPublicTimeline() every 60 seconds. I tried to improve this by calling getUserTimeline() on some of the users that appeared in the public timeline. However, this consistently got me banned from collecting tweets at all for about half an hour each time. Even without the ban, it seemed that there was very little speed-up by adding this code.
I know about Twitter's "whitelisting" that allows a user to submit more requests per hour. I applied for this about three weeks ago, and have not hear back since, so I am looking for alternatives that will allow our program to collect tweets more efficiently without going over the standard rate limit. Does anyone know of a faster way to collect public tweets from Twitter? We'd like to get about 100 MB per week.
Thanks. | false | 4,249,684 | 0.066568 | 1 | 0 | 1 | I did a similar project analyzing data from tweets. If you're just going at this from a pure data collection/analysis angle, you can just scrape any of the better sites that collect these tweets for various reasons. Many sites allow you to search by hashtag, so throw in a popular enough hashtag and you've got thousands of results. I just scraped a few of these sites for popular hashtags, collected these into a large list, queried that list against the site, and scraped all of the usable information from the results. Some sites also allow you to export the data directly, making this task even easier. You'll get a lot of garbage results that you'll probably need to filter (spam, foreign language, etc), but this was the quickest way that worked for our project. Twitter will probably not grant you whitelisted status, so I definitely wouldn't count on that. | 0 | 5,951 | 0 | 5 | 2010-11-22T20:02:00.000 | python,twitter,python-twitter | How to Collect Tweets More Quickly Using Twitter API in Python? | 1 | 1 | 3 | 4,250,479 | 0 |
0 | 0 | Question: Where is a good starting point for learning to write server applications?
Info:
I'm looking into writing a distributed computing system to harvest the idle cycles of the couple hundred computers sitting idle around my college's campus. There are systems that come close, but don't quite meet all the requirements I need (most notably, all transactions have to be made through SSH because the network blocks everything else). So I've decided to write my own application, partly to get exactly what I want, but also for the experience.
Important features:
Written in python
All transactions made through ssh (this is solved through the simple use of pexpect)
Server needs to be able to take potentially hundreds of hits. I'll optimize later, the point being simulation sessions.
I feel like those aren't too ridiculous things to try and accomplish. But with the last one I'm not certain where to even start. I've actually already accomplished the first 2 and written a program that will log into my server and then print ls -l to a file locally, so that isn't hard. But how do I attach several clients asking the server for simulation data to crunch, all at the same time? Obviously it feels like threading comes into play here, but more than that I'm sure.
This is where my problem is. Where does one even start researching how to write server applications? Am I even using the right wording? What information is there freely available on the internet and/or what books are there on such? again, specifically python, but a step in the right direction is one more than where i am now.
p.s. this seeemed more fitting for stackoverflow than serverfault. Correct me if I am wrong. | false | 4,253,557 | 0 | 0 | 0 | 0 | Here's an approach.
Write an "agent" in Python. The agent is installed on the various computers. It does whatever processing your need locally. It uses urllib2 to make RESTful HTTP requests of the server. It either posts data or requests work to do or whatever is supposed to go on.
Write a "server" in Python. The server is installed on one computer. This is written using wsgiref and is a simple WSGI-based server that serves requests from the various agents scattered around campus.
While this requires agent installation, it's very, very simple. It can be made very, very secure (use HTTP Digest Authentication). And the agent's privileges define the level of vulnerability. If the agent is running in an account with relatively few privileges, it's quite safe. The agent shouldn't run as root and the agent's account should not be allowed to su or sudo. | 0 | 186 | 1 | 4 | 2010-11-23T07:04:00.000 | python | where to start programing a server application | 1 | 1 | 3 | 4,255,361 | 0 |
0 | 0 | We have a Rest API that requires client certificate authentication. The API is used by this collection of python scripts that a user can run. To make it so that the user doesn't have to enter their password for their client certificate every time they run one of the scripts, we've created this broker process in java that a user can startup and run in the background which holds the user's certificate password in memory (we just have the javax.net.ssl.keyStorePassword property set in the JVM). The scripts communicate with this process and the process just forwards the Rest API calls to the server (adding the certificate credentials).
To do the IPC between the scripts and the broker process we're just using a socket. The problem is that the socket opens up a security risk in that someone could use the Rest API using another person's certificate by communicating through the broker process port on the other person's machine. We've mitigated the risk somewhat by using java security to only allow connections to the port from localhost. I think though someone in theory could still do it by remotely connecting to the machine and then using the port. Is there a way to further limit the use of the port to the current windows user? Or maybe is there another form of IPC I could use that can do authorization using the current windows user?
We're using Java for the broker process just because everyone on our team is much more familiar with Java than python but it could be rewritten in python if that would help.
Edit: Just remembered the other reason for using java for the broker process is that we are stuck with using python v2.6 and at this version https with client certificates doesn't appear to be supported (at least not without using a 3rd party library). | true | 4,257,038 | 1.2 | 0 | 0 | 0 | The most simple approach is to use cookie-based access control. Have a file in the user's profile/homedirectory which contains the cookie. Have the Java server generate and save the cookie, and have the Python client scripts send the cookie as the first piece of data on any TCP connection.
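On the Python side, the client half of that idea might look roughly like this (the cookie file location, port and wire format are all assumptions):
import os
import socket

cookie_path = os.path.expanduser('~/.broker_cookie')    # hypothetical cookie file
cookie = open(cookie_path).read().strip()

sock = socket.create_connection(('127.0.0.1', 9000))    # the broker's local port
sock.sendall(cookie + '\n')                             # cookie first, then the actual request
sock.sendall('GET /rest/some/resource\n')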
This is secure as long as an adversary cannot get the cookie, which then should be protected by file system ACLs. | 0 | 521 | 0 | 0 | 2010-11-23T14:33:00.000 | java,python,windows,ipc,ssl-certificate | IPC on Windows between Java and Python secured to the current user | 1 | 2 | 2 | 4,257,229 | 0 |
0 | 0 | We have a Rest API that requires client certificate authentication. The API is used by this collection of python scripts that a user can run. To make it so that the user doesn't have to enter their password for their client certificate every time they run one of the scripts, we've created this broker process in java that a user can startup and run in the background which holds the user's certificate password in memory (we just have the javax.net.ssl.keyStorePassword property set in the JVM). The scripts communicate with this process and the process just forwards the Rest API calls to the server (adding the certificate credentials).
To do the IPC between the scripts and the broker process we're just using a socket. The problem is that the socket opens up a security risk in that someone could use the Rest API using another person's certificate by communicating through the broker process port on the other person's machine. We've mitigated the risk somewhat by using java security to only allow connections to the port from localhost. I think though someone in theory could still do it by remotely connecting to the machine and then using the port. Is there a way to further limit the use of the port to the current windows user? Or maybe is there another form of IPC I could use that can do authorization using the current windows user?
We're using Java for the broker process just because everyone on our team is much more familiar with Java than python but it could be rewritten in python if that would help.
Edit: Just remembered the other reason for using java for the broker process is that we are stuck with using python v2.6 and at this version https with client certificates doesn't appear to be supported (at least not without using a 3rd party library). | false | 4,257,038 | 0 | 0 | 0 | 0 | I think I've come up with a solution inspired by Martin's post above. When the broker process starts up I'll create an mini http server listening on the IPC port. Also during startup I'll write a file containing a randomly generated password (that's different every startup) to the user's home directory so that only the user can read the file (or an administrator but I don't think I need to worry about that). Then I'll lock down the IPC port by requiring all http requests sent there to use the password. It's a bit Rube Goldberg-esque but I think it will work. | 0 | 521 | 0 | 0 | 2010-11-23T14:33:00.000 | java,python,windows,ipc,ssl-certificate | IPC on Windows between Java and Python secured to the current user | 1 | 2 | 2 | 4,271,608 | 0 |
0 | 0 | I'm using python with urllib2 & cookielib and such to open a url. This url set's one cookie in it's header and two more in the page with some javascript. It then redirects to a different page.
I can parse out all the relevant info for the cookies being set with the javascript, but I can't for the life of me figure out how to get them into the cookie-jar as cookies.
Essentially, when I follow the redirect to the other site, those two cookies have to be accessible by that site.
To be very specific, I'm trying to log in to gomtv.net by using their "login in with a Twitter account" feature in python.
Anyone? | false | 4,258,278 | 0 | 1 | 0 | 0 | You can't set cookies for another domain - browsers will not allow it. | 0 | 255 | 0 | 2 | 2010-11-23T16:25:00.000 | python,authentication,cookies,cookielib | How do I manually put cookies in a jar? | 1 | 1 | 1 | 4,258,354 | 0 |
1 | 0 | I have some URLs to parse, and they use some javascript to create them dynamically. So if I want to parse the resulting generated page with python... how can I do that?
Firefox does that well with Web Developer... so I think it's possible... but I don't know where to start...
Thx for help
lo | false | 4,264,076 | 0 | 0 | 0 | 0 | If you want the generated source you'll need a browser; I don't think you can do it with only python. | 0 | 212 | 0 | 2 | 2010-11-24T06:35:00.000 | javascript,python,parsing,url,dynamically-generated | How to see generated source from an URL page with python script and not anly source? | 1 | 2 | 2 | 4,264,223 | 0
1 | 0 | I have some URLs to parse, and they use some javascript to create them dynamically. So if I want to parse the resulting generated page with python... how can I do that?
Firefox does that well with Web Developer... so I think it's possible... but I don't know where to start...
Thx for help
lo | true | 4,264,076 | 1.2 | 0 | 0 | 2 | I've done this by doing a POST of document.body.innerHTML, after the page is loaded, to a CGI script in Python.
For the parsing, BeautifulSoup is a good choice. | 0 | 212 | 0 | 2 | 2010-11-24T06:35:00.000 | javascript,python,parsing,url,dynamically-generated | How to see generated source from an URL page with python script and not anly source? | 1 | 2 | 2 | 4,264,239 | 0 |
0 | 0 | I know it's possible to open up specific URL's with python's webbrowser module. Is it possible to use strings as search queries with it, or another module? Say in an engine like Google or Yahoo? | true | 4,265,580 | 1.2 | 0 | 0 | 1 | Of course it's possible - they're just GET requests. So long as you format the URL properly with the query string correct and all (http://google.com/search?q=query - look at the site to see what it needs to be), it'll work fine. It's just a URL. | 0 | 830 | 0 | 0 | 2010-11-24T10:09:00.000 | python,browser,search-engine | Python Web Search | 1 | 1 | 3 | 4,266,457 | 0 |
0 | 0 | Is it possible to write a peer-to-peer chat application in Python?
I am thinking of this from a hobbyist project point-of-view. Can two machines connect to each other directly without involving a server? I have always wondered this, but never actually seen it implemented anywhere so I am thinking there must be a catch somewhere.
PS: I intend to learn Twisted, so if that is involved, it would be an added advantage! | false | 4,269,287 | 0.197375 | 0 | 0 | 3 | Yes, each computer (as long as their on the same network) can establish a server instance with inbound and outbound POST/GET. | 0 | 2,413 | 0 | 2 | 2010-11-24T16:47:00.000 | python,twisted | Writing a P2P chat application in Python | 1 | 3 | 3 | 4,269,340 | 0 |
0 | 0 | Is it possible to write a peer-to-peer chat application in Python?
I am thinking of this from a hobbyist project point-of-view. Can two machines connect to each other directly without involving a server? I have always wondered this, but never actually seen it implemented anywhere so I am thinking there must be a catch somewhere.
PS: I intend to learn Twisted, so if that is involved, it would be an added advantage! | false | 4,269,287 | 0 | 0 | 0 | 0 | I think I am way too late in putting my two bits in here; I accidentally stumbled upon this as I was also searching along similar lines. I think you can do this fairly easily using just sockets; however, as mentioned above, one of the machines would have to act like a server, to which the other will connect.
I am not familiar with Twisted, but I did achieve this using just sockets. But yes, even I am curious to know how you would achieve peer-to-peer chat communication if there are multiple clients connected to a server. Creating a chat-room kind of app is easy, but I am having a hard time thinking about how to handle peer-to-peer connections. | 0 | 2,413 | 0 | 2 | 2010-11-24T16:47:00.000 | python,twisted | Writing a P2P chat application in Python | 1 | 3 | 3 | 49,536,752 | 0
0 | 0 | Is it possible to write a peer-to-peer chat application in Python?
I am thinking of this from a hobbyist project point-of-view. Can two machines connect to each other directly without involving a server? I have always wondered this, but never actually seen it implemented anywhere so I am thinking there must be a catch somewhere.
PS: I intend to learn Twisted, so if that is involved, it would be an added advantage! | true | 4,269,287 | 1.2 | 0 | 0 | 5 | Yes. You can do this pretty easily with Twisted. Just have one of the peers act like a server and the other one act like a client. In fact, the twisted tutorial will get you most of the way there.
The only problem you're likely to run into is firewalls. Most people run their home machines behind SNAT routers, which make it tougher to connect directly to them from outside. You can get around it with port forwarding though. | 0 | 2,413 | 0 | 2 | 2010-11-24T16:47:00.000 | python,twisted | Writing a P2P chat application in Python | 1 | 3 | 3 | 4,269,328 | 0 |
0 | 0 | I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up a clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api/library? | true | 4,287,941 | 1.2 | 1 | 0 | 27 | As far as I know, there isn't any way of doing this. That's nothing to do with Python, but because AMQP doesn't define any method of queue discovery.
In any case, in AMQP it's clients (consumers) that declare queues: publishers publish messages to an exchange with a routing key, and consumers determine which queues those routing keys go to. So it does not make sense to talk about queues in the absence of consumers. | 0 | 70,261 | 0 | 42 | 2010-11-26T19:06:00.000 | python,rabbitmq,amqp | How can I list or discover queues on a RabbitMQ exchange using python? | 1 | 2 | 8 | 4,288,304 | 0 |
0 | 0 | I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up a clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api/library? | false | 4,287,941 | 0.049958 | 1 | 0 | 2 | Management features are due in a future version of AMQP. So for now you will have to wait till for a new version that will come with that functionality. | 0 | 70,261 | 0 | 42 | 2010-11-26T19:06:00.000 | python,rabbitmq,amqp | How can I list or discover queues on a RabbitMQ exchange using python? | 1 | 2 | 8 | 4,289,172 | 0 |
0 | 0 | My code was working correctly until yesterday and I was able to fetch tweets from GetSearch(), but now it is returning an empty list, even though I have checked that my credentials are correct.
Has something changed recently?
Thank you | true | 4,291,319 | 1.2 | 1 | 0 | 0 | They might have a limit of requests in a certain amount of time or they had a failure on the system. You can ask for new credentials to see if the problem was the first one and try getting the tweets with them. | 0 | 297 | 0 | 0 | 2010-11-27T11:08:00.000 | python-twitter | python-twitter GetSearch giving empty list | 1 | 1 | 1 | 4,291,535 | 0 |
1 | 0 | My goal:
I want to host a folder of photos, but if at anytime 100 files are being downloaded, I want to redirect a new downloader/request to a 'waiting page' and give them a place in line and an approximate countdown clock until its their turn to download their requested content. Then either redirect them directly to the content, or (ideally) give them a button (token,expiring serial number) they can click that will take them to the content when they are ready.
I've seen sites do something similar to this, such as rapidshare, but I have not seen an open-source example of this type of setup. I would think it would be combining several technologies and modifying request headers?
Any help/ideas would be greatly appreciated! | false | 4,295,823 | 0 | 0 | 0 | 0 | The Twisted network engine is about the best answer for you. What you can have is the downloader serving a maximum of 100 people; when the queue is full you direct people to a holding loop. In the holding loop they wait x seconds, check if the queue is full, check they have not expired, see who else is waiting, and if their ticket was here first, jump to the top of the download queue. As a TCP/IP connection comes in on Twisted, the level of control over your clients is so insane that you can do some mighty and powerful things in weird and wonderful ways; now imagine building this into a scalable and interactive Twisted HTTP server where you keep that level of control but can actually serve resources.
The simplest way to get away with it is probably a pool of tickets, when a download is complete the downloader returns the ticket to the pool for someone else to take, if there are no tickets wait your turn. | 0 | 62 | 0 | 1 | 2010-11-28T07:35:00.000 | c#,php,python,apache,nginx | Controlling rate of downloads on a per request and/or per resource basis (and providing a first-come-first-serve waiting system) | 1 | 1 | 1 | 4,297,198 | 0 |
1 | 0 | I have to upload a webpage to a CDN, say test.html, test.css, image1.jpg etc. Now I am uploading all these files one by one, which I think is not efficient. So, is it possible to keep all these files in a folder and then upload this folder to the CDN? If yes, what parameters do I need to take care of? Would zipping the folder help? I am using python.
Thanks in Advance | false | 4,299,324 | 0 | 0 | 0 | 0 | I think you are trying to upload the static content of your website (not the user uploaded files) to CDN via FTP client or something similar.
To achieve bulk upload you may ZIP all such files on the local machine and upload the archive to your webserver. Unzip the files on the webserver and write a batch script which utilizes the CDN API to send the files to the CDN container.
For future new or modified files, write another batch script to grab all new/modified files and send them to the CDN container via the CDN API. | 0 | 1,114 | 0 | 0 | 2010-11-28T21:55:00.000 | python,cdn | How to upload multiple files on cdn? | 1 | 1 | 2 | 18,712,080 | 0
0 | 0 | I feel stacked here trying to change encodings with Python 2.5
I have XML response, which I encode to UTF-8: response.encode('utf-8'). That is fine, but the program which uses this info doesn't like this encoding and I have to convert it to other code page. Real example is that I use ghostscript python module to embed pdfmark data to a PDF file - end result is with wrong characters in Acrobat.
I've done numerous combinations with .encode() and .decode() between 'utf-8' and 'latin-1' and it drives me crazy as I can't output correct result.
If I output the string to a file with .encode('utf-8') and then convert this file from UTF-8 to CP1252 (aka latin-1) with i.e. iconv.exe and embed the data everything is fine.
Basically can someone help me convert i.e. character á which is UTF-8 encoded as hex: C3 A1 to latin-1 as hex: E1?
Thanks in advance | true | 4,299,802 | 1.2 | 0 | 0 | 23 | Instead of .encode('utf-8'), use .encode('latin-1'). | 1 | 144,526 | 0 | 21 | 2010-11-28T23:37:00.000 | python,encoding | Python: convert string from UTF-8 to Latin-1 | 1 | 2 | 4 | 4,299,809 | 0 |
0 | 0 | I feel stacked here trying to change encodings with Python 2.5
I have XML response, which I encode to UTF-8: response.encode('utf-8'). That is fine, but the program which uses this info doesn't like this encoding and I have to convert it to other code page. Real example is that I use ghostscript python module to embed pdfmark data to a PDF file - end result is with wrong characters in Acrobat.
I've done numerous combinations with .encode() and .decode() between 'utf-8' and 'latin-1' and it drives me crazy as I can't output correct result.
If I output the string to a file with .encode('utf-8') and then convert this file from UTF-8 to CP1252 (aka latin-1) with i.e. iconv.exe and embed the data everything is fine.
Basically can someone help me convert i.e. character á which is UTF-8 encoded as hex: C3 A1 to latin-1 as hex: E1?
Thanks in advance | false | 4,299,802 | 0 | 0 | 0 | 0 | If the previous answers do not solve your problem, check the source of the data that won't print/convert properly.
In my case, I was using json.load on data incorrectly read from file by not using the encoding="utf-8". Trying to de-/encode the resulting string to latin-1 just does not help... | 1 | 144,526 | 0 | 21 | 2010-11-28T23:37:00.000 | python,encoding | Python: convert string from UTF-8 to Latin-1 | 1 | 2 | 4 | 32,096,180 | 0 |
0 | 0 | I am writing a script which will run on my server. Its purpose is to download the document. If any person hit the particular url he/she should be able to download the document. I am using urllib.urlretrieve but it download document on the server side not on the client. How to download in python at client side? | true | 4,311,347 | 1.2 | 0 | 0 | 2 | If the script runs on your server, its purpose is to serve a document, not to download it (the latter would be the urllib solution).
Depending on your needs you can:
Set up static file serving with e.g. Apache
Make the script execute on a certain URL (e.g. with mod_wsgi), then the script should set the Content-Type (provides document type such as "text/plain") and Content-Disposition (provides download filename) headers and send the document data
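For the second option, a minimal WSGI sketch (the file path and name are placeholders):
def app(environ, start_response):
    data = open('/srv/files/report.pdf', 'rb').read()   # placeholder file
    start_response('200 OK', [
        ('Content-Type', 'application/pdf'),
        ('Content-Disposition', 'attachment; filename="report.pdf"'),
        ('Content-Length', str(len(data)))])
    return [data]
# hook this app up under mod_wsgi (or run it with wsgiref while testing)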
As your question is not more specific, this answer can't be either. | 0 | 322 | 0 | 0 | 2010-11-30T07:10:00.000 | python | how to download in python | 1 | 3 | 4 | 4,311,727 | 0 |
0 | 0 | I am writing a script which will run on my server. Its purpose is to download the document. If any person hit the particular url he/she should be able to download the document. I am using urllib.urlretrieve but it download document on the server side not on the client. How to download in python at client side? | false | 4,311,347 | 0.049958 | 0 | 0 | 1 | If the document is on your server and your intention is that the user should be able to download this file, couldn't you just serve the url to that resource as a hyperlink in your HTML code. Sorry if I have been obtuse but this seems the most logical step given your explanation. | 0 | 322 | 0 | 0 | 2010-11-30T07:10:00.000 | python | how to download in python | 1 | 3 | 4 | 4,311,383 | 0 |
0 | 0 | I am writing a script which will run on my server. Its purpose is to download the document. If any person hit the particular url he/she should be able to download the document. I am using urllib.urlretrieve but it download document on the server side not on the client. How to download in python at client side? | false | 4,311,347 | 0.049958 | 0 | 0 | 1 | Set the appropriate Content-type header, then send the file contents. | 0 | 322 | 0 | 0 | 2010-11-30T07:10:00.000 | python | how to download in python | 1 | 3 | 4 | 4,311,378 | 0 |
0 | 0 | I have got a url in this form - http:\\/\\/en.wikipedia.org\\/wiki\\/The_Truman_Show. How can I make it normal url. I have tried using urllib.unquote without much success.
I can always use regular expressions or some simple string replace stuff. But I believe that there is a better way to handle this... | false | 4,312,197 | 1 | 0 | 0 | 11 | urllib.unquote is for replacing %xx escape codes in URLs with the characters they represent. It won't be useful for this.
Your "simple string replace stuff" is probably the best solution. | 1 | 8,233 | 0 | 4 | 2010-11-30T09:29:00.000 | python,url | Python unescape URL | 1 | 1 | 3 | 4,312,223 | 0 |
1 | 0 | I am trying to do WSDL SOAP connection to our JIRA server using SOAPpy (Python SOAP Library).
All seems to be fine except when I try finding specific issues. Through the web browser looking up the bug ID actually redirects to a bug (with a different ID), however it is the bug in question just moved to a different project.
Attempts to getIssue via the SOAPpy API results in an exception that the issue does not exist.
Any way around this?
Thanks | true | 4,320,135 | 1.2 | 0 | 0 | 2 | Yes, there's an existing bug on this I've seen. Use the JIRA issue id instead of the key to locate it, as a workaround. | 0 | 281 | 0 | 3 | 2010-12-01T00:25:00.000 | python,soap,wsdl,jira | Python JIRA SOAPpy annoying redirect on findIssue | 1 | 1 | 1 | 4,350,861 | 0 |
0 | 0 | I have an lxml object called item and it may have a child called item.brand, however it's possible that there is none as this is returned from an API. How can I check this in Python? | true | 4,339,708 | 1.2 | 0 | 0 | 4 | Try hasattr(). | 1 | 706 | 0 | 0 | 2010-12-02T20:51:00.000 | python | How can I see if a child exists in Python? | 1 | 1 | 1 | 4,339,741 | 0 |
0 | 0 | My app opens a TCP socket and waits for data from other users on the network using the same application. At the same time, it can broadcast data to a specified host on the network.
Currently, I need to manually enter the IP of the destination host to be able to send data. I want to be able to find a list of all hosts running the application and have the user pick which host to broadcast data to.
Is Bonjour/ZeroConf the right route to go to accomplish this? (I'd like it to cross-platform OSX/Win/*Nix) | false | 4,343,575 | 0.099668 | 0 | 0 | 2 | Zeroconf/DNS-SD is an excellent idea in this case. It's provided by Bonjour on OS X and Windows (but must be installed separately or as part of an Apple product on Windows), and by Avahi on FOSS *nix. | 0 | 1,210 | 1 | 2 | 2010-12-03T08:07:00.000 | python,networking,bonjour,zeroconf | Proper way to publish and find services on a LAN using Python | 1 | 1 | 4 | 4,343,600 | 0 |
0 | 0 | I want to use python urllib2 to simulate a login action, I use Fiddler to catch the packets and got that the login action is just an ajax request and the username and password is sent as json data, but I have no idea how to use urllib2 to send json data, help... | false | 4,348,061 | 1 | 0 | 0 | 21 | For Python 3.x
Note the following
In Python 3.x the urllib and urllib2 modules have been combined. The module is named urllib. So, remember that urllib in Python 2.x and urllib in Python 3.x are DIFFERENT modules.
The POST data for urllib.request.Request in Python 3 does NOT accept a string (str) -- you have to pass a bytes object (or an iterable of bytes)
Example
pass json data with POST in Python 3.x
import urllib.request
import json
json_dict = { 'name': 'some name', 'value': 'some value' }
# convert json_dict to JSON
json_data = json.dumps(json_dict)
# convert str to bytes (ensure encoding is OK)
post_data = json_data.encode('utf-8')
# we should also set the JSON content type header
headers = {}
headers['Content-Type'] = 'application/json'
# now do the request for a url
req = urllib.request.Request(url, post_data, headers)
# send the request
res = urllib.request.urlopen(req)
# res is a file-like object
# ...
Finally note that you can ONLY send a POST request if you have SOME data to send.
If you want to do an HTTP POST without sending any data, you should send an empty dict as data.
data_dict = {}
post_data = json.dumps(data_dict).encode()
req = urllib.request.Request(url, post_data)
res = urllib.request.urlopen(req) | 1 | 32,746 | 0 | 16 | 2010-12-03T17:15:00.000 | python,json,urllib2 | How to use python urllib2 to send json data for login | 1 | 1 | 4 | 7,469,725 | 0 |
1 | 0 | I am using a <video> tag in my html. I am trying to handle the request on the server side using python BaseHTTPServer. I want to figure out what the request from the video tag looks like. | false | 4,348,707 | 0.099668 | 0 | 0 | 1 | It will be a simple GET request, just like any other resource embedded in an HTML document.
If you really want to examine exactly what browsers send, then use something like Charles or the Net tab of Firebug. | 0 | 513 | 0 | 0 | 2010-12-03T18:37:00.000 | python,html | is Video tag in html a POST request or GET request? | 1 | 2 | 2 | 4,348,712 | 0 |
1 | 0 | I am using a <video> tag in my html. I am trying to handle the request on the server side using python BaseHTTPServer. I want to figure out what the request from the video tag looks like. | false | 4,348,707 | 0 | 0 | 0 | 0 | POST is usually reserved for form submissions because you are POSTing form information to the server. In this case you are just GETing the contents of a <video> source. | 0 | 513 | 0 | 0 | 2010-12-03T18:37:00.000 | python,html | is Video tag in html a POST request or GET request? | 1 | 2 | 2 | 4,348,815 | 0
0 | 0 | Is it possible to limit the response size with httplib2? For instance if it sees an HTTP body over X bytes the connection will just close without consuming more bandwidth. Or perhaps only download the first X bytes of a file. | true | 4,362,721 | 1.2 | 0 | 0 | 5 | Assuming that the server is sending the response body size in the Content-Length response header field, you can do it yourself.
First, call Http.request(method="HEAD") to retrieve only the headers and not the body. Then inspect the Content-Length field of the response to see if it is below your threshold. If it is, make a second request with the proper GET or POST method to retrieve the body; otherwise produce an error.
If the server isn't giving you the Content-Length (or is lying about it), it doesn't look like there is a way to cut off the download after some number of bytes. | 0 | 720 | 0 | 4 | 2010-12-06T02:37:00.000 | python,http | Limiting response size with httplib2 | 1 | 1 | 1 | 4,362,770 | 0 |
0 | 0 | I'm looking for a good framework to use for a soap service. I'd prefer to use a pythonic framework, but after looking at soaplib/rpclib (too unstable), SOAPy (doesn't work with 2.7) and ZSI (too...confusing), I'm not sure that's possible.
I'm fine with it being in another language, though I'm hesitant to use php's soap libraries due to some previous issues I've had with php.
Yes, I would very much like it to be SOAP as this is destined to primarily provide data to a Silverlight client, and VS makes it dead simple to work with soap services. And no, it can't be a WCF service as all of the hosts are linux-based.
Much appreciated. | false | 4,371,139 | 0.099668 | 1 | 0 | 1 | I have used Spring WS, JAVA in my previous project. It worked well, without any glitches. We served more than a million API request a day. | 0 | 1,264 | 0 | 3 | 2010-12-06T21:29:00.000 | python,silverlight,web-services,soap | What is a good framework for a soap service? | 1 | 1 | 2 | 4,371,795 | 0 |
0 | 0 | I have been working on some custom shared Internet calendar software. I currently have a webdav server setup using apache and my software (using python) and right now it works great with Thunderbird and the Lightning plugin, I can subscribe to an icalendar and edit events with no problem. However I've run into a snag with Outlook 2007. I can currently read an icalendar but it sets that calendar in Outlook 2007 to read only. Doing some searching I've come across some findings saying that setting up some webdav server stuff on a Windows machine I can get the machine to tell Outlook 2007 that the calendar can be editted as well (basically turn off the read only and allow that icalendar to be published). I'm currently trying to set the server up to work with but thought I might ask SO to speed up my research a bit.
My question basically is, is there some header information or something else that I can send in my response back to Outlook to let it know an Internet calendar has write privileges? I know in general it is controlled by the client whether an icalendar can be written to since I can read and write these same calendars just fine in Thunderbird.
Additionally, I have heard this read/write problem with Internet calendars have been solved in Outlook 2010 but upgrading to that is not an option. | true | 4,378,924 | 1.2 | 0 | 0 | 1 | Microsoft Outlook 2007 seems to have no ability to allow writing to an Internet Calendar. Icalendars are set to read only. You can publish a calendar to a webdav to create your own icalendar but that calendar (in your Outlook '07) would never update if someone else were to somehow edit that calendar (on the server). It would always just overwrite it when it makes a 'PUT' to the server. | 0 | 1,189 | 0 | 1 | 2010-12-07T16:22:00.000 | python,apache,webdav,outlook-2007,icalendar | Allow Outlook 2007 to edit custom shared iCalendars | 1 | 1 | 1 | 4,443,166 | 0 |
0 | 0 | So I am developing my own download manager for educational purposes. I have multiple connections/threads downloading a file; each connection works on a particular range of the file. Now after they have all fetched their chunks, I don't exactly know how to bring these chunks together to re-make the original file.
What i did:
First, i created a temporary file in 'wb' mode, and allowed each connections/threads to dump their chunks. But everytime a connection does this, it overwrites previously saved chunks. I figured this was because i used the 'wb' file descriptor. I changed it to 'ab', but i can no longer perform seek() operations
What i am looking for:
I need an elegant way of re-packaging this chunk to the original file. I would like to know how other download managers do it.
Thank in advance. | false | 4,379,160 | 0.197375 | 0 | 0 | 2 | You need to write chunks to different temporary files and then join them in the original order. If you open one file for all the threads, you should make the access to it sequential to preserve the correct order of data, which defeats the thread usage since a thread would have to wait for the previous one. BTW, you should open files in wb mode. | 1 | 542 | 0 | 0 | 2010-12-07T16:42:00.000 | python,urllib2,connection,chunks | Download Manager: How to re-construct chunks fetched by multiple connections | 1 | 2 | 2 | 4,379,274 | 0
0 | 0 | So I am developing my own download manager for educational purposes. I have multiple connections/threads downloading a file; each connection works on a particular range of the file. Now after they have all fetched their chunks, I don't exactly know how to bring these chunks together to re-make the original file.
What i did:
First, i created a temporary file in 'wb' mode, and allowed each connections/threads to dump their chunks. But everytime a connection does this, it overwrites previously saved chunks. I figured this was because i used the 'wb' file descriptor. I changed it to 'ab', but i can no longer perform seek() operations
What i am looking for:
I need an elegant way of re-packaging this chunk to the original file. I would like to know how other download managers do it.
Thank in advance. | true | 4,379,160 | 1.2 | 0 | 0 | 1 | You were doing it just fine: seek() and write(). That should work!
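In other words, something along these lines (the total size and the offsets are whatever your range requests already use; the lock is only needed if the threads share one file handle):
import threading

total_size = 10 * 1024 * 1024          # assume this is known, e.g. from Content-Length
lock = threading.Lock()
out = open('download.tmp', 'wb')
out.truncate(total_size)               # pre-size the file so every offset is writable

def save_chunk(offset, chunk):
    # each downloader thread calls this with its own byte offset
    with lock:
        out.seek(offset)
        out.write(chunk)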
Now, if you want a cleaner structure, without so many threads moving their hands all over a file, you might want to consider having downloader threads and a disk-writing thread. This last one may just sleep until woken by one of the others, write some kb to disk, and go back to sleep. | 1 | 542 | 0 | 0 | 2010-12-07T16:42:00.000 | python,urllib2,connection,chunks | Download Manager: How to re-construct chunks fetched by multiple connections | 1 | 2 | 2 | 4,379,227 | 0 |
0 | 0 | Would anyone happen to know what the maximum user ID is on Twitter? That is by now there are about 200mil users, so would the id's range from 1 - 200million? I am finding that in that range some of the id's are not used.
I have a python script that is basically accessing the following url:
"/1/statuses/user_timeline/" + str(user_id) + ".json?count=200"
Thanks, | false | 4,406,448 | 0 | 1 | 0 | 0 | No one knows that.
There were discussions on that in relation of how many users twitter really has.
There were a lot of tests as well as probing of id ranges etc.
The results were that the ids were sequentially incrementing for a long time, but then had regular gaps of about 10 between them, and sometimes also seemed to be completely random.
I don't know how accurately this information was collected, and the goal was something else, but I think you get the point.
From a technical point of view I would expect nothing else in a network as big as twitter.
I am pretty sure the IDs are sharded, which means they are assigned in special regions or servers.
So that, for example, if your ID modulo 17 equals some value, I know I have to look on that very server. Or in that very country. Or something.
Or maybe the servers just have their own prefix or residue class for assigning ids when a new user signs up, to avoid replication problems.
It is also in most cases uncommon, or "not so cool", to leak information like this.
Don't ask me why, it's just my experience that companies want to show as little information to the outside as possible.
This includes not having a reproducible, transparent id incrementing system.
It is also vulnerable for some sort of harmful attacks, unwanted crawling, stuff like that.
So my point is.
There is no way of giving you a reliable answer. And it should not be necessary.
You should design your application to deal with every possible situation.
If you want to know how big you should make your database field not to get any conflicts.
I think integer should be fine for now. (even on 32 bit systems)
But always be prepared to upgrade.
Especially don't assume that it will stay numeric. Its just a unique string! | 0 | 3,366 | 0 | 2 | 2010-12-10T06:57:00.000 | python,twitter,oauth | Max Twitter ID? | 1 | 1 | 3 | 4,406,528 | 0 |
0 | 0 | I am using a script to crawl and download favicons from websites. Some sites gave me 2-3 favicon images of various sizes (16x16, 32x32) etc..embedded in the same image. When I try to use this image it is not displaying properly as a favicon. Is there anything that I can do to make sure I download a proper image? | true | 4,409,789 | 1.2 | 0 | 0 | 1 | That's a feature of the ico file format. They're perfectly valid files, but you're going to need to process them with something that actually understands Windows Icon files. | 0 | 269 | 0 | 0 | 2010-12-10T14:31:00.000 | python,html,favicon,web-crawler | Having issues while downloading favicon using python script | 1 | 1 | 1 | 4,409,872 | 0 |
0 | 0 | I'd like to select an element which has no children of a specific type, for example:
all <li> elements who have no <table class="someclass"> children, I'd like to select only the parent element, not the children that don't match table.
On a similar note, I'd like to match elements whose parents don't match X, for example:
all <li> elements who are not descendents of <table class="someclass">.
I'm using python, and lxml's cssselect.
Thanks! | false | 4,412,253 | 0 | 0 | 0 | 0 | I don't think CSS selectors have "anything but" selection, so you can't do it that way. Maybe you can do it with XPaths. which are more flexible, but even then you will get very complex and obtuse path expressions.
I'd recommend that you simply get all <li> elements, go through each elemnts children, and skip it if one of the children is a table.
This will be easily understandable and maintainable, easy to implement, and unless your performance requirements are really extreme and you need to process tens of thousands of pages per second, it will be Fast Enough (tm).
Keep it simple. | 0 | 2,018 | 0 | 0 | 2010-12-10T18:52:00.000 | python,css,css-selectors,lxml | CSS Selectors: select element where (parent|children) don't match X | 1 | 1 | 2 | 4,413,303 | 0 |
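If it helps, here is a small sketch of that approach with lxml (the sample HTML is made up, and it assumes lxml with cssselect support is installed):

import lxml.html

html_text = """
<ul>
  <li>plain item</li>
  <li>item with <table class="someclass"><tr><td>x</td></tr></table></li>
</ul>
"""
doc = lxml.html.fromstring(html_text)

# <li> elements that contain no <table class="someclass"> descendant:
plain_items = [li for li in doc.cssselect("li")
               if not li.cssselect("table.someclass")]

# <li> elements that are not themselves inside a <table class="someclass">,
# written directly as an XPath expression:
outside = doc.xpath(
    "//li[not(ancestor::table["
    "contains(concat(' ', normalize-space(@class), ' '), ' someclass ')])]")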
0 | 0 | I'm using Python's BaseHTTPRequestHandler class to build a web server. I want to add an endpoint for WebSockets. This means that I need to read whatever is available from the handler's rfile, so that I can process messages one by one, as I'm receiving them (instead of having to read the whole input).
I tried using different combinations of 'read' (eg. with a big buffer, thinking that it'd return early with less data if less data was available; with no parameter, but then it just means to read until EOF) but couldn't get this to work.
I can think of two solutions:
To call read(1): to read bytes one by one. I'd rather not do this, as I'm not sure what the buffering semantics are (eg. I wouldn't want a syscall per byte read).
To temporarily make the file non-blocking, then attempt a read for a chunk of data, then make it blocking, then attempt a read for 1 byte. This seems rather messy. Another option I can think of is to just use non-blocking sockets, but this wouldn't seem to work so well with my current threaded framework.
Any ideas of how to get read to return whatever data is available? | false | 4,414,611 | 0.197375 | 0 | 0 | 1 | WebSockets aren't HTTP, so you can't really handle them with an HTTP request handler.
However, using BaseHTTPRequestHandler with HTTP, you would normally be reading only the exact amount of data you expect (for instance, as specified in the Content-length header.) | 0 | 1,273 | 0 | 0 | 2010-12-11T01:07:00.000 | python,blocking,websocket,httpserver | Python, BaseHTTPRequestHandler: how to read what's available on file from socket? | 1 | 1 | 1 | 15,036,544 | 0 |
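For what it's worth, the usual pattern looks roughly like this (shown with Python 3's http.server; the handler name and port are placeholders):

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read exactly the number of bytes the client announced, not "until EOF"
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(("got %d bytes\n" % len(body)).encode("ascii"))

# HTTPServer(("", 8000), Handler).serve_forever()   # example usage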
0 | 0 | I was wondering how to go about finding a substring, whose value you don't know in advance, inside a string. I am writing an IRC bot and I need this function. I want to be able to write:
!greet Greg
and then my bot is supposed to say "Hi, Greg!". So what comes after greet is variable. And if I wrote !greet Matthew it would say "Hi, Matthew!".
Is this possible?
Thanks a lot.
Andesay | false | 4,422,948 | -0.039979 | 0 | 0 | -1 | if "Greg" in greet:
doSomething("Hi Greg")
the key is that strings take the in operator | 1 | 361 | 0 | 2 | 2010-12-12T17:35:00.000 | python,irc | How to find x in a string in Python | 1 | 1 | 5 | 4,422,957 | 0 |
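For the specific !greet case from the question, one simple approach (just a sketch; the function name is made up) is to strip off the command and treat the rest of the line as the argument:

def handle_message(message):
    if message.startswith("!greet "):
        name = message[len("!greet "):].strip()
        return "Hi, %s!" % name
    return None

print(handle_message("!greet Greg"))     # Hi, Greg!
print(handle_message("!greet Matthew"))  # Hi, Matthew!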
1 | 0 | How would you get all the HTML tags from a URL and print them? | false | 4,435,882 | 0 | 0 | 0 | 0 | Fetch it (using mechanize, urllib or whatever else you want), parse what you get (using elementtree, BeautifulSoup, lxml or whatever else you want) and you have what you want. | 0 | 646 | 0 | 1 | 2010-12-14T04:31:00.000 | python,html,url,printing | How to get html tags from url? | 1 | 1 | 2 | 4,435,929 | 0 |
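One possible standard-library-only sketch of that (Python 3; the URL is just a placeholder): fetch the page with urllib, then let HTMLParser report every start tag it sees:

import urllib.request
from html.parser import HTMLParser

class TagPrinter(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print(tag)

html = urllib.request.urlopen("http://example.com").read()
TagPrinter().feed(html.decode("utf-8", errors="replace"))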
0 | 0 | Hi
I would like to create a wxPython application with a window where I can create a network graph. I have heard of (never used) graphviz and NetworkX, but it seems to me that they only create graphs from some given input data. I would like to do the opposite - i.e., create drag-and-drop nodes and links from a palette menu. The nodes and links should be right-clickable with context menu popups. E.g., I should be able to right-click a node and click "properties" in the context menu, where I can fill in the IP address, number of ports, their MAC addresses etc.
I believe graphviz will not allow me to do that. Is there any good package to do this? Must be free / open-source.
Another possibility for you might be wx.lib.floatcanvas. They both have their strengths and weaknesses, so it really depends on which is the best fit for your needs. | 0 | 943 | 0 | 1 | 2010-12-15T07:29:00.000 | networking,graph,wxpython,pygraphviz | wxPython: Network Graph - clickable with context menu - Any pkgs? | 1 | 1 | 1 | 4,493,216 | 0 |
0 | 0 | In my Python socket program, I sometimes need to interrupt it with Ctrl-C. When I do this, it does close the connection using socket.close().
However, when I try to reopen it I have to wait what seems like a minute before I can connect again. How does one correctly close a socket? Or is this intended? | false | 4,465,959 | 0.015383 | 0 | 0 | 1 | Do nothing just wait for a couple of minutes and it will get resolved. It happens due to the slow termination of some processes, and that's why it's not even showing in the running processes list. | 0 | 244,362 | 0 | 127 | 2010-12-16T22:24:00.000 | python,sockets,connection,errno | Python [Errno 98] Address already in use | 1 | 5 | 13 | 69,953,754 | 0 |
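As an aside (this is not part of the answer above, just common practice): the minute-long wait is typically the socket sitting in the kernel's TIME_WAIT state after close(). If you control the server code, the usual way to allow rebinding immediately is to set SO_REUSEADDR before bind(); the port below is a placeholder:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", 8000))
s.listen(5)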
0 | 0 | In my Python socket program, I sometimes need to interrupt it with Ctrl-C. When I do this, it does close the connection using socket.close().
However, when I try to reopen it I have to wait what seems like a minute before I can connect again. How does one correctly close a socket? Or is this intended? | false | 4,465,959 | -0.03076 | 0 | 0 | -2 | sudo pkill -9 python
try this command | 0 | 244,362 | 0 | 127 | 2010-12-16T22:24:00.000 | python,sockets,connection,errno | Python [Errno 98] Address already in use | 1 | 5 | 13 | 67,721,945 | 0 |
0 | 0 | In my Python socket program, I sometimes need to interrupt it with Ctrl-C. When I do this, it does close the connection using socket.close().
However, when I try to reopen it I have to wait what seems like a minute before I can connect again. How does one correctly close a socket? Or is this intended? | false | 4,465,959 | 0 | 0 | 0 | 0 | I had the same problem (Err98 Address already in use) on a Raspberry Pi running Python for an EV charging manager for a Tesla Wall Connector. The software had previously been fine but it stopped interrogating the solar inverter one day and I spent days thinking it was something I'd done in Python. Turns out the root cause was the WiFi modem assigning a new dynamic IP to the solar inverter as a result of introducing a new smart TV into my home. I changed the Python code to reflect the new IP address that I found from the WiFi modem and, bingo, the issue was fixed. | 0 | 244,362 | 0 | 127 | 2010-12-16T22:24:00.000 | python,sockets,connection,errno | Python [Errno 98] Address already in use | 1 | 5 | 13 | 67,780,925 | 0 |
0 | 0 | In my Python socket program, I sometimes need to interrupt it with Ctrl-C. When I do this, it does close the connection using socket.close().
However, when I try to reopen it I have to wait what seems like a minute before I can connect again. How does one correctly close a socket? Or is this intended? | false | 4,465,959 | 0.061461 | 0 | 0 | 4 | For Linux,
ps aux | grep python
This will list the running Python processes. The process number (e.g. 35225) next to your Python file is the process still holding the address.
Now,
sudo kill -9 35225
This will kill the offending process and free the address. | 0 | 244,362 | 0 | 127 | 2010-12-16T22:24:00.000 | python,sockets,connection,errno | Python [Errno 98] Address already in use | 1 | 5 | 13 | 69,756,864 | 0 |
0 | 0 | In my Python socket program, I sometimes need to interrupt it with Ctrl-C. When I do this, it does close the connection using socket.close().
However, when I try to reopen it I have to wait what seems like a minute before I can connect again. How does one correctly close a socket? Or is this intended? | false | 4,465,959 | 1 | 0 | 0 | 14 | A simple solution that worked for me is to close the Terminal and restart it. | 0 | 244,362 | 0 | 127 | 2010-12-16T22:24:00.000 | python,sockets,connection,errno | Python [Errno 98] Address already in use | 1 | 5 | 13 | 51,348,141 | 0 |