Dataset schema (column name, dtype, observed range). Each record below lists its fields one per line, in this column order.

Web Development (int64): 0 to 1
Data Science and Machine Learning (int64): 0 to 1
Question (string): lengths 28 to 6.1k
is_accepted (bool): 2 classes
Q_Id (int64): 337 to 51.9M
Score (float64): -1 to 1.2
Other (int64): 0 to 1
Database and SQL (int64): 0 to 1
Users Score (int64): -8 to 412
Answer (string): lengths 14 to 7k
Python Basics and Environment (int64): 0 to 1
ViewCount (int64): 13 to 1.34M
System Administration and DevOps (int64): 0 to 1
Q_Score (int64): 0 to 1.53k
CreationDate (string): lengths 23 to 23
Tags (string): lengths 6 to 90
Title (string): lengths 15 to 149
Networking and APIs (int64): 1 to 1
Available Count (int64): 1 to 12
AnswerCount (int64): 1 to 28
A_Id (int64): 635 to 72.5M
GUI and Desktop Applications (int64): 0 to 1
0
0
How would I check if the remote host is up without having a port number? Is there any other way I could check, other than using regular ping? There is a possibility that the remote host might drop ping packets.
false
2,535,055
0.028564
1
0
1
Many firewalls are configured to drop ping packets without responding. In addition, some network adapters will respond to ICMP ping requests without input from the operating system network stack, which means the operating system might be down but the host still responds to pings (usually you'll notice that if you reboot the server, say, it'll start responding to pings some time before the OS actually comes up and other services start up). The only way to be certain that a host is up is to actually try to connect to it via some well-known port (e.g. web server port 80). Why do you need to know if the host is "up"? Maybe there's a better way to do it.
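A minimal sketch of the connect-based check described above; the host name, port, and timeout value are placeholders:

import socket

def is_host_up(host, port=80, timeout=5.0):
    # True if a TCP connection to (host, port) succeeds within the timeout
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except (socket.timeout, OSError):
        return False
    sock.close()
    return True

print(is_host_up("example.com"))  # True only if port 80 accepts connections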
0
46,690
0
10
2010-03-28T23:40:00.000
python,network-programming,network-protocols
Check if remote host is up in Python
1
3
7
2,535,076
0
0
0
How would I check if the remote host is up without having a port number? Is there any other way I could check, other than using regular ping? There is a possibility that the remote host might drop ping packets.
false
2,535,055
0.057081
1
0
2
A protocol-level PING is best, i.e., connecting to the server and interacting with it in a way that doesn't do real work. That's because it is the only real way to be sure that the service is up. An ICMP ECHO (a.k.a. ping) would only tell you that the other end's network interface is up, and even then might be blocked; FWIW, I have seen machines where all user processes were bricked but which could still be pinged. In these days of application servers, even getting a network connection might not be enough; what if the hosted app is down or otherwise non-functional? As I said, talking sweet-nothings to the actual service that you are interested in is the best, surest approach.
0
46,690
0
10
2010-03-28T23:40:00.000
python,network-programming,network-protocols
Check if remote host is up in Python
1
3
7
2,535,139
0
0
0
How would I check if the remote host is up without having a port number? Is there any other way I could check, other than using regular ping? There is a possibility that the remote host might drop ping packets.
false
2,535,055
0.028564
1
0
1
What about trying something that requires an RPC, like a 'tasklist' command, in conjunction with a ping?
0
46,690
0
10
2010-03-28T23:40:00.000
python,network-programming,network-protocols
Check if remote host is up in Python
1
3
7
17,115,260
0
0
0
How do I write/read a file to/from a network folder/share using Python? The application will run under Linux, and the network folder/share can be a Linux/Windows system. Also, how do I check that the network folder/share has enough space before writing a file? What things should I consider?
true
2,542,025
1.2
0
0
1
Mount the shares using Samba, check the free space on the share using df or os.statvfs and read/write to it like any other folder.
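A small sketch of the free-space check suggested above, using os.statvfs; the mount point and the 10 MB threshold are hypothetical:

import os

def free_bytes(path):
    # f_bavail = blocks available to unprivileged users, f_frsize = fragment size
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

mount_point = "/mnt/share"  # wherever the Samba share is mounted
if free_bytes(mount_point) > 10 * 1024 * 1024:
    with open(os.path.join(mount_point, "out.dat"), "wb") as f:
        f.write(b"data")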
0
1,813
1
1
2010-03-29T19:33:00.000
python,network-shares
Read/Write a file from/to network folder/share using Python?
1
1
1
2,542,026
0
0
0
I'm using minidom to parse an XML file and it threw an error indicating that the data is not well formed. I figured out that some of the pages have characters like ไอเฟล &, causing the parser to hiccup. Is there an easy way to clean the file before I start parsing it? Right now I'm using a regular expression to throw away anything that isn't an alphanumeric character and the </> characters, but it isn't quite working.
false
2,545,783
0
0
0
0
It looks like you're dealing with data which was saved with some kind of encoding "as if" it were ASCII. An XML file should normally be UTF-8, and SAX (the underlying parser used by minidom) should handle that, so it looks like something's wrong in that part of the processing chain. Instead of focusing on "cleaning up", I'd first try to make sure the encoding is correct and correctly recognized. Maybe a broken XML directive? Can you edit your Q to show the first few lines of the file, especially the <?xml ... directive at the very start?
1
5,927
0
1
2010-03-30T14:02:00.000
python,xml
Cleaning an XML file in Python before parsing
1
1
5
2,546,454
0
1
0
I'm trying to submit a few forms through a Python script; I'm using the mechanize library. This is so I can implement a temporary API. The problem is that after submission a blank page is returned informing that the request is being processed, and after a few seconds the page is redirected to the final page. I understand it might sound a bit generic, but I'm not sure what is going on. :) Any ideas?
false
2,569,089
0.099668
0
0
1
If it uses meta tags then you need to parse the HTML manually. Otherwise mechanize will handle the redirect automatically.
0
553
0
1
2010-04-02T20:42:00.000
python,html,http,forms,screen-scraping
How to handle redirects while parsing HTML? - Python
1
1
2
2,574,423
0
0
0
I need to upload multiple files from a directory to a server via FTP and SFTP. I've solved this task for SFTP with Python, paramiko and threading. But I have a problem doing it for FTP. I tried to use ftplib for Python, but it seems that it doesn't support threading, and I upload all files one by one, which is very slow. I'm wondering whether it is even possible to do multithreaded uploads with the FTP protocol without creating separate connections/authorizations (it takes too long)? The solution can be in Python or PHP. Maybe cURL? I would be grateful for any ideas.
false
2,570,621
0
1
0
0
You can run the script in multiple command prompts / shells (just make sure each file is only handled once across all the different scripts). I am not sure if this quick and dirty trick will improve transfer speed, though.
0
4,789
0
3
2010-04-03T08:07:00.000
php,python,ftp,curl,multithreading
Multithreaded FTP upload. Is it possible?
1
1
4
6,735,686
0
0
0
I'm building a download manager in Python for fun, and sometimes the connection to the server stays up but the server doesn't send me any data, so the read method (of HTTPResponse) blocks me forever. This happens, for example, when I download from a server located outside of my country that limits the bandwidth to other countries. How can I set a timeout for the read method (2 minutes, for example)? Thanks, Nir.
false
2,573,044
0.049958
0
0
1
Setting the default timeout might abort a large download early, as opposed to aborting only when it stops receiving data for the timeout value. httplib2 is probably the way to go.
0
3,858
0
1
2010-04-03T23:32:00.000
python,timeout,httpresponse
set timeout to http response read method in python
1
1
4
3,602,969
0
0
0
I really like how I can easily share files on a network using the SimpleHTTPServer, but I wish there was an option like "download entire directory". Is there an easy (one liner) way to implement this? Thanks
false
2,573,670
0.158649
1
0
4
There is no one-liner which would do it; also, what do you mean by "download whole dir", as tar or zip? Anyway, you can follow these steps:
- Derive a class from SimpleHTTPRequestHandler (or maybe just copy its code).
- Change the list_directory method to return a link to "download whole folder".
- Change the copyfile method so that, for your links, you zip the whole dir and return it.
- You may cache the zip so that you do not zip the folder every time; instead, check whether any file has been modified.
Would be a fun exercise to do :)
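A rough, unhardened sketch of the subclassing idea (written for Python 3, where SimpleHTTPServer became http.server); the "?download" convention and the archive name are this sketch's own inventions, and it zips the directory in memory rather than patching copyfile:

import io
import os
import zipfile
from http.server import HTTPServer, SimpleHTTPRequestHandler

class ZipDirHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith("?download"):
            # Map the URL (minus our query marker) to a directory on disk
            dirpath = self.translate_path(self.path[:-len("?download")])
            buf = io.BytesIO()
            with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
                for root, _, files in os.walk(dirpath):
                    for name in files:
                        full = os.path.join(root, name)
                        zf.write(full, os.path.relpath(full, dirpath))
            data = buf.getvalue()
            self.send_response(200)
            self.send_header("Content-Type", "application/zip")
            self.send_header("Content-Disposition", "attachment; filename=folder.zip")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
        else:
            super().do_GET()  # normal file/directory-listing behaviour

HTTPServer(("", 8000), ZipDirHandler).serve_forever()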
0
11,099
0
6
2010-04-04T05:30:00.000
python,simplehttpserver
Download whole directories in Python SimpleHTTPServer
1
1
5
2,573,685
0
0
0
I recently installed twython, a really sleek and awesome Twitter API wrapper for Python. I installed it and it works fine from the interpreter, but when I try to import it via Eclipse, it says that twython is an invalid import. How do I "tell" Eclipse where twython is so that it will let me import and use it?
false
2,590,435
0
1
0
0
Daniel is right. As long as twython went into site-packages, PyDev will find it.
0
941
0
2
2010-04-07T06:29:00.000
python,eclipse,pydev,twython
Eclipse + PyDev: Eclipse telling me that this is an invalid import?
1
2
2
2,594,323
0
0
0
I recently installed twython, a really sleek and awesome Twitter API wrapper for Python. I installed it and it works fine from the interpreter, but when I try to import it via Eclipse, it says that twython is an invalid import. How do I "tell" Eclipse where twython is so that it will let me import and use it?
true
2,590,435
1.2
1
0
3
I believe I have had this problem before. Try going into the menu Window > Preferences, and then select PyDev and Interpreter - Python. Then try to click Auto-config; it should update its search paths to include everything installed in Python. If that doesn't work, you should at least be able to manually add the folder by clicking "New Folder" in the bottom part of that screen and navigating to the location where you have twython installed.
0
941
0
2
2010-04-07T06:29:00.000
python,eclipse,pydev,twython
Eclipse + PyDev: Eclipse telling me that this is an invalid import?
1
2
2
2,590,469
0
0
0
I'm trying to write some Python code that will establish an invisible relay between two TCP sockets. My current technique is to set up two threads, each one reading and subsequently writing 1kb of data at a time in a particular direction (i.e. 1 thread for A to B, 1 thread for B to A). This works for some applications and protocols, but it isn't foolproof - sometimes particular applications will behave differently when running through this Python-based relay. Some even crash. I think that this is because when I finish performing a read on socket A, the program running there considers its data to have already arrived at B, when in fact I - the devious man in the middle - have yet to send it to B. In a situation where B isn't ready to receive the data (whereby send() blocks for a while), we are now in a state where A believes it has successfully sent data to B, yet I am still holding the data, waiting for the send() call to execute. I think this is the cause of the difference in behaviour that I've found in some applications, while using my current relaying code. Have I missed something, or does that sound correct? If so, my real question is: is there a way around this problem? Is it possible to only read from socket A when we know that B is ready to receive data? Or is there another technique that I can use to establish a truly 'invisible' two-way relay between [already open & established] TCP sockets?
false
2,604,740
0.066568
0
0
1
Perhaps the application you're proxying is poorly written. For instance, if I call recv(fd, buf, 4096, 0); I'm not promised 4096 bytes. The system makes a best-effort to provide it. If 1k isn't a multiple of your application's recv or send sizes, and the application is broken, then grouping the data sent into 1k blocks will break the app.
0
3,528
1
4
2010-04-09T02:24:00.000
python,sockets,tcp,portforwarding
How to correctly relay TCP traffic between sockets?
1
2
3
2,609,442
0
0
0
I'm trying to write some Python code that will establish an invisible relay between two TCP sockets. My current technique is to set up two threads, each one reading and subsequently writing 1kb of data at a time in a particular direction (i.e. 1 thread for A to B, 1 thread for B to A). This works for some applications and protocols, but it isn't foolproof - sometimes particular applications will behave differently when running through this Python-based relay. Some even crash. I think that this is because when I finish performing a read on socket A, the program running there considers its data to have already arrived at B, when in fact I - the devious man in the middle - have yet to send it to B. In a situation where B isn't ready to receive the data (whereby send() blocks for a while), we are now in a state where A believes it has successfully sent data to B, yet I am still holding the data, waiting for the send() call to execute. I think this is the cause of the difference in behaviour that I've found in some applications, while using my current relaying code. Have I missed something, or does that sound correct? If so, my real question is: is there a way around this problem? Is it possible to only read from socket A when we know that B is ready to receive data? Or is there another technique that I can use to establish a truly 'invisible' two-way relay between [already open & established] TCP sockets?
false
2,604,740
0.066568
0
0
1
I don't think that's likely to be your problem. In general, the sending application can't tell when the receiving application actually calls recv() to read the data: the sender's send() may have completed, but the TCP implementations in the source & destination OS will be doing buffering, flow control, retransmission, etc. Even without your relay in the middle, the only way for A to "consider its data to have already arrived at B" is to receive a response from B saying "yep, I got it".
0
3,528
1
4
2010-04-09T02:24:00.000
python,sockets,tcp,portforwarding
How to correctly relay TCP traffic between sockets?
1
2
3
2,604,794
0
0
0
I'm trying to run a command to install bespinclient on my Windows laptop, but every time I execute the command python bootstrap.py --no-site-packages, I get an error saying: ImportError: No module named simplejson. I'm using Mozilla build tools to run these Linux commands.
false
2,604,841
0.197375
0
0
5
On Ubuntu/Debian, you can install it with apt-get install python-simplejson
1
111,229
0
45
2010-04-09T02:53:00.000
python,unix
ImportError: No Module named simplejson
1
1
5
5,645,465
0
0
0
I'm using networkx to work with graphs. I have a pretty large graph (it has nearly 200 nodes) and I try to find all possible paths between two nodes. But, as I understand, networkx can only find the shortest path. How can I get not just the shortest path, but all possible paths? UPD: a path can contain each node only once. UPD2: I need something like the find_all_paths() function described here: python.org/doc/essays/graphs.html But this function doesn't work well with a large number of nodes and edges =(
false
2,606,018
0
0
0
0
Dijkstra's algorithm will find the shortest path in a manner similar to a breadth first search (it substitutes a priority queue weighted by depth into the graph for the naive queue of a BFS). You could fairly trivially extend it to produce the 'N' shortest paths if you need some number of alternatives, although if you need the paths to be substantially different (e.g. scheduling the routes of security vans) you might need to be more clever about selecting paths that are significantly different from each other.
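For the original question, later versions of networkx do ship generators for exactly this; a sketch on a toy graph (the availability of these functions depends on the networkx version, and both can be very slow on dense 200-node graphs):

import networkx as nx
from itertools import islice

G = nx.cycle_graph(6)  # toy stand-in for the real graph
G.add_edge(0, 3)       # extra edge so several routes exist
source, target = 0, 3

# All simple paths (each node used at most once), as the OP requested
for path in nx.all_simple_paths(G, source, target):
    print(path)

# Closest built-in to the answer's "N shortest paths" idea
for path in islice(nx.shortest_simple_paths(G, source, target), 3):
    print(path)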
0
14,244
0
9
2010-04-09T08:39:00.000
python,networkx,igraph
Path between two nodes
1
1
3
2,606,124
0
1
0
I want to create an application that runs on the user's computer, a stand-alone application, with installation and what-not, but I want the interface to be a browser, either internal and displayed as an OS window, or external and accessible using the browser (i.e. some HTTP server). The reason is that I know a little about Python, and I think I can manage as long as I have some basic roots that I can use and manipulate, and those would be HTML, CSS, and JavaScript. I've yet to find a good GUI tool which I can use, and I always abandon the idea after trying to mess around and eventually not getting anywhere.
false
2,611,910
0.07983
0
0
2
There are plenty of excellent GUI tools for the way you want to do your GUI -- HTML, CSS, and Javascript. If you don't know of any, ask in a separate question with the right tags. The Python side in such an arrangement should have no GUI of its own, but just run a subclass of the Python standard library's HTTP server, just serving the HTML, CSS, and JS files, and data via JSON on other URLs that the JS can reach with Ajax techniques, essentially implementing storage and business logic -- so it's far from obvious what "GUI tool" you could possibly want for it?! Just develop the Python side on its own (e.g. with IDLE, Wingware, SPE, or whatever you like) and the HTML / CSS / Javascript separately, with its own "GUI tool". All that Python will do with those files is statically serve them, after all. You could be thinking of using some Python-side templating, such as Mojo &c, but my recommendation is to avoid that: rather, go with the "thin server architecture" all the way, make the Python side a RESTful server of business logic and storage layers, and do all the GUI work in the browser instead.
0
25,678
0
27
2010-04-10T01:45:00.000
python,browser,desktop,httpserver
Python Desktop Application with the Browser as an interface?
1
1
5
2,611,928
0
1
0
How would I convert test cases made by Selenium IDE to Python without exporting every test case by hand? Is there any command-line converter for that job? In the end I want to use Selenium RC and Python's built-in unittest to test my websites. Thanks a lot. Update: I started to write a converter, but it's too much work to implement all the commands. Is there any better way?

from xml.dom.minidom import parse

class SeleneseParser:
    def __init__(self, selFile):
        self.dom = parse(selFile)

    def getTestName(self):
        return self.dom.getElementsByTagName('title')[0].firstChild.data

    def getBaseUrl(self):
        return self.dom.getElementsByTagName('link')[0].getAttribute('href')

    def getNodes(self):
        cmds = []
        nodes = self.dom.getElementsByTagName('tbody')[0].childNodes
        for node in nodes:
            if node.nodeType == node.TEXT_NODE and "\n" in node.data:
                continue
            if node.nodeType == node.COMMENT_NODE:
                cmds.append(node.data)
            if node.nodeType == node.ELEMENT_NODE:
                cmd = []
                for c in node.childNodes:
                    if c.nodeType == node.ELEMENT_NODE:
                        if len(c.childNodes) == 1:
                            cmd.append(c.childNodes[0].data)
                        else:
                            cmd.append("")
                cmds.append(cmd)
        return cmds

class PythonConverter:
    def __init__(self, sourceFile):
        self.parser = SeleneseParser(sourceFile)
        self.dest = u'# -*- coding: utf-8 -*-\n\nfrom selenium import selenium\nimport unittest, time, re\n'

    def getHeader(self):
        self.dest += u'\nclass %s(unittest.TestCase):\n' % self.parser.getTestName()
        self.dest += u'\tdef setUp(self):\n\t\tself.verificationErrors = []\n'
        self.dest += u'\t\tself.selenium = selenium("localhost", 4444, "*chrome", "%s")\n' % self.parser.getBaseUrl()
        self.dest += u'\t\tself.selenium.start()\n'

    def getContent(self):
        self.dest += u'\n\tdef test_%s(self):\n\t\tsel = self.selenium\n' % self.parser.getTestName()
        nodes = self.parser.getNodes()
        for node in nodes:
            if type(node) is list:
                cmd, target, value = node[0], node[1], node[2]
                if cmd == 'store':
                    self.dest += u'\t\t%s = "%s"\n' % (value, target)
                elif cmd == 'clickAndWait':
                    self.dest += u'\t\tsel.click(u"%s")\n\t\tsel.wait_for_page_to_load("30000")\n' % (target)
                elif cmd == 'type':
                    self.dest += u'\t\tsel.%s(u"%s", u"%s")\n' % (cmd, target, value)
                elif cmd == 'select':
                    self.dest += u'\t\tsel.select(u"%s", u"%s")\n' % (target, value)
                elif cmd == 'verifyTextPresent':
                    self.dest += u'\t\ttry: self.failUnless(sel.is_text_present(u"%s"))\n\t\texcept AssertionError, e: self.verificationErrors.append(str(e))\n' % target
                elif cmd == 'verifySelectedLabel':
                    self.dest += u'\t\ttry: self.assertEqual(u"%s", sel.get_selected_label(u"%s"))\n\t\texcept AssertionError, e: self.verificationErrors.append(str(e))\n' % (value, target)
                elif cmd == 'verifyValue':
                    self.dest += u'\t\ttry: self.assertEqual(u"%s", sel.get_value(u"%s"))\n\t\texcept AssertionError, e: self.verificationErrors.append(str(e))\n' % (value, target)
                elif cmd == 'verifyText':
                    self.dest += u'\t\ttry: self.assertEqual(u"%s", sel.get_text(u"%s"))\n\t\texcept AssertionError, e: self.verificationErrors.append(str(e))\n' % (value, target)
                elif cmd == 'verifyElementPresent':
                    self.dest += u'\t\ttry: self.failUnless(sel.is_element_present(u"%s"))\n\t\texcept AssertionError, e: self.verificationErrors.append(str(e))\n' % (target)
                else:
                    self.dest += u'\t\tsel.%s(u"%s")\n' % (cmd, target)
                # print cmd, target, value
            else:
                self.dest += u'\t\t#%s\n' % node

    def getFooter(self):
        self.dest += u'\n\tdef tearDown(self):\n\t\tself.selenium.stop()\n\t\tself.assertEqual([], self.verificationErrors)\n'
        self.dest += u'\nif __name__ == "__main__":\n\tunittest.main()'

    def convert(self):
        self.getHeader()
        self.getContent()
        self.getFooter()
        return self.dest

p = PythonConverter('test_case.html')
print p.convert()
false
2,617,684
0
0
0
0
No, there isn't, but in theory it shouldn't be too difficult to do, as all you need is something that uses the python-rc.js to convert the file.
0
2,514
0
6
2010-04-11T16:16:00.000
python,selenium,selenium-rc,selenium-ide
How to convert Selenese (html) to Python programmatically?
1
1
2
2,620,931
0
0
0
How can I protect my web server if I run custom user code on it? Any user can submit his Python source to my server and run it. Are there modules or Linux tools to block any network and hardware activity for such a script? Thanks all for the help!
false
2,618,862
0
1
0
0
In general, python is not the best language choice if you want to allow the execution of untrusted code. The JVM and .NET have much better support for sandboxing, so Jython and IronPython would be better choices.
0
620
0
5
2010-04-11T21:59:00.000
python,system
Safest python code running
1
1
3
2,619,276
0
0
0
I have a 3rd-party protocol module (SNMP) that is built on top of asyncore. The asyncore interface is used to process response messages. What is the proper technique to design a client that generates the request side of the protocol while the asyncore main loop is running? I can think of two options right now: use the loop, timeout parameters of asyncore.loop() to allow my client program time to send the appropriate request, or create a client asyncore dispatcher that will be executed in the same asyncore processing loop as the receiver. What is the best option? I'm working on the 2nd solution, because the protocol API does not give me direct access to the asyncore parameters. Please correct me if I've misunderstood the proper technique for utilizing asyncore.
true
2,626,512
1.2
0
0
1
I solved this by adding a callback function into the asyncore loop for the receiver process. The solution was somewhat specific to the module I was experimenting with (pySNMP), but here is the general idea:
- Define a function closure that returns a callable method with a stored reference to a dict and a window variable. The dict tracks the expected responses, and the window is the size of the sender buffer.
- Pass a reference to the closure function into a customized asyncore.dispatcher instance. The callback function can be executed in the writable method invocation.
- Set the timeout of the dispatcher to a small value. This prevents asyncore from blocking for too long while waiting for received packets. I used .05 seconds. The lower you go, the more responsive your app is, but don't go too low.
- Update the asyncore handle_read method to remove the received responses from your global dict structure. This will allow new messages to be transmitted.
- Now kick off the dispatcher; on every loop of the asyncore, the system will call the callback function and send any messages, up to the defined window size.
0
625
0
1
2010-04-13T01:55:00.000
python,client-server,asyncore,pysnmp
Building an SNMP Request-Response service with Python Asyncore
1
1
1
2,734,657
0
0
0
Does anyone know any more details about google's web-crawler (aka GoogleBot)? I was curious about what it was written in (I've made a few crawlers myself and am about to make another) and if it parses images and such. I'm assuming it does somewhere along the line, b/c the images in images.google.com are all resized. It also wouldn't surprise me if it was all written in Python and if they used all their own libraries for most everything, including html/image/pdf parsing. Maybe they don't though. Maybe it's all written in C/C++. Thanks in advance-
false
2,633,302
0.066568
0
0
1
The crawler is very likely written in C or C++, at least backrub's crawler was written in one of these. Be aware that the crawler only takes a snapshot of the page, then stores it in a temporary database for later processing. The indexing and other attached algorithms will extract the data, for example the image references.
0
400
0
0
2010-04-13T21:20:00.000
c++,python,c
Google Bot information?
1
2
3
2,645,336
0
0
0
Does anyone know any more details about google's web-crawler (aka GoogleBot)? I was curious about what it was written in (I've made a few crawlers myself and am about to make another) and if it parses images and such. I'm assuming it does somewhere along the line, b/c the images in images.google.com are all resized. It also wouldn't surprise me if it was all written in Python and if they used all their own libraries for most everything, including html/image/pdf parsing. Maybe they don't though. Maybe it's all written in C/C++. Thanks in advance-
true
2,633,302
1.2
0
0
0
Officially allowed languages at Google, I think, are Python/C++/Java. The bot likely uses all 3 for different tasks.
0
400
0
0
2010-04-13T21:20:00.000
c++,python,c
Google Bot information?
1
2
3
2,633,356
0
0
0
I'm playing around with sockets in C/Python and I wonder what is the most efficient way to send headers from a Python dictionary to the client socket. My ideas: use a send call for every header. Pros: No memory allocation needed. Cons: many send calls -- probably error prone; error management should be rather complicated use a buffer. Pros: one send call, error checking a lot easier. Cons: Need a buffer :-) malloc/realloc should be rather slow and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
false
2,638,490
0
1
0
0
Unless you're sending a truly huge amount of data, you're probably better off using one buffer. If you use a geometric progression for growing your buffer size, the number of allocations becomes an amortized constant, and the time to allocate the buffer will generally follow.
0
867
0
5
2010-04-14T15:04:00.000
python,c,sockets,buffer,send
What is faster: multiple `send`s or using buffering?
1
3
3
2,638,568
0
0
0
I'm playing around with sockets in C/Python and I wonder what is the most efficient way to send headers from a Python dictionary to the client socket. My ideas: use a send call for every header. Pros: No memory allocation needed. Cons: many send calls -- probably error prone; error management should be rather complicated use a buffer. Pros: one send call, error checking a lot easier. Cons: Need a buffer :-) malloc/realloc should be rather slow and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
false
2,638,490
0
1
0
0
A send() call implies a round-trip to the kernel (the part of the OS which deals with the hardware directly). It has a unit cost of about a few hundred clock cycles. This is harmless unless you are trying to call send() millions of times. Usually, buffering is about calling send() only once in a while, when "enough data" has been gathered. "Enough" does not mean "the whole message" but something like "enough bytes so that the unit cost of the kernel round-trip is dwarfed". As a rule of thumb, an 8-kB buffer (8192 bytes) is traditionally considered good. Anyway, for all performance-related questions, nothing beats an actual measurement. Try it. Most of the time, there isn't any actual performance problem worth worrying about.
0
867
0
5
2010-04-14T15:04:00.000
python,c,sockets,buffer,send
What is faster: multiple `send`s or using buffering?
1
3
3
2,638,599
0
0
0
I'm playing around with sockets in C/Python and I wonder what is the most efficient way to send headers from a Python dictionary to the client socket. My ideas: use a send call for every header. Pros: No memory allocation needed. Cons: many send calls -- probably error prone; error management should be rather complicated use a buffer. Pros: one send call, error checking a lot easier. Cons: Need a buffer :-) malloc/realloc should be rather slow and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
true
2,638,490
1.2
1
0
3
Because of the way TCP congestion control works, it's more efficient to send data all at once. TCP maintains a window of how much data it will allow to be "in the air" (sent but not yet acknowledged). TCP measures the acknowledgments coming back to figure out how much data it can have "in the air" without causing congestion (i.e., packet loss). If there isn't enough data coming from the application to fill the window, TCP can't make accurate measurements so it will conservatively shrink the window. If you only have a few, small headers and your calls to send are in rapid succession, the operating system will typically buffer the data for you and send it all in one packet. In that case, TCP congestion control isn't really an issue. However, each call to send involves a context switch from user mode to kernel mode, which incurs CPU overhead. In other words, you're still better off buffering in your application. There is (at least) one case where you're better off without buffering: when your buffer is slower than the context switching overhead. If you write a complicated buffer in Python, that might very well be the case. A buffer written in CPython is going to be quite a bit slower than the finely optimized buffer in the kernel. It's quite possible that buffering would cost you more than it buys you. When in doubt, measure. One word of caution though: premature optimization is the root of all evil. The difference in efficiency here is pretty small. If you haven't already established that this is a bottleneck for your application, go with whatever makes your life easier. You can always change it later.
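A sketch of the application-side buffering these answers point at: build the whole header block once, then hand the kernel a single buffer. The header values and address are placeholders:

import socket

headers = {"Content-Type": "text/html", "Content-Length": "42"}

# One string for all headers, then one kernel round-trip instead of one per header
buf = "".join("%s: %s\r\n" % kv for kv in headers.items()) + "\r\n"

s = socket.create_connection(("example.com", 80))
s.sendall(buf.encode("ascii"))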
0
867
0
5
2010-04-14T15:04:00.000
python,c,sockets,buffer,send
What is faster: multiple `send`s or using buffering?
1
3
3
2,639,059
0
0
0
I am a little stumped: I have a simple messenger client program (pure Python, sockets), and I wanted to add proxy support (http/s, socks); however, I am a little confused on how to go about it. I am assuming that the connection on the socket level will be made to the proxy server, at which point the headers should contain a CONNECT + destination IP (of the chat server) and authentication (if the proxy requires it); however, the rest is a little beyond me. How is the subsequent connection handled, specifically the reading/writing, etc.? Are there any guides on proxy support implementation for socket-based (TCP) programming in Python? Thank you
false
2,646,983
0.132549
0
0
2
It is pretty simple - after you send the HTTP request: CONNECT example.com:1234 HTTP/1.0\r\nHost: example.com:1234\r\n<additional headers incl. authentication>\r\n\r\n, the server responds with HTTP/1.0 200 Connection established\r\n\r\n and then (after the double line ends) you can communicate just as you would communicate with example.com port 1234 without the proxy (as I understand you already have the client-server communication part done).
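A bare-bones sketch of that exchange; the proxy address and destination are placeholders, and real code should parse the status line properly instead of substring-matching it:

import socket

proxy_host, proxy_port = "proxy.example.com", 8080  # hypothetical proxy
dest_host, dest_port = "example.com", 1234          # the chat server

s = socket.create_connection((proxy_host, proxy_port))
s.sendall(("CONNECT %s:%d HTTP/1.0\r\n"
           "Host: %s:%d\r\n\r\n"
           % (dest_host, dest_port, dest_host, dest_port)).encode("ascii"))

reply = s.recv(4096)
if b" 200 " not in reply.split(b"\r\n")[0]:
    raise IOError("proxy refused CONNECT: %r" % reply)

# From here on, s behaves like a direct connection to dest_host:dest_port
s.sendall(b"hello\n")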
0
2,236
0
3
2010-04-15T16:07:00.000
python,proxy,tcp,sockets,socks
Python, implementing proxy support for a socket based application (not urllib2)
1
1
3
2,714,593
0
1
0
I'm using urllib2 to open a url. Now I need the html file as a string. How do I do this?
false
2,647,723
1
0
0
16
In Python 3, it should be changed to urllib.request.urlopen('http://www.example.com/').read().decode('utf-8').
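A complete Python 3 sketch of the corrected call:

from urllib.request import urlopen

# Python 2 equivalent: urllib2.urlopen('http://www.example.com/').read()
html = urlopen('http://www.example.com/').read().decode('utf-8')
print(html[:100])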
0
17,378
0
7
2010-04-15T17:48:00.000
python,string,urllib2
urllib2 to string
1
1
4
35,367,453
0
0
0
What is the advantage of using a Python VirtualBox API instead of using XPCOM?
false
2,652,146
0.321513
1
0
5
I would generally recommend against either one. If you need to use virtualization programmatically, take a look at libvirt, which gives you cross-platform and cross-hypervisor support and lets you do kvm/xen/vz/vmware later on. That said, the SOAP API is using two extra abstraction layers (the client and server side of the HTTP transaction), and is pretty clearly then just calling the XPCOM interface.
- If you need local-host-only support, use XPCOM. The extra indirection of libvirt/SOAP doesn't help you.
- If you need to access VirtualBox on various hosts across multiple client machines, use SOAP or libvirt.
- If you want cross-platform support, or to run your code on Linux, use libvirt.
0
14,030
0
9
2010-04-16T10:26:00.000
python,virtualbox,xpcom
What is the advantage of using Python Virtualbox API?
1
1
3
2,655,522
0
0
0
PdfFileReader reads the content from a PDF file to create an object. I am querying the PDF from a CDN via urllib.urlopen(); this provides me a file-like object, which has no seek. PdfFileReader, however, uses seek. What is a simple way to create a PdfFileReader object from a PDF downloaded via URL? Now, what can I do to avoid writing to disk and reading it again via file()? Thanks in advance.
false
2,653,079
0.066568
0
0
1
I suspect you may be optimising prematurely here. Most modern systems will cache files in memory for a significant period of time before they flush them to disk, so if you write the data to a temporary file, read it back in, then close and delete the file you may find that there's no significant disc traffic (unless it really is 100MB). You might want to look at using tempfile.TemporaryFile() which creates a temporary file that is automatically deleted when closed, or else tempfile.SpooledTemporaryFile() which explicitly holds it all in memory until it exceeds a particular size.
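A sketch of the spooled-file approach, assuming the pyPdf package of that era and a hypothetical URL; the 5 MB spool threshold is arbitrary:

import urllib2
from tempfile import SpooledTemporaryFile
from pyPdf import PdfFileReader

remote = urllib2.urlopen('http://cdn.example.com/some.pdf')
buf = SpooledTemporaryFile(max_size=5 * 1024 * 1024)  # stays in RAM below 5 MB
buf.write(remote.read())
buf.seek(0)  # PdfFileReader needs a seekable file, which buf now is

reader = PdfFileReader(buf)
print(reader.getNumPages())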
0
275
0
2
2010-04-16T12:59:00.000
python,file,urllib,file-type
Inexpensive ways to add seek to a filetype object
1
1
3
2,653,447
0
0
0
I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be not well formed in many ways (like "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc.). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app which doesn't use regexps but a simple search to get the start and end of every link) and then using regexps to match the ones I need.
true
2,662,595
1.2
0
0
1
Ways out:
- Parallelise.
- Profile your code to see where the bottleneck is. The results are often surprising.
- Use a single regexp (concatenate using |) rather than multiple ones.
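A sketch of the single-combined-regexp point; the per-site patterns here are hypothetical:

import re

patterns = [r'https?://(?:www\.)?site-a\.com/\S+',
            r'https?://(?:www\.)?site-b\.org/\S+']
# One alternation, compiled once: one pass over the data instead of one per site
combined = re.compile('|'.join('(?:%s)' % p for p in patterns))

html = open('page.html').read()
links = combined.findall(html)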
1
156
0
2
2010-04-18T14:46:00.000
python,html,screen-scraping,hyperlink
Extract anything that looks like links from large amount of data in python
1
1
1
2,663,277
0
1
0
I currently have built a system that checks user IP, browser, and a random-string cookie to determine if he is an admin. In the worst case, someone steals my cookie, uses the same browser I do, and masks his IP to appear as mine. Is there another layer of security I should add onto my script to make it more secure? EDIT: To clarify: my website accepts absolutely NO input from users. I'm just designing a back-end admin panel to make it easier to update database entries.
false
2,670,346
0.049958
0
0
1
HTTPS is a must, but you also have to come to terms with the fact that no site can be 100% secure. The only other way for you to get a significant improvement in security is to have very short session timeouts and provide your users with hardware tokens, but even tokens can be stolen.
0
263
0
3
2010-04-19T19:49:00.000
python,security
Web Security: Worst-Case Situation
1
2
4
2,670,489
0
1
0
I currently have built a system that checks user IP, browser, and a random-string cookie to determine if he is an admin. In the worst case, someone steals my cookie, uses the same browser I do, and masks his IP to appear as mine. Is there another layer of security I should add onto my script to make it more secure? EDIT: To clarify: my website accepts absolutely NO input from users. I'm just designing a back-end admin panel to make it easier to update database entries.
false
2,670,346
0.049958
0
0
1
The one thing I miss, besides everything that is mentioned, is fixing "all other security problems". If you have a SQL injection, your effort on the cookies is a waste of time. If you have an XSRF vuln, your effort on the cookies is a waste of time. If you have XSS, .... If you have HPP, ... If you have ...., .... You get the point. If you really want to cover everything, I suggest you get the vulnerability landscape clear and build an attack tree (Bruce Schneier).
0
263
0
3
2010-04-19T19:49:00.000
python,security
Web Security: Worst-Case Situation
1
2
4
2,670,747
0
1
0
How do I create a simple web site with Python? I mean really simple: for example, you see the text "Hello World" and there is a "submit" button which, on click, shows an AJAX box saying "submit successful". I want to start developing some stuff with Python, and I don't know where to start.
true
2,681,754
1.2
0
0
3
Why don't you try out the Google AppEngine stuff? They give you a local environment (that runs on your local system) for developing the application. They have nice, easy intro material for getting the site up and running - your "hello, world" example will be trivial to implement. From there on, you can either go with some other framework (using what you have learnt, as the vanilla AppEngine stuff is pretty standard for simple python web frameworks) or carry on with the other stuff Google provides (like hosting your app for you...)
0
60,467
0
22
2010-04-21T09:40:00.000
python,html,web-applications
How to create simple web site with Python?
1
1
6
2,684,119
0
0
0
I wish to get a list of connections to a manager. I can get last_accepted from the server's listener, but I want all connections. There HAS to be a method I am missing somewhere to return all connections to a server or manager. Please help!!
false
2,686,893
0
0
0
0
Looking at multiprocessing/connection.py, the listener just doesn't seem to track all connections -- you could, however, subclass it and override accept to append accepted connections to a list.
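A sketch of that subclassing idea; the connections list is this sketch's own addition, not part of the multiprocessing API:

from multiprocessing.connection import Listener

class TrackingListener(Listener):
    # A Listener that remembers every connection it has accepted
    def __init__(self, *args, **kwargs):
        Listener.__init__(self, *args, **kwargs)
        self.connections = []

    def accept(self):
        conn = Listener.accept(self)
        self.connections.append(conn)
        return conn

listener = TrackingListener(('localhost', 6000), authkey=b'secret')
conn = listener.accept()  # blocks until a client connects
print(len(listener.connections))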
0
712
0
0
2010-04-21T22:07:00.000
python,multiprocessing
python multiprocessing server connections
1
1
1
2,687,986
0
0
0
I know Twisted can do this well, but what about just plain socket? How would you tell if you randomly lost your connection in socket? Like, if my internet was to go out for a second and come back on.
false
2,697,989
0
0
0
0
If the internet comes and goes momentarily, you might not actually lose the TCP session. If you do, the socket API will throw some kind of exception, usually socket.timeout.
0
1,516
0
2
2010-04-23T11:06:00.000
python,sockets
Socket Lose Connection
1
2
2
2,698,024
0
0
0
I know Twisted can do this well, but what about just plain socket? How would you tell if you randomly lost your connection in socket? Like, if my internet was to go out for a second and come back on.
false
2,697,989
0.099668
0
0
1
I'm assuming you're talking about TCP. If your internet connection is out for a second, you might not lose the TCP connection at all; it'll just retransmit and resume operation. There are of course hundreds of other reasons you could lose the connection (e.g. a NAT gateway in between decided to throw out the connection silently, the other end gets hit by a nuke, your router burns up, the guy at the other end yanks out his network cable, etc.). Here's what you should do if you need to detect dead peers/closed sockets etc.:
- Read from the socket, or in any other way wait for events of incoming data on it. This allows you to detect when the connection was gracefully closed, or an error occurred on it (reading on it returns 0 or -1), at least if the other end is still able to send a TCP FIN/RST or ICMP packet to your host.
- Write to the socket, e.g. send some heartbeats every N seconds. Just reading from the socket won't detect the problem when the other end fails silently. If that PC goes offline, it can obviously not tell you that it did, so you'll have to send it something and see if it responds.
- If you don't want to write heartbeats every N seconds, you can at least turn on TCP keepalive, and you'll eventually get notified if the peer is dead. You still have to read from the socket, and the keepalives are usually sent every 2 hours by default. That's still better than keeping dead sockets around for months, though.
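A sketch of turning on TCP keepalive as described; the aggressive per-probe knobs below are Linux-specific, hence the guard:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-only options to probe sooner than the 2-hour default
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before probing
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up

s.connect(("example.com", 80))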
0
1,516
0
2
2010-04-23T11:06:00.000
python,sockets
Socket Lose Connection
1
2
2
2,698,055
0
0
0
I am writing a Python script that downloads a file given by a URL. Unfortunately the URL is in the form of a PHP script, i.e. www.website.com/generatefilename.php?file=5233. If you visit the link in a browser, you are prompted to download the actual file and extension. I need to send this link to the downloader, but I can't send the downloader the PHP link. How would I get the full file name into a usable variable?
false
2,705,856
0.197375
1
0
2
What you need to do is examine the Content-Disposition header sent by the PHP script. it will look something like: Content-Disposition: attachment; filename=theFilenameYouWant As to how you actually examine that header it depends on the python code you're currently using to fetch the URL. If you post some code I'll be able to give a more detailed answer.
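A sketch with urllib2, since that is a common choice here; the URL is the OP's placeholder, and this naive split does not handle quoted-with-semicolons or RFC 5987 filenames:

import urllib2

resp = urllib2.urlopen('http://www.website.com/generatefilename.php?file=5233')
disposition = resp.info().getheader('Content-Disposition', '') or ''

filename = None
for part in disposition.split(';'):
    part = part.strip()
    if part.startswith('filename='):
        filename = part[len('filename='):].strip('"')
print(filename)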
0
171
0
0
2010-04-24T19:36:00.000
php,python,url,scripting
I want the actual file name that is returned by a PHP script
1
1
2
2,705,877
0
0
0
What are the best practices for extending an existing Python module – in this case, I want to extend the python-twitter package by adding new methods to the base API class. I've looked at tweepy, and I like that as well; I just find python-twitter easier to understand and extend with the functionality I want. I have the methods written already – I'm trying to figure out the most Pythonic and least disruptive way to add them into the python-twitter package module, without changing the module's core.
false
2,705,964
1
1
0
6
Don't add them to the module. Subclass the classes you want to extend and use your subclasses in your own module, not changing the original stuff at all.
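A sketch of what that looks like for python-twitter; the subclass and its convenience method are hypothetical, and it assumes the Api class and its GetFriends method from that package:

import twitter  # the python-twitter package

class MyApi(twitter.Api):
    # Used everywhere in our own code; the python-twitter module stays untouched
    def GetFirstFriend(self, user):
        friends = self.GetFriends(user)  # inherited public method
        return friends[0] if friends else None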
1
33,640
0
29
2010-04-24T20:12:00.000
python,module,tweepy,python-module,python-twitter
How do I extend a python module? Adding new functionality to the `python-twitter` package
1
1
6
2,705,976
0
0
0
How can I get information about a user's PC connected to my socket?
false
2,707,599
0.197375
0
0
3
A socket is a "virtual" channel established between two electronic devices through a network (a bunch of wires). The only information available about a remote host is what is published on the network. The basic information is what is provided in the TCP/IP headers, namely the remote IP address, the size of the receive buffer, and a bunch of useless flags. For any other information, you will have to query other services:
- A reverse DNS lookup will get you a name associated with the IP address.
- A traceroute will tell you what the path to the remote computer is (or at least to a machine acting as a gateway/proxy to the remote host).
- A geolocation request can give you an approximate location of the remote computer.
- If the remote host is itself a server accessible from the internet through a registered domain name, a WHOIS request can give you the name of the person in charge of the domain.
- On a LAN (Local Area Network: a home or enterprise network), an ARP or RARP request will get you a MAC address and much more information (as much as the network administrator entered when they configured the network), possibly the exact location of the computer.
There is much more information available, but only if it was published. If you know what you are looking for and where to query it, you can be very successful. If the remote host is quite hidden and uses some simple stealth techniques (an anonymous proxy), you will get nothing relevant.
0
486
0
0
2010-04-25T08:17:00.000
python,sockets
Socket: Get user information
1
1
3
2,707,933
0
0
0
I need to set timeout on python's socket recv method. How to do it?
false
2,719,017
0.090659
0
0
5
You can use socket.settimeout(), which accepts a numeric argument representing a number of seconds (floats are allowed for sub-second timeouts). For example, socket.settimeout(1) will set the timeout to 1 second.
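A small sketch; the address and the 1-second value are placeholders:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(1.0)  # floats work too, e.g. 0.5
s.connect(("example.com", 80))
s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
try:
    data = s.recv(4096)
except socket.timeout:
    print("no data within 1 second")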
0
301,595
0
159
2010-04-27T05:51:00.000
python,sockets,timeout
How to set timeout on python's socket recv method?
1
1
11
53,769,737
0
0
0
I want to use the htmllib module but it's been removed from Python 3.0. Does anyone know what's the replacement for this module?
false
2,730,752
0.049958
0
0
1
I believe lxml has been ported to Python 3
0
5,120
0
11
2010-04-28T15:14:00.000
python,python-3.x
Replacement for htmllib module in Python 3.0
1
2
4
2,734,917
0
0
0
I want to use the htmllib module but it's been removed from Python 3.0. Does anyone know what's the replacement for this module?
false
2,730,752
0.049958
0
0
1
I heard Beautiful Soup is getting a port to 3.0.
0
5,120
0
11
2010-04-28T15:14:00.000
python,python-3.x
Replacement for htmllib module in Python 3.0
1
2
4
2,732,223
0
1
0
I'm almost afraid to post this question; there has to be an obvious answer I've overlooked, but here I go: Context: I am creating a blog for educational purposes (I want to learn Python and web.py). I've decided that my blog will have posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD). So in my Post class, I've created methods that respond to the POST, GET, PUT, and DELETE HTTP methods. So far so good. The current problem I'm having is a conceptual one: I know that sending a PUT HTTP message (with an edited Post) to, e.g., /post/52 should update post with id 52 with the body contents of the HTTP message. What I do not know is how to conceptually correctly serve the (HTML) edit page. Will doing it like this: /post/52/edit violate the idea of a URI, as 'edit' is not a resource, but an action? On the other side, though, could it be considered a resource, since all that URI will respond to is a GET method that will only return an HTML page? So my ultimate question is this: How do I serve an HTML page intended for user editing in a RESTful manner?
false
2,750,341
0.099668
1
0
2
Instead of calling it /post/52/edit, what if you called it /post/52/editor? Now it is a resource. Dilemma averted.
0
267
0
4
2010-05-01T14:43:00.000
python,rest,web.py
Is www.example.com/post/21/edit a RESTful URI? I think I know the answer, but have another question
1
2
4
2,750,368
0
1
0
I'm almost afraid to post this question; there has to be an obvious answer I've overlooked, but here I go: Context: I am creating a blog for educational purposes (I want to learn Python and web.py). I've decided that my blog will have posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD). So in my Post class, I've created methods that respond to the POST, GET, PUT, and DELETE HTTP methods. So far so good. The current problem I'm having is a conceptual one: I know that sending a PUT HTTP message (with an edited Post) to, e.g., /post/52 should update post with id 52 with the body contents of the HTTP message. What I do not know is how to conceptually correctly serve the (HTML) edit page. Will doing it like this: /post/52/edit violate the idea of a URI, as 'edit' is not a resource, but an action? On the other side, though, could it be considered a resource, since all that URI will respond to is a GET method that will only return an HTML page? So my ultimate question is this: How do I serve an HTML page intended for user editing in a RESTful manner?
false
2,750,341
0.197375
1
0
4
Another RESTful approach is to use the query string for modifiers: /post/52?edit=1 Also, don't get too hung up on the purity of the REST model. If your app doesn't fit neatly into the model, break the rules.
0
267
0
4
2010-05-01T14:43:00.000
python,rest,web.py
Is www.example.com/post/21/edit a RESTful URI? I think I know the answer, but have another question
1
2
4
2,750,379
0
1
0
I am trying to migrate a legacy mailing list to a new web forum software and was wondering if mailman has an export option or an API to get all lists, owners, members and membership types.
false
2,756,311
0.26052
1
0
4
Probably too late, but the list_members LISTNAME command (executed from a shell) will give you all the members of a list, and list_admins LISTNAME will give you the owners. What do you mean by membership type? list_members does have an option to filter on digest vs non-digest members. I don't think there's a way to get the moderation flag without writing a script for use with withlist.
0
2,988
0
2
2010-05-03T05:41:00.000
python,api,mailman
Does Mailman have an API or an export lists, users and owners option?
1
1
3
3,154,975
0
0
0
I've written a Python application that makes web requests using the urllib2 library, after which it scrapes the data. I could deploy this as a web application, which means all urllib2 requests go through my web server. This leads to the danger of the server's IP being banned due to the high number of web requests for many users. The other option is to create a desktop application, which I don't want to do. Is there any way I could deploy my application so that the web requests are made from the client side? One way was to use Jython to create an applet, but I've read that Java applets can only make web requests to the server they are deployed on, and the only way to circumvent this is to create a server-side proxy, which leads us back to the problem of the server's IP getting banned. This might sound like an impossible situation and I'll probably end up creating a desktop application, but I thought I'd ask if anyone knew of an alternate solution. Thanks.
false
2,763,274
0.066568
0
0
1
You can probably use AJAX requests made from JavaScript that is part of the client side: use server → client communication to give commands and the necessary data to make a request, and then use AJAX communication from the client to the 3rd-party server.
0
1,085
0
1
2010-05-04T06:33:00.000
python,urllib2,urllib
making urllib request in Python from the client side
1
1
3
2,763,308
0
0
0
I'd like to extract the info string from an internet radio streamed over HTTP. By info string I mean the short note about the currently played song, band name etc. Preferably I'd like to do it in python. So far I've tried opening a socket but from there I got a bunch of binary data that I could not parse... thanks for any hints
true
2,766,787
1.2
0
0
1
Sounds like you might need some stepping-stone projects before you're ready for this. There's no reason to use a low-level socket library for HTTP. There are great tools, both command-line utilities and Python standard library modules like urllib2's urlopen, that can handle the low-level TCP and HTTP specifics for you. Do you know the URL where your data resides? Have you tried something simple on the command line like using cURL to grab the raw HTML and then some basic tools like grep to hunt down the info you need? I assume here the metadata is actually available as HTML, as opposed to being in a binary format read directly by the radio streamer (which presumably is in Flash, perhaps?). Hard to give you any specifics because your question doesn't include any technical details about your data source.
0
2,040
0
0
2010-05-04T15:44:00.000
python,http,streaming,metadata
Parse metadata from http live stream
1
1
1
2,792,800
0
0
0
I'm looking into using Lua in a web project. I can't seem to find any way of directly parsing and running Lua code in pure Python. Does anyone know how to do this? Joe
false
2,767,854
0.132549
0
0
2
From your comments, it appears you are interested in a secure way of executing untrusted code. Redefining Python builtins, as you suggested in a comment, is a horrible way to secure code. What you want is sandboxing; there are solutions for Python, but I wouldn't recommend them. You would be much better off using Jython or IronPython, because the JVM and .NET CLR were designed with sandboxing in mind. I personally believe that in most cases, if you need to execute untrusted code, then you are putting too much or not enough trust in your users.
1
3,999
0
2
2010-05-04T18:14:00.000
python,lua,eval
Lua parser in python
1
2
3
2,768,130
0
0
0
I'm looking into using Lua in a web project. I can't seem to find any way of directly parsing and running Lua code in pure Python. Does anyone know how to do this? Joe
false
2,767,854
0.066568
0
0
1
@the_drow From Lua's web site: Lua is a fast language engine with small footprint that you can embed easily into your application. Lua has a simple and well documented API that allows strong integration with code written in other languages. It is easy to extend Lua with libraries written in other languages. It is also easy to extend programs written in other languages with Lua. Lua has been used to extend programs written not only in C and C++, but also in Java, C#, Smalltalk, Fortran, Ada, Erlang, and even in other scripting languages, such as Perl and Ruby. @Joe Simpson Check out Lunatic Python, it might have what you want. I know it's an old question, but other people might be looking for this answer, as well. It's a good question that deserves a good answer.
1
3,999
0
2
2010-05-04T18:14:00.000
python,lua,eval
Lua parser in python
1
2
3
18,090,375
0
0
0
I need to be able to block the urls that are stored in a text file on the hard disk using Python. If the url the user tries to visit is in the file, it redirects them to another page instead. How is this done?
false
2,774,006
0.099668
0
0
1
Doing this at the machine level is a weak solution, it would be pretty easy for a technically inclined user to bypass. Even with a server side proxy it will be very easy to bypass unless you firewall normal http traffic, at a bare minimum block ports 80, 443. You could program a proxy in python as Alex suggested, but this is a pretty common problem and there are plenty of off the shelf solutions. That being said, I think that restricting web access will do nothing but aggravate your users.
0
957
0
1
2010-05-05T14:19:00.000
python,windows,internet-explorer,url
Internet Explorer URL blocking with Python?
1
1
2
2,774,159
0
0
0
Getting attributes using minidom in Python, one uses the "attributes" property, e.g. node.attributes["id"].value. So if I have <a id="foo"></a>, that should give me "foo". node.attributes["id"] does not return the value of the named attribute, but an xml.dom.minidom.Attr instance. But looking at the help for Attr, by doing help('xml.dom.minidom.Attr'), nowhere is this magic "value" property mentioned. I like to learn APIs by looking at the type hierarchy, instance methods, etc. Where did this "value" property come from? Why is it not listed in the Attr class's page? The only data descriptors mentioned are isId, localName and schemaType. It's also not inherited from any superclasses. Since I'm new to Python, would some of the Python gurus enlighten me?
false
2,785,703
0
0
0
0
Geez, never noticed that before. You're not kidding, node.value isn't mentioned anywhere. It is definitely being set in the code though under def __setitem__ in xml.dom.minidom. Not sure what to say other than, it looks like you'll have to use that.
0
10,689
0
3
2010-05-07T01:52:00.000
python,xml,minidom
python xml.dom.minidom.Attr question
1
1
2
2,785,722
0
1
0
I'm writing a (tabbed) application for Facebook that requires a background process to run on a server and, periodically, upload images to an album on this application's page. What I'm trying to do is create a script that will: a) authenticate me with the app b) upload an image to a specific album All of this entirely from the command line and completely with the new Graph API. My problem right now is trying to locate the documentation that will allow me to get a token without a pop-up window of sorts. Thoughts?
false
2,791,683
0.099668
0
0
1
If you only need to authenticate as one user, you can get an access token with the offline_access permission that will last forever and just bake that into the script.
0
1,483
0
4
2010-05-07T21:04:00.000
python,facebook,oauth
Can you authenticate Facebook Graph entirely from command line with Python?
1
1
2
7,356,440
0
0
0
Does httplib.HTTPException have error codes? If so how do I get at them from the exception instance? Any help is appreciated.
true
2,791,946
1.2
0
0
5
The httplib module doesn't use exceptions to convey HTTP responses, just genuine errors (invalid HTTP responses, broken headers, invalid status codes, prematurely broken connections, etc.) Most of the httplib.HTTPException subclasses just have an associated message string (stored in the args attribute), if even that. httplib.HTTPException itself may have an "errno" value as the first entry in args (when raised through httplib.FakeSocket) but it's not a HTTP error code. The HTTP response codes are conveyed through the httplib.HTTPConnection object, though; the getresponse method will (usually) return a HTTPResponse instance with a status attribute set to the HTTP response code, and a reason attribute set to the text version of it. This includes error codes like 404 and 500. I say "usually" because you (or a library you use) can override httplib.HTTPConnection.response_class to return something else.
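To make the distinction concrete, a small sketch (the host name is a placeholder): HTTP error codes such as 404 come back through getresponse(), not as exceptions:

    import httplib

    conn = httplib.HTTPConnection("example.com")   # hypothetical host
    conn.request("GET", "/no-such-page")
    resp = conn.getresponse()           # an httplib.HTTPResponse instance
    print resp.status, resp.reason      # e.g. 404 Not Found -- no exception raised
    body = resp.read()
    conn.close()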
0
3,303
0
2
2010-05-07T22:05:00.000
python,http,exception,tcp
python httplib httpexception error codes
1
1
1
2,792,030
0
0
0
I'm using threads and xmlrpclib in python at the same time. Periodically, I create a bunch of threads to complete a service on a remote server via xmlrpclib. The problem is that there are times when the remote server doesn't answer. This causes the thread to wait forever for a response which it never gets. Over time, the number of threads in this state increases and will reach the maximum number of allowed threads on the system (I'm using fedora). I tried to use socket.setdefaulttimeout(10); but the exception that is created by that causes the server to become defunct. I used it on the server side but it seems that it doesn't work :/ Any idea how I can handle this issue?
false
2,806,397
0.099668
0
0
1
You are doing what I usually call (originally in Spanish xD) "happy road programming". You should implement your programs to handle undesired cases, not only the ones you want to happen. The threads here are only showing an underlying mistake: your server can't handle a timeout, and the implementation is rigid in a way that adding a timeout causes the server to crash due to an unhandled exception. Implement it more robustly: it must be able to withstand an exception; servers can't die because of a misbehaving client. If you don't fix this kind of problem now, you may have similar issues later on.
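A minimal sketch of what handling the undesired cases might look like on the client side; the URL and method name are hypothetical:

    import socket
    import xmlrpclib

    socket.setdefaulttimeout(10)   # applies to sockets xmlrpclib creates from here on
    proxy = xmlrpclib.ServerProxy("http://remote-host:8000/")   # hypothetical URL
    try:
        result = proxy.some_method()                            # hypothetical method
    except socket.timeout:
        result = None   # server didn't answer in time; give up cleanly, thread exits
    except (socket.error, xmlrpclib.Fault, xmlrpclib.ProtocolError), e:
        result = None   # log e and carry on instead of letting the thread hang or die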
0
110
0
0
2010-05-10T20:54:00.000
python,multithreading,xml-rpc
too many threads due to synch communication
1
2
2
4,199,610
0
0
0
I'm using threads and xmlrpclib in python at the same time. Periodically, I create a bunch of thread to complete a service on a remote server via xmlrpclib. The problem is that, there are times that the remote server doesn't answer. This causes the thread to wait forever for a response which it never gets. Over time, number of threads in this state increases and will reach the maximum number of allowed threads on the system (I'm using fedora). I tried to use socket.setdefaulttimeout(10); but the exception that is created by that will cause the server to defunct. I used it at server side but it seems that it doesn't work :/ Any idea how can I handle this issue?
false
2,806,397
0
0
0
0
It seems like your real problem is that the server hangs on certain requests, and dies if the client closes the socket - the threads are just a side effect of the implementation. If I'm understanding what you're saying correctly, then the only way to fix this would be to fix the server to respond to all requests, or to be more robust with network failure, or (preferably) both.
0
110
0
0
2010-05-10T20:54:00.000
python,multithreading,xml-rpc
too many threads due to synch communication
1
2
2
2,806,488
0
0
0
I have been creating an application using UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP/socketid in determining what data belongs to whom. However, I have been reading about how people could simply spoof their IP, then just send data as a specific IP. So this seems to be the wrong way to do it (insecure). So how else am I supposed to identify what data belongs to what users? For instance you have 10 users connected, all have specific data. The server would need to match the user data to this data we received. The only way I can see to do this is to use some sort of client/server key system and encrypt the data. I am curious as to how other applications (or games, since that's what this application is) make sure their data is genuine. Also there is the fact that encryption takes much longer to process than unencrypted data. Although I am not sure by how much it will affect performance. Any information would be appreciated. Thanks.
false
2,808,092
0
0
0
0
If you absolutely need to verify that a particular user is a particular user then you need to use some form of encryption where the user signs their messages. This can be done pretty quickly because the user only needs to generate a hash of their message and then sign (encrypt) the hash. For your game application you probably don't need to worry about this. Most ISPs won't allow their users to spoof IP addresses, so you only need to worry about users behind NAT, in which case you may have multiple users running from the same IP address. In this case, and the general one, you can fairly safely identify unique users based on a tuple containing IP address and UDP port.
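A minimal sketch of keying per-user state on the (ip, port) tuple that recvfrom returns; the port number is hypothetical:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 9999))                 # hypothetical game port

    sessions = {}                         # (ip, port) tuple -> per-user state
    while True:
        data, addr = sock.recvfrom(4096)  # addr is the (ip, port) tuple
        state = sessions.setdefault(addr, {"score": 0})   # new user on first packet
        # ... dispatch data against the state stored under sessions[addr]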
0
4,369
0
8
2010-05-11T04:33:00.000
python,security,encryption,cryptography,udp
UDP security and identifying incoming data
1
3
6
2,808,130
0
0
0
I have been creating an application using UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP/socketid in determining what data belongs to whom. However, I have been reading about how people could simply spoof their IP, then just send data as a specific IP. So this seems to be the wrong way to do it (insecure). So how else am I supposed to identify what data belongs to what users? For instance you have 10 users connected, all have specific data. The server would need to match the user data to this data we received. The only way I can see to do this is to use some sort of client/server key system and encrypt the data. I am curious as to how other applications (or games, since that's what this application is) make sure their data is genuine. Also there is the fact that encryption takes much longer to process than unencrypted data. Although I am not sure by how much it will affect performance. Any information would be appreciated. Thanks.
false
2,808,092
0
0
0
0
I would look into the Garage Games networking library. It is written in C++ and uses UDP. It is designed for low latency and is considered one of the best for games. If I remember correctly they would actually calculate the likely position of the player both on the client side and the server side. It would do this for many aspects to ensure integrity of the data. It also would do a crc check on the client software and compare against the server software to make sure they matched. I am not sure you can license it separately anymore so you may have to license the game engine (100 bucks). It would at least give you some insight on a proven approach to UDP for games. Another possibility is looking into the PyGame networking code. It may have already addressed the issues you are facing.
0
4,369
0
8
2010-05-11T04:33:00.000
python,security,encryption,cryptography,udp
UDP security and identifying incoming data
1
3
6
7,210,998
0
0
0
I have been creating an application using UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP/socketid in determining what data belongs to whom. However, I have been reading about how people could simply spoof their IP, then just send data as a specific IP. So this seems to be the wrong way to do it (insecure). So how else am I supposed to identify what data belongs to what users? For instance you have 10 users connected, all have specific data. The server would need to match the user data to this data we received. The only way I can see to do this is to use some sort of client/server key system and encrypt the data. I am curious as to how other applications (or games, since that's what this application is) make sure their data is genuine. Also there is the fact that encryption takes much longer to process than unencrypted data. Although I am not sure by how much it will affect performance. Any information would be appreciated. Thanks.
false
2,808,092
0
0
0
0
I'm breaking this down into four levels of security. Extremely Insecure - Anyone on the network can spoof a valid request/response with generally available prior knowledge. (ie syslog) Very Insecure - Anyone on the network can spoof a valid request/response only if they have at least read access to the wire. (Passive MITM) (ie http-accessible forum with browser cookies) Somewhat Insecure - Anyone on the network can spoof a valid request/response if they can read AND make changes to the wire (Active MITM) (ie https site with self-signed cert) Secure - Requests/Responses cannot be spoofed even with full access to the wire. (ie https-accessible ecommerce site) For Internet games the very insecure solution might actually be acceptable (it would be my choice). It requires no crypto, just a field in your app's UDP packet format with some kind of random, practically unguessable session identifier ferried around for the duration of the game. Somewhat insecure requires a little bit of crypto but none of the trust/PKI/PSK needed to prevent Active MITM as in the secure solution. With somewhat insecure, if the data payloads were not sensitive you could use an integrity-only cipher with (TCP) TLS / (UDP) DTLS to reduce processing overhead and latency at the client and server. For games UDP is a huge benefit because if there is packet loss you don't want the IP stack to waste time retransmitting stale state - you want to send new state. With UDP there are a number of clever schemes such as non-acknowledged frames (world details which don't matter so much if they're lost) and statistical methods of duplicating important state data to counter predictable levels of observed packet loss. At the end of the day I would recommend going very insecure, or somewhat insecure with DTLS integrity only.
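A minimal sketch of the very-insecure-but-often-acceptable option: a random, practically unguessable session identifier carried in every datagram. The payload format and server address are hypothetical:

    import os, binascii, socket

    token = binascii.hexlify(os.urandom(16))   # 128-bit session id, unguessable
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(token + "|" + "player moved north",   # hypothetical payload format
                ("game.example.com", 9999))           # hypothetical server
    # the server keeps a dict mapping token -> player and drops unknown tokens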
0
4,369
0
8
2010-05-11T04:33:00.000
python,security,encryption,cryptography,udp
UDP security and identifying incoming data
1
3
6
2,815,170
0
0
0
How can I distinguish between a broadcasted message and a direct message for my ip? I'm doing this in python.
false
2,830,326
0
1
0
0
Basically what you need to do is create a raw socket, receive a datagram, and examine the destination address in the header. If that address is a broadcast address for the network adapter the socket is bound to, then you're golden. I don't know how to do this in Python, so I suggest looking for examples of raw sockets and go from there. Bear in mind, you will need root access to use raw sockets, and you had better be real careful if you plan on sending using a raw socket. As you might imagine, this will not be a fun thing to do. I suggest trying to find a way to avoid doing this.
0
83
0
0
2010-05-13T21:09:00.000
python
Distinguishing between broadcasted messages and direct messages
1
1
1
2,830,485
0
0
0
Was looking to write a little web crawler in python. I was starting to investigate writing it as a multithreaded script, one pool of threads downloading and one pool processing results. Due to the GIL would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, etc..? Basically I'm asking is doing a multi-threaded crawler in python really going to buy me much performance vs single threaded? thanks!
false
2,830,880
1
0
0
8
The GIL is not held by the Python interpreter when doing network operations. If you are doing work that is network-bound (like a crawler), you can safely ignore the effects of the GIL. On the other hand, you may want to measure your performance if you create lots of threads doing processing (after downloading). Limiting the number of threads there will reduce the effects of the GIL on your performance.
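A minimal sketch of the usual pattern: a pool of downloader threads fed from a queue. The GIL is released while urlopen/read block on the network, so the downloads genuinely overlap:

    import threading
    import urllib2
    from Queue import Queue

    def worker(q, results):
        while True:
            url = q.get()
            try:
                results.append((url, urllib2.urlopen(url).read()))
            except Exception:
                pass                      # a real crawler would log and maybe retry
            q.task_done()

    q, results = Queue(), []
    for _ in range(8):                    # 8 downloader threads
        t = threading.Thread(target=worker, args=(q, results))
        t.setDaemon(True)
        t.start()
    for url in ["http://example.com/", "http://example.org/"]:
        q.put(url)
    q.join()                              # wait until every URL has been fetched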
1
5,361
0
10
2010-05-13T23:02:00.000
python,multithreading,gil
Does a multithreaded crawler in Python really speed things up?
1
2
5
2,830,905
0
0
0
Was looking to write a little web crawler in python. I was starting to investigate writing it as a multithreaded script, one pool of threads downloading and one pool processing results. Due to the GIL would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, etc..? Basically I'm asking is doing a multi-threaded crawler in python really going to buy me much performance vs single threaded? thanks!
false
2,830,880
0.039979
0
0
1
Another consideration: if you're scraping a single website and the server places limits on the frequency of requests you can send from your IP address, adding multiple threads may make no difference.
1
5,361
0
10
2010-05-13T23:02:00.000
python,multithreading,gil
Does a multithreaded crawler in Python really speed things up?
1
2
5
2,830,933
0
1
0
Recently I needed to generate a huge HTML page containing a report with a several-thousand-row table. And, obviously, I did not want to build the whole HTML (or the underlying tree) in memory. As a result, I built the page with the good old string interpolation, but I do not like the solution. Thus, I wonder whether there are Python templating engines that can yield resulting page content by parts. UPD 1: I am not interested in listing all available frameworks and templating engines. I am interested in templating solutions that I can use separately from any framework and which can yield content by portions instead of building the whole result in memory. I understand the usability enhancements from partial content loading with client scripting, but that is out of the scope of my current question. Say, I want to generate a huge HTML/XML and stream it into a local file.
false
2,832,915
0.039979
0
0
1
You don't need a streaming templating engine - I do this all the time, and long before you run into anything vaguely heavy server-side, the browser will start to choke. Rendering a 10000 row table will peg the CPU for several seconds in pretty much any browser; scrolling it will be bothersomely choppy in chrome, and the browser mem usage will rise regardless of browser. What you can do (and I've previously implemented, even though in retrospect it turns out not to be necessary) is use client-side xslt. Printing the xslt processing instruction and the opening and closing tag using strings is easy and fairly safe; then you can stream each individual row as a standalone xml element using whatever xml writer technique you prefer. However - you really don't need this, and likely never will - if ever your html generator gets too slow, the browser will be an order of magnitude more problematic. So, unless you benchmarked this and have determined you really have a problem, don't waste your time. If you do have a problem, you can solve it without fundamentally changing the method - in memory generation can work just fine.
0
1,055
0
4
2010-05-14T09:02:00.000
python,templates
Python templates for huge HTML/XML
1
2
5
2,897,474
0
1
0
Recently I needed to generate a huge HTML page containing a report with a several-thousand-row table. And, obviously, I did not want to build the whole HTML (or the underlying tree) in memory. As a result, I built the page with the good old string interpolation, but I do not like the solution. Thus, I wonder whether there are Python templating engines that can yield resulting page content by parts. UPD 1: I am not interested in listing all available frameworks and templating engines. I am interested in templating solutions that I can use separately from any framework and which can yield content by portions instead of building the whole result in memory. I understand the usability enhancements from partial content loading with client scripting, but that is out of the scope of my current question. Say, I want to generate a huge HTML/XML and stream it into a local file.
false
2,832,915
0.07983
0
0
2
It'd be more user-friendly (assuming they have javascript enabled) to build the table via javascript by using e.g. a jQuery plugin which allows automatic loading of content as soon as you scroll down. Then only a few rows are loaded initially, and when the user scrolls down more rows are loaded on demand. If that's not a solution, you could use three templates: one for everything before the rows, one for everything after the rows and a third one for the rows. Then you first send the before-rows template, then generate the rows and send them immediately, then the after-rows template. Then you will have only one block/row in memory instead of the whole table.
0
1,055
0
4
2010-05-14T09:02:00.000
python,templates
Python templates for huge HTML/XML
1
2
5
2,832,958
0
1
0
I'm trying to develop an app using Django 1.1 on Webfaction. I'd like to get the IP address of the incoming request, but when I use request.META['REMOTE_ADDR'] it returns 127.0.0.1. There seems to be a number of different ways of getting the address, such as using HTTP_X_FORWARDED_FOR or plugging in some middleware called SetRemoteAddrFromForwardedFor. Just wondering what the best approach was?
true
2,840,329
1.2
0
0
1
I use the middleware because this way I don't have to change the app's code. If I want to migrate my app to other hosting servers, I only need to modify the middleware without affecting other parts. Security is not an issue because on WebFaction you can trust what comes in from the front end server.
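For reference, a minimal sketch of what such a middleware looks like (old-style Django 1.1 middleware; add it to MIDDLEWARE_CLASSES in settings.py). It assumes, as on WebFaction, that the front-end server's X-Forwarded-For header can be trusted:

    class SetRemoteAddrFromForwardedFor(object):
        def process_request(self, request):
            forwarded = request.META.get("HTTP_X_FORWARDED_FOR")
            if forwarded:
                # the header may be a comma-separated chain; take the first hop
                request.META["REMOTE_ADDR"] = forwarded.split(",")[0].strip()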
0
748
0
0
2010-05-15T13:46:00.000
python,django
Django: What's the correct way to get the requesting IP address?
1
1
2
2,840,883
0
0
0
Sometimes I have to send a message to a specific IP and sometimes I have to broadcast the message to all the IP's in my network. At the other end I have to distinguish between a broadcast and a normal one, but recvfrom() just returns the address the message came from; there is no difference between them. Can anyone help me distinguish them? UDP is the protocol.
false
2,848,098
0.761594
0
0
5
I don't think it's possible with Python's socket module. UDP is a very minimalistic protocol, and the only way to distinguish between a broadcast and a non-broadcast UDP packet is by looking at the destination address. However, you cannot inspect that part of the packet with the BSD socket API (if I remember it correctly), and the socket module exposes the BSD socket API only. Your best bet would probably be to use the first byte of the message to denote whether it is a broadcast or a unicast message.
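A minimal sketch of that first-byte convention; the port and peer address are hypothetical:

    import socket

    MARKER_BCAST, MARKER_UNICAST = "B", "U"
    PORT = 9999                                        # hypothetical port

    # sender side
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    out.sendto(MARKER_BCAST + "hello everyone", ("<broadcast>", PORT))
    out.sendto(MARKER_UNICAST + "hello you", ("192.168.1.5", PORT))

    # receiver side
    inp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inp.bind(("", PORT))
    data, addr = inp.recvfrom(4096)
    is_broadcast = (data[0] == MARKER_BCAST)           # first byte tells them apart
    payload = data[1:]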
0
116
0
2
2010-05-17T10:00:00.000
python
How to identify a broadcasted message?
1
1
1
2,848,539
0
1
0
I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here: I'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Edit: I'd use jQuery for this option. Learn some basic XSLT and produce HTML to the correct spec this way. Produce the site with PHP (for example) and then generate a static site. Write a script (in Python for example) to convert the XML into HTML. This is similar to the XSLT option but without having to learn XSLT. Useful information: The XML will likely change at some point, so I'd like to be able to easily regenerate the site. I'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content). I'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. Thanks.
false
2,850,534
0.049958
0
0
1
I would go with the PHP option. The reason is that when the XML changes your site content "should" automatically change without you having to touch your PHP code. Creating a Python script to generate lots of static pages just seems like a bad idea to me, and with javascript you will have cross-browser headaches (unless you are using a framework maybe). Use the server-side languages for these kinds of tasks; it is what they were made for.
0
1,365
0
3
2010-05-17T15:47:00.000
python,html,xml,ajax,xslt
Producing a static HTML site from XML content
1
3
4
2,850,582
0
1
0
I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here: I'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Edit: I'd use jQuery for this option. Learn some basic XSLT and produce HTML to the correct spec this way. Produce the site with PHP (for example) and then generate a static site. Write a script (in Python for example) to convert the XML into HTML. This is similar to the XSLT option but without having to learn XSLT. Useful information: The XML will likely change at some point, so I'd like to be able to easily regenerate the site. I'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content). I'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. Thanks.
false
2,850,534
0.099668
0
0
2
I would go with the XSLT option, controlled via parameters to generate different pages from the same XML source if needed. It's really the tool made for XML transformations.
0
1,365
0
3
2010-05-17T15:47:00.000
python,html,xml,ajax,xslt
Producing a static HTML site from XML content
1
3
4
2,850,603
0
1
0
I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here: I'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Edit: I'd use jQuery for this option. Learn some basic XSLT and produce HTML to the correct spec this way. Produce the site with PHP (for example) and then generate a static site. Write a script (in Python for example) to convert the XML into HTML. This is similar to the XSLT option but without having to learn XSLT. Useful information: The XML will likely change at some point, so I'd like to be able to easily regenerate the site. I'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content). I'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. Thanks.
false
2,850,534
0
0
0
0
Go with what you are most comfortable with. If it's straightforward you could use (for example) php to generate a page and then use a command line script (in python or php) to create cached files for you.
0
1,365
0
3
2010-05-17T15:47:00.000
python,html,xml,ajax,xslt
Producing a static HTML site from XML content
1
3
4
2,850,635
0
0
0
I recently downloaded and installed feedparser with Python. I tried to run it, but NetBeans shouts on import: ImportError: No module named feedparser I restarted NetBeans; still no go.
true
2,852,301
1.2
0
0
1
NetBeans by default uses Jython; if you go to Tools > Python Platforms you will see that Jython is the default. Switch it to Python so the installed libraries will work. If you already have a project, right-click on it, choose Python, and for the platform choose Python instead of Jython.
1
1,032
0
0
2010-05-17T19:59:00.000
python,netbeans,feedparser
adding the feedparser module to python
1
1
2
2,856,391
0
0
0
I am programming a web interface which communicates with cisco switches via telnet. I want to build a system which will store one telnet connection per switch, and every script (web interface, cron jobs, etc.) will have access to it. This is needed to make a single query queue for each device and prevent the huge cisco processor load caused by several concurrent telnet connections. How can I do this? updated The option with a connection-handling daemon is good and will work in the best way. Sharing a telnet connection object between scripts may be difficult to implement and debug. But this option is interesting because the interface is used only by a couple of operators and cron jobs.
true
2,856,356
1.2
0
0
1
The usual way would be to have a process running in the background that keeps hold of the persistent telnet connections and commands queued to go down them. Then have the front-end scripts connect to it (eg. via a unix socket) to queue commands and get the results asynchronously. But this might be overkill. How many people are you expecting to be using a switch interface concurrently? A lightweight alternative for just the web part of it would be to keep the telnet connection object in the web scripts, and configure the web server/gateway to only launch one instance of your webapp at once.
0
1,334
0
0
2010-05-18T10:30:00.000
python,django,telnet,telnetlib
Python (Django). Store telnet connection
1
1
1
2,856,596
0
1
0
Is there a function or method I could call in Python that would tell me if the data is RSS or HTML?
true
2,882,549
1.2
0
0
0
Filetypes should generally be determined out-of-band. eg. if you are fetching the file from a web server, the place to look would be the Content-Type header of the HTTP response. If you're fetching a local file, the filesystem would have a way of determining filetype—on Windows that'd be looking at the file extension. If none of that is available, you'd have to resort to content sniffing. This is never wholly reliable, and RSS is particularly annoying because there are multiple incompatible versions of it, but about the best you could do would probably be: Attempt to parse the content with an XML parser. If it fails, the content isn't well-formed XML so can't be RSS. Look at the document.documentElement.namespaceURI. If it's http://www.w3.org/1999/xhtml, you've got XHTML. If it's http://www.w3.org/1999/02/22-rdf-syntax-ns#, you've got RSS (of one flavour). If the document.documentElement.tagName is rss, you've got RSS (of a slightly different flavour). If the file couldn't be parsed as XML, it could well be HTML (or some tag-soup approximation of it). It's conceivable it might also be broken RSS. In that case most feed tools would reject it. If you need to still detect this case you'd be reduced to looking for strings like <html or <rss or <rdf:RSS near the start of the file. This would be even more unreliable.
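A minimal content-sniffing sketch along those lines, with the caveats above (only the stated namespaces are checked, and anything that fails to parse is assumed to be HTML or tag soup):

    from xml.dom import minidom

    XHTML_NS = "http://www.w3.org/1999/xhtml"
    RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

    def sniff(content):
        try:
            root = minidom.parseString(content).documentElement
        except Exception:                 # not well-formed XML, so not RSS
            return "html-or-tag-soup"
        if root.namespaceURI == XHTML_NS:
            return "xhtml"
        if root.tagName == "rss" or root.namespaceURI == RDF_NS:
            return "rss"
        return "unknown-xml"

    print sniff('<rss version="2.0"><channel/></rss>')   # prints "rss"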
0
81
0
1
2010-05-21T13:44:00.000
python,html,rss
Identifying if a data is RSS or HTML on python
1
2
2
2,883,114
0
1
0
Is there a function or method I could call in Python that would tell me if the data is RSS or HTML?
false
2,882,549
0.197375
0
0
2
You could always analyze it yourself to search for an xml tag (for RSS) or html tag (for HTML).
0
81
0
1
2010-05-21T13:44:00.000
python,html,rss
Identifying if a data is RSS or HTML on python
1
2
2
2,882,574
0
0
0
I am going to handle XML files for a project. I had earlier decided to use lxml, but after reading the requirements, I think ElementTree would be better for my purpose. The XML files that have to be processed are: Small in size. Typically < 10 KB. No namespaces. Simple XML structure. Given the small XML size, memory is not an issue. My only concern is fast parsing. What should I go with? Mostly I have seen people recommend lxml, but given my parsing requirements, do I really stand to benefit from it or would ElementTree serve my purpose better?
false
2,908,440
0
0
0
0
lxml is basically a superset of ElementTree, so you could start with ElementTree and then, if you have performance or functionality issues, change to lxml. Performance issues can only be studied by you using your own data.
0
372
0
4
2010-05-25T20:52:00.000
python,lxml,celementtree
Which Python XML library should I use?
1
1
3
2,908,479
0
0
0
I'm trying to manually create the file descriptor associated with a socket in python and then load it directly into memory with mmap. Creating a file in memory with mmap is simple, but I can not find a way to associate the file with a socket. Thanks for your responses. The problem I have is that I can not create more than a certain number of sockets in python (or the operating system); I get the error: "[errno 24] Too many open files." I think the error is because I can not create more file descriptors on disk, so I want to create them in memory, to avoid this limitation. Any suggestions?
false
2,922,548
0.099668
0
0
1
Why do you want to load this into memory using mmap? If you are on a unix variant, you can create a unix socket which is a file descriptor which can be used just like any other socket. A socket and a memory-mapped file are two distinct entities - it is probably not a good idea to try and mix them. Perhaps it would be helpful to take a step back and discuss what you are trying to do at a higher level.
1
2,393
0
0
2010-05-27T15:38:00.000
python,sockets,file-descriptor
change file descriptor for socket in python
1
1
2
2,922,605
0
0
0
I have domain on a shared hosting provider. How do I find the direct IP address of my domain using Python? Is it possible to post to a script on my domain using the IP address and not the website itself? Thanks.
false
2,924,736
0
0
0
0
I guess the IP should be static, so do you really need to look it up more than once? Also note that you need to specify the domain name so that the webserver knows which host configuration to use, unless you have a dedicated IP or your host is the default for that webserver.
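For the lookup itself, a one-liner from the standard library does it; the domain is a placeholder:

    import socket

    ip = socket.gethostbyname("example.com")   # e.g. "93.184.216.34"
    print ip

If you then post to the IP directly on a shared host, you generally have to supply the domain in a Host header yourself so the right virtual host answers.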
0
725
0
0
2010-05-27T20:40:00.000
python,cgi
IP address of domain on shared host
1
1
3
2,924,798
0
0
0
I am trying to create a directory with news articles collected from an rss feed, meaning that whenever there is a link to an article within the rss feed, I would like for it to be downloaded into a directory, with the title of the specific article as the filename, as a text file. Is that something Python can help me do? Thank you for your help :-)
false
2,927,543
0.099668
1
0
1
Of course. BeautifulSoup, lxml, urllib2, urlgrabber.
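A minimal sketch with urllib2 and minidom from the standard library; the feed URL is hypothetical and the feed is assumed to be a simple RSS 2.0 document where every item has a title and a link:

    import os, re, urllib2
    from xml.dom import minidom

    feed = minidom.parse(urllib2.urlopen("http://site.example/feed.rss"))
    if not os.path.isdir("articles"):
        os.makedirs("articles")
    for item in feed.getElementsByTagName("item"):
        title = item.getElementsByTagName("title")[0].firstChild.data
        link = item.getElementsByTagName("link")[0].firstChild.data
        name = re.sub(r"[^\w\- ]", "_", title) + ".txt"   # crude filename cleanup
        open(os.path.join("articles", name), "w").write(
            urllib2.urlopen(link).read())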
0
442
0
0
2010-05-28T08:22:00.000
python,rss
Downloading from links in an rss feed
1
1
2
2,927,551
0
1
0
I would like to know what is the best way to send files between Python and C# and vice versa. I have my own protocol which works on the socket level, and I can send strings and numbers in both directions. Loops work too. With this I can send pretty much anything, like a package of user ids, if it is simple data. But soon I will start sending whole files, maybe xml or executables. A simple file server is not an option because I want to send files from the client too. I was thinking about serialization but I don't know if it is the best solution; if it is, I will love some tips from the stackoverflow community. EDIT: I added django to the question and chose http.
true
2,930,211
1.2
0
0
0
The easiest way in my use case was to send files using HTTP, because on the Python side I additionally have Django running.
0
347
0
1
2010-05-28T15:17:00.000
c#,python,sockets
Send files between python+django and C#
1
1
2
2,965,960
0
1
0
We're writing a web-based tool to configure our services provided by multiple servers. This includes interface configuration, dhcp configs etc. Having the configs in a database and views that generate the proper output, how do we send it to / make it available for the servers? I'm thinking about sending it through scp and invoking a reload command for the services through ssh. I'm also thinking about using Func to do all the job, as this is a Python tool and will seemingly integrate with our python-based (django) config tool. Any other proposals?
false
2,932,007
0
0
0
0
It really depends what you're intending to do, as the question is a little vague. The other answers cover the tools available; choosing one over the other comes down to purpose. Are you intending to manage servers, and services on those servers? If so, try Puppet, CFEngine, or some other tool for managing server configurations. Or, more specifically, are you looking for a deployment/buildout tool that talks to servers? So that you can type in something along the lines of "mytool deploy myproject", and have your project propagate to all the servers? In which case, fabric would be the tool to use. Generally a good configuration will consist of both anyway... but for what it's worth, from the sound of it (managing DHCP/network/etc.), Puppet's the way to go.
0
226
0
0
2010-05-28T19:46:00.000
python,linux,administration,func
Methods of sending web-generated config files to servers and restarting services
1
1
4
3,130,135
0
0
0
when using Python's stock XML tools such as xml.dom.minidom for XML writing, a file would always start off like <?xml version="1.0"?> [...] While this is perfectly legal XML code, and it's even recommended to use the header, I'd like to get rid of it as one of the programs I'm working with has problems here. I can't seem to find the appropriate option in xml.dom.minidom, so I wondered if there are other packages which do allow to neglect the header. Cheers, Nico
false
2,933,262
0
0
0
0
If you're set on using minidom, just scan back in the file and remove the first line after writing all the XML you need.
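One thing worth knowing: serializing from the document element instead of the Document object skips the declaration entirely. A minimal sketch:

    from xml.dom import minidom

    doc = minidom.parseString("<root><child/></root>")
    print doc.toxml()                  # starts with <?xml version="1.0" ?>
    print doc.documentElement.toxml()  # just <root><child/></root>, no header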
0
9,270
0
12
2010-05-29T00:28:00.000
python,xml
How to write an XML file without header in Python?
1
2
8
2,933,332
0
0
0
when using Python's stock XML tools such as xml.dom.minidom for XML writing, a file would always start off like <?xml version="1.0"?> [...] While this is perfectly legal XML code, and it's even recommended to use the header, I'd like to get rid of it as one of the programs I'm working with has problems here. I can't seem to find the appropriate option in xml.dom.minidom, so I wondered if there are other packages which do allow to neglect the header. Cheers, Nico
false
2,933,262
0
0
0
0
Purists may not like to hear this, but I have found using an XML parser to generate XML to be overkill. Just generate it directly as strings. This also lets you generate files larger than you can keep in memory, which you can't do with DOM. Reading XML is another story.
0
9,270
0
12
2010-05-29T00:28:00.000
python,xml
How to write an XML file without header in Python?
1
2
8
2,933,289
0
1
0
Is it possible for my python web app to provide an option for the user to automatically send jobs to the locally connected printer? Or will the user always have to use the browser to manually print out everything?
false
2,936,384
0
0
0
0
If your Python webapp is running inside a browser on the client machine, I don't see any other way than manually for the user. Some workarounds you might want to investigate: if your web app is installed on the client machine, you will be able to connect directly to the printer, as you have access to the underlying OS. you could potentially create a plugin that can be installed in the browser that does this for him, but I have no clue as to how this works technically. what is it that you want to print? You could generate a pdf that contains everything that the user needs to print, in one go?
0
1,545
0
0
2010-05-29T19:48:00.000
python,web-applications,printing
python web script send job to printer
1
1
2
2,936,475
0
0
0
Is there any library or other way by which I can convert my XML records to YAML format?
false
2,943,862
0.148885
0
0
3
The difference between XML and YAML is significant enough to warrant a redesign of the schema you are using to store your data. You should write a script to parse your XML records and output YAML formatted data. There are some methods out there to convert any generic XML into YAML, but the results are far less usable than a method designed specifically for your schema.
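For a quick generic starting point, a sketch using ElementTree plus the third-party PyYAML package. Note the naive mapping collapses repeated child tags, which is exactly why a schema-specific converter is the better answer; the input file name is hypothetical:

    import yaml                                # PyYAML, third-party
    from xml.etree import ElementTree          # stdlib since Python 2.5

    def element_to_dict(elem):
        children = list(elem)
        if not children:
            return elem.text
        # naive: duplicate child tags collapse into one key
        return dict((child.tag, element_to_dict(child)) for child in children)

    root = ElementTree.parse("records.xml").getroot()   # hypothetical input
    print yaml.safe_dump({root.tag: element_to_dict(root)},
                         default_flow_style=False)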
0
6,025
0
10
2010-05-31T13:37:00.000
python,xml,tags,yaml
is there anything exist to convert xml -> yaml directly?
1
1
4
2,955,385
0
0
0
Let's say you run a third-party program on your computer which creates a process named example.exe. How do I determine if this process is running, and how many windows has it opened? How do I intercept network communication between these windows and the server? My goal is to create an app which will monitor network traffic between example.exe and its home server in order to analyze the data and save it to a database, and finally simulate user interaction to get more relevant data.
false
2,945,074
0
0
0
0
You could use wireshark from wireshark.org to sniff the network traffic (or any other packet sniffer).
0
620
0
2
2010-05-31T17:30:00.000
python,networking,communication
python intercepting communication
1
1
2
2,945,291
0
0
0
I wonder if it is better to add an element by opening the file, searching for a 'good place' and adding a string which contains the xml code, or to use some library... I have no idea. I know how I can get nodes and properties from xml through, for example, lxml, but what's the simplest and the best way to add?
false
2,977,779
0.099668
0
0
1
The safest way to add nodes to an XML document is to load it into a DOM, add the nodes programmatically and write it out again. There are several Python XML libraries. I have used minidom, but I have no reason to recommend it specifically over the others.
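A minimal minidom sketch of the parse / modify / rewrite cycle; the file and element names are hypothetical:

    from xml.dom import minidom

    doc = minidom.parse("config.xml")              # hypothetical file
    new = doc.createElement("item")                # hypothetical element
    new.setAttribute("name", "example")
    new.appendChild(doc.createTextNode("some value"))
    doc.documentElement.appendChild(new)

    f = open("config.xml", "w")
    doc.writexml(f)
    f.close()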
0
4,733
0
4
2010-06-04T21:01:00.000
python,xml
add xml node to xml file with python
1
1
2
2,977,799
0
1
0
I'm building my startup and I'm thinking ahead for shared use of services. So far I want to allow people who have a user account on one app to be able to use the same user account on another app. This means I will have to build an authentication server. I would like some opinions on how to allow an app to talk to the authentication server. Should I use curl? Should I use Python's http libs? All the code will be in Python. All it's going to do is ask the authentication server if the person is allowed to use that app and the auth server will return a JSON user object. All authorization (roles and resources) will be app independent, so this app will not have to handle that. Sorry if this seems a bit newbish; this is the first time I have separated authentication from the actual application.
false
2,986,317
0.066568
0
0
1
Assuming you plan to write your own auth client code, it isn't event-driven, and you don't need to validate an https certificate, I would suggest using python's built-in urllib2 to call the auth server. This will minimize dependencies, which ought to make deployment and upgrades easier. That being said, there are more than a few existing auth-related protocols and libraries in the world, some of which might save you some time and security worries over writing code from scratch. For example, if you make your auth server speak OpenID, many off-the-shelf applications and servers (including Apache) will have auth client plugins already made for you.
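A minimal sketch of such an auth client with urllib2; the endpoint is hypothetical, and json is in the standard library from Python 2.6 (use simplejson on older versions):

    import json
    import urllib
    import urllib2

    def check_user(username, password):
        data = urllib.urlencode({"username": username, "password": password})
        # passing data makes urllib2 issue a POST; hypothetical internal endpoint
        response = urllib2.urlopen("http://auth.internal/check", data)
        return json.loads(response.read())   # the JSON user object, as a dict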
0
308
0
0
2010-06-06T23:14:00.000
python,authentication,rest
Talking to an Authentication Server
1
2
3
2,986,411
0
1
0
I'm building my startup and I'm thinking ahead for shared use of services. So far I want to allow people who have a user account on one app to be able to use the same user account on another app. This means I will have to build an authentication server. I would like some opinions on how to allow an app to talk to the authentication server. Should I use curl? Should I use Python's http libs? All the code will be in Python. All it's going to do is ask the authentication server if the person is allowed to use that app and the auth server will return a JSON user object. All authorization (roles and resources) will be app independent, so this app will not have to handle that. Sorry if this seems a bit newbish; this is the first time I have separated authentication from the actual application.
false
2,986,317
0
0
0
0
Your question isn't really a programming problem so much as it is an architecture problem. What I would recommend for your specific situation is to setup an LDAP server for authentication, authorization, and accounting (AAA). Then have your applications use that (every language has modules and libraries for LDAP). It is a reliable, secure, proven, and well-known way of handling such things. Even if you strictly want to enforce HTTP-based authentication it is easy enough to slap an authentication server in front of your LDAP and call it a day. There's even existing code to do just that so you won't have to re-invent the wheel.
0
308
0
0
2010-06-06T23:14:00.000
python,authentication,rest
Talking to an Authentication Server
1
2
3
2,986,610
0
0
0
I am trying to write some unit tests for a small web service written with Cherrypy and I am wondering what's the best way to figure out that the server has started, so i don't get connection refused if I try to connect too early to the service ?
true
2,988,636
1.2
0
0
4
I got it figured out: cherrypy.engine.start(); cherrypy.server.wait() is the way to go. Otherwise, I think you can get away with some tricks with cherrypy.server.bus.states
0
250
0
4
2010-06-07T10:20:00.000
python,cherrypy
cherrypy when to know that the server has started
1
1
1
2,989,432
0
0
0
I'm writing an application that sends files over the network. I want to develop a custom protocol so as not to limit myself in terms of feature richness (http wouldn't be appropriate; the nearest thing is the bittorrent protocol maybe). I've tried with twisted. I've built a good app, but there's a bug in twisted that makes my GUI block, so I have to switch to another framework/strategy. What do you suggest? Is using raw sockets and the gtk mainloop (there are select-like functions in the toolkit) too difficult? Is it viable to run two mainloops in different threads? Asking for suggestions
true
2,991,852
1.2
0
0
1
Disclaimer: I have little experience with network applications. That being said, raw sockets aren't terribly difficult to wrap your head around/use, especially if you're not too worried about optimization. That takes more thought, of course. But using GTK and raw sockets should be fairly straightforward. Especially since you've used the twisted framework, which, IIRC, just abstracts some of the more nitty-gritty details of socket management.
0
1,158
0
1
2010-06-07T17:47:00.000
python,networking
networking application and GUI in python
1
2
4
2,991,942
0
0
0
I'm writing an application that sends files over the network. I want to develop a custom protocol so as not to limit myself in terms of feature richness (http wouldn't be appropriate; the nearest thing is the bittorrent protocol maybe). I've tried with twisted. I've built a good app, but there's a bug in twisted that makes my GUI block, so I have to switch to another framework/strategy. What do you suggest? Is using raw sockets and the gtk mainloop (there are select-like functions in the toolkit) too difficult? Is it viable to run two mainloops in different threads? Asking for suggestions
false
2,991,852
0.049958
0
0
1
Two threads: one for the GUI, one for sending/receiving data. Tkinter would be a perfectly fine toolkit for this. You don't need twisted or any other external libraries or toolkits -- what comes out of the box is sufficient to get the job done.
0
1,158
0
1
2010-06-07T17:47:00.000
python,networking
networking application and GUI in python
1
2
4
2,991,935
0
1
0
I'm at the moment working on a web page where the users who visit it should have the possibility to create an event in my web page's name. There is a Page on Facebook for the web page which should be the owner of the user created event. Is this possible? All users are authenticated using Facebook Connect, but since the event won't be created in their name I don't know if that's so much of help. The Python SDK will be used since the event shall be implemented server side. / D
false
3,005,640
0
0
0
0
This is possible. Using the access token provided for your page, you can publish to it as you would with a user. If you want to post FROM the USER then you need to use the current user's access token; if you want to post FROM the PAGE then you can publish to it using the access token from the page.
0
700
0
1
2010-06-09T12:12:00.000
python,django,facebook
Create event for another owner using Facebook Graph API
1
1
1
6,583,766
0
0
0
I am looking for a way of programmatically testing a script written with the asyncore Python module. My test consists of launching the script in question -- if a TCP listen socket is opened, the test passes. Otherwise, if the script dies before getting to that point, the test fails. The purpose of this is knowing if a nightly build works (at least up to a point) or not. I was thinking the best way to test would be to launch the script in some kind of sandbox wrapper which waits for a socket request. I don't care about actually listening for anything on that port, just intercepting the request and using that as an indication that my test passed. I think it would be preferable to intercept the open socket request, rather than polling at set intervals (I hate polling!). But I'm a bit out of my depths as far as how exactly to do this. Can I do this with a shell script? Or perhaps I need to override the asyncore module at the Python level? Thanks in advance, - B
false
3,014,686
0
1
0
0
Another option is to mock the socket module before importing the asyncore module. Of course, then you have to make sure that the mock works properly first.
0
838
1
0
2010-06-10T13:19:00.000
python,testing,sockets,wrapper
How can I build a wrapper to wait for listening on a port?
1
1
2
3,019,494
0
0
0
How can I send an XML file on my system to an HTTP server using the Python standard library?
false
3,020,979
0.099668
0
0
1
You can achieve that through a standard http post request.
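A minimal sketch of that with urllib2 from the standard library; the file name and URL are placeholders:

    import urllib2

    xml_data = open("payload.xml", "rb").read()              # hypothetical file
    req = urllib2.Request("http://server.example/receive",   # hypothetical URL
                          data=xml_data,
                          headers={"Content-Type": "application/xml"})
    response = urllib2.urlopen(req)   # data present, so this is a POST
    print response.read()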
0
5,264
0
7
2010-06-11T07:35:00.000
python,xml,http
send xml file to http using python
1
1
2
3,021,000
0
0
0
I need a simple graphics library that supports the following functionality: Ability to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent), Ability to load bitmap images, Ability to read current color of pixel in a given coordinate. Ideally using JavaScript or Python. Seems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?
false
3,021,514
0.119427
0
0
3
PyGame can do all of those things. OTOH, I don't think it embeds into a GUI too well.
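A minimal PyGame sketch covering all three points; the image file name is hypothetical. Note that on a SRCALPHA surface the draw functions write the RGBA value into the pixels rather than alpha-blending it:

    import pygame

    pygame.init()
    surface = pygame.Surface((200, 200), pygame.SRCALPHA)  # per-pixel alpha

    # 1. polygon with an RGBA (partially transparent) color
    pygame.draw.polygon(surface, (255, 0, 0, 128),
                        [(10, 10), (190, 50), (100, 180)])

    # 2. load a bitmap image
    image = pygame.image.load("picture.png")               # hypothetical file
    surface.blit(image, (0, 0))

    # 3. read the current color of a pixel
    print surface.get_at((50, 50))                         # an RGBA tuple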
0
1,047
0
2
2010-06-11T09:00:00.000
javascript,python,graphics,canvas,svg
Simple graphics API with transparency, polygons, reading image pixels?
1
3
5
3,022,580
0
0
0
I need a simple graphics library that supports the following functionality: Ability to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent), Ability to load bitmap images, Ability to read current color of pixel in a given coordinate. Ideally using JavaScript or Python. Seems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?
false
3,021,514
0
0
0
0
I voted for PyGame, but I would also like to point out that the new QT graphics library seems quite capable. I have not used PyQT with QT4 yet, but I really like PyQT development with QT3.
0
1,047
0
2
2010-06-11T09:00:00.000
javascript,python,graphics,canvas,svg
Simple graphics API with transparency, polygons, reading image pixels?
1
3
5
3,023,182
0
0
0
I need a simple graphics library that supports the following functionality: Ability to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent), Ability to load bitmap images, Ability to read current color of pixel in a given coordinate. Ideally using JavaScript or Python. Seems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?
false
3,021,514
0.07983
0
0
2
I ended up going with Canvas. The "secret" of polygons is using paths. Thanks, "tur1ng"!
0
1,047
0
2
2010-06-11T09:00:00.000
javascript,python,graphics,canvas,svg
Simple graphics API with transparency, polygons, reading image pixels?
1
3
5
3,027,643
0
0
0
I'm working on a script that will upload videos to YouTube with different accounts. Is there a way to use HTTPS or SOCKS proxies to filter all the requests? My client doesn't want to leave any footprints for Google. The only way I found was to set the proxy environment variable beforehand, but this seems cumbersome. Is there some way I'm missing? Thanks :)
false
3,026,881
0
0
0
0
Setting an environment variable (e.g. import os; os.environ['BLAH']='BLUH' once at the start of your program "seems cumbersome"?! What does count as "non-cumbersome" for you, pray?
0
1,129
0
0
2010-06-11T23:34:00.000
python,api,youtube,gdata
How to use a Proxy with Youtube API? (Python)
1
1
1
3,027,001
0
0
0
Is there a way I can programmatically determine the status of a download in Chrome or Mozilla Firefox? I would like to know if the download was aborted or completed successfully. For writing the code I'd be using either Perl, PHP or Python. Please help. Thank You.
false
3,029,824
-0.197375
1
0
-2
There are scripts out there that output the file in chunks, recording how many bytes they've echoed out, but those are completely unreliable and you can't accurately ascertain whether or not the user successfully received the complete file. The short answer is no, really, unless you write your own download manager (in Java) that runs a callback to your server when the download completes.
0
676
0
0
2010-06-12T19:18:00.000
php,python,perl,download
Programmatically determining the status of a file download
1
1
2
3,029,877
0
0
0
I have to send the F2 key to a telnet host. How do I send it using python? Using getch() I found the character code produced by the F2 key, but when I send that character back, it's not working. I think there is a way to send special function keys but I am not able to find it. If somebody knows please help me. Thanks in advance
true
3,035,390
1.2
1
0
4
Extended keys (non-alphanumeric or symbol) are composed of a sequence of single characters, with the sequence depending on the terminal you have told the telnet server you are using. You will need to send all characters in the sequence in order to make it work. Here, using od -c <<< 'CtrlVF2' I was able to see a sequence of \x1bOQ (ESC, O, Q) with the xterm terminal.
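A minimal telnetlib sketch sending that sequence; the host, prompts and credentials are hypothetical:

    import telnetlib

    tn = telnetlib.Telnet("remote-host")    # hypothetical host
    tn.read_until("login: ")
    tn.write("user\n")                      # hypothetical credentials
    tn.read_until("Password: ")
    tn.write("secret\n")
    tn.write("\x1bOQ")                      # ESC O Q: F2 under a vt100/xterm terminal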
0
2,908
0
4
2010-06-14T06:40:00.000
python,telnet
how to send F2 key to remote host using python
1
1
2
3,035,415
0
0
0
I'm looking to find a way to in real-time find the shortest path between nodes in a huge graph. It has hundreds of thousands of vertices and millions of edges. I know this question has been asked before and I guess the answer is to use a breadth-first search, but I'm more interested in to know what software you can use to implement it. For example, it would be totally perfect if it already exist a library (with python bindings!) for performing bfs in undirected graphs.
false
3,038,661
0
0
0
0
Depending on what kind of additional information you have, A* may be extremely efficient. In particular, if given a node you can compute an estimate of the cost from that node to the goal, A* is optimally efficient.
0
24,381
0
16
2010-06-14T15:50:00.000
python,graph,shortest-path,dijkstra,breadth-first-search
Efficiently finding the shortest path in large graphs
1
1
7
3,042,109
0
0
0
I have a python script that accepts a file from the user and saves it. Is it possible to not upload the file immediately but to queue it up and upload it when the server has less load? Can this be done by transferring the file to the browser's storage area or by taking the file from the hard drive and transferring it to the user's RAM?
false
3,040,290
0.53705
0
0
3
There is no reliable way to do what you're asking, because fundamentally, your server has no control over the user's browser, computer, or internet connection. If you don't care about reliability, you might try writing a bunch of javascript to trigger the upload at a scheduled time, but it just wouldn't work if the user closed his browser, navigated away from your web page, turned off his computer, walked away from his wifi signal, etc. If your web site is really so heavily loaded that it buckles when lots of users upload files at once, it might be time to profile your code, use multiple servers, or perhaps use a separate upload server to accept files and then schedule transfer to your main server later.
0
100
0
0
2010-06-14T19:32:00.000
python,file,architecture,file-upload
Python timed file upload
1
1
1
3,040,346
0
0
0
Does anyone know of a memory efficient way to generate very large xml files (e.g. 100-500 MiB) in Python? I've been utilizing lxml, but memory usage is through the roof.
false
3,049,188
0.099668
1
0
2
The only sane way to generate so large an XML file is line by line, which means printing while running a state machine, and lots of testing.
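A minimal sketch of the generator-style version of that, using only the standard library; the record shape is hypothetical:

    from xml.sax.saxutils import escape

    def generate_records(rows):
        yield '<?xml version="1.0"?>\n<records>\n'
        for name, value in rows:
            yield '  <record name="%s">%s</record>\n' % (
                escape(name, {'"': '&quot;'}), escape(value))
        yield '</records>\n'

    rows = (("row%d" % i, "value %d" % i) for i in xrange(1000000))
    out = open("big.xml", "w")
    for chunk in generate_records(rows):
        out.write(chunk)              # only one record is ever held in memory
    out.close()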
0
4,751
0
11
2010-06-15T21:27:00.000
python,xml,lxml
Generating very large XML files in Python?
1
2
4
3,049,245
0