Web Development (int64) | Data Science and Machine Learning (int64) | Question (string) | is_accepted (bool) | Q_Id (int64) | Score (float64) | Other (int64) | Database and SQL (int64) | Users Score (int64) | Answer (string) | Python Basics and Environment (int64) | ViewCount (int64) | System Administration and DevOps (int64) | Q_Score (int64) | CreationDate (string) | Tags (string) | Title (string) | Networking and APIs (int64) | Available Count (int64) | AnswerCount (int64) | A_Id (int64) | GUI and Desktop Applications (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | I'm wondering if there's a clever pattern for request-scoping arbitrary information without resorting to either TLS or putting the information in the session.
Really, this would be for contextual attributes that I'd like to not look up more than once in a request path, but which are tied to a request invocation, and there's no good reason to let them thrash around in the session.
Something like a dict that's pinned to the request where I can shove things or lazy load them. I could write a wrapper for request and swap it out in a middleware, but I figured I'd check to see what best-practice might be here? | true | 4,466,923 | 1.2 | 0 | 0 | 1 | Just assign the dictionary directly to the request. You can do that in middleware or in your view, as you like. | 0 | 1,377 | 0 | 0 | 2010-12-17T01:18:00.000 | python,django | Best way to request-scope data in Django? | 1 | 1 | 2 | 4,468,901 | 0 |
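A minimal sketch of the suggestion above, for Django of that era (old-style middleware). The class name, the `scope` attribute and the view are illustrative, not part of any existing project:

```python
from django.http import HttpResponse

class RequestScopeMiddleware(object):
    """Hypothetical middleware: attach a plain dict to every request so views
    and helpers can cache per-request values without touching the session."""
    def process_request(self, request):
        request.scope = {}   # arbitrary attribute name chosen for this sketch
        return None          # let normal view processing continue

def my_view(request):
    # Lazily compute a value once, then reuse it anywhere during this request.
    if 'greeting' not in request.scope:
        request.scope['greeting'] = 'Hello, %s' % request.user.username
    return HttpResponse(request.scope['greeting'])
```

The middleware class would still need to be listed in MIDDLEWARE_CLASSES for process_request to run.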
0 | 0 | How would I look for all URLs on a web page and then save them to individual variables with urllib2 In Python? | false | 4,475,929 | 0 | 0 | 0 | 0 | You could simply download the raw html with urllib2, then simply search through it. There might be easier ways but you could do this:
1: Download the source code.
2: Use string methods to split it into a list of sections.
3: Check the first 7 characters of each section.
4: If the first 7 characters are http://, write that section to a variable.
Why do you need separate variables though? Wouldn't it be easier to save them all to a list, using list.append(URL_YOU_JUST_FOUND), every time you find another url? | 0 | 97 | 0 | 0 | 2010-12-18T00:38:00.000 | python | How would I look for all URLs on a web page and then save them to a individual variables with urllib2 In Python? | 1 | 2 | 3 | 4,476,298 | 0 |
0 | 0 | How would I look for all URLs on a web page and then save them to individual variables with urllib2 In Python? | false | 4,475,929 | 0 | 0 | 0 | 0 | You don't do this with urllib2 alone. What you are looking for is parsing the URLs out of a web page.
You get your first page using urllib2, read its contents, and then pass it through a parser like BeautifulSoup, or, as the other poster explained, you can use a regex to search the contents of the page. | 0 | 97 | 0 | 0 | 2010-12-18T00:38:00.000 | python | How would I look for all URLs on a web page and then save them to a individual variables with urllib2 In Python? | 1 | 2 | 3 | 4,475,970 | 0 |
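A small sketch of the parsing approach described above, using urllib2 plus BeautifulSoup (the example URL is a placeholder):

```python
import urllib2
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3-style import

def get_links(url):
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html)
    # Collect the href of every <a> tag that actually has one.
    return [a['href'] for a in soup.findAll('a', href=True)]

for link in get_links('http://www.example.com/'):
    print link
```

Appending the results to a list, as the other answer suggests, is usually more practical than binding each URL to its own variable.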
0 | 0 | Is it possible to empty a job queue on a Gearman server? I am using the python driver for Gearman, and the documentation does not have any information about emptying queues. I would imagine that this functionality should exist, possibly, with a direct connection to the Gearman server. | true | 4,510,903 | 1.2 | 0 | 0 | 5 | As far as i have been able to tell from the docs and using gearman with PHP, the only way to clear the job queue is to restart to the gearmand job server. If you are using persistent job queues, you will also need to empty whatever you are using as the persistent storage, if this is DB storage, you will need to empty the appropriate tables of all the rows.
stop gearmand --> empty table rows --> start gearmand
Hope this is clear enough. | 0 | 11,464 | 0 | 10 | 2010-12-22T15:50:00.000 | python,message-queue,gearman | Is it possible to empty a job queue on a Gearman server | 1 | 1 | 3 | 4,745,861 | 0 |
1 | 0 | what is the right way to do it if the URL has some unicode chars in it, and is escaped in the client side using javascript ( escape(text) )? For example, if my url is: domain.com/?text=%u05D0%u05D9%u05DA%20%u05DE%u05DE%u05D9%u05E8%u05D9%u05DD%20%u05D0%u05EA%20%u05D4%u05D8%u05E7%u05E1%u05D8%20%u05D4%u05D6%u05D4
I tried:
text = urllib.unquote(request.GET.get('text'))
but I got the exact same string back (%u05D0%u05D9%u05DA%20%u05DE ... ) | false | 4,513,083 | 0.291313 | 0 | 0 | 3 | eventually what I did is changed the client side from escape(text) to urlEncodeComponent(text)
and then in the python side used:
request.encoding = 'UTF-8'
text = unicode(request.GET.get('text', None))
Not sure this is the best thing to do, but it works in English and Hebrew | 0 | 1,994 | 0 | 0 | 2010-12-22T19:47:00.000 | javascript,python,unicode,escaping | How to convert unicode escape sequence URL to python unicode? | 1 | 1 | 2 | 4,513,304 | 0 |
0 | 0 | is there a way to get the date of friendship creation for both my friends and followers in twitter?
especially for python-twitter.... | true | 4,517,265 | 1.2 | 1 | 0 | 3 | Twitter doesn't preserve the date a friendship or follow is created, and it doesn't return it in the API. Going forward you can query friends/ids and followers/id every day and record any new relationships with the current date in a database. | 0 | 316 | 0 | 1 | 2010-12-23T09:11:00.000 | python,twitter,python-twitter | can i get date of friendship in twitter? | 1 | 1 | 1 | 4,518,740 | 0 |
0 | 0 | I'm a PHP/MySQL developer.
I studied Python well as a desktop language two years ago, but I haven't used it on the web. How can I use Python to build dynamic web sites and easily upload these sites to any hosting provider? | false | 4,525,562 | 0.066568 | 1 | 0 | 1 | You can't upload Python files and use them on just any web hosting.
You can use them if the host allows it.
Some frameworks are Django or Pylons. | 0 | 529 | 0 | 0 | 2010-12-24T09:58:00.000 | python | how can I use python to build dynamic web sites? | 1 | 2 | 3 | 4,525,588 | 0 |
0 | 0 | I'm a PHP/MySQL developer.
I studied Python well as a desktop language two years ago, but I haven't used it on the web. How can I use Python to build dynamic web sites and easily upload these sites to any hosting provider? | true | 4,525,562 | 1.2 | 1 | 0 | 1 | There are many web frameworks, such as Django and web2py; you should check them out. | 0 | 529 | 0 | 0 | 2010-12-24T09:58:00.000 | python | how can I use python to build dynamic web sites? | 1 | 2 | 3 | 4,525,568 | 0
0 | 0 | I'm still trying to figure out WebSockets.
I'm sending over data from the javascript client to the python server as JSON strings, but they arrive fragmented.
How can I make sure I've received the entire message before I start to parse it? | true | 4,529,042 | 1.2 | 0 | 0 | 1 | You need to read up on socket programming in general.
Reading some data from a websocket does not mean you've received everything the other side wanted to send.
Ideally you'd prefix your messages with a header that contains the size of the payload. Then, after you read the header (say, terminated with LF, or a fixed 4 bytes, etc.), you can figure out exactly how many more bytes to read to get the full message.
Anything you read after that becomes your next header. Etc. | 0 | 102 | 0 | 1 | 2010-12-25T02:01:00.000 | javascript,python,websocket | How do I make sure a message gets to its destination using WebSockets? | 1 | 1 | 1 | 4,529,059 | 0 |
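A rough sketch of the length-prefix idea on a plain socket. A real WebSocket connection has its own frame format on top of this, so treat it only as an illustration of the application-level technique:

```python
import struct

def recv_exact(sock, n):
    """Keep calling recv() until exactly n bytes have been read."""
    chunks = []
    while n > 0:
        data = sock.recv(n)
        if not data:
            raise IOError('connection closed mid-message')
        chunks.append(data)
        n -= len(data)
    return ''.join(chunks)

def send_message(sock, payload):
    # Fixed 4-byte big-endian length header, then the payload (e.g. a JSON string).
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_message(sock):
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)
```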
1 | 0 | I am building a GAE site that uses AJAX/JSON for almost all its tasks including building the UI elements, all interactions and client-server requests. What is a good way to test it for highloads so that I could have some statistics about how much resources 1000 average users per some period of time would take. I think I can create some Python functions for this purpose. What can you advise? Thanks. | false | 4,529,913 | 0.049958 | 1 | 0 | 1 | You can get a free linux micro instance from EC2 and then run ab (apache benchmark) with lots of requests. You can change number of requests, concurrent requests and you can even launch multiple EC2 instances from different data centers. | 0 | 193 | 0 | 2 | 2010-12-25T09:37:00.000 | python,google-app-engine,load-testing | How to test my GAE site for performance | 1 | 2 | 4 | 4,536,343 | 0 |
1 | 0 | I am building a GAE site that uses AJAX/JSON for almost all its tasks including building the UI elements, all interactions and client-server requests. What is a good way to test it for highloads so that I could have some statistics about how much resources 1000 average users per some period of time would take. I think I can create some Python functions for this purpose. What can you advise? Thanks. | false | 4,529,913 | 0 | 1 | 0 | 0 | If you have the budget for it, a professional load testing tool will save you a lot of time and produce more accurate results. Some of those tools handle AJAX apps better than others. I will naturally recommend our product (Web Performance Load Tester) and one of our engineers will help you get it working with your site. You should, of course, evaluate other products to see what works best for your site. Load Impact and Browser Mob are online services that in many cases handle AJAX better than the more traditional tools (except ours!), but they also have downsides. | 0 | 193 | 0 | 2 | 2010-12-25T09:37:00.000 | python,google-app-engine,load-testing | How to test my GAE site for performance | 1 | 2 | 4 | 4,547,737 | 0 |
0 | 0 | I've checked so many articles, but can't find one on server-to-server email receiving. I want to write a program or some code that just acts as an email receiver, not an SMTP server or anything else.
Let's suppose I have a domain named example.com, and a gmail user [email protected] sends me an email to [email protected], or a yahoo user [email protected] sends me an email to [email protected]. Now, what do I do to receive this email? I prefer to write this code in Python or Perl.
Regards,
David | false | 4,530,323 | 0.049958 | 1 | 0 | 1 | "reveive" is not a word. I'm really not sure if you mean "receive" or "retrieve".
If you mean "receive" then you probably do want an SMTP server, despite your claim. An SMTP server running on a computer is responsible for listening for network requests from other SMTP servers that wish to deliver mail to that computer.
The SMTP server then, typically, deposits the mail in a directory where it can be read by the recipient. They can usually be configured (often in combination with tools such as Procmail) to do stuff to incoming email (such as pass it to a program for manipulation along the way, this allows you to avoid having to write a full blown SMTP server in order to capture some emails).
If, on the other hand, you mean "retrieve", then you are probably looking to find a library that will let your program act as an IMAP or POP client. These protocols are used to allow remote access to a mailbox. | 0 | 2,845 | 0 | 1 | 2010-12-25T12:58:00.000 | python,perl,email | How to receive an email on server? Better using Python or Perl | 1 | 1 | 4 | 4,530,368 | 0 |
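As one concrete (and entirely hypothetical) variant of the "pass it to a program" idea: an MTA alias or Procmail rule can pipe each incoming message to a script on stdin, which then parses it with the standard library email module. The file paths here are illustrative:

```python
#!/usr/bin/env python
import sys
import email

def main():
    msg = email.message_from_file(sys.stdin)
    sender = msg.get('From', '')
    subject = msg.get('Subject', '')
    # Walk the MIME parts and keep the first text/plain body found.
    body = ''
    for part in msg.walk():
        if part.get_content_type() == 'text/plain':
            body = part.get_payload(decode=True) or ''
            break
    with open('/tmp/received_mail.log', 'a') as log:
        log.write('%s | %s | %d bytes\n' % (sender, subject, len(body)))

if __name__ == '__main__':
    main()
```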
0 | 0 | I'm trying to write a simple program that logs on to a site, does something and logs out. The problem is that the login form has three inputs: username, password and a recaptcha. I input all of them manually. The problem is I don't know how to display the captcha image or how to send the text.
Can someone explain how to do it? | false | 4,533,879 | 0.291313 | 0 | 0 | 3 | It is unreasonable to except that someone will post a complete solution for your problem.
Here are the steps, just start by trying to complete them, post questions if you get stuck.
very generally speaking:
Get the content of the site (use the urllib2 to fetch the page)
Parse the recaptcha image link and download the image (BeautifulSoup for parsing the link, urllib2 again for downloading the image)
prompt yourself with the image and input for the code (use Tkinter for example)
send login info & captcha( urllib2)
do stuff (urllib2 again)
There's probably some token that you also have to fetch that identifies your Captcha image. Use firebug to watch for the requests sent when submitting the Captcha. | 0 | 799 | 0 | 3 | 2010-12-26T12:52:00.000 | python | Handling reCaptcha forms? | 1 | 1 | 2 | 4,534,157 | 0 |
0 | 0 | I am trying to fetch data from a webpage using urllib2. The page is visible on the browser but through the script I keep getting HTTPError: HTTP Error 403: Forbidden
I also tried mimicking a browser request by changing the user-agent string but no success.
Any ideas on this? | false | 4,546,086 | 0 | 0 | 0 | 0 | :) I am trying to get quotes from NSE too! As pythonFoo says, you need additional headers. However, only Accept is sufficient.
The user-agent can say python ( stay true ! ) | 0 | 893 | 0 | 0 | 2010-12-28T12:38:00.000 | python,urllib2,fetch,http-status-code-403,httplib2 | Python fetch data 403 | 1 | 1 | 3 | 4,570,621 | 0 |
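A minimal sketch of sending extra headers with urllib2, as suggested above (the URL and header values are placeholders, not the actual site):

```python
import urllib2

url = 'http://www.example.com/quotes'   # placeholder URL
req = urllib2.Request(url, headers={
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'python-urllib2',     # some sites check this too
})
html = urllib2.urlopen(req).read()
```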
0 | 0 | I got this error while running my function.
"socket.error: [Errno 98] Address already in use"
how can I close the address already in use and start a new connection on the port in Python? | false | 4,568,040 | 0 | 0 | 0 | 0 | Stop the program or service that is using the port you are trying to use. Alternatively, for whatever program you are trying to write, use a port number that is sufficiently high (> 1024 for sure) and unused. | 0 | 2,831 | 0 | 2 | 2010-12-31T04:17:00.000 | python,sockets | python socket programming error | 1 | 2 | 2 | 4,568,045 | 0
0 | 0 | I got this error while running my function.
"socket.error: [Errno 98] Address already in use"
how can I close the address already in use and start a new connection on the port in Python? | false | 4,568,040 | 0.291313 | 0 | 0 | 3 | These scenarios will raise the error "[Errno 98] Address already in use" when you create a socket on a certain port:
The port wasn't closed: you created a socket but forgot to close it, or another program is holding it.
You have closed the socket (or killed the process), but the port stays in TIME_WAIT status for 2 MSL (about 2 minutes).
Try "netstat" command to view port usage
such as
netstat -na
or
netstat -na |grep 54321 | 0 | 2,831 | 0 | 2 | 2010-12-31T04:17:00.000 | python,sockets | python socket programming error | 1 | 2 | 2 | 4,568,824 | 0 |
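For the TIME_WAIT case in particular, a common companion fix (not mentioned in the answer above, but standard practice) is to set SO_REUSEADDR before binding, so the listener can be restarted immediately:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow re-binding to a port that is still in TIME_WAIT from a previous run.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', 54321))
s.listen(5)
```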
1 | 0 | I need to scrape about 100 websites that are very similar in the content that they provide.
My first doubt: is it possible to write one generic script to scrape all 100 websites, or do scraping techniques only allow scripts written for particular websites? (Dumb question.) I suppose I should ask which approach is easier; writing 100 different scripts, one for each website, is hard.
Second question: my primary language is PHP, but after searching here on Stack Overflow I found that one of the most advanced scrapers is "Beautiful Soup" in Python. Is it possible to make calls from PHP to "Beautiful Soup" in Python, or would it be better to do the whole script in Python?
Give me some clues on how I should proceed.
Sorry for my weak english.
Best Regards, | false | 4,585,490 | 0 | 1 | 0 | 0 | We do something sort of like this with RSS feeds using Python -- we use ElementTree since RSS is usually guaranteed to be well-formed. Beautiful Soup is probably better suited for parsing HTML.
Insofar as dealing with 100 different sites, try to write an abstraction that works on most of them and transforms the page into a common data-structure you can work with. Then override parts of the abstraction to handle individual sites which differ from the norm.
Scrapers are usually I/O bound -- look into coroutine libraries like eventlet or gevent to exploit some I/O parallelism and speed up the whole process. | 0 | 4,219 | 0 | 3 | 2011-01-03T14:59:00.000 | php,python,screen-scraping | Webscraping Techniques using PHP or Python | 1 | 2 | 4 | 4,586,678 | 0 |
1 | 0 | I need to scrape about 100 websites that are very similar in the content that they provide.
My first doubt: is it possible to write one generic script to scrape all 100 websites, or do scraping techniques only allow scripts written for particular websites? (Dumb question.) I suppose I should ask which approach is easier; writing 100 different scripts, one for each website, is hard.
Second question: my primary language is PHP, but after searching here on Stack Overflow I found that one of the most advanced scrapers is "Beautiful Soup" in Python. Is it possible to make calls from PHP to "Beautiful Soup" in Python, or would it be better to do the whole script in Python?
Give me some clues on how I should proceed.
Sorry for my weak english.
Best Regards, | false | 4,585,490 | 0 | 1 | 0 | 0 | I've done this a few ways.
1: with grep, sed, and awk. This is about the same as 2: regex. These methods are very direct, but fail whenever the HTML structure of the site changes.
3: PHP's XML/HTML parser DomDocument. This is far more reliable than regex, but I found it annoying to work with (I hate the mixture of PHP arrays and objects). If you want to use PHP, PHPQuery is probably a good solution, as Thai suggested.
4: Python and BeautifulSoup. I can't say enough good things about BeautifulSoup, and this is the method I recommend. I found my code feels cleaner in Python, and BeautifulSoup was very easy and efficient to work with. Good documentation, too.
You will have to specialize your script for each site. It depends on what sort of information you wish to extract. If it was something standard like body title, of course you wouldn't have to change anything, but it's likely the info you want is more specific? | 0 | 4,219 | 0 | 3 | 2011-01-03T14:59:00.000 | php,python,screen-scraping | Webscraping Techniques using PHP or Python | 1 | 2 | 4 | 4,585,784 | 0 |
1 | 0 | I've an HTML document I'm trying to break into separate, smaller chunks. Say, take each < h3 > header and turn into its own separate file, using only the HTML encoded within that chunk (along with html, head, body, tags).
I am using Python's Beautiful Soup which I am new to, but seems easy to use for easy tasks such as this (Any better suggestions like lxml or Mini-dom?). So:
1) How do I go, 'parse all < h3 >s and turn each into a separate doc'? Anything from pointers to the right direction to code snippets to online documentation (found quite little for Soup) will be appreciated.
2) Logically, finding the tag won't be enough - I need to physically 'cut it out' and put it in a separate file (and remove it from original). Perhaps parsing the text lines instead of nodes would be easier (albeit super-ugly, parsing raw text from a formed structure...?)
3) Similarly related - suppose I want to delete a certain attribute from all tags of a type (like, delete the alignment attribute of all images). This seems easy but I've failed - any help will be appreciated!
Thanks for any help! | false | 4,588,345 | 0.379949 | 0 | 0 | 2 | Yes, you use BeautifulSoup or lxml. Both have methods to find the nodes you want to extract. You can then also recreate HTML from the node objects, and hence save that HTML to new files. | 0 | 214 | 0 | 1 | 2011-01-03T21:03:00.000 | python,html,beautifulsoup | How can/should I break an html document into parts using Python? (Techno- and logically) | 1 | 1 | 1 | 4,588,436 | 0 |
0 | 0 | I have a Python program sitting on the server side managing user location information; each friend has a (longitude, latitude) pair. Given a (longitude, latitude) point, how can I find the nearby (say, within 5 km) friends efficiently?
I have 10K users online...
Thanks.
Bin | false | 4,622,988 | 0.099668 | 0 | 0 | 2 | Make a dict {graticule: [users]} (a "graticule" is a block of 1 degree latitude x 1 degree longitude; so you can basically just round the values). To find nearby users, first get users from the same and adjacent graticules (since the target could be near an edge), then filter them with a basic bounding-box test (i.e. what are the minimum longitude/latitude that are possible for someone within the desired radius), then do a detailed test (if you need accuracy then you are in for some more complex math than just Pythagoras). | 0 | 2,000 | 0 | 4 | 2011-01-07T06:18:00.000 | python,algorithm | algorithm to find the nearby friends? | 1 | 1 | 4 | 4,623,782 | 0 |
0 | 0 | I am trying to access a REST API.
I can get it working in Curl/REST Client (the UI tool), with preemptive authentication enabled.
But, using urllib2, it doesn't seem to support this by default and I can't find a way to turn it on.
Thanks :) | false | 4,628,610 | 0 | 0 | 0 | 0 | Depending on what kind of authentication is required, you can send the Authorization headers manually by adding them to your request before you send out a body. | 0 | 2,937 | 0 | 2 | 2011-01-07T17:52:00.000 | python,http,urllib2 | does urllib2 support preemptive authentication authentication? | 1 | 1 | 3 | 4,629,038 | 0 |
0 | 0 | Is it possible to send a MIME message as it is, without adding any headers? For example, if I have a correct MIME message with all headers and content saved to a text file, is it possible to use the contents of this file without modification and send it via SMTP?
Apparently both python's SMTP.sendmail and PHP smtp::mail require at least "To:" and "From:", and passing the complete message to these functions doesn't seem to work. | false | 4,634,171 | 0 | 1 | 0 | 0 | You could read up to the first blank line, use those as additional headers, then send the rest in the body. | 0 | 161 | 0 | 1 | 2011-01-08T13:52:00.000 | php,python,email | Sending a MIME email prepared beforehand (in PHP or Python) | 1 | 1 | 2 | 4,634,184 | 0 |
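For the Python side, one hedged sketch: smtplib takes the envelope sender and recipients separately from the raw message text, so the stored MIME file can be sent unchanged, with the envelope addresses pulled out of its own headers. The file name is illustrative and a local MTA on localhost is assumed:

```python
import email
import smtplib
from email.utils import parseaddr, getaddresses

raw = open('message.eml').read()           # the pre-built MIME message
msg = email.message_from_string(raw)

from_addr = parseaddr(msg.get('From', ''))[1]
to_addrs = [addr for name, addr in getaddresses(msg.get_all('To', []))]

server = smtplib.SMTP('localhost')
server.sendmail(from_addr, to_addrs, raw)  # the raw text goes out unchanged
server.quit()
```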
1 | 0 | What I'm asking may sound strange, but I really need it..
What I need is a URL that I can set up (like a free server with PHP support, but something reliable) and call from Python with the arguments I need, and it would write the values to a file.
I don't know if what I really need is a server with php (I hope not).
Is there any way of doing it with Google Docs? Are there any other services you came across that were any good?
PS: I need this for a program that shows the user some thumbnails and lets the user say what kind of pic it is (it's done by clicking a button which I hope will call the URL I'm asking about, with the arguments).
Clear enough?
Ps. Not like captcha. I just need to call for example http:/aaa.a/file.xxx?id=1&tag=funny
So that I present my user with the images, and as they click to choose the appropriate tag, that URL writes it to a file (a file I can later have access to). | true | 4,649,343 | 1.2 | 0 | 0 | 2 | You need a script that you can execute on the server. This can be written in any language you want; Python works fine. Unless loads of people call the URL at once you can use CGI (it's a bit slow when using Python). | 0 | 250 | 0 | 1 | 2011-01-10T17:03:00.000 | python,html,url | Call a url that writes to a file on a server (free) | 1 | 1 | 1 | 4,650,080 | 0
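A tiny hypothetical CGI script along the lines of the answer: it would live in the host's cgi-bin, be called as e.g. .../tag.py?id=1&tag=funny, and append the parameters to a file (the path is illustrative):

```python
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
image_id = form.getfirst('id', '')
tag = form.getfirst('tag', '')

with open('/var/data/tags.txt', 'a') as f:
    f.write('%s\t%s\n' % (image_id, tag))

print 'Content-Type: text/plain'
print ''
print 'OK'
```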
0 | 0 | I'm using python GData to work with Google Calendar. I'm making really simple requests (e.g. create event), using OAuth authorization.
Usually this works OK, but sometimes I'm receiving lots of 302 redirects, that leads to "Maximum redirects count reached" exception.
If I re-try same request, it's usually works correct.
I can't figure out, why is this happening, looks like it's a random event.
As a workaround I wrote code which retries the request a few times if there is such an error, but maybe there is an explanation of this behavior, or even a way to avoid it? | true | 4,657,001 | 1.2 | 0 | 0 | 0 | Answer from the Google support forum:
This might happen due to some issues in the Calendar servers and is not an error on your part. The best way to "resolve" this issue is to retry again. | 0 | 76 | 0 | 1 | 2011-01-11T11:14:00.000 | python,gdata-api,google-data-api,gdata-python-client | Python GData lib causes too much redirects | 1 | 1 | 1 | 4,792,423 | 0 |
0 | 0 | I'm using Python 2.6 and Windows Server 2008.
The server has two IP addresses 1 internal, 1 external.
I need Python to use the external IP address, but while doing so I get this:
socket.error: [Error 10049] The requested address is not valid in its context
To be more precise I'm using Django's runserver command for who is familiar with it
Edit:
ipconfig only brings up the internal IP address, while all services I have running are using the external IP without any problems!
Any ideas? | true | 4,657,347 | 1.2 | 0 | 0 | 17 | That's an error Windows gives when you're trying to bind to an address on the local machine that's not assigned to any of the adapters on the machine. If ipconfig doesn't show it, you can't bind to it.
If the external address is on a router that is NAT'ing requests from it to the server's internal address, you can't bind to it because it's on a different machine. This is probably the case. You might want to bind to socket.INADDR_ANY (or its Django equivalent). This will bind to all addresses that are on the machine right now.
Also note that if the external address is NAT'ed to your internal one, binding to the internal one should be enough. | 1 | 25,060 | 0 | 7 | 2011-01-11T11:57:00.000 | python,windows,django,networking,windows-server-2008 | Socket error "IP address not valid in its context" - Python | 1 | 1 | 1 | 4,657,548 | 0 |
0 | 0 | I am interested in making a chatbot. My script currently works fine with the imified.com bot; however, imified is down almost every day, so I am looking for my own solution.
During my research I found (through this site) Openfire, and I have configured it; it works fine even with Gmail users.
But I am still not getting what I need.
I need to request a URL (with the chat messages and some other user data, something like imified provides) whenever a Gmail or other external user sends me a message. Let me explain.
My Openfire is hosted and working for mybot.com, and my ID is: [email protected].
Now a Gmail user, say [email protected], has added me in his GTalk/Pidgin and we can communicate with each other; he can send me a message and I can reply.
But I need a robot instead of me. When [email protected] (or any other user) sends me a message, I need to request a URL so that I can dynamically generate a response based on the message he/she sent.
Which way should I go to achieve this? Is there any way to customize Openfire to do it?
Or should I make a PHP/Python (I need to learn Python, though) script that will listen on the XMPP ports and generate responses? If so, are there any helpful scripts that might guide me?
Bunches of thanks for reading this, and thanks in advance for any response. | true | 4,657,611 | 1.2 | 1 | 0 | 1 | OpenFire understands XMPP; what you need is an XMPP library/API (like XMPP4R if you are a Rubyist). Using it, your app will log in to OpenFire (by sending the Gmail/Yahoo credentials) and others will see you as online. When they reply to you, your application will be notified; there you can receive the message, process it, and send a response (by writing the required logic).
We have done it in our SMS Chat application with Gmail/Yahoo messenger friends/contacts. | 0 | 2,089 | 0 | 0 | 2011-01-11T12:29:00.000 | java,php,python,xmpp | XMPP, openfire and bot issue | 1 | 1 | 4 | 8,296,818 | 0 |
1 | 0 | Given a news article webpage (from any major news source such as times or bloomberg), I want to identify the main article content on that page and throw out the other misc elements such as ads, menus, sidebars, user comments.
What's a generic way of doing this that will work on most major news sites?
What are some good tools or libraries for data mining? (preferably python based) | true | 4,672,060 | 1.2 | 0 | 0 | 13 | There's no way to do this that's guaranteed to work, but one strategy you might use is to try to find the element with the most visible text inside of it. | 0 | 24,718 | 0 | 59 | 2011-01-12T17:46:00.000 | python,web-scraping,html-parsing,webpage | Web scraping - how to identify main content on a webpage | 1 | 2 | 10 | 4,672,098 | 0 |
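One naive way to approximate "the element with the most visible text", sketched with BeautifulSoup 3; real extractors use much better heuristics (text density, markup ratio), so this is only an illustration:

```python
from BeautifulSoup import BeautifulSoup

def main_content_candidate(html):
    soup = BeautifulSoup(html)
    for tag in soup.findAll(['script', 'style']):
        tag.extract()                     # these hold no visible text
    best, best_score = None, 0
    for candidate in soup.findAll(['div', 'td']):
        # Score by the text held in <p> tags directly inside the candidate.
        score = 0
        for p in candidate.findAll('p', recursive=False):
            score += len(''.join(p.findAll(text=True)).strip())
        if score > best_score:
            best, best_score = candidate, score
    return best
```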
1 | 0 | Given a news article webpage (from any major news source such as times or bloomberg), I want to identify the main article content on that page and throw out the other misc elements such as ads, menus, sidebars, user comments.
What's a generic way of doing this that will work on most major news sites?
What are some good tools or libraries for data mining? (preferably python based) | false | 4,672,060 | 1 | 0 | 0 | 6 | It might be more useful to extract the RSS feeds (<link type="application/rss+xml" href="..."/>) on that page and parse the data in the feed to get the main content. | 0 | 24,718 | 0 | 59 | 2011-01-12T17:46:00.000 | python,web-scraping,html-parsing,webpage | Web scraping - how to identify main content on a webpage | 1 | 2 | 10 | 4,672,151 | 0 |
0 | 0 | How do I set up a server so I can get emails and parse them in Python? | false | 4,672,697 | 0 | 1 | 0 | 0 | There are a bunch of services on the web that will make it easier for you to send and receive e-mails using their API. This would relieve you from the pain of setting up, running and administering your own e-mail service. | 1 | 194 | 0 | 1 | 2011-01-12T18:48:00.000 | python | How to setup parsing emails? | 1 | 1 | 2 | 12,867,404 | 0
0 | 0 | I'd like to do the following with Python:
Computer 1 starts SSH server (probably using twisted or paramiko)
Computer 1 connects to Server 1 (idle connection)
Computer 2 connects to Server 1
Server 1 forwards Computer 2's connection to Computer 1 (connection no longer idle)
Computer 1 forwards Server 1's connection to listening SSH port (on computer 1)
Result being Computer 2 now has a SSH session with Computer 1, almost as if Computer 2 had started a normal SSH session (but with Server 1's IP instead of Computer 1's)
I need this because I can't port forward on Computer 1's network (the router doesn't support it). | false | 4,686,104 | 0.099668 | 0 | 0 | 1 | I'd use ssh to create a remote tunnel (-R) from the server to the local system. If you're insistent on doing this with Python then there's the subprocess module. | 0 | 660 | 1 | 1 | 2011-01-13T22:33:00.000 | python,networking,ssh,portforwarding | Establish SSH Connection Between Two Isolated Machines Using a 3rd System | 1 | 1 | 2 | 4,686,141 | 0 |
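A sketch of driving that remote tunnel from Python with subprocess, as the answer suggests; host, user and ports are placeholders, and key-based authentication is assumed so no password prompt appears:

```python
import subprocess

# Run on Computer 1: keep a reverse tunnel open so that port 2222 on
# Server 1 forwards back to the local SSH server on port 22.
cmd = [
    'ssh', '-N',                   # no remote command, just forwarding
    '-R', '2222:localhost:22',     # remote port 2222 -> local port 22
    'user@server1.example.com',
]
tunnel = subprocess.Popen(cmd)

# Computer 2 can then connect with:  ssh -p 2222 user@server1.example.com
tunnel.wait()
```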
0 | 0 | I'm writing a python script checking/monitoring several server/websites status(response time and similar stuff), it's a GUI program and I use separate thread to check different server/website, and the basic structure of each thread is using an infinite while loop to request that site every random time period(15 to 30 seconds), once there's changes in website/server each thread will start a new thread to do a thorough check(requesting more pages and similar stuff).
The problem is, my internet connection always got blocked/jammed/messed up after several hours running of this script, the situation is, from my script side I got urlopen error timed out each time it's requesting a page, and from my FireFox browser side I cannot open any site. But the weird thing is, the moment I close my script my Internet connection got back on immediately which means now I can surf any site through my browser, so it must be the script causing all the problem.
I've checked the program carefully and even use del to delete any connection once it's used, still get the same problem. I only use urllib2, urllib, mechanize to do network requests.
Anybody knows why such thing happens? How do I debug this problem? Is there a tool or something to check my network status once such situation occurs? It's really bugging me for a while...
By the way I'm behind a VPN, does it have something to do with this problem? Although I don't think so because my network always get back on once the script closed, and the VPN connection never drops(as it appears) during the whole process.
[Updates:]
Just found more info about this problem, when my program brings down the internet connection, well, it's not totally "down", I mean, I cannot open any site in my browser or always get urlopen error timed out, but I still can get reply using "ping google.com" in cmd line. And when I manually dropped the VPN connection then redial, without closing my program it starts to work again and also I can surf the net through my browser. Why this happening? | false | 4,690,890 | 0 | 0 | 0 | 0 | You could possibly be creating more threads than you expect - monitor the result of threading.active_count() to test this.
If possible try to rule out the VPN at your end (or post the relevant guts of the code so we can test it).
(Netiquette) If you're not doing so already, only use network.http.max-connections-per-server threads per monitored site/host.
(For reference) urlopen returns a file-like object - use .close() or del on this object, or the socket will sit in a CLOSE_WAIT state until a timeout.
Hopefully these points are, well, pointers. | 0 | 643 | 0 | 2 | 2011-01-14T12:10:00.000 | python,multithreading,network-programming,urllib2,mechanize | My python program always brings down my internet connection after several hours running, how do I debug and fix this problem? | 1 | 1 | 2 | 4,691,668 | 0 |
0 | 0 | I am designing a website for a local server on our lan, so that anyone who tires to access that IP from a browser sees a web page and when he clicks on some link on that web page then a directory or some folder from that server should open.
I am using python for this purpose and the server is just like another PC with windows installed. | false | 4,693,740 | 0.197375 | 0 | 0 | 1 | If you just want to redirect the user to your file server, then it sort of depends on what operating system they're using. If everybody's going to be on Windows, then you should be able to include a link to "//Your-Fileserver-Name/Path1/Path2". Obviously you have to share the appropriate files on your server using Windows file-sharing. | 0 | 261 | 0 | 0 | 2011-01-14T17:03:00.000 | python,sockets | How to open a directory/folder on a machine on LAN using python? | 1 | 1 | 1 | 4,693,820 | 0 |
1 | 0 | It's a GUI program I wrote in python checking website/server status running on my XP SP3, multi threads are used to check different site/server. After several hours running, the program starts to get urlopen error timed out all the time, and this always happens right after a POST request from a server(not a certain one, might be A or B or C), and it's also not the first POST request causing the problem, normally after several hours running and it happens to make a POST request at an unknown moment, all you get from then on is urlopen error timed out.
I'm still able to ping but cannot browse any site, once the program closed everything's fine. It's definitely the program causing this problem, well I just don't know how to debug/check what the problem is, also don't know if it's from OS side or my program wasting too many resources/connections(are you still able to ping when too many connections used?), would anybody please help me out? | true | 4,697,623 | 1.2 | 0 | 0 | 3 | Are you sure you are closing TCP sessions after each request? Try to check netstat information from time to time and if you'll see that the number of active/established sessions is rising it means that you have some problems in your script.
Yes, usually you can ping even if you are out of free TCP sockets. | 0 | 204 | 0 | 0 | 2011-01-15T01:55:00.000 | python,multithreading,network-programming,urllib2,python-multithreading | Able to ping but cannot browse after several hours running of my python program | 1 | 1 | 1 | 4,697,664 | 0 |
1 | 0 | How do you utilize proxy support with the python web-scraping framework Scrapy? | false | 4,710,483 | 1 | 0 | 0 | 9 | that would be:
export http_proxy=http://user:password@proxy:port | 0 | 69,413 | 0 | 50 | 2011-01-17T06:17:00.000 | python,scrapy | Scrapy and proxies | 1 | 1 | 9 | 14,401,562 | 0 |
0 | 0 | Is there a way to get python to read modules from a network?
We have many machines and it would be too much effort to update each machine manually each time I change a module, so I want Python to get the modules from a location on the network.
Any ideas? | false | 4,710,588 | 0.033321 | 1 | 0 | 1 | How I ended up doing this:
Control Panel\All Control Panel Items\System >> Advanced >> Environment Variables >> System Variables >> New >> Name = PYTHONPATH, value = \\server\scriptFolder (a UNC path to the shared folder)
Thanks everyone for all the help :) | 0 | 10,613 | 0 | 4 | 2011-01-17T06:37:00.000 | python,networking,module,import,centralized | Importing module from network | 1 | 2 | 6 | 4,734,071 | 0 |
0 | 0 | Is there a way to get python to read modules from a network?
We have many machines and it would be a too much effort to update each machine manually each time I change a module so I want python to get the modules from a location on the network.
Any ideas? | true | 4,710,588 | 1.2 | 1 | 0 | 4 | Mount your network location into your file-system and add that path to your PYTHONPATH. That way, Python on your local machine will be able to see the modules which are present in the remote location.
You cannot directly import modules remotely, the way you can reference a JS file in HTML. | 0 | 10,613 | 0 | 4 | 2011-01-17T06:37:00.000 | python,networking,module,import,centralized | Importing module from network | 1 | 2 | 6 | 4,710,633 | 0
1 | 0 | I am coding a HTML scraper which gets values from a table on a website. I also need to grab the URL of an image, but the problem is this image is dynamically generated via javascript - and when i get contents of the website via urllib, the Javascript does not run or show in the resulting HTML.
Is there any way to enable Javascript to run on pages which are accessed via urllib? | true | 4,720,342 | 1.2 | 0 | 0 | 2 | No, you'd need some sort of JS interpreter for that. There might be Python-Browser integrations to help parsing this kind of page. | 0 | 5,105 | 0 | 5 | 2011-01-18T03:55:00.000 | javascript,python,urllib2 | Python: Processing Javascript with urllib2? | 1 | 1 | 1 | 4,721,276 | 0 |
1 | 0 | I have been searching for an xml serialization library that can serialize and deserialize an (Java/Python) object into xml and back. I am using XStream right now for Java. If XStream had a python version to deserialize from xml generated by Xstream that would have done it for me. Thrift or such other libraries is not going to work unless they allow the data format to be xml. I am looking for suggestion for any library that can do it. - Thanks | true | 4,730,082 | 1.2 | 0 | 0 | 1 | Since Java and Python objects are so different in themselves, it's almost impossible to do this, unless you on both sides restrict the types allowed and such things.
And in that case, I'd recommend you use JSON, which is a nice interoperability format, even though it's not XML.
Otherwise you could easily write a library that takes XStream XML and loads it into Python objects, but it will always be limited to whatever is similar between Java and Python. | 0 | 1,140 | 0 | 1 | 2011-01-18T23:06:00.000 | java,python,xml,serialization | XML serialization library interoperability between Java and Python | 1 | 1 | 4 | 4,730,212 | 0 |
0 | 0 | I'm trying to create a very simple script that uses python's xmpppy to send a message over facebook chat.
import xmpp
FACEBOOK_ID = "[email protected]"
PASS = "password"
SERVER = "chat.facebook.com"
jid=xmpp.protocol.JID(FACEBOOK_ID)
C=xmpp.Client(jid.getDomain(),debug=[])
if not C.connect((SERVER,5222)):
    raise IOError('Can not connect to server.')
if not C.auth(jid.getNode(),PASS):
    raise IOError('Can not auth with server.')
C.send(xmpp.protocol.Message("[email protected]","Hello world",))
This code works to send a message via GChat; however, when I try with Facebook I receive this error:
An error occurred while looking up _xmpp-client._tcp.chat.facebook.com
When I remove @chat.facebook.com from the FACEBOOK_ID I get this instead:
File "gtalktest.py", line 11, in
if not C.connect((SERVER,5222)):
File "/home/john/xmpppy-0.3.1/xmpp/client.py", line 195, in connect
if not CommonClient.connect(self,server,proxy,secure,use_srv) or secureNone and not secure: return self.connected
File "/home/john/xmpppy-0.3.1/xmpp/client.py", line 179, in connect
if not self.Process(1): return
File "/home/john/xmpppy-0.3.1/xmpp/dispatcher.py", line 302, in dispatch
handler['func'](session,stanza)
File "/home/john/xmpppy-0.3.1/xmpp/dispatcher.py", line 214, in streamErrorHandler
raise exc((name,text))
xmpp.protocol.HostUnknown: (u'host-unknown', '')
I also notice any time I import xmpp I get the following two messages when running:
/home/john/xmpppy-0.3.1/xmpp/auth.py:24: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
import sha,base64,random,dispatcher
/home/john/xmpppy-0.3.1/xmpp/auth.py:26: DeprecationWarning: the md5 module is deprecated; use hashlib instead
import md5
I'm fairly new to solving these kinds of problems, and advice, or links to resources that could help me move forward in solving these issues, would be greatly appreciated. Thanks for reading! | false | 4,732,230 | 0.197375 | 1 | 0 | 2 | I also started the same project and ran into the same problem. I found the solution: you have to use your Facebook username (you must first pick one), and write it in lowercase. This is the most important part; most probably you, like me, were not writing it in lowercase. | 0 | 2,814 | 1 | 3 | 2011-01-19T06:03:00.000 | python,facebook,chat,xmpppy | xmpppy and Facebook Chat Integration | 1 | 1 | 2 | 5,268,496 | 0
1 | 0 | I'm looking for a public private key solution that I can use with a javascript client and python backend. The aim is to send data encrypted from the client to the server... Are the any solutions? Thanks for hints. | false | 4,737,721 | 0 | 1 | 0 | 0 | Use SSL for your connections to the server. Probably the easiest way to do that is to use HTTP for communication and also to run a proxy (say, Apache) on the server that can do HTTPS and forwards requests to the actual server application. | 0 | 401 | 0 | 1 | 2011-01-19T16:17:00.000 | javascript,python,encryption,pgp | public private key solution for javascript and python | 1 | 1 | 1 | 4,738,545 | 0 |
1 | 0 | I have five httpd.conf files which differ only in the port number that they are listening on. All of the other data between the sites is the same. Is there any way to track this as a single file in Mercurial? So that if I make a different change to the httpd.conf file, I could push this to all five, and keep the port numbers separate.
Thanks,
Kevin | false | 4,752,695 | 0.066568 | 0 | 0 | 1 | I don't think you can; that's not a common thing to do in a VCS. If I'm wrong, I'm happy to be corrected.
Perhaps you can make the "master" httpd.conf file a template, and have a build script that generates the five files you want, passing in the appropriate port number for each file. That way you isolate the change points in your file, and keep the common bits, well, common. There are reams of templateing languages out there. Or you can simply use sed. Or do as nmichaels suggested and use Apache's capabilities directly.
There are lots of ways to skin this particular cat, but I don't think Mercurial is going to help you directly. | 0 | 67 | 0 | 2 | 2011-01-20T21:18:00.000 | python,apache,mercurial,dvcs | Track small differences in 5 files with Mercurial? | 1 | 1 | 3 | 4,752,795 | 0 |
0 | 0 | I am currently writing a non-web program using mostly python and have gotten to the point where I need to create a structure for save and settings files.
I decided to use xml over creating my own format but have come into a bit of a barrier when trying to figure out what to use to actually process the xml. I want to know if I can get some of the pros and cons about packages that are available for python since there are a lot and I'm not entirely sure which ones seem best from just looking at the documentation.
I basically want to know if there is a package or library that will let me write and read data from an xml file with relative ease by just knowing what the tag name I'm looking for is.
P.S. My app is mostly geared to be used on Linux if it makes any difference. | true | 4,789,397 | 1.2 | 0 | 0 | 3 | If your data is only for the use of your Python programs, pickle might be an easier solution. | 0 | 124 | 0 | 1 | 2011-01-25T02:23:00.000 | python,xml | xml processing options in python | 1 | 1 | 2 | 4,789,451 | 0 |
0 | 0 | On Linux, urllib.urlopen("https://www.facebook.com/fql.php?query=SELECT first_name FROM user") will have the spaces automatically quoted and run smoothly.
(By the way, the URL is fictional)
However on mac, this is not the case. Somehow the URL is not escaped, and an error would be thrown. I have checked both python versions to be at least 2.6 and the version of urllib to be 1.17
Is this a bug? | true | 4,796,720 | 1.2 | 0 | 0 | 6 | urlopen documentation doesn't promise you to escape anything. Use urllib.quote() to escape it yourself. | 0 | 68 | 0 | 1 | 2011-01-25T17:21:00.000 | python,networking | Urllib trouble across platforms | 1 | 1 | 1 | 4,796,814 | 0 |
0 | 0 | I'm trying to send multiple data items over a TCP socket from an Android client to a Python server. The user on the client side can make multiple choices, so I'm using a number sent as a character to differentiate between request types. I have to send the choice and specific data depending on the choice. For the current selection (choice no. 1 in this case) I need the choice, 2 string fields and an image. I have the image transfer working on its own and the choice working on its own. The problem I am having now is that the buffer reading in the choice is also reading in the byte stream of the image straight after it.
A common solution would be to incorporate an 'opcode' before your data.
For example, prefix CHOICE before sending your integer. When you read CHOICE in your python script, you know you are receiving an integer and thus read just that much data.
Before you send your image, prefix it with IMG and the number of bytes to read. This way you can read just as many bytes as needed, then look for the next opcode.
Your packet should then look like this:CHOICE1IMG<number of bytes><image bytestream>
Obviously your opcode can be whatever you want, this is just an example. | 0 | 121 | 0 | 0 | 2011-01-27T17:58:00.000 | java,python,android,sockets | distinguishable socket input | 1 | 1 | 1 | 4,820,327 | 0 |
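A server-side sketch of reading such an opcode-prefixed stream in Python. For simplicity the image length here is a fixed 4-byte binary field rather than the textual "<number of bytes>" shown above, and the output filename is illustrative:

```python
import struct

def recv_exact(sock, n):
    data = ''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise IOError('connection closed')
        data += chunk
    return data

def handle_request(sock):
    choice = recv_exact(sock, 1)              # e.g. '1'
    if choice == '1':
        marker = recv_exact(sock, 3)          # expect the literal 'IMG'
        if marker != 'IMG':
            raise ValueError('unexpected marker %r' % marker)
        (size,) = struct.unpack('!I', recv_exact(sock, 4))
        image_bytes = recv_exact(sock, size)
        open('upload.jpg', 'wb').write(image_bytes)
```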
0 | 0 | Hello.
I have a task: create a script that will crawl the internet and switch proxies from a list.
I see ProxyHandler and HTTPPasswordMgr in the urllib2 module, but the documentation is thin.
1. In the documentation, ProxyHandler takes a dict with many proxy servers; how can I select one from the list and use it for urlopen?
2. HTTPPasswordMgr has the method add_password, but what is it for? How will it select the auth data for a proxy, and why does it take a realm?
3. What is the right way to use multiple proxies in urllib2? My only idea is to create a list with all my proxies and create a new 'opener' for each request.
Thanks | true | 4,822,860 | 1.2 | 0 | 0 | -1 | ProxyHandler can use a different proxy for different protocols (HTTP, etc.) but I don't think it will help you. You should be able to write your own class for your needs without much difficulty. | 0 | 1,084 | 0 | 0 | 2011-01-27T22:48:00.000 | python,proxy,urllib2 | How to use multiple proxy in urllib2? | 1 | 1 | 1 | 4,823,009 | 0 |
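A sketch of the "one opener per request" idea with ProxyHandler; credentials can simply be embedded in the proxy URL, and all the hosts below are placeholders:

```python
import random
import urllib2

proxies = [
    'http://user:password@proxy1.example.com:8080',
    'http://user:password@proxy2.example.com:8080',
]

def open_with_random_proxy(url):
    proxy = random.choice(proxies)
    opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
    return opener.open(url).read()

print open_with_random_proxy('http://www.example.com/')
```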
0 | 0 | Let's say I want to connect an app I created to a website (SSL) to get values from the content at a certain page.
How can I do it? | true | 4,828,238 | 1.2 | 0 | 0 | 0 | Use httplib2 if you want to make HTTPS calls. Check out the docs for more info.
--Sai | 0 | 1,531 | 0 | 0 | 2011-01-28T12:32:00.000 | python | Connect to a website using Python | 1 | 1 | 3 | 4,828,260 | 0 |
0 | 0 | Right now I'm using a proxy where I can see the headers that are sent. I'm wondering if there's a simpler pycurl method of grabbing the headers that were sent in an http request. Ive tried using HEADERFUNCTION already but it gives you the response headers and not the ones i'm looking for. | true | 4,839,670 | 1.2 | 0 | 0 | 1 | libcurl itself provides this data in the DEBUGFUNCTION callback, so if there's no current support for that in pycurl I figure it should be added and it shouldn't be too hard to do it... | 0 | 230 | 0 | 0 | 2011-01-29T21:49:00.000 | python,curl,http-headers,pycurl | is there an easy way of fetching the sent headers of an http request for pycurl/curl? | 1 | 2 | 2 | 4,860,930 | 0 |
0 | 0 | Right now I'm using a proxy where I can see the headers that are sent. I'm wondering if there's a simpler pycurl method of grabbing the headers that were sent in an http request. Ive tried using HEADERFUNCTION already but it gives you the response headers and not the ones i'm looking for. | false | 4,839,670 | 0 | 0 | 0 | 0 | There is indeed support for the debugfunction callback. Alternatively, if you just need the data for debugging purposes, set the verbose option to 1 on your Curl instance and the data will be sent to stdout. | 0 | 230 | 0 | 0 | 2011-01-29T21:49:00.000 | python,curl,http-headers,pycurl | is there an easy way of fetching the sent headers of an http request for pycurl/curl? | 1 | 2 | 2 | 4,903,643 | 0 |
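A small sketch of both suggestions: turn on VERBOSE and install a DEBUGFUNCTION callback, then filter for the outgoing headers (the URL is a placeholder):

```python
import pycurl

def debug(debug_type, debug_msg):
    if debug_type == pycurl.INFOTYPE_HEADER_OUT:   # headers curl actually sent
        print debug_msg,

c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://www.example.com/')
c.setopt(pycurl.VERBOSE, 1)                        # required for DEBUGFUNCTION
c.setopt(pycurl.DEBUGFUNCTION, debug)
c.setopt(pycurl.WRITEFUNCTION, lambda data: None)  # discard the body
c.perform()
c.close()
```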
1 | 0 | I have one design decision to make.
In my web(ajax) application we need to decide where should we put user interface logic ?
Should It be completely loaded via javascript ( pure single page ) . and Only data comes and go.
or
Should server send some format (XML) which translated via javascript to dynamically create rich user interface. ( semi-ajax ). so some data and ui comes and go.
Which option is better ? ( speed, ease of development, platform independence )
Thanks. | false | 4,842,449 | 0.049958 | 0 | 0 | 1 | The biggest influence is whether you are concerned about initial page load time. If you don't mind having all the UI there at page load, your app can be more responsive by just shuttling data instead of UI. If you want faster load and don't mind larger AJAX requests, sending some UI markup isn't bad. If you have the server power to pre-render UI with data and send the fully-ready marked-up data to the user, their browser will perform more quickly, and initial page-load should be fast.
Which course you choose should depend on the task at hand. Not all requests need be handled the same way. | 0 | 993 | 0 | 2 | 2011-01-30T10:44:00.000 | python,ajax,web-applications,user-interface,pyjamas | rich web client vs thin web client | 1 | 3 | 4 | 4,842,562 | 0 |
1 | 0 | I have one design decision to make.
In my web(ajax) application we need to decide where should we put user interface logic ?
Should It be completely loaded via javascript ( pure single page ) . and Only data comes and go.
or
Should server send some format (XML) which translated via javascript to dynamically create rich user interface. ( semi-ajax ). so some data and ui comes and go.
Which option is better ? ( speed, ease of development, platform independence )
Thanks. | false | 4,842,449 | 0.099668 | 0 | 0 | 2 | I faced a similar dilemma a few months back. As Lennart (above) says, it makes sense to go for Pyjamas or a similar library if your app is more desktop-like.
Further, one of the biggest advantages Pyjamas provides is logically well-separated backend and frontend code. IMO that is very important.
If your app is not like a desktop app (ours wasn't), then multi-page offers more advantages, such as a single change not breaking the entire app, easier maintenance, etc. You might want to consider having your app server serve JSON while another web server serves static content and JS; the JS would request data from the JSON app server. That way we managed to keep our frontend and backend separate. We ended up choosing MooTools as the JS library over Pyjamas. Of course, it is up to your taste and the needs of your application. We did use Python server-side templates, but at compile time rather than at runtime as in the usual approach. This required changing our thinking a little, but it offered many advantages.
I ended up telling you my story, but I thought it was relevant and hope it helps. | 0 | 993 | 0 | 2 | 2011-01-30T10:44:00.000 | python,ajax,web-applications,user-interface,pyjamas | rich web client vs thin web client | 1 | 3 | 4 | 4,844,422 | 0
1 | 0 | I have one design decision to make.
In my web(ajax) application we need to decide where should we put user interface logic ?
Should It be completely loaded via javascript ( pure single page ) . and Only data comes and go.
or
Should server send some format (XML) which translated via javascript to dynamically create rich user interface. ( semi-ajax ). so some data and ui comes and go.
Which option is better ? ( speed, ease of development, platform independence )
Thanks. | false | 4,842,449 | 0 | 0 | 0 | 0 | Which option is better ? ( speed, ease of development, platform independence )
Platform independence, if you mean cross-browser compatibility, is a HUGE reason to use pyjamas because the python code includes a sane override infrastructure which handles everything for you. No more JS compatibility classes.
Anyway Pyjamas is all about loading the client app and then using json-rpc for the data only. That's because it's faster (once the app loaded up), easier to separate server and client, easier to maintain since all the UI code is in widgets in one place.
I've seen stuff like DokuWiki which use a php script to serve up javascript and my first thought was "WHY?" but it works pretty well I guess. It probably makes sense if you mostly have static pages with the occasional bit of JS for decoration. | 0 | 993 | 0 | 2 | 2011-01-30T10:44:00.000 | python,ajax,web-applications,user-interface,pyjamas | rich web client vs thin web client | 1 | 3 | 4 | 4,886,016 | 0 |
0 | 0 | So, I have a list of a bunch of city and state/region combinations (where some cities do not have a paired state/region) and I'd like to use these to fill in country, continent, (and state/region where it's not supplied) information. Where multiple regions would fit, I'm willing to accept any of them, though the biggest one would be best. What's the simplest library for Python that will let me do this?
An example: given "Istanbul," I'd want something like:
{istanbul, istanbul province, turkey, europe} | false | 4,844,811 | 0.066568 | 0 | 0 | 1 | You don't need any library for this at all, really. What you do need is a list of geographic locations. If you don't have that there are geolocation services online you can use, that will do these things for you, accessible via http (and hence urllib). You might need a library to interpret the response, that could be XML, for example. | 0 | 11,246 | 0 | 4 | 2011-01-30T18:59:00.000 | python | How Can I Determine a Region, Country, and Continent Based on a City Using Python? | 1 | 1 | 3 | 4,844,899 | 0 |
1 | 0 | I have recently started to work with Scrapy. I am trying to gather some info from a large list which is divided into several pages(about 50). I can easily extract what I want from the first page including the first page in the start_urls list. However I don't want to add all the links to these 50 pages to this list. I need a more dynamic way. Does anyone know how I can iteratively scrape web pages? Does anyone have any examples of this?
Thanks! | false | 4,876,799 | 0 | 0 | 0 | 0 | Why don't you want to add all the links to 50 pages? Are the URLs of the pages consecutive like www.site.com/page=1, www.site.com/page=2 or are they all distinct? Can you show me the code that you have now? | 0 | 1,171 | 0 | 1 | 2011-02-02T16:08:00.000 | python,web-scraping,scrapy | Recursive use of Scrapy to scrape webpages from a website | 1 | 2 | 2 | 4,889,635 | 0 |
1 | 0 | I have recently started to work with Scrapy. I am trying to gather some info from a large list which is divided into several pages(about 50). I can easily extract what I want from the first page including the first page in the start_urls list. However I don't want to add all the links to these 50 pages to this list. I need a more dynamic way. Does anyone know how I can iteratively scrape web pages? Does anyone have any examples of this?
Thanks! | false | 4,876,799 | 0.099668 | 0 | 0 | 1 | use urllib2 to download a page. Then use either re (regular expressions) or BeautifulSoup (an HTML parser) to find the link to the next page you need. Download that with urllib2. Rinse and repeat.
Scrapy is great, but you don't need it to do what you're trying to do. | 0 | 1,171 | 0 | 1 | 2011-02-02T16:08:00.000 | python,web-scraping,scrapy | Recursive use of Scrapy to scrape webpages from a website | 1 | 2 | 2 | 4,940,212 | 0
0 | 0 | I have a wmv file at a particular url that I want to grab and save as a file using Python. My script uses urllib2 to authenticate and read the bytes and save them locally in chunks. However, once I open the file, no video player recognizes it. When I download the wmv manually from a browser, the file plays fine, but oddly enough ends up being about 500kb smaller than the file I end up with using Python. What's going on? Is there header information I need to somehow exclude? | false | 4,883,101 | 0 | 0 | 0 | 0 | From what I understand, urllib works at the HTTP level and should properly remove headers in subsequent chunks. I took a look at the data returned by read() and it's all bytes. | 0 | 316 | 0 | 0 | 2011-02-03T06:29:00.000 | python,urllib2,urllib,python-2.6,wmv | Downloading wmv from a url in Python 2.6 | 1 | 2 | 3 | 4,888,107 | 0 |
0 | 0 | I have a wmv file at a particular url that I want to grab and save as a file using Python. My script uses urllib2 to authenticate and read the bytes and save them locally in chunks. However, once I open the file, no video player recognizes it. When I download the wmv manually from a browser, the file plays fine, but oddly enough ends up being about 500kb smaller than the file I end up with using Python. What's going on? Is there header information I need to somehow exclude? | false | 4,883,101 | 0 | 0 | 0 | 0 | I was writing my file with mode 'w' on a Windows machine. Writing binary data should be done with mode 'wb' or the EOLs will be incorrect. | 0 | 316 | 0 | 0 | 2011-02-03T06:29:00.000 | python,urllib2,urllib,python-2.6,wmv | Downloading wmv from a url in Python 2.6 | 1 | 2 | 3 | 4,900,216 | 0 |
0 | 0 | In another question I asked for "the best" language for a certain purpose. Realizing this goal was a bit too much to start, I simplified my idea :) But there were really useful language hints. So I decided on Scala for the desktop-app and consider between Perl and Python on the webserver.
I want to program something like an asynchronous chat (little bit like an email). So you start your program pick your name and add a friend with his unique id. Then you can write him a simple message and when your friends start up his pc, launches the "chat.exe" he receives the mail (internet is required) and is able to answer. No special functions, smiley's or text formatting, just simple for learning purpose.
My concept is: Use Scala for the "chat.exe" (Or is just a "chat.jar" possible?) which communicates via SOCKET with a Perl/Python Framework which handles the requests.
So you type "Hello there" and click on send. This message is transferred via SOCKET to a Perl/Python script which reads the request and puts this message in a MySQL database. On the other side the chat.exe of your friend checks for new messages and if there is one, the Perl/Python script transfers the message. Also via SOCKET.
Do you think this works out? Is SOCKET appropriate and fits in? Or perhaps REST? But I think for REST-Requests you have to use the URI (http://example.com/newmessage/user2/user3/Hi_how_are_you). This looks very unsecure.
Look forward to your comments!
Have a nice day,
Kurt | false | 4,891,911 | 0 | 1 | 0 | 0 | To implement something like that you would need to go through a MQ System like perhaps ActiveMQ instead of using plain sockets. | 0 | 609 | 0 | 0 | 2011-02-03T21:25:00.000 | python,perl,sockets,scala,client-server | Does this Scala Perl/Python architecture make sense | 1 | 2 | 3 | 4,891,991 | 0 |
0 | 0 | In another question I asked for "the best" language for a certain purpose. Realizing this goal was a bit too much to start, I simplified my idea :) But there were really useful language hints. So I decided on Scala for the desktop-app and consider between Perl and Python on the webserver.
I want to program something like an asynchronous chat (little bit like an email). So you start your program pick your name and add a friend with his unique id. Then you can write him a simple message and when your friends start up his pc, launches the "chat.exe" he receives the mail (internet is required) and is able to answer. No special functions, smiley's or text formatting, just simple for learning purpose.
My concept is: Use Scala for the "chat.exe" (Or is just a "chat.jar" possible?) which communicates via SOCKET with a Perl/Python Framework which handles the requests.
So you type "Hello there" and click on send. This message is transferred via SOCKET to a Perl/Python script which reads the request and puts this message in a MySQL database. On the other side the chat.exe of your friend checks for new messages and if there is one, the Perl/Python script transfers the message. Also via SOCKET.
Do you think this works out? Is SOCKET appropriate and fits in? Or perhaps REST? But I think for REST-Requests you have to use the URI (http://example.com/newmessage/user2/user3/Hi_how_are_you). This looks very unsecure.
Look forward to your comments!
Have a nice day,
Kurt | false | 4,891,911 | 0.066568 | 1 | 0 | 1 | Use Scala for the "chat.exe" (Or is just a "chat.jar" possible?)
Step 1. Figure that out. Actually write some stuff and see what you can build.
which communicates via SOCKET with a Perl/Python Framework which handles the requests.
Not meaningful. All internet communication is done with sockets. Leave this sentence out and you don't lose any meaning.
This message is transferred via SOCKET to a Perl/Python script which reads the request and puts this message in a MySQL database.
A little useful information. Sockets, however, go without saying.
On the other side the chat.exe of your friend checks for new messages and if there is one, the Perl/Python script transfers the message. Also via SOCKET.
Right. Sockets, again, don't mean much.
On top of sockets there are dozens of protocols. FTP, Telnet, HTTP, SMTP, etc., etc.
Step 2 is to figure out which protocol you want to use. REST, by the way is a particular use of HTTP. You should really, really look very closely at HTTP and REST before dismissing them.
This looks very unsecure
Not clear why you're saying this. I can only guess that you don't know about HTTP security features.
A lazy programmer might do this.
Install Python, Django, MySQL-Python and Piston.
Define a Django Model, configure the defaults so that model is exposed as a secure RESTful set of services.
That's sort of it for the server side message GET, POST, PUT and DELETE are all provided by Django, Piston and the Django ORM layer. Authentication can be any of a variety of mechanisms. I'm a big fan of HTTP Digest authentication. | 0 | 609 | 0 | 0 | 2011-02-03T21:25:00.000 | python,perl,sockets,scala,client-server | Does this Scala Perl/Python architecture make sense | 1 | 2 | 3 | 4,892,391 | 0 |
0 | 0 | I already know there are ssh modules for Python, that's not for what I'm looking for.
What I want to have is a python script to do the following:
> connect to an [ input by user ] SSH host
> connect using the credentials [ provided by the user ]
> run command on the SSH host [ telnet to [host - input by user ]
> Select menu item in the telnet session
Thanks in advance,
Best regards, | false | 4,896,785 | 0.028564 | 1 | 0 | 1 | There are many libraries to do that.
Subprocess
Pexpect
Paramiko (Mostly used)
Fabric
Exscript
You can check their documentation for the implementation. | 0 | 74,237 | 1 | 10 | 2011-02-04T10:12:00.000 | python,automation,ssh,telnet | Python script - connect to SSH and run command | 1 | 1 | 7 | 39,664,919 | 0 |
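As a sketch of the Paramiko option from the list above (host, credentials and command are placeholders; driving an interactive telnet menu would instead need client.invoke_shell() plus send()/recv() on the returned channel):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys (lab use only)
client.connect("ssh-host.example.com", username="user", password="secret")  # placeholders
stdin, stdout, stderr = client.exec_command("telnet inner-host.example.com")  # placeholder command
print(stdout.read())
client.close()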
1 | 0 | I am currently working on a project to create a simple file uploader site that will update the user of the progress of an upload.
I've been attempting this in pure python (with CGI) on the server side but to get the progress of the file I obviously need to send requests to the server continually. I was looking to use AJAX to do this but I was wondering how hard it would be to, instead of changing to some other framework (web.py for instance), just write my own web server for receiving the XML HTTP Requests?
My main problem is that sending the request is done from HTML and Javascript so it all seems like magic trickery at the moment.
Can anyone advise me as to the best way to go about receiving these requests on the server?
EDIT: It seems that a framework would be the way to go. Would web.py be a good route to take? | false | 4,898,066 | 0 | 0 | 0 | 0 | There's absolutely no need to write your own web server. Plenty of options exist, including lightweight ones like nginx.
You should use one of those, and either your own custom WSGI code to receive the request, or (better) one of the microframeworks like Flask or Bottle. | 0 | 1,735 | 0 | 1 | 2011-02-04T12:39:00.000 | python,http,xmlhttprequest | Creating a python web server to recieve XML HTTP Requests | 1 | 1 | 3 | 4,898,364 | 0 |
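If you take the microframework route suggested above, a minimal Flask endpoint that the browser's XMLHttpRequest could poll for progress might look like this (the progress bookkeeping itself is a stand-in):

from flask import Flask, jsonify

app = Flask(__name__)
PROGRESS = {}  # upload_id -> percent, filled in by whatever code handles the upload

@app.route("/progress/<upload_id>")
def progress(upload_id):
    # The AJAX call polls this and gets a small JSON document back.
    return jsonify(percent=PROGRESS.get(upload_id, 0))

if __name__ == "__main__":
    app.run()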
0 | 0 | I'm currently writing a set of unit tests for a Python microblogging library, and following advice received here have begun to use mock objects to return data as if from the service (identi.ca in this case).
However, surely by mocking httplib2 - the module I am using to request data - I am tying the unit tests to a specific implementation of my library, and removing the ability for them to function after refactoring (which is obviously one primary benefit of unit testing in the first place).
Is there a best of both worlds scenario? The only one I can think of is to set up a microblogging server to use only for testing, but this would clearly be a large amount of work. | false | 4,914,582 | 0.099668 | 1 | 0 | 1 | Not sure what your problem is. The mock class is part of the tests, conceptually at least. It is ok for the tests to depend on particular behaviour of the mock objects that they inject into the code being tested. Of course the injection itself should be shared across unit tests, so that it is easy to change the mockup implementation. | 0 | 223 | 0 | 1 | 2011-02-06T16:35:00.000 | python,unit-testing | Using mock objects without tying down unit tests | 1 | 2 | 2 | 4,914,696 | 0 |
0 | 0 | I'm currently writing a set of unit tests for a Python microblogging library, and following advice received here have begun to use mock objects to return data as if from the service (identi.ca in this case).
However, surely by mocking httplib2 - the module I am using to request data - I am tying the unit tests to a specific implementation of my library, and removing the ability for them to function after refactoring (which is obviously one primary benefit of unit testing in the first place).
Is there a best of both worlds scenario? The only one I can think of is to set up a microblogging server to use only for testing, but this would clearly be a large amount of work. | true | 4,914,582 | 1.2 | 1 | 0 | 1 | You are right that if you refactor your library to use something other than httplib2, then your unit tests will break. That isn't such a horrible dependency, since when that time comes it will be a simple matter to change your tests to mock out the new library.
If you want to avoid that, then write a very minimal wrapper around httplib2, and your tests can mock that. Then if you ever shift away from httplib2, you only have to change your wrapper. But notice the number of lines you have to change is the same either way, all that changes is whether they are in "test code" or "non-test code". | 0 | 223 | 0 | 1 | 2011-02-06T16:35:00.000 | python,unit-testing | Using mock objects without tying down unit tests | 1 | 2 | 2 | 4,915,281 | 0 |
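A sketch of the thin wrapper the accepted answer describes; the tests then mock this class instead of httplib2 itself, and a later refactor only has to touch its internals:

import httplib2

class HttpGateway(object):
    """Small seam between the microblogging library and the HTTP client."""
    def __init__(self):
        self._http = httplib2.Http()

    def get(self, url, headers=None):
        response, content = self._http.request(url, "GET", headers=headers or {})
        return response.status, content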
0 | 0 | I have a "I just want to understand it" question..
first, I'm using python 2.6.5 on Ubuntu.
So.. threads in python (via the thread module) are only "threads", and it's just the GIL running code blocks from each "thread" for a certain period of time, and so on.. and there aren't actually real threads here..
So the question is - if I have a blocking socket in one thread, and I'm now sending data and blocking the thread for like 5 seconds, I expected it to block the whole program because it is one C command (sock.send) that is blocking the thread. But I was surprised to see that the main thread continues to run.
So the question is - how is the GIL able to continue and run the rest of the code after it reaches a blocking command like send? Doesn't it have to use a real thread here?
Thanks. | false | 4,920,471 | 0.321513 | 0 | 0 | 5 | GIL (the Global Interpreter Lock) is just a lock, it does not run anything by itself. Rather, the Python interpreter captures and releases that lock as necessary. As a rule, the lock is held when running Python code, but released for calls to lower-level functions (such as sock.send). As Python threads are real OS-level threads, threads will not run Python code in parallel, but if one thread invokes a long-running C function, the GIL is released and another Python code thread can run until the first one finishes. | 1 | 2,910 | 0 | 5 | 2011-02-07T10:58:00.000 | python,python-multithreading | python threads & sockets | 1 | 1 | 3 | 4,921,216 | 0 |
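A tiny experiment that shows the behaviour the answer describes - the worker thread blocks inside a C-level socket call (which releases the GIL) while the main thread keeps running:

import socket
import threading
import time

def worker():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    server.settimeout(5)
    try:
        server.accept()   # blocks in C code for up to 5 seconds; the GIL is released meanwhile
    except socket.timeout:
        pass

threading.Thread(target=worker).start()
for i in range(5):
    print("main thread still running %d" % i)
    time.sleep(1)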
0 | 0 | I want to run a Python script every so many minutes. The script starts by fetching the newest article from a rss-feed (using feedparser). What I want, is when the newest article is the same as the last time it ran, the script just ends. How do I accomplish this? | false | 4,924,781 | 0.197375 | 0 | 0 | 3 | You could store the state in a temporary file. E.g. write the title into a temporary file if there isn't a temporary file already, and next time read from the file and compare the read title with the new title. | 0 | 373 | 0 | 3 | 2011-02-07T18:08:00.000 | python | Do nothing if RSS feed hasn't changed | 1 | 1 | 3 | 4,924,807 | 0 |
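A sketch of that state-file idea with feedparser (the feed URL and file name are placeholders):

import os
import feedparser

FEED_URL = "http://example.com/feed.rss"  # placeholder
STATE_FILE = "last_seen.txt"              # placeholder

feed = feedparser.parse(FEED_URL)
newest = None
if feed.entries:
    entry = feed.entries[0]
    newest = entry.get("id") or entry.get("link")

last_seen = ""
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as handle:
        last_seen = handle.read().strip()

if newest and newest != last_seen:
    # ... process the new entries here ...
    with open(STATE_FILE, "w") as handle:
        handle.write(newest)
# otherwise the script simply falls off the end and exits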
0 | 0 | I need to install on one of my Windows PC's some software that will periodically send a short HTTP POST request to my remote development server. The request is always the same and should get sent every minute.
What would you recommend as the best approach for that?
The things I considered are:
1. Creating a Windows service
2. Using a script in python (I have cygwin installed)
3. Scheduled task using a batch file (although I don't want the black cmd window to pop up in my face every minute)
Thanks for any additional ideas or hints on how to best implement it. | false | 4,935,257 | 0.066568 | 0 | 0 | 1 | This is trivially easy with a scheduled task which is the native Windows way to schedule tasks! There's no need for cygwin or Python or anything like that.
I have such a task running on my machine which pokes my Wordpress blog every few hours. The script is just a .bat file which calls wget. The task is configured to "Run whether user is logged on or not" which ensures that it runs when I'm not logged on. There's no "black cmd window".
You didn't say which version of Windows you are on and if you are on XP (unlucky for you if you are) then the configuration is probably different since the scheduled task interface changed quite a bit when Vista came out. | 0 | 2,416 | 1 | 1 | 2011-02-08T15:56:00.000 | python,http,windows-services,scheduled-tasks | How to periodically create an HTTP POST request from windows | 1 | 2 | 3 | 4,955,373 | 0 |
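If you would rather schedule a tiny Python script than a wget batch file, the POST itself is only a couple of lines of urllib2 (URL and payload are placeholders); launching it with pythonw.exe instead of python.exe should also keep any console window from appearing:

import urllib
import urllib2

data = urllib.urlencode({"status": "ping"})  # placeholder payload
urllib2.urlopen("http://dev-server.example.com/heartbeat", data)  # supplying data makes this a POST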
0 | 0 | I need to install on one of my Windows PC's some software that will periodically send a short HTTP POST request to my remote development server. The request is always the same and should get sent every minute.
What would you recommend as the best approach for that?
The things I considered are:
1. Creating a Windows service
2. Using a script in python (I have cygwin installed)
3. Scheduled task using a batch file (although I don't want the black cmd window to pop up in my face every minute)
Thanks for any additional ideas or hints on how to best implement it. | false | 4,935,257 | 0.132549 | 0 | 0 | 2 | If you have cygwin, you probably have cron - run a python script from your crontab. | 0 | 2,416 | 1 | 1 | 2011-02-08T15:56:00.000 | python,http,windows-services,scheduled-tasks | How to periodically create an HTTP POST request from windows | 1 | 2 | 3 | 4,935,443 | 0 |
1 | 0 | I am working on a project that will involve parsing HTML.
After searching around, I found two probable options: BeautifulSoup and lxml.html
Is there any reason to prefer one over the other? I have used lxml for XML some time back and I feel I will be more comfortable with it, however BeautifulSoup seems to be much common.
I know I should use the one that works for me, but I was looking for personal experiences with both. | true | 4,967,103 | 1.2 | 0 | 0 | 44 | The simple answer, imo, is that if you trust your source to be well-formed, go with the lxml solution. Otherwise, BeautifulSoup all the way.
Edit:
This answer is three years old now; it's worth noting, as Jonathan Vanasco does in the comments, that BeautifulSoup4 now supports using lxml as the internal parser, so you can use the advanced features and interface of BeautifulSoup without most of the performance hit, if you wish (although I still reach straight for lxml myself -- perhaps it's just force of habit :)). | 0 | 44,819 | 0 | 40 | 2011-02-11T08:49:00.000 | python,beautifulsoup,lxml | BeautifulSoup and lxml.html - what to prefer? | 1 | 2 | 4 | 4,967,121 | 0 |
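To illustrate the BeautifulSoup-4-on-top-of-lxml combination mentioned in the edit (the sloppy HTML snippet is made up):

from bs4 import BeautifulSoup

html = "<html><body><p class='greeting'>Hello<p>world</body></html>"  # deliberately unclosed tags
soup = BeautifulSoup(html, "lxml")  # BeautifulSoup interface, lxml doing the parsing
print([p.get_text() for p in soup.find_all("p")])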
1 | 0 | I am working on a project that will involve parsing HTML.
After searching around, I found two probable options: BeautifulSoup and lxml.html
Is there any reason to prefer one over the other? I have used lxml for XML some time back and I feel I will be more comfortable with it, however BeautifulSoup seems to be much common.
I know I should use the one that works for me, but I was looking for personal experiences with both. | false | 4,967,103 | 0 | 0 | 0 | 0 | lxml's great. But parsing your input as html is useful only if the dom structure actually helps you find what you're looking for.
Can you use ordinary string functions or regexes? For a lot of html parsing tasks, treating your input as a string rather than an html document is, counterintuitively, way easier. | 0 | 44,819 | 0 | 40 | 2011-02-11T08:49:00.000 | python,beautifulsoup,lxml | BeautifulSoup and lxml.html - what to prefer? | 1 | 2 | 4 | 4,968,489 | 0 |
1 | 0 | good day stackoverflow!
im not sure if any of you has tried this but basically i want to accomplish something like this:
- a python program continuously sends data to my website
- using that data computations will be made and images on the website are animate
so my questions are:
1. what method should i use to communicate python to the website? the easier and simpler the better (tried reading up on django and my nose bled)
2. is javascript the best way to move my images? or is flash better?
3. if flash is better, is it possible to use the input from python and pass it to flash? | false | 4,979,005 | 0 | 0 | 0 | 0 | Images are not shown "on your website", they are shown "in your users' browsers".
The browser needs to request the animation information from your website, which needs to request it from (wherever it comes from). Ideally, the website will cache the data so that 20 browser requests result in just one website request.
How close to realtime is this information?
Where does it come from? Is it a service you can run on the webserver?
How often does the browser need to be updated?
You should look for information on AJAX (letting the browser make asynchronous requests from the website). | 0 | 614 | 0 | 0 | 2011-02-12T16:01:00.000 | python,html | receive data from a python program to animate an object on a webpage | 1 | 1 | 2 | 4,979,028 | 0 |
0 | 0 | I'm trying to open a webpage using urllib.request.urlopen() then search it with regular expressions, but that gives the following error:
TypeError: can't use a string pattern on a bytes-like object
I understand why, urllib.request.urlopen() returns a bytestream, so re doesn't know the encoding to use. What am I supposed to do in this situation? Is there a way to specify the encoding method in a urlrequest maybe or will I need to re-encode the string myself? If so what am I looking to do, I assume I should read the encoding from the header info or the encoding type if specified in the html and then re-encode it to that? | false | 4,981,977 | -0.085505 | 0 | 0 | -3 | after you make a request req = urllib.request.urlopen(...) you have to read the request by calling html_string = req.read() that will give you the string response that you can then parse the way you want. | 0 | 110,274 | 0 | 65 | 2011-02-13T02:05:00.000 | python,regex,encoding,urllib | How to handle response encoding from urllib.request.urlopen() , to avoid TypeError: can't use a string pattern on a bytes-like object | 1 | 1 | 7 | 4,981,998 | 0 |
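What usually resolves that TypeError is decoding the bytes before handing them to re; a sketch that takes the charset from the response headers and falls back to UTF-8 (the URL and pattern are placeholders):

import re
import urllib.request

response = urllib.request.urlopen("http://www.example.com/")   # placeholder URL
encoding = response.headers.get_content_charset() or "utf-8"   # charset from Content-Type, if present
text = response.read().decode(encoding, errors="replace")
links = re.findall(r'href="([^"]+)"', text)                    # now a string pattern on a string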
1 | 0 | I keep getting mismatched tag errors all over the place. I'm not sure why exactly, it's the text on craigslist homepage which looks fine to me, but I haven't skimmed it thoroughly enough. Is there perhaps something more forgiving I could use or is this my best bet for html parsing with the standard library? | false | 4,983,203 | 0.26052 | 0 | 0 | 4 | The mismatched tag errors are likely caused by mismatched tags. Browsers are famous for accepting sloppy html, and have made it easy for web page coders to write badly formed html, so there's a lot of it. THere's no reason to believe that creagslist should be immune to bad web page designers.
You need to use a grammar that allows for these mismatches. If the parser you are using won't let you redefine the grammar appropriately, you are stuck. (There may be a better Python library for this, but I don't know it).
One alternative is to run the web page through a tool like Tidy that cleans up such mismatches, and then run your parser on that. | 0 | 1,545 | 0 | 4 | 2011-02-13T08:29:00.000 | python,parsing,python-3.x,xml.etree | Need help parsing html in python3, not well formed enough for xml.etree.ElementTree | 1 | 1 | 3 | 4,983,230 | 0 |
0 | 0 | Trying to achieve:
same firefox profile throughout tests
Problem:
Tests are spread over 30 different files, instantiating a selenium object, and thus creating a firefox profile, in the first test won't persist to the following test because the objects die once script ends IIRC
Can't specify profile because I'm writing a test suite supposed to be run on different machines
Possible solutions:
Creating a selenium object in some common code that stays in memory throughout the tests. I am running each test by spawning a new python process and waiting for it to end. I am unsure how to send objects in memory to a new python object.
Any help is appreciated, thanks.
edit: just thought of instead of spawning a child python process to run the test, I just instantiate the test class that selenium IDE generated, removing the setUp and tearDown methods in all 30 tests, instantiating one selenium object in the beginning, then passing said selenium object to every test that's instantiated. | false | 4,987,773 | 0 | 1 | 0 | 0 | You can specify the firefox profile while running the server itself. The command would look like
java -jar selenium-server.jar -firefoxProfileTemplate "C:\Selenium\Profiles" where "C:\Selenium\Profiles" would be your path where firefox template files are stored. | 0 | 1,706 | 0 | 6 | 2011-02-14T00:04:00.000 | python,selenium | Keeping Firefox profile persistent in multiple Selenium tests without specifying a profile | 1 | 1 | 2 | 5,481,430 | 0 |
1 | 0 | I am not really a programmer but am asking this out of general curiosity. I visited a website recently where I logged in, went to a page, and without leaving, data on that page refreshes before my eyes.
Is it possible to mimic a browser (I was using Chrome) and log into the site, navigate to a page, and "scrape" that data that is coming in using Python? I would like to store and analyze it.
If so, taking this one step further, is it possible to interact with the website? Click a button that I know the name of?
Thanks in advance. | true | 4,999,485 | 1.2 | 0 | 0 | 3 | If the data "refreshes before your eyes" it is probably AJAX (javascript in the page pulling new page-data from the server).
There are two ways of approaching this;
using Selenium you can wrap an actual browser which will load the page, run the javascript, then you can grab page-bits from the active page.
you can look at what the AJAX in the page is doing (how it is asking for updates, what it is getting back) and write python code to emulate that.
both take a fair bit of of time and effort to set up; Selenium is a bit more robust, direct python queries is a bit more efficient, YMMV. | 0 | 1,502 | 0 | 0 | 2011-02-15T02:52:00.000 | python,screen-scraping | Log Into Website and Scrape Streaming Data | 1 | 1 | 2 | 4,999,545 | 0 |
0 | 0 | I'm trying to determine which files in the Python library are strictly necessary for my script to run. Right now I'm trying to determine where _io.py is located. In io.py (no underscore), the _io.py module (with underscore) is imported on line 60. | false | 5,003,276 | 0 | 1 | 0 | 0 | Try the DLLs folder under your base python install directory if you are on windows. It contains .pyd modules Ignacio mentions. I had a similar problem with a portable install. Including the DLLs folder contents to my install fixed it. I am using python 2.5. | 1 | 2,239 | 0 | 3 | 2011-02-15T11:50:00.000 | python,embed,portability | Python: import _io | 1 | 2 | 4 | 5,003,499 | 0 |
0 | 0 | I'm trying to determine which files in the Python library are strictly necessary for my script to run. Right now I'm trying to determine where _io.py is located. In io.py (no underscore), the _io.py module (with underscore) is imported on line 60. | false | 5,003,276 | 0.049958 | 1 | 0 | 1 | Not all Python modules are written in Python. Try looking for _io.so or _io.pyd. | 1 | 2,239 | 0 | 3 | 2011-02-15T11:50:00.000 | python,embed,portability | Python: import _io | 1 | 2 | 4 | 5,003,294 | 0 |
0 | 0 | I've made a python module file and uploaded it to the SVN repo (say string_utl.py, which does string related operations). Is there any way that I can access the file directly from the SVN? Though I can check out the file locally from the SVN to my computer and access it from there, that's not the point. I'm thinking of a local repository where all of my coworkers can access and modify the code.
I thought of adding the lsvn location to the sys.path list but it didn't work.
I did it like this
sys.path.append ("http://lsvn/svn/lsvn/QRM_Helper/Helpful_Script/");
But it didn't work.
I tried it another way like this
urllib.urlopen(some_url)
as I'm using Python 3 it said to use urllib2.urlopen(), but in my case that didn't work either. It gave an error saying that the module doesn't exist. | false | 5,025,579 | 0 | 0 | 0 | 0 | I just got the answer. I'll use pysvn to check out the repo to a local folder, and I'll add the folder to the sys.path list. Using this I can access the folder's modules.
Thanks everyone for helping me out.
But I've got a slight problem... I didn't have admin rights on my office computer (I'm using Windows), so I installed pysvn on my laptop and copied the lib files from my laptop to my office computer..
Though I can import pysvn on my computer (office comp), Python crashes for no apparent reason when I access pysvn.. Could the copy-paste be the reason for that? | 0 | 4,844 | 0 | 2 | 2011-02-17T05:59:00.000 | python,svn,urllib,sys.path | Is there anyway I can access File from Online SVN in python? | 1 | 1 | 4 | 5,037,759 | 0
1 | 0 | I'm doing an RSS spider. How do you control the last crawl date?
Right now what I was thinking is this:
Put in a control file the last pub_date that I have crawled.
Then when the crawl starts, it checks the last pub_date against the
new pub_dates. If there are new items, then start crawling, if not, do
nothing.
How does everyone else resolve this? | false | 5,040,388 | 0 | 0 | 0 | 0 | I store all data in database as well, and calculate a hash value out of the data. That way you can look up the hash very quickly, and carry out de-dup operation on the fly. | 0 | 231 | 0 | 0 | 2011-02-18T10:50:00.000 | python,web-crawler,scrapy | Scrapy: RSS control pub_date | 1 | 2 | 2 | 12,648,771 | 0 |
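A sketch of that hash-based de-dup idea; which fields define "the same item" is up to you, and in practice the set of seen hashes would live in the database:

import hashlib

def item_fingerprint(item):
    # Hash the fields that identify a post; the keys here are placeholders.
    raw = (item.get("title", "") + item.get("link", "") + item.get("pub_date", "")).encode("utf-8")
    return hashlib.sha1(raw).hexdigest()

seen = set()

def is_new(item):
    fingerprint = item_fingerprint(item)
    if fingerprint in seen:
        return False
    seen.add(fingerprint)
    return True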
1 | 0 | I'm doing an RSS spider. How do you control the last crawl date?
Right now what I was thinking is this:
Put in a control file the last pub_date that I have crawled.
Then when the crawl starts, it checks the last pub_date against the
new pub_dates. If there are new items, then start crawling, if not, do
nothing.
How does everyone else resolve this? | false | 5,040,388 | 0.099668 | 0 | 0 | 1 | I store all data in the database (including last crawl date and post dates) and take all dates I need from database. | 0 | 231 | 0 | 0 | 2011-02-18T10:50:00.000 | python,web-crawler,scrapy | Scrapy: RSS control pub_date | 1 | 2 | 2 | 5,042,647 | 0 |
1 | 0 | I would like to create a google chrome extension. Specifically, I'd like to make a packaged app, but not a hosted app. Am I correct in thinking this limits me to JavaScript (and HTML/CSS)?
My problem is that I need to do some complex math (singular value decomposition, factor analysis) and I don't want to write algorithms for this in javascript. Python already has libraries for the functions I need (SciPy), but I can't find any indication that I can make a Chrome extension using python.
Is this correct? Do I have any other options? | false | 5,048,436 | 1 | 0 | 0 | 7 | Although you mentioned you don't want it to be a hosted app, but this is one typical scenario where a hosted app can do.
SciPy is not a package that is easy to deploy. Even if you are writing a installed application based on SciPy, it requires some effort to deploy this dependency. A web application can help here where you put most of the hard-to-deploy dependencies on the server side (which is a one-off thing). And the client side can be really light. | 0 | 65,662 | 0 | 49 | 2011-02-19T02:23:00.000 | python,google-chrome-extension | Chrome extension in python? | 1 | 1 | 7 | 9,693,740 | 0 |
0 | 0 | I am using a server to send some piece of information to another server every second. The problem is that the other server response is few kilobytes and this consumes the bandwidth on the first server ( about 2 GB in an hour ). I would like to send the request and ignore the return ( not even receive it to save bandwidth ) ..
I use a small python script for this task using (urllib). I don't mind using any other tool or even any other language if this is going to make the request only. | false | 5,049,244 | 0.066568 | 0 | 0 | 1 | Sorry but this does not make much sense and is likely a violation of the HTTP protocol. I consider such an idea as weird and broken-by-design. Either make the remote server shut up or configure your application or whatever is running on the remote server on a different protocol level using a smarter protocol with less bandwidth usage. Everything else is hard being considered as nonsense. | 0 | 1,828 | 0 | 3 | 2011-02-19T06:20:00.000 | python,response | How can i ignore server response to save bandwidth? | 1 | 2 | 3 | 5,049,353 | 0 |
0 | 0 | I am using a server to send some piece of information to another server every second. The problem is that the other server response is few kilobytes and this consumes the bandwidth on the first server ( about 2 GB in an hour ). I would like to send the request and ignore the return ( not even receive it to save bandwidth ) ..
I use a small python script for this task using (urllib). I don't mind using any other tool or even any other language if this is going to make the request only. | false | 5,049,244 | 0.132549 | 0 | 0 | 2 | A 5K reply is small stuff and is probably below the standard TCP window size of your OS. This means that even if you close your network connection just after sending the request and checking just the very first bytes of the reply (to be sure that request has been really received) probably the server already sent you the whole answer and the packets are already on the wire or on your computer.
If you cannot control (i.e. trim down) what is the server reply for your notification the only alternative I can think to is to add another server on the remote machine waiting for a simple command and doing the real request locally and just sending back to you the result code. This can be done very easily may be even just with bash/perl/python using for example netcat/wget locally.
By the way there is something strange in your math as Glenn Maynard correctly wrote in a comment. | 0 | 1,828 | 0 | 3 | 2011-02-19T06:20:00.000 | python,response | How can i ignore server response to save bandwidth? | 1 | 2 | 3 | 5,049,588 | 0 |
0 | 0 | Is it possible to display the percentage a file has downloaded in python while using httplib2? I know you can with urllib2 but I want to use httplib2. | true | 5,055,605 | 1.2 | 0 | 0 | 2 | No. httplib2 doesn't have any kind of progress beacon callback, so it simply blocks until the request is finished. | 0 | 687 | 0 | 1 | 2011-02-20T06:21:00.000 | python,progress-bar,httplib2 | httplib2 download progress bar in python | 1 | 1 | 2 | 5,055,776 | 0 |
1 | 0 | Is there any way to get Python to run on a web browser, other than silverlight?
I'm pretty sure not, but it never hurts to ask (usually). | false | 5,060,382 | 0 | 0 | 0 | 0 | Short of writing your own browser plugin — no. | 0 | 717 | 0 | 1 | 2011-02-20T22:02:00.000 | python,client-side | web: client-side python? | 1 | 3 | 5 | 5,060,400 | 0 |
1 | 0 | Is there any way to get Python to run on a web browser, other than silverlight?
I'm pretty sure not, but it never hurts to ask (usually). | false | 5,060,382 | 0.07983 | 0 | 0 | 2 | Haven't tried it myself but Pyjamas (http://pyjs.org/) claims to contain a Python-to-Javascript compiler. Not exactly what you're asking for but might be worth a look. | 0 | 717 | 0 | 1 | 2011-02-20T22:02:00.000 | python,client-side | web: client-side python? | 1 | 3 | 5 | 5,060,425 | 0 |
1 | 0 | Is there any way to get Python to run on a web browser, other than silverlight?
I'm pretty sure not, but it never hurts to ask (usually). | false | 5,060,382 | 0.039979 | 0 | 0 | 1 | skulpt is an interesting new project | 0 | 717 | 0 | 1 | 2011-02-20T22:02:00.000 | python,client-side | web: client-side python? | 1 | 3 | 5 | 9,805,718 | 0 |
0 | 0 | On a host with multiple network interfaces, is it possible to bind the connect method from the Python smtplib to an specific source address? | true | 5,073,575 | 1.2 | 1 | 0 | 0 | No such option - at least not without hacking smtplib.connect() yourself. | 0 | 984 | 0 | 1 | 2011-02-22T03:01:00.000 | python,ip-address,connect,smtplib | Python smtplib: bind to specific source IP address in a machine with multiple network interfaces | 1 | 1 | 2 | 5,073,815 | 0 |
0 | 0 | is there any way to specify dns server should be used by socket.gethostbyaddr()? | true | 5,078,338 | 1.2 | 0 | 0 | 4 | Please correct me, if I'm wrong, but isn't this operating system's responsibility? gethostbyaddr is just a part of libc and according to man:
The gethostbyname(), gethostbyname2() and gethostbyaddr() functions each return a
pointer to an object with the following structure describing an internet host referenced by name or by address, respectively. This structure contains either the information obtained from the name server, named(8), or broken-out fields from a line in /etc/hosts. If the local name server is not running these routines do a lookup in /etc/hosts.
So I would say there's no way of simply telling Python (from the code's point of view) to use a particular DNS, since it's part of system's configuration. | 0 | 5,038 | 0 | 5 | 2011-02-22T12:51:00.000 | python,sockets,dns | python: how to tell socket.gethostbyaddr() which dns server to use | 1 | 1 | 2 | 5,078,532 | 0 |
0 | 0 | I have 2 computers behind different NATs and an FTP server. How can I connect the computers to each other without a server program?
I read about STUN and UDP hole punching, but as I see it, they need some server-side program, don't they?
This will be used in a python program. | true | 5,078,749 | 1.2 | 0 | 0 | 0 | To do this without a server you could set up port forwarding on one of the NAT routers. e.g. machine1 behind nat1, machine2 behind nat2. Set up a port on nat1 to forward to the FTP port of machine1. You should then be able to FTP from machine2 using the public IP address of nat1. Use passive mode FTP to avoid having to open up more ports through the NAT routers. | 0 | 493 | 0 | 0 | 2011-02-22T13:28:00.000 | python,ftp,nat | Can I connect 2 NAT-covered computers using FTP? | 1 | 1 | 1 | 5,078,819 | 0
0 | 0 | I was wondering if you guys know any websites or have any ideas on beginner-intermediate level Python network programming projects/practice problems to practice Python network programming? I just finished reading "Foundations of Python Network Programming" and am looking for practice assignments that aren't too difficult to hone my skills.. I've made a simple localhost client/server that lets you add/subtract/multiply/divide numbers.. the "client" passes in 2 numbers and an operation to the server, the server does the calculation and returns the value. Any ideas on what I can do that would be good practice for network programming that doesn't involve installing libraries?
Thanks! | false | 5,083,701 | 0 | 0 | 0 | 0 | My first net project was a web spider that traversed the web (Obviously), and created a DB to be used as a search engine.
Spider & web search engine in Python (I used mod_python for the web page, but I'd recommend Django), and the DB in MySQL.
Create a GUI for managing the DB or Spider (Or both, up to you).
Ended up using:
Sockets
DB interactions
wxPython
Threading | 0 | 4,415 | 0 | 2 | 2011-02-22T20:40:00.000 | python,networking,project | Python network programming project? | 1 | 1 | 2 | 5,618,464 | 0 |
0 | 0 | I use selenium-rc to test an asp.net website; the test script is written in Python and the browser is Firefox 3.6. But when selenium opens the first page of the website, a download dialog appears instead of the web page - it seems the page is processed as application/octet-stream, and this causes my test script to fail.
It seems this behavior happens on some asp.net websites; I selected other asp sites to test and found some have the same issue.
My question is: why does this happen, and how do I fix it?
Edit: I used IE to do this test again and it seems ok. So is this an issue with Firefox? | true | 5,086,524 | 1.2 | 0 | 0 | 0 | Ok, I upgraded my selenium-server to 2.0b2, now it's ok. | 0 | 218 | 0 | 0 | 2011-02-23T02:54:00.000 | asp.net,python,selenium-rc | Why selenium can not open some asp page? | 1 | 1 | 1 | 5,086,635 | 0
0 | 0 | Should it be possible to send a plain, single http POST request (not chunk-encoded), in more than one segment? I was thinking of using httplib.HTTPConnection and calling the send method more than once (and calling read on the response object after each send).
(Context: I'm collaborating to the design of a server that offers services analogous to transactions [series of interrelated requests-responses]. I'm looking for the simplest, most compatible HTTP representation.) | false | 5,093,622 | 0.197375 | 0 | 0 | 1 | After being convinced by friends that this should be possible, I found a way to do it. I override httplib.HTTPResponse (n.b. httplib.HTTPConnection is nice enough to let you specify the response_class it will instantiate).
Looking at socket.py and httplib.py (especially _fileobject.read()), I had noticed that read() only allowed 2 things:
read an exact number of bytes (this returns immediately, even if the connection is not closed)
read all bytes until the connection is closed
I was able to extend this behavior and allow free streaming with just a few lines of code. I also had to set the will_close member of my HTTPResponse to 0.
I'd still be interested to hear if this is considered acceptable or abusive usage of HTTP. | 0 | 724 | 0 | 2 | 2011-02-23T16:18:00.000 | python,http | multiple send on httplib.HTTPConnection, and multiple read on HTTPResponse? | 1 | 1 | 1 | 5,122,434 | 0 |
0 | 0 | I need to parse a large (>800MB) XML file from Jython. The XML is not deeply nested, containing about a million relevant elements. I need to convert these elements into real objects.
I've used nu.xom.* successfully before, but now that I've switched from Java to Jython, the library fails with the following message:
The parser has encountered more than
"64,000" entity expansions in this
document; this is the limit imposed by
the application.
I have not found a way to fix this, so I probably have to look for another XML library. It could be either Java or Jython-compatible Python and should be efficient. Pythonic would be great, nu.xom.* is simple but not very pythonic. Do you have any suggestions? | false | 5,094,042 | 0 | 0 | 0 | 0 | there is an lxml python library that can parse large files without loading the data into memory.
but I don't know if it is Jython compatible | 0 | 3,156 | 0 | 0 | 2011-02-23T16:50:00.000 | java,python,xml,jython,xom | Best way to parse large XML document in Jython | 1 | 2 | 4 | 5,108,534 | 0
0 | 0 | I need to parse a large (>800MB) XML file from Jython. The XML is not deeply nested, containing about a million relevant elements. I need to convert these elements into real objects.
I've used nu.xom.* successfully before, but now that I've switched from Java to Jython, the library fails with the following message:
The parser has encountered more than
"64,000" entity expansions in this
document; this is the limit imposed by
the application.
I have not found a way to fix this, so I probably have to look for another XML library. It could be either Java or Jython-compatible Python and should be efficient. Pythonic would be great, nu.xom.* is simple but not very pythonic. Do you have any suggestions? | false | 5,094,042 | 0.148885 | 0 | 0 | 3 | Try using the SAX parser, it is great for streaming large XML files. | 0 | 3,156 | 0 | 0 | 2011-02-23T16:50:00.000 | java,python,xml,jython,xom | Best way to parse large XML document in Jython | 1 | 2 | 4 | 5,094,073 | 0 |
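A minimal xml.sax handler along the lines of that suggestion - it streams the document instead of building a tree, and the element name and file name are placeholders:

import xml.sax

class ItemHandler(xml.sax.ContentHandler):
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.count = 0

    def startElement(self, name, attrs):
        if name == "item":  # placeholder element name
            self.count += 1
            # build the real object from attrs / following character data here

handler = ItemHandler()
xml.sax.parse("huge.xml", handler)  # placeholder file name
print(handler.count)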
0 | 0 | I'm working on a hobby project consisting of a multi-player web browser game.
It is my first and I have just stumbled into the latency issue.
I am trying to make user control as smooth as possible and latency is getting in the way.
I reckon that average latencies might be around 80-200ms and that for virtually-smooth control a command-action delay needs to be less than 100ms.
I have a few questions:
Would it be good practice to try and send user actions 100ms before required? e.g. User keeps the '->' arrow key pressed, I submit the right arrow key action 100ms before action needs to be submitted to a server.
How do developers keep consistency/synchronise between what is happening on the online server and on the client?
Any tips or recommendations?
Thanks guys, help would be very much appreciated. :) | true | 5,094,697 | 1.2 | 0 | 0 | 5 | Question 1) Yes, but if you're doing real time movement like that, I would consider rendering it locally (using collision detection and what not) and then validation on the server to ensure they didn't cheat it (i.e. update the position on the server every second, and make sure they could have gone from A to B in one second, etc.)
Question 2) Every so often (quarter, half, full second) you send a packet with environment updates of what other players did and what npcs did and the like.
Question 3) Develop then profile. Make it the way you want it to logically. Then, if you find the playability is too laggy, work on optimizing the interface and networking layer. You might find it to be just fine! | 0 | 356 | 0 | 4 | 2011-02-23T17:42:00.000 | python,pygame,latency | How should I deal with latency in game development? | 1 | 2 | 2 | 5,094,771 | 0 |
0 | 0 | I'm working on a hobby project consisting of a multi-player web browser game.
It is my first and I have just stumbled into the latency issue.
I am trying to make user control as smooth as possible and latency is getting in the way.
I reckon that average latencies might be around 80-200ms and that for virtually-smooth control a command-action delay needs to be less than 100ms.
I have a few questions:
Would it be good practice to try and send user actions 100ms before required? e.g. User keeps the '->' arrow key pressed, I submit the right arrow key action 100ms before action needs to be submitted to a server.
How do developers keep consistency/synchronise between what is happening on the online server and on the client?
Any tips or recommendations?
Thanks guys, help would be very much appreciated. :) | false | 5,094,697 | 0.099668 | 0 | 0 | 1 | One important thing is to load resources when needed. That is, in most 3D 'moving'-games, load resources as approaching the objects needing them. | 0 | 356 | 0 | 4 | 2011-02-23T17:42:00.000 | python,pygame,latency | How should I deal with latency in game development? | 1 | 2 | 2 | 5,094,815 | 0 |
1 | 0 | Does anybody know how to make an http request from Google App Engine without waiting for a response?
It should be like pushing data over http without the latency of waiting for the response. | false | 5,107,675 | 0.099668 | 0 | 0 | 2 | Use the taskqueue. If you're just pushing data, there's no sense in waiting for the response. | 0 | 1,253 | 1 | 4 | 2011-02-24T16:43:00.000 | python,http,google-app-engine,asynchronous,request | async http request on Google App Engine Python | 1 | 2 | 4 | 5,111,470 | 0
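A sketch of the taskqueue suggestion - the handler enqueues the work and returns immediately, and a separate task handler (the "/tasks/push" URL here is a placeholder) performs the slow outbound call later:

from google.appengine.api import taskqueue

def push_without_waiting(payload):
    # Enqueue instead of calling the remote server inline; the request finishes right away.
    taskqueue.add(url="/tasks/push", params={"payload": payload})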
1 | 0 | Does anybody know how to make an http request from Google App Engine without waiting for a response?
It should be like pushing data over http without the latency of waiting for the response. | false | 5,107,675 | 0 | 0 | 0 | 0 | I've done this before by doing a URLFetch and setting a very low value for the deadline parameter. I put 0.1 as my value, so 100ms. You need to wrap the URLFetch in a try/catch also since the request will time out. | 0 | 1,253 | 1 | 4 | 2011-02-24T16:43:00.000 | python,http,google-app-engine,asynchronous,request | async http request on Google App Engine Python | 1 | 2 | 4 | 5,111,384 | 0
0 | 0 | How can I post an image to Facebook using Python? | false | 5,118,178 | 0.099668 | 0 | 0 | 2 | Unfortunately, the Python SDK is discontinued now. Will not work. Need to use Javascript / PHP / iOS / Android APIs | 0 | 7,965 | 0 | 6 | 2011-02-25T14:08:00.000 | python,http,curl,urllib2,urllib | Post picture to Facebook using Python | 1 | 1 | 4 | 9,482,535 | 0 |
0 | 0 | I need to parse html emails that will be similar but not exactly the same. I will be looking for things like dates, amounts, vendors, etc., but depending on who the email came from, the markup will be different.
How could I parse out those common things from lots of different html markup in python?
Thanks for your suggestions. | false | 5,120,129 | 0.132549 | 0 | 0 | 2 | BeautifulSoup or lxml are decent HTML parsers. BeautifulSoup is a bit more handy but has some odds and ends. | 1 | 6,334 | 0 | 1 | 2011-02-25T16:54:00.000 | python,html,parsing | Python html parsing | 1 | 1 | 3 | 5,120,192 | 0 |
0 | 0 | I need to do a client-server application, the client will be made with python-gtk,
all procedures will be on server-side to free the client of this workload.
So I searched on google for client-server protocols and found that CORBA and RPC are closest to what I had in mind, BUT I also want to make this app ready to accept web and mobile clients, so I found REST and SOAP.
From all that reading I found myself with these doubts: should I implement two different protocols, one for the gtk-client (like RPC or CORBA) and another for web and mobile (REST or SOAP)?
Can I use REST or SOAP for all? | false | 5,124,408 | 0.291313 | 0 | 0 | 3 | Use REST. It's the simplest, and therefore the most widely accessible. If you really find a need for SOAP, RPC, or CORBA later, you can add them then. | 0 | 490 | 0 | 3 | 2011-02-26T01:17:00.000 | python,rest,soap,rpc,corba | What protocol to use in client-server app communication with python? | 1 | 2 | 2 | 5,124,484 | 0