Dataset columns (name, dtype, value/length range):

Column                              Type      Range
Web Development                     int64     0 to 1
Data Science and Machine Learning   int64     0 to 1
Question                            string    length 28 to 6.1k
is_accepted                         bool      2 classes
Q_Id                                int64     337 to 51.9M
Score                               float64   -1 to 1.2
Other                               int64     0 to 1
Database and SQL                    int64     0 to 1
Users Score                         int64     -8 to 412
Answer                              string    length 14 to 7k
Python Basics and Environment       int64     0 to 1
ViewCount                           int64     13 to 1.34M
System Administration and DevOps    int64     0 to 1
Q_Score                             int64     0 to 1.53k
CreationDate                        string    length 23 to 23
Tags                                string    length 6 to 90
Title                               string    length 15 to 149
Networking and APIs                 int64     1 to 1
Available Count                     int64     1 to 12
AnswerCount                         int64     1 to 28
A_Id                                int64     635 to 72.5M
GUI and Desktop Applications        int64     0 to 1
0
0
I am new to OOP and am writing a small tool in Python that checks Bitcoin prices. It uses a JSON feed from the web (a Bitcoin() class), monitors the prices (Monitor()), notifies the user when thresholds are met (Notify()), and for now uses a console interface (Interface()). I have created a Bitcoin() class that can read the prices and volumes from the JSON feed. The __init__ method connects to the web using a socket. Since every instance of this class would result in a new socket, I only need/want one instance of this class running. Is a class still the best way to approach this? What is the best way to get other classes and instances to interact with my Bitcoin() instance? Should I make a Bitcoin() instance global? Pass the instance as an argument to every class that needs it?
false
15,817,484
0
0
0
0
OOP is a tool, not a goal; you can decide for yourself whether to use it. If you use a Python module, you can achieve encapsulation without ever writing "class".
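For illustration, a minimal sketch of the module-as-singleton idea the answer describes (the module, URL, and function names here are hypothetical):

    # bitcoin_feed.py - the module itself acts as the single "instance"
    import json
    import urllib2

    _PRICE_URL = "https://example.com/btc/ticker"  # placeholder endpoint

    def get_price():
        # Every importer shares this one module; no class or global needed
        response = urllib2.urlopen(_PRICE_URL)
        return json.load(response)["price"]

Any other part of the program can then simply do: import bitcoin_feed; bitcoin_feed.get_price().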
1
161
0
0
2013-04-04T17:10:00.000
python,oop
Should I still create a class, if it can only have one instance?
1
1
6
15,817,576
0
1
0
Is Javascript the only language that can utilise the DOM API? Is there a DOM wrapper for Python?
false
15,887,916
0.132549
0
0
2
Internet Explorer has support for client-side VBScript, but nobody really uses it. JavaScript, created by Brendan Eich at Netscape, became the de facto standard and is an implementation of ECMAScript. However, most languages have libraries that can traverse an HTML document on the server side. In Python a common one is called Beautiful Soup.
0
541
0
0
2013-04-08T20:03:00.000
python,browser,scripting,client-side
Clientside Scripting Language
1
1
3
15,887,967
0
0
0
I'm crawling a bunch of web pages using Python's requests library, but occasionally the crawler will stumble upon an absolutely mammoth page, be it a PDF or video or some otherwise gargantuan file. Is there a good way to limit the maximum size of file it will download?
true
15,896,639
1.2
0
0
1
The object returned by urlopen has an info() method which gives all kinds of useful header information, including Content-Length. Occasionally this is not set correctly, but it should be in most cases and will help.
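A minimal sketch of that idea with urllib2 (the cap value is arbitrary, and the answer's caveat applies: Content-Length may be missing or wrong, so there is a hard read cap as well):

    import urllib2

    MAX_BYTES = 10 * 1024 * 1024  # 10 MB cap, an arbitrary threshold

    response = urllib2.urlopen("http://example.com/big-file")
    length = response.info().getheader("Content-Length")  # may be None
    if length is not None and int(length) > MAX_BYTES:
        response.close()  # header says it is too big: skip it
    else:
        data = response.read(MAX_BYTES + 1)  # hard cap even without the header
        if len(data) > MAX_BYTES:
            data = None  # the server lied about (or omitted) the size; discard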
0
535
0
0
2013-04-09T08:11:00.000
python,python-2.7,python-requests
Cap download size with Python requests library
1
1
1
15,896,851
0
0
0
I have searched a lot about building a web service like Google Talk using Google App Engine and Python. The first step for that is to check the online status of a user on Gmail. I found a lot of code for this in Python using an XMPP library, but it works only in plain Python, not on Google App Engine. There is also the suggestion of using the XMPP Python API, but to send a message we have to provide a JID like [email protected] and send the message; we cannot send a message from one email ID to another email ID directly. Now I want to perform OAuth authentication in Python for GTalk at the domain level. Can anyone tell me how to do this?
false
15,898,775
0
1
0
0
I think you are confused. Python runs ON App Engine. Also, there's a working Java XMPP example provided.
0
323
0
0
2013-04-09T09:51:00.000
google-app-engine,python-2.7,google-talk
Gtalk Service On Google App Engine Using Python
1
2
2
15,903,171
0
0
0
I have searched a lot about building a web service like Google Talk using Google App Engine and Python. The first step for that is to check the online status of a user on Gmail. I found a lot of code for this in Python using an XMPP library, but it works only in plain Python, not on Google App Engine. There is also the suggestion of using the XMPP Python API, but to send a message we have to provide a JID like [email protected] and send the message; we cannot send a message from one email ID to another email ID directly. Now I want to perform OAuth authentication in Python for GTalk at the domain level. Can anyone tell me how to do this?
false
15,898,775
0
1
0
0
You can only send messages from your app. There are two options: [email protected] or anything@your_app_id.appspotchat.com. If you want to behave like an arbitrary XMPP client, you'll have to use a third-party XMPP library running over HTTP and handle the authentication with the user's XMPP server.
0
323
0
0
2013-04-09T09:51:00.000
google-app-engine,python-2.7,google-talk
Gtalk Service On Google App Engine Using Python
1
2
2
15,904,726
0
0
0
I'm aware that I can manually code in a proxy with user/password, but is it possible to get Python to just pull the proxy settings AND authentication from IE?
false
15,903,189
0
0
0
0
For simple authentication (basic, digest), yes. Simply make sure your system's proxy settings (e.g., in IE) are in the format: joe:[email protected]:3128. If your proxy uses some other, more complicated form of authentication (e.g., NTLM, Kerberos), this is not so easy.
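A small sketch of how that plays out in code: on Windows, urllib reads the registry/IE proxy settings, and urllib2's default opener routes requests through them (credentials embedded in the setting come along for basic auth):

    import urllib
    import urllib2

    # Shows the proxy settings Python picked up from the system (IE/registry)
    print(urllib.getproxies())

    # The default opener already honours those settings; no manual setup needed
    response = urllib2.urlopen("http://example.com")
    print(response.getcode())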
0
90
0
0
2013-04-09T13:24:00.000
python,proxy
Can I get Python to use an authenticated Proxy without userinput?
1
1
1
15,903,624
0
1
0
I wrote a little script that copies files from a bucket on one S3 account to a bucket in another S3 account. In this script I use the bucket.copy_key() function to copy a key from one bucket into another. I tested it and it works fine, but the question is: do I get charged for copying files from S3 to S3 in the same region? What I'm worried about is that I may have missed something in the boto source code, and I hope it doesn't store the file on my machine and then send it to the other S3 bucket. Also (sorry if it's too many questions in one topic), if I upload and run this script from an EC2 instance, will I get charged for bandwidth?
true
15,956,099
1.2
0
1
3
If you are using the copy_key method in boto then you are doing server-side copying. There is a very small per-request charge for COPY operations, just as there is for all S3 operations, but if you are copying between two buckets in the same region, there are no network transfer charges. This is true whether you run the copy operations on your local machine or on an EC2 instance.
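For reference, a minimal sketch of that server-side copy with boto (bucket names are placeholders; credentials are assumed to be in the environment or boto config):

    import boto

    conn = boto.connect_s3()
    src = conn.get_bucket("source-bucket")
    dst = conn.get_bucket("destination-bucket")

    for key in src.list():
        # copy_key(new_key_name, src_bucket_name, src_key_name) happens
        # entirely on the S3 side; no file data touches this machine
        dst.copy_key(key.name, src.name, key.name)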
0
322
0
1
2013-04-11T18:24:00.000
python,amazon-web-services,amazon-s3,boto,data-transfer
Will I get charge for transfering files between S3 accounts using boto's bucket.copy_key() function?
1
1
1
15,957,021
0
0
0
I'm writing a simple Twitter bot in Python and was wondering if anybody could answer and explain the question for me. I'm able to make Tweets, but I haven't had the bot retweet anyone yet. I'm afraid of tweeting a user's tweet multiple times. I plan to have my bot just run based on Windows Scheduled Tasks, so when the script is run (for example) the 3rd time, how do I get it so the script/bot doesn't retweet a tweet again? To clarify my question: Say that someone tweeted at 5:59pm "#computer". Now my twitter bot is supposed to retweet anything containing #computer. Say that when the bot runs at 6:03pm it finds that tweet and retweets it. But then when the bot runs again at 6:09pm it retweets that same tweet again. How do I make sure that it doesn't retweet duplicates? Should I create a separate text file and add in the IDs of the tweets and read through them every time the bot runs? I haven't been able to find any answers regarding this and don't know an efficient way of checking.
false
15,958,980
0
1
0
0
Twitter is set up such that you can't retweet the same thing more than once. So if your bot gets such a tweet, the API will respond with an Error 403. You can test this policy by reducing the time between runs of the script to about a minute; this will generate the Error 403 as long as the current feed of tweets remains unchanged.
0
2,022
0
0
2013-04-11T21:14:00.000
python,twitter
How do I make sure a twitter bot doesn't retweet the same tweet multiple times?
1
2
4
30,488,072
0
0
0
I'm writing a simple Twitter bot in Python and was wondering if anybody could answer and explain the question for me. I'm able to make Tweets, but I haven't had the bot retweet anyone yet. I'm afraid of tweeting a user's tweet multiple times. I plan to have my bot just run based on Windows Scheduled Tasks, so when the script is run (for example) the 3rd time, how do I get it so the script/bot doesn't retweet a tweet again? To clarify my question: Say that someone tweeted at 5:59pm "#computer". Now my twitter bot is supposed to retweet anything containing #computer. Say that when the bot runs at 6:03pm it finds that tweet and retweets it. But then when the bot runs again at 6:09pm it retweets that same tweet again. How do I make sure that it doesn't retweet duplicates? Should I create a separate text file and add in the IDs of the tweets and read through them every time the bot runs? I haven't been able to find any answers regarding this and don't know an efficient way of checking.
false
15,958,980
0
1
0
0
You should store the timestamp of the latest tweet processed somewhere; that way you won't go through the same tweets twice, and hence won't retweet a tweet twice. This should also make tweet processing faster (because you only process each tweet once).
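A sketch of this idea, using the tweet ID rather than a timestamp since the search API's since_id parameter makes IDs convenient (tweepy is assumed; the auth setup is omitted and the state-file name is a placeholder):

    import tweepy

    SINCE_FILE = "last_tweet_id.txt"

    def load_since_id():
        try:
            return int(open(SINCE_FILE).read())
        except (IOError, ValueError):
            return None  # first run: no state yet

    def save_since_id(tweet_id):
        open(SINCE_FILE, "w").write(str(tweet_id))

    def retweet_new(api):  # api: an authenticated tweepy.API instance
        results = api.search(q="#computer", since_id=load_since_id())
        for tweet in results:
            api.retweet(tweet.id)
        if results:
            save_since_id(max(t.id for t in results))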
0
2,022
0
0
2013-04-11T21:14:00.000
python,twitter
How do I make sure a twitter bot doesn't retweet the same tweet multiple times?
1
2
4
15,959,518
0
0
0
I'm trying to get the current url after a series of navigations in Selenium. I know there's a command called getLocation for ruby, but I can't find the syntax for Python.
false
15,985,339
0.099668
0
0
2
Another way to do it would be to inspect the URL bar in Chrome to find the id of that element, have your WebDriver click it, send the key combination you use to copy and paste (via Selenium's common Keys module), and then print the value out or store it as a variable, etc.
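For completeness, WebDriver also exposes the address directly through the current_url property, which avoids the clipboard round-trip entirely:

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("http://example.com")
    print(driver.current_url)  # the URL after any navigation/redirects
    driver.quit()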
0
291,478
0
227
2013-04-13T07:20:00.000
python,selenium,selenium-webdriver
How do I get current URL in Selenium Webdriver 2 Python?
1
1
4
55,485,019
0
0
0
I have a client (currently in C#, with a Python version in progress) which gathers computer data such as CPU %, disk space, etc., and sends it to a server. I don't know how to handle the case where my client loses its connection to the server. I have to continue collecting information, but where do I store it? Just a buffer? Is using a log file a better solution? Any ideas?
false
16,001,064
0
0
0
0
I'd create a log file on the HDD and put in the last recorded data and time. Then just read it out when needed again.
0
57
0
0
2013-04-14T15:58:00.000
c#,python,buffer
How to store data while the connection is lost?
1
1
1
16,002,014
0
1
0
Is there a way to trace all the calls made by a web page when loading it? Say, for example, I went to a video watching site; I would like to trace all the GET calls recursively until I find an mp4/flv file. I know one way would be to follow the URLs recursively, but this solution is not always suitable and is quite limiting (say there are a few thousand links, or the links are in a file which can't be read). Is there a way to do this? Ideally, the implementation would be in Python, but PHP as well as C is fine too.
false
16,114,358
0
1
0
0
Chrome provides a built-in tool for seeing the network connections. Press Ctrl+Shift+J to open the JavaScript Console. Then open the Network tab to see all of the GET/POST calls.
0
98
0
0
2013-04-19T22:23:00.000
php,python,html,networking
Tracing GET/POST calls
1
1
4
16,115,090
0
0
0
I used PuTTY to connect to a server and ran a Python script available on that server. The script kept printing output to the terminal. Later on, my internet connection dropped, but even then I expected my script to complete its job, since it was running on the server. But when the connection resumed, I found that the script had not done its job. Is this expected? If yes, what should I do to make sure the script keeps running on the server even if the internet connection drops in between? Thanks in advance!
false
16,117,044
0
1
0
0
On the server, you can install tmux or screen. These programs keep your program running in the background and let you reopen a 'window' onto it. With tmux:
Open tmux: tmux
Detach (leave it running in the background): press Ctrl-b d
Reattach (reopen the 'window'): tmux attach
0
289
1
0
2013-04-20T05:37:00.000
python,shell,python-2.7,putty
to keep the script running even after internet connection goes off
1
1
2
16,117,169
0
0
0
I need to use Python with Java in a project in which graphs (the kind with nodes and edges) play a large role. I want to visualize those graphs in a simple GUI and update their node labels/edge weights/whatever every second or so. I also want to load graphs from files in GraphML form. NetworkX is advised by many people, but it doesn't seem to work with Jython; is that correct? When I try, I get a SyntaxError: 'import *' not allowed with 'from .' error from inside the NetworkX egg. Even if it works, I would need NumPy and matplotlib to work too, and I'm not sure those work with Jython. So firstly, could you help me with solving these NetworkX issues? Secondly, are there alternatives to NetworkX that you could recommend for my purposes?
false
16,159,985
0
0
0
0
Jython is the Python language spec implemented inside the JVM, much like JRuby. NetworkX itself is pure Python, but NumPy/SciPy are C (and Fortran) based (great packages for scientific computing), and matplotlib (for displaying the graphs) also relies on C extensions. NetworkX will help create graphs and matplotlib will help display them, but the C-based dependencies will not work in Jython. If you need C-based resources, try JPype; it's older (Python 2.7) but allows some interoperability between CPython and Java using JNI (Java Native Interface). What I have done is create graphs in Python and then switch over to Gephi to visualize and display them. Gephi is Java based and an up-and-coming free tool.
0
1,298
0
1
2013-04-23T02:39:00.000
java,python,graph,jython,networkx
Jython graph library
1
1
1
16,269,881
0
0
0
I'm missing something with protobuffers. Here are some questions that I'm having difficulty answering. Is a .proto file enough to get all the data out? In the address book example on the site, they seem to write the data out to a file and consume it from a file (separate from the .proto file itself). I understand that proto serializes the object structure and I know that it can serialize a message; however, I'm having a hard time seeing where to put the data and retrieve it within one self-contained .proto file. If the question above is answered as I think it will be, my assumption is that one team can create the proto file and serialize the data with Java, and another team can simply take the file and use Python to deserialize it. Is that a correct assumption?
false
16,161,992
0.291313
0
0
3
"Is a .proto file enough to get all the data out?" The proto file is used to define the structure of the message; each field is given a tag number. As long as you have the right proto file, the data can be deserialized correctly. Yes, the proto file will suffice. "One team can create the proto file and serialize the data with Java and another team can simply take the file and use Python to deserialize it; is that a correct assumption?" One team can create the structure needed to define the data being sent/received, and others can use that definition to communicate. As long as both teams use the same .proto file and the tag numbers are assigned correctly, you should have no trouble doing what you're asking.
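As a sketch of the Python side, assuming the addressbook.proto from the official tutorial has been compiled with protoc --python_out=. addressbook.proto (the file names are placeholders):

    import addressbook_pb2

    # Serialize: a Java program compiled from the same .proto can read this file
    person = addressbook_pb2.Person()
    person.id = 1234
    person.name = "John Doe"
    with open("person.bin", "wb") as f:
        f.write(person.SerializeToString())

    # Deserialize: e.g., data that the other team produced from Java
    other = addressbook_pb2.Person()
    with open("person.bin", "rb") as f:
        other.ParseFromString(f.read())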
0
165
0
2
2013-04-23T05:57:00.000
java,python,protocol-buffers
Please help me understand protocol buffers
1
2
2
16,162,045
0
0
0
I'm missing something with protobuffers. Here are some questions that I'm having difficulty answering. Is a .proto file enough to get all the data out? In the address book example on the site, they seem to write the data out to a file and consume it from a file (separate from the .proto file itself). I understand that proto serializes the object structure and I know that it can serialize a message; however, I'm having a hard time seeing where to put the data and retrieve it within one self-contained .proto file. If the question above is answered as I think it will be, my assumption is that one team can create the proto file and serialize the data with Java, and another team can simply take the file and use Python to deserialize it. Is that a correct assumption?
false
16,161,992
0
0
0
0
Think of it this way:
client-side Java code -> proto encoder -> bytes ===== network ===== bytes -> proto decoder -> server-side Java code
Instead of the network, the bytes may be written to a file by one side and read from it by the other; whichever. In order for the proto encoder and proto decoder to do their job, they need to understand the format of the bytes coming in, and the proto file describes that format. It's somewhat analogous to sending an image over the network: both sides need to know it's a PNG file. If one side sends PNG and the other tries to decode JPG, things won't work. The same way JPG vs PNG describe the image format, proto files describe the data format.
0
165
0
2
2013-04-23T05:57:00.000
java,python,protocol-buffers
Please help me understand protocol buffers
1
2
2
16,162,532
0
1
0
I have a script (using Python) that submits to a form on www.example.com/form/info.php. Currently my script will:
- open Firefox
- enter name, age, address
- press submit
What I want to do is have a web form (with name, age, address) on LAMP, and when the user presses submit it adds those values to the Selenium script (to be put into www.example.com/form/info.php) and submits it directly in the browser. Is this possible? UPDATE: I know this is possible using mechanize, because I have tested it out, but it doesn't do so well with JavaScript, which is why I am using Selenium.
false
16,219,178
0
0
0
0
The script has to run first and create the browser session. Currently AFAIK there is no way to take webdriver control of a browser that is already open.
0
114
0
0
2013-04-25T15:43:00.000
java,python,selenium
Is it possible to run selenium script from web browser?
1
1
1
16,219,793
0
0
0
I wrote a web crawler to crawl product information from www.amazon.com using urllib2, but it seems that Amazon limits the number of connections per IP to 1. When I start more than one thread to crawl simultaneously, it raises HTTP Error 503: Service Temporarily Unavailable. I want to start more threads to crawl faster, so how can I fix this error?
false
16,264,992
0
0
0
0
You should probably switch to use the Amazon API for product queries.
0
201
0
1
2013-04-28T16:26:00.000
python,http,network-programming
how to crawl web pages fast when the number of connection is limited
1
2
3
16,265,073
0
0
0
I wrote a web crawler to crawl product information from www.amazon.com using urllib2, but it seems that Amazon limits the number of connections per IP to 1. When I start more than one thread to crawl simultaneously, it raises HTTP Error 503: Service Temporarily Unavailable. I want to start more threads to crawl faster, so how can I fix this error?
false
16,264,992
0.066568
0
0
1
Short version: you can't, and it would be a bad idea to even try.
0
201
0
1
2013-04-28T16:26:00.000
python,http,network-programming
how to crawl web pages fast when the number of connection is limited
1
2
3
16,265,020
0
1
0
I have an endpoint that must send an image in the response. The original image is a file on the server that I open with Python (open().read()) and save in the NDB as a BlobProperty (ndb.BlobProperty()). My ProtoRPC message is a BytesField. If I go to the APIs Explorer the picture comes back with the correct value, but it doesn't work in my JS client. I've been trying to just read the file and encode and decode base64, but the JS is still not recognizing it. Does anyone have an idea how to solve it? How can I send the base64 image via Endpoints? Thank you!
true
16,273,227
1.2
0
0
1
The way it finally worked was to just open the file with open().read() and save it in the NDB. The response message was a BytesField, just sending the string from open().read() without any encoding. The console in my browser was not showing the value of the field in the response, but it works normally in my app.
0
406
0
1
2013-04-29T07:22:00.000
google-app-engine,python-2.7,base64,google-cloud-endpoints,protorpc
Send image as base64 via Google Endpoints
1
1
1
16,276,321
0
0
0
I'm using the Python library HSAudioTag, and I'm trying to read the track number in my files, however, without fail, the file returns 0 as the track number, even if it's much higher. Does anybody have any idea how to fix this? Thanks.
true
16,288,787
1.2
1
0
0
The solution was to go into the code and change the following lines:
Line 118: self.track = u''
Lines 149-152: self.track = int(self._fields.get(TRACK, u'')) + 1
0
87
0
0
2013-04-29T21:45:00.000
python
Python HSAudioTag for WMA files always returns 0?
1
1
1
21,212,477
0
0
0
There is a node that I SSH into, where I start a script remotely via Robot Framework (SSHLibrary.Start Command or Execute Command). This remote script starts a telnet connection to another node which is hidden from the outside. This telnet call seems to be a blocking event for Robot. I use RIDE for test execution and it simply stops working; I can send stop signals, but to no effect. Is it possible to spawn telnet within ssh?
false
16,298,022
0.132549
1
0
2
We haven't used this method with telnet exactly, but with another SSH session and other shells that we cannot access otherwise... Open an SSH connection to the first machine. On this connection, use SSHLibrary keywords like Set Prompt, Write and Read, or Read Until Prompt to manually open a telnet connection to the next machine. The Write and Read keywords can be used a bit like expect and spawn...
0
2,735
0
4
2013-04-30T10:47:00.000
python,testing,ssh,telnet,robotframework
Is there a way to use telnet within an ssh connection in Robot Framework?
1
1
3
16,315,259
0
0
0
Everything is in the title! Is there a way to define the download directory for selenium-chromedriver used with Python? In spite of much research, I haven't found anything conclusive... As a newbie, I've seen many things about "the desired_capabilities" or "the options" for chromedriver, but nothing has resolved my problem (and I still don't know if it will!). To explain my issue a little more: I have a lot of URLs to scan (200,000) and for each URL a file to download. I have to create a table with the URL, the information I scraped from it, AND the name of the file I just downloaded for each web page. With the volume I have to process, I've created threads that open multiple instances of chromedriver to speed up the processing. The problem is that every downloaded file arrives in the same default directory and I'm no longer able to link a file to a URL... So, the idea is to create a download directory for every thread and manage them one by one. If someone has the answer to the question in the title, OR a workaround to identify the downloaded file and link it to the current URL, I will be grateful!
false
16,328,801
0.066568
0
0
1
I faced the same issue recently. I tried a lot of solutions found on the Internet; none helped. So finally I came to this:
1. Launch Chrome with an empty user-data-dir (in the /tmp folder) to let Chrome initialize it.
2. Quit Chrome.
3. Modify Default/Preferences in the newly created user-data-dir, adding these fields to the root object (just an example):
"download": {
    "default_directory": "/tmp/tmpX7EADC.downloads",
    "directory_upgrade": true
}
4. Launch Chrome again with the same user-data-dir.
Now it works just fine. Another tip: if you don't know the name of the file that is going to be downloaded, take a snapshot (list of files) of the downloads directory, then download the file and find its name by comparing the snapshot with the current list of files in the downloads directory.
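In more recent Selenium/chromedriver releases the same preference can be set directly from Python via ChromeOptions, which sidesteps the manual user-data-dir dance (the directory path is a placeholder; use one directory per thread):

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_experimental_option("prefs", {
        "download.default_directory": "/tmp/worker-1-downloads",
        "download.directory_upgrade": True,
    })
    driver = webdriver.Chrome(chrome_options=options)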
0
9,773
0
8
2013-05-02T01:03:00.000
python,selenium,selenium-chromedriver
Define download directory for chromedriver selenium with python
1
1
3
17,212,203
0
1
0
I have a python script that pings 12 pages on someExampleSite.com every 3 minutes. It's been working for a couple months but today I started receiving 404 errors for 6 of the pages every time it runs. So I tried going to those urls on the pc that the script is running on and they load fine in Chrome and Safari. I've also tried changing the user agent string the script is using and that also didn't change anything. Also I tried removing the ['If-Modified-Since'] header which also didn't change anything. Why would the server be sending my script a 404 for these 6 pages but on that same computer I can load them in Chrome and Safari just fine? (I made sure to do a hard refresh in Chrome and Safari and they still loaded) I'm using urllib2 to make the request.
false
16,341,001
0
0
0
0
So I figured out what the problem was. The website is returning an erroneous response code for these 6 pages: even though it's returning a 404, it's also returning the web page. Chrome and Safari seem to ignore the response code and display the page anyway, while my script aborts on the 404.
0
52
0
0
2013-05-02T14:42:00.000
python,macos
Troubleshooting 404 received by python script
1
1
2
16,342,227
0
0
0
I have a number of clients who need to connect to a server and maintain the connection for some time (around 4 hours). I don't want to specify a different connection port for each client (as there are potentially many of them) I would like them just to be able to connect to the server on a specific predetermined port e.g., 10800 and have the server accept and maintain the connection but still be able to receive other connections from new clients. Is there a way to do this in Python or do I need to re-think the architecture. EXTRA CREDIT: A Python snippet of the server code doing this would be amazing!
false
16,351,298
0
0
0
0
Use select.select() to detect events on multiple sockets, like incoming connections, incoming data, outgoing buffer capacity, and connection errors. You can use this on multiple listening sockets and on established connections from a single thread. With a web search, you can surely find example code.
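A minimal sketch of a single-threaded select()-based server on the asker's port 10800 (the echo action is a placeholder for real message handling):

    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", 10800))
    server.listen(5)

    sockets = [server]
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            if s is server:
                conn, addr = server.accept()  # new client, same port
                sockets.append(conn)
            else:
                data = s.recv(4096)
                if not data:                  # client disconnected
                    sockets.remove(s)
                    s.close()
                else:
                    s.sendall(data)           # echo back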
0
1,496
1
0
2013-05-03T03:50:00.000
python,networking,tcp,client-server
How do I receive and manage multiple TCP connections on the same port?
1
2
2
16,352,065
0
0
0
I have a number of clients who need to connect to a server and maintain the connection for some time (around 4 hours). I don't want to specify a different connection port for each client (as there are potentially many of them) I would like them just to be able to connect to the server on a specific predetermined port e.g., 10800 and have the server accept and maintain the connection but still be able to receive other connections from new clients. Is there a way to do this in Python or do I need to re-think the architecture. EXTRA CREDIT: A Python snippet of the server code doing this would be amazing!
true
16,351,298
1.2
0
0
1
I don't want to specify a different connection port for each client (as there are potentially many of them) You don't need that. I would like them just to be able to connect to the server on a specific predetermined port e.g., 10800 and have the server accept and maintain the connection but still be able to receive other connections from new clients That's how TCP already works. Just create a socket listening to port 10800 and accept connections from it.
0
1,496
1
0
2013-05-03T03:50:00.000
python,networking,tcp,client-server
How do I receive and manage multiple TCP connections on the same port?
1
2
2
16,354,390
0
0
0
I'm parsing an XML file in which I get basic expressions (like id*10+2). What I am trying to do is evaluate the expression to actually get the value. To do so, I use the eval() method, which works very well. The only thing is that the numbers are in fact hexadecimal numbers. The eval() method would work well if every hex number were prefixed with '0x', but I could not find a way to do that, nor could I find a similar question here. How would it be done in a clean way?
false
16,354,980
0
0
0
0
Be careful with eval! Never use it on untrusted input. If it's just simple arithmetic, I'd use a custom parser (there are tons of examples out in the wild)... And using parser generators (flex/bison, ANTLR, etc.) is a skill that is useful and easily forgotten, so it could be a good chance to refresh or learn it.
1
1,913
0
3
2013-05-03T08:53:00.000
python,parsing
Appending '0x' before the hex numbers in a string
1
1
4
16,356,153
0
1
0
Currently I'm writing software for web automation using Selenium and AutoIt. I've found a strange issue: for some pages, when printing to PDF with Firefox, I get unsearchable PDFs. I've tried FF 3.5, 4.0, 20, 22, 23; all have the same issue. You can reproduce it by printing any LinkedIn profile: you'll get an unsearchable PDF. Did anyone encounter the same behaviour? How can I bypass it (using Python, Selenium)? I've tried the Chrome driver, but it's incredibly slow. I'm running Windows 7 x64 Ultimate. It does not depend on the printer used; I have tried a lot of different versions. By searchable I mean that I should be able to search text in it like in most PDF files. Update: I still don't understand why it happens. I've tried printing the same web page from IE 9; it gives exactly the same print dialog as Firefox and uses the same PDF printer driver. Nevertheless, it produces searchable PDFs. I guess the problem is related to the way Firefox prints documents.
true
16,359,326
1.2
0
0
1
Firefox does not control the way your content is being printed to the PDF. Your PDF Printer Driver is responsible for creating the PDF file as a Bitmap snapshot of your page, instead of composing it from the elements in your page. The reason that you find a different behavior in Chrome compared to Firefox, is that Chrome has a built in "Save as PDF" which is different from your installed PDF drivers. So it really comes down to what PDF Printer Driver you are using.
0
338
0
0
2013-05-03T12:48:00.000
python,firefox,pdf,selenium,automation
Firefox produces unsearchable pdfs
1
1
1
16,444,728
0
1
0
This is part of a project I am working on for work. I want to automate a SharePoint site, specifically to pull data out of a database that I and my coworkers only have front-end access to. I FINALLY managed to get mechanize (in Python) to accomplish this using Python-NTLM, and by patching part of its source code to fix a recurring error. Now I am at what I would hope is my final roadblock: part of the form I need to submit seems to be the output of a JavaScript function :| and lo and behold... mechanize does not support JavaScript. I don't want to emulate the JavaScript functionality myself in Python because I would ideally like a reusable solution... So, does anyone know how I could evaluate the JavaScript in the local HTML I download from SharePoint? I just want to run the JavaScript somehow (to complete the loading of the page), but without a browser. I have already looked into Selenium, but it's pretty slow for the amount of work I need to get done... I am currently looking into PyV8 to try and evaluate the JavaScript myself... but surely there must be an app or library (or anything) that can do this??
false
16,375,251
0.197375
0
0
2
Well, in the end I came down to the following possible solutions:
- Run Chrome headless and collect the HTML output (thanks to koenp for the link!)
- Run PhantomJS, a headless browser with a JavaScript API
- Run HTMLUnit; same thing, but for Java
- Use Ghost.py, a Python-based headless browser (that I haven't seen suggested anywhere, for some reason!)
- Write a DOM-based JavaScript interpreter based on PyV8 (Google's V8 JavaScript engine) and add this to my current "half-solution" with mechanize
For now, I have decided to use either Ghost.py or my own modification of the PySide/PyQt WebKit (which is how Ghost works) to evaluate the JavaScript, as apparently they can run quite fast if you optimize them to not download images and disable the GUI. Hopefully others will find this list useful!
0
1,441
0
1
2013-05-04T14:16:00.000
javascript,python,html,screen-scraping,eval
Evaluate javascript on a local html file (without browser)
1
1
2
16,385,053
0
1
0
On the client side it is possible to close the socket with connection.close(), but is it possible to close it from the server side?
false
16,390,603
0
0
0
0
Since the socket exists on the client, you would have to send a close command from the server to the client.
0
115
0
0
2013-05-06T00:42:00.000
python,google-app-engine,channel-api
Close GAE channel from server side Python
1
1
1
16,394,921
0
1
0
I am trying to log in to a forum using Python/urllib2, but I can't seem to succeed. I think it might be because there are several form objects in the login page and I submit the incorrect one (the same code worked for a different forum with a single form). Is there a way to specify which form to submit in urllib2? Thanks.
false
16,392,113
0
0
0
0
9000 said: "I'd try to sniff/track a real exchange between the browser and the site; both Chrome and FF have tools for that. I'd also consider using mechanize instead of raw urllib2." This is the answer: mechanize is really easy to use and supports multiple forms. Thanks!
0
98
0
0
2013-05-06T04:46:00.000
python,web
python urllib2 - login and specify a form in page
1
1
2
16,397,349
0
0
0
I'm working with sockets and asynchronous event-driven programming. I would like to send a message and, once I receive a response, send another message. But I may be doing something besides listening; that is, I want to be interrupted when socket.recv() actually receives a message. Question 1: How can I let layer 3 interrupt layer 4? i.e., how can I handle the event of a non-null-returning socket.recv() without dedicating "program time" to waiting for a specific time to listen for incoming messages?
false
16,392,909
0
0
0
0
In asynchronous programming you don't interrupt an operation triggered by a message. All operations should be short and fast so that you can process lots of messages per second. This way every operation is atomic and you don't suffer race conditions so easily. If you need to do more complex processing in parallel, you can hand those problems over to a helper thread. Libraries like Twisted are prepared for such use cases.
0
110
0
0
2013-05-06T06:09:00.000
python,sockets,python-2.7,event-handling
Python: Interrupting sender with incoming messages
1
1
2
16,399,317
0
1
0
I'm using Python to gather some information, construct a very simple html page, save it locally and display the page in my browser using webbrowser.open('file:///c:/testfile.html'). I check for new information every minute. If the information changes, I rewrite the local html file and would like to reload the displayed page. The problem is that webbrowser.open opens a new tab in my browser every time I run it. How do I refresh the page rather than reopen it? I tried new=0, new=1 and new=2, but all do the same thing. Using controller() doesn't work any better. I suppose I could add something like < META HTTP-EQUIV="refresh" CONTENT="60" > to the < head > section of the html page to trigger a refresh every minute whether or not the content changed, but would prefer finding a better way. Exact time interval is not important. Python 2.7.2, chrome 26.0.1410.64 m, Windows 7 64.
false
16,399,355
0
0
0
0
The LivePage extension for Chrome. You can write to a file, then LivePage will monitor it for you. You can also optionally refresh on imported content like CSS. Chrome will require that you grant permissions on local file:// urls. (I'm unaffiliated with the project.)
0
66,470
0
11
2013-05-06T13:01:00.000
python,html,refresh
Refresh a local web page using Python
1
1
8
52,434,003
0
0
0
I'm trying to get a Python script working, that is using xml.dom (not xml.dom.minidom). I'm working on a new install of Python 2.7 for Windows, and xml.dom doesn't have a DOMImplementation or a DocumentType. Is there some additional module I need to install?
true
16,406,102
1.2
0
0
0
Sorry to bother everybody. On further examination, it turns out that the code I was trying to get working simply doesn't work. In essence, it was an attempt to write a SOAP client using Python's xml.dom and httplib modules - and it was simply done wrong. I'm scrapping his code, and writing a proper SOAP client.
0
452
0
0
2013-05-06T19:48:00.000
python,python-2.7
from xml.dom import DOMImplementation, DocumentType
1
1
2
16,408,496
0
0
0
I'm trying to run Selenium's Firefox webdriver and am getting the error below. I can see that the response does not have a sessionId - the offending line is self.session_id = response['sessionId'] - but I don't know why. I've run this in the following ways and get the same error: Cygwin running nosetests, Cygwin directly, Windows running nosetests, Windows directly.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\dev\tools\cygwin\home\207013288\dev\projects\scorpion\test\unit\test_approve_workflows.py", line 27, in test_login
    'password', userid='207013288', test=True)
  File "C:\dev\tools\cygwin\home\207013288\dev\projects\scorpion\src\workflows.py", line 20, in login
    browser = webdriver.Firefox()
  File "C:\dev\sdks\Python33\lib\site-packages\selenium-2.32.0-py3.3.egg\selenium\webdriver\firefox\webdriver.py", line 62, in __init__
    desired_capabilities=capabilities)
  File "C:\dev\sdks\Python33\lib\site-packages\selenium-2.32.0-py3.3.egg\selenium\webdriver\remote\webdriver.py", line 72, in __init__
    self.start_session(desired_capabilities, browser_profile)
  File "C:\dev\sdks\Python33\lib\site-packages\selenium-2.32.0-py3.3.egg\selenium\webdriver\remote\webdriver.py", line 116, in start_session
    self.session_id = response['sessionId']
nose.proxy.KeyError: 'sessionId'
-------------------- >> begin captured logging << --------------------
selenium.webdriver.remote.remote_connection: DEBUG: POST http://127.0.0.1:63801/hub/session {"sessionId": null, "desiredCapabilities": {"version": "", "browserName": "firefox", "platform": "ANY", "javascriptEnabled": true}}
--------------------- >> end captured logging << ---------------------
I haven't used Selenium before and I'm not sure where to go from here.
false
16,448,772
0
0
0
0
Mac OS X solution: I'm using Python 2.7, Firefox 48.0.2, and Chrome Version 57.0.2987.98 (64-bit). The error at self.session_id = response['sessionId'] was solved for me by going to System Preferences -> Network -> Advanced (in the Wi-Fi tab) -> Proxies -> turning "Automatic Proxy Detection" on. After changing this, the error no longer occurred.
0
6,072
0
4
2013-05-08T19:32:00.000
python,windows,firefox,selenium,cygwin
Why doesn't Selenium's response have a sessionId?
1
1
2
42,851,688
0
0
0
I am not sure if this question belongs here, as it may be a little too broad. If so, I apologize. Anyway, I am planning to start a project in Python and I am trying to figure out how best to implement it, or whether it is even possible in any practical way. The system will consist of several "nodes" that are essentially Python scripts that translate other protocols, for talking to different kinds of hardware related to I/O, relays to control stuff, inputs to measure things, RFID readers, etc., into a common protocol for my system. I am no programming or network expert, but this part I can handle; I have a module from an old alarm system that uses RS-485 that I can successfully control and read. I want to get the nodes talking to each other over the network so I can distribute them to different locations (on the same subnet for now). The obvious way would be to use a server that they all connect to, so they can be polled and receive orders to flip outputs or do something else. This should not be too hard using Twisted or something like it. The problem is that if this server for some reason stops working, everything else does too. I guess what I would like is some kind of serverless communication that has no single point of failure besides the network itself. Message brokers all seem to require some kind of server, and I cannot really find anything else that seems suitable for this. All nodes must know the status of all other nodes, as I will need to make functions based on the status of things connected to other nodes, such as: do not open this door if that door is already open. Maybe this could be done by multicast or broadcast, but that seems a bit insecure and just not right. One way I thought of could be to somehow appoint one of the nodes to accept connections from the other nodes and act as a message router, and arrange some kind of backup so that if this node crashes or goes away, another predetermined node takes over and the other nodes connect to it instead. This seems complicated and I am not sure it is any better than just using a message broker. As I said, I am not sure this is an appropriate question here, but if anyone could give me a hint as to how this could be done, or point me to something that does something similar that I can study, I would be grateful. If I am being stupid, please let me know that too :)
true
16,452,913
1.2
1
0
1
There are messaging systems that don't require a central message broker. You might start by looking at ZeroMQ.
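A tiny sketch of the brokerless pattern with pyzmq: every node binds a PUB socket for its own status and connects a SUB socket straight to its peers, so there is no central broker to fail (the port and peer hostname are placeholders):

    import zmq

    context = zmq.Context()

    # Publish this node's status to anyone who cares
    pub = context.socket(zmq.PUB)
    pub.bind("tcp://*:5556")

    # Subscribe directly to another node; repeat per peer
    sub = context.socket(zmq.SUB)
    sub.connect("tcp://other-node:5556")
    sub.setsockopt(zmq.SUBSCRIBE, b"")  # no topic filter

    pub.send_string("door1 open")
    print(sub.recv_string())  # blocks until a peer publishes something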
0
438
0
3
2013-05-09T01:29:00.000
python,networking
Serverless communication between network nodes in python
1
1
2
16,453,432
0
0
0
In Python I have been playing around with server sockets that listen for messages, and client socket connections that send the data to the server. Am I correct in thinking that the server/client Python programs that utilize the socket module span layers 5 (session), 6 (presentation) and 7 (application)? I think of the Python code that utilizes sockets as presenting data, managing sessions and creating sockets using transport protocols such as TCP or UDP. Is my understanding/thinking correct? Thank you.
true
16,490,795
1.2
0
0
4
Yes, that's correct. Note that the OSI model is not commonly used in practice. More typically, you see the Internet Reference model, which compresses the OSI layers 5, 6, and 7 into a single layer called the Application Layer.
0
1,025
0
0
2013-05-10T21:02:00.000
python,sockets,osi
Python program sending data via sockets, what OSI layers?
1
1
1
16,491,016
0
1
0
I am currently using Selenium to run instances of Chrome to test web pages. Each time my script runs, a clean instance of Chrome starts up (clean of extensions, bookmarks, browsing history, etc). I was wondering if it's possible to run my script with Chrome extensions. I've tried searching for a Python example, but nothing came up when I googled this.
false
16,511,384
0.033321
0
0
1
I also needed to add an extension to Chrome while using Selenium. What I did was first open the browser using Selenium, then add the extension to the browser in the normal way, just as you would in Google Chrome.
0
52,452
0
24
2013-05-12T19:49:00.000
python,google-chrome,selenium,selenium-webdriver
Using Extensions with Selenium (Python)
1
1
6
61,971,091
0
0
0
Okay, please don't kill me for asking this. I'm currently developing a 2D online multiplayer platformer shooter. Yeah, it's that cool. I have most of the game written, with a couple of bugs and unoptimized parts, but I'm stuck when it comes to networking. I used Pygame, and so I tried a bunch of Python libraries for networking. You name it, I think I've looked at all the primary ones. Here are some:
PyEnet - thought it had internal congestion control, ugh
MasterMind - not asynchronous
PodSixNet - is this even UDP?
Legume - currently stuck with the server giving me an exception, waiting for a response on the mailing list. Looks absolutely gorgeous otherwise.
Can't remember all the other ones I tried. Anyway, what I need is UDP (trust me, I need UDP) and another reliable protocol for chat, the master server, new player info, and all packets I can't afford to lose. I read somewhere that TCP and UDP used simultaneously wasn't a good idea, so I tried finding reliable UDP implementations in Python, hence all my wandering about with these obscure libraries. Along the way I've learned to fool around with sockets myself, so I have two clear paths.
1) When people asked whether UDP and TCP together were a bad idea, maybe they meant using the same port for both protocols. How bad is it if I use two different ports? The TCP part will be idle most of the time anyway, maybe 0-20 packets per 10 seconds for a busy server.
2) Write my own reliable UDP. Ugh, it's what I was hiding from. If all else fails, I guess I'll need to do that.
true
16,513,269
1.2
0
0
-1
In short, yes. I use Python/Scapy to test network equipment all the time. I am assuming you will be using threads for the two separate communication channels. If your CPU can handle it, there is no reason why you cannot do this, and of course the amount of traffic generated by network games is usually not enough to significantly utilize modern-day CPUs.
0
1,137
0
1
2013-05-13T00:06:00.000
python,networking,udp
Python networking with UDP for action games
1
1
1
16,514,349
0
0
0
I would like to use the requests library to make two HTTP requests within a session. However, I don't know whether the IP addresses will be the same for both HTTP requests (within that session). Can you tell me whether it will use the same IP (important for me) or whether Tor will use two different exit nodes?
true
16,519,231
1.2
0
0
1
If I remember, Tor changes routes after some pre-specified number of minutes. I'm tempted to say that the time period is 10 minutes, but I'm not entirely certain. Either way, so long as those two requests are made within that time-frame they'll have the same IP address.
0
180
0
0
2013-05-13T09:53:00.000
python,http,python-2.6,python-requests,tor
Make 2 HTTP-requests in a session through Tor with the same exit-node
1
1
1
16,529,177
0
1
0
Is there an API call for opening, reading and writing to text files on an Android device using SL4A and Python? If not, what options are available for persistent storage? e.g. database, preferences, dropbox, google drive.
true
16,522,493
1.2
0
0
0
Ok, so I was silly to look for a droid command to write files. The standard Python open, write, read, close commands work fine.
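For the record, a two-line illustration of that (the path is a placeholder; SL4A scripts typically live somewhere under /sdcard):

    # Plain Python file I/O works in SL4A just as on the desktop
    with open("/sdcard/sl4a/notes.txt", "w") as f:
        f.write("persistent data")

    with open("/sdcard/sl4a/notes.txt") as f:
        print(f.read())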
0
1,062
0
1
2013-05-13T12:49:00.000
android,python,sl4a
Writing to file with SL4A Python
1
1
1
16,633,755
0
1
0
I'm working on a Python project, currently using Django, which does quite a bit of NLP work in a form post process. I'm using the NLTK package, and profiling my code and experimenting I've realised that the majority of the time the code takes is performing the import process of NLTK and various other packages. My question is, is there a way I can have this server start up, do these imports and then just wait for requests, passing them to a function that uses the already imported packages? This would be much faster and less wasteful than performing such imports on every request. If anybody has any ideas to avoid importing large packages on every request, it'd be great if you could help me out! Thanks, Callum
true
16,532,314
1.2
0
0
3
Django, under most deployment mechanism, does not import modules for every request. Even the development server only reloads code when it changes. I don't know how you're verifying that all the imports are re-run each time, but that certainly shouldn't be happening.
0
216
0
1
2013-05-13T22:39:00.000
python,django,import,request,nlp
Python - Can a web server avoid imporing for every request?
1
1
1
16,538,939
0
0
0
I want to check if a certain tweet is a reply to the tweet that I sent. Here is how I think I can do it: Step1: Post a tweet and store id of posted tweet Step2: Listen to my handle and collect all the tweets that have my handle in it Step3: Use tweet.in_reply_to_status_id to see if tweet is reply to the stored id In this logic, I am not sure how to get the status id of the tweet that I am posting in step 1. Is there a way I can get it? If not, is there another way in which I can solve this problem?
false
16,574,746
0
1
0
0
What one could do is get the last n tweets from a user and then get the tweet.id of the relevant tweet. This can be done with: latestTweets = api.user_timeline(screen_name = 'user', count = n, include_rts = False). I doubt, however, that it is the most efficient way.
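A sketch of that approach, plus a note: in tweepy, api.update_status() itself returns a Status object, so the ID of the tweet you just posted can be captured directly without the timeline lookup:

    import tweepy

    # api: an authenticated tweepy.API instance (auth setup omitted)
    status = api.update_status("hello world")
    posted_id = status.id  # step 1 of the asker's plan, no extra request

    # The timeline-based variant described in the answer
    latest = api.user_timeline(screen_name="user", count=1, include_rts=False)
    if latest:
        posted_id = latest[0].id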
0
1,649
0
1
2013-05-15T20:51:00.000
python,twitter,tweepy
How to get id of the tweet posted in tweepy
1
1
2
16,589,445
0
1
0
So I've been looking around trying to figure out how I could extract some specific data, such as just the text, and push that data into a program that organizes it. Take homedepot.com for example: say you wanted to extract from each item listed under "2x4 wood", and from each item you needed to grab the name, the description, and the specifications, and import that data into a piece of software that contains this data. So I guess that would be something like automated data entry? From what I've researched, I'd need to write a crawler program that is designed to search a specific term and then crawl each and every page that the result returns and grab the data that I need. However, I have a bit of a problem: I don't really know any programming/scripting and am unsure where to start. I found something called Scrapy, which is based on Python. Is this what I want to use for the crawler? The next issue I have is the fact that I have no clue how to import the gathered data into the software. Any tips on where I should look to find this answer? I want to use this idea to help me learn how to script.
true
16,578,545
1.2
0
0
0
Well, you should probably start by learning the language in general; it would make this a lot easier to do. For the web stuff you can use modules called urllib and urllib2; these can fetch data without actually opening a browser window. There are also automated web browsers like Selenium, which actually opens the window; there are many others you can look through on the internet, but that's only the browser automation part. Then you have to actually obtain the information and data you want. For this you need something like Scrapy, as you said, or BeautifulSoup; these go through the source code and pick out the information you want. Since I don't know exactly what you want, it's hard to explain further, but I hope this gives you somewhere to start. Like I said, you should probably learn basic Python first; that would help a lot. I hope this helps!
0
225
0
0
2013-05-16T03:30:00.000
python,web-crawler
Need to extract data from website and push to a program
1
1
2
16,578,621
0
0
0
I wanted to use urllib2 with Python 3, but I don't think it's available under that name. I use urllib.request; is there another way to use urllib2?
false
16,634,773
0
0
0
0
The urllib2 module has been split across several modules in Python 3, named urllib.request and urllib.error. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.
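For instance, a typical urllib2 call maps over like this in Python 3:

    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError

    try:
        html = urlopen("http://example.com").read()
    except HTTPError as e:
        print(e.code)    # was urllib2.HTTPError
    except URLError as e:
        print(e.reason)  # was urllib2.URLError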
0
4,670
0
2
2013-05-19T12:44:00.000
python,web
Using urllib2 for Python3
1
1
3
16,634,850
0
1
0
Unfortunately I am a newbie with BeautifulSoup and urllib, so I might not even be asking correctly what I need. There is a website, www.example.com. I need to extract some data from this website, which displays a random message. The problem is the message is displayed only after the user presses a button; otherwise it shows a general message like "press the button to see the message". After searching Stack Overflow I realised that there is probably NO way to change the variables by calling the URL from my browser like this: www.example.com/?showRandomMsg='true'. In some threads I read that maybe I can do it with bookmarklets. Is there any way to use bookmarklets with BeautifulSoup or urllib in order to access the website and make it display a random message? Thanks in advance! :D
true
16,637,879
1.2
0
0
1
I came back after a long time just to quickly answer my own question. I found many solutions and tutorials on the web, and most of them suggested using Selenium and XPath, but this method was more complex than I needed. So I ended up using Selenium ONLY for emulating the browser (Firefox in my case) and grabbing the HTML after the page had loaded completely. After that I was still using BeautifulSoup to parse the HTML code (which now would include the JavaScript data too).
0
169
0
0
2013-05-19T18:12:00.000
javascript,python,beautifulsoup,urllib2
Cannot scrape with beautifulsoup and urllib because of javascript variable
1
1
1
23,974,667
0
0
0
I have an application that creates many thousands of graphs in memory per second. I wish to find a way to persist these for subsequent querying. They aren't particularly large (perhaps max ~1k nodes). I need to be able to store the entire graph object including node attributes and edge attributes. I then need to be able to search for graphs within specific time windows based on a time attribute in a node. Is there a simple way to coerce this data into neo4j ? I've yet to find any examples of this. Though I have found several python libs including an embedded neo4j and a rest client. Is the common approach to manually traverse the graph and store it in that manner? Are there any better persistence alternatives?
false
16,639,770
-0.099668
0
0
-1
networkx supports flexible container structures (e.g., arbitrary combinations of Python lists and dicts) in both nodes and edges. Are there restrictions on the Neo4j side for persisting such flexible data?
0
1,778
0
5
2013-05-19T21:44:00.000
python,neo4j,networkx,directed-graph
Python networkx and persistence (perhaps in neo4j)
1
1
2
71,859,666
0
1
0
I’m writing a cherrypy application that needs to redirect to a particular page and I use HTTPRedirect(‘mynewurl’, status=303) to achieve this. This works inasmuch as the browser (Safari) redirects to ‘mynewurl’ without asking the user. However, when I attempt to unit test using nosetests with assertInBody(), I get a different result; assertInBody reports that ‘This resource can be found at mynewurl’ rather than the actual contents of ‘mynewurl’. My question is how can I get nosetests to behave in the same way as a Safari, that is, redirecting to a page without displaying an ‘ask’ message? Thanks Kevin
true
16,652,406
1.2
1
0
0
With Python unit tests, you are basically testing the server, and the correct response from the server is the redirect itself, not the redirected page. I would recommend testing this behaviour in two steps:
1. test that the first page/url raises a correctly initialized (code, url) HTTPRedirect exception
2. test the contents of the second page (the one being redirected to)
But of course, if you insist, you can resolve the redirect yourself in a try/except by inspecting the exception attributes and calling the testing method on the target url again.
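A sketch of the two-step test with CherryPy's own test helper (the handler and URL names are made up for the example):

    import cherrypy
    from cherrypy.test import helper

    class RedirectTest(helper.CPWebCase):
        @staticmethod
        def setup_server():
            class Root(object):
                @cherrypy.expose
                def index(self):
                    raise cherrypy.HTTPRedirect("/mynewurl", 303)

                @cherrypy.expose
                def mynewurl(self):
                    return "target page"

            cherrypy.tree.mount(Root())

        def test_redirect_then_target(self):
            # Step 1: the first URL should answer with the 303 redirect
            self.getPage("/")
            self.assertStatus(303)

            # Step 2: the target page serves the real content
            self.getPage("/mynewurl")
            self.assertInBody("target page")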
0
190
0
0
2013-05-20T15:01:00.000
python,cherrypy,nose,nosetests
Unit testing Cherrypy HTTPRedirect.
1
1
1
16,652,717
0
1
0
I've been scraping website data using Python Scrapy, although I have strong experience with PHP cURL. I don't know which is better for scraping data and manipulating the returned values, in terms of speed and memory usage. And what is the (yield) keyword in Python Scrapy supposed to do?
false
16,655,681
0.53705
0
0
3
Scrapy is a framework: you can define pipelines and systematic ways of crawling a URL. cURL is simply boilerplate code to query a page or download files over a protocol like HTTP. If you are building an extensive scraping system or project, Scrapy is probably the better bet. Otherwise, for hacky or one-time things, cURL is hard to beat (or if you are constrained to PHP).
0
1,634
0
0
2013-05-20T18:13:00.000
php,python,scrapy
PHP cURL vs Python Scrapy?
1
1
1
16,659,313
0
1
0
I'm writing a python script to do some screen scraping of a public website. This is going fine, until I want to interact with an AJAX-implemented tree control. Evidently, there is a large amount of javascript controlling the AJAX requests. It seems that the tree control is a JBoss RichFaces RichTree component. How should I interact with this component programatically? Are there any tricks I should know about? Should I try an implement a subset of the RichFaces AJAX? Or would I be better served wrapping some code around an existing web-browser? If so, is there a python library that can help with this?
false
16,661,801
0.099668
0
0
1
You need to make the AJAX calls from your client to the server and interpret the data. Interpreting the AJAX data is easier and less error-prone than scraping HTML anyway, although it can be a bit tricky to figure out the AJAX API if it isn't documented. A network sniffer tool like Wireshark can be helpful there; there may also be useful plugins for your browser to do the same nowadays. I haven't needed to do that for years. :-)
0
601
0
0
2013-05-21T03:48:00.000
python,ajax,jboss,richfaces,screen-scraping
How can I programmatically interact with a website that uses an AJAX JBoss RichTree component?
1
1
2
16,662,103
0
0
0
As the title says, can client A connect to client B on different machines, when the server is on the same machine as client B? Note that client B and the server on that machine have different port numbers. And client B acts like a server, i.e. it also listens for clients, but client A must first handshake with the server and then with client B. Is this possible? Thank you.
true
16,700,381
1.2
0
0
1
The criteria that determine uniqueness (for connectivity) are: IP address, protocol, and port. Thus, if A and B have different IP addresses, they all use TCP, and B's server uses a different port than B's client, then, all other things being equal, they should all be reachable.
0
49
0
0
2013-05-22T19:46:00.000
python,sockets,client-server
can peer A connect to peer B if server is on peer B?
1
1
1
16,700,468
0
0
0
Is there an API which allows me to send a notification to Google Hangout? Or is there even a python module which encapsulates the Hangout API? I would like to send system notification (e.g. hard disk failure reports) to a certain hangout account. Any ideas, suggestions?
true
16,712,834
1.2
0
0
2
Hangouts does not currently have a public API. That said, messages delivered to the Google Talk XMPP server (talk.google.com:5222) are still being delivered to users via Hangouts. This support is only extended to one-on-one conversations, so the notification can't be delivered to a group of users. The messages will need to be supplied through an authenticated Google account in order to be delivered.
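For the delivery part, a hedged sketch with the third-party sleekxmpp library (the JIDs and password are placeholders, and this relies on the Talk XMPP endpoint described above still accepting the connection):

```python
import sleekxmpp

class NotifyBot(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password, recipient, body):
        sleekxmpp.ClientXMPP.__init__(self, jid, password)
        self.recipient = recipient
        self.body = body
        self.add_event_handler('session_start', self.start)

    def start(self, event):
        self.send_presence()
        # One-on-one message only -- group delivery is not supported.
        self.send_message(mto=self.recipient, mbody=self.body, mtype='chat')
        self.disconnect(wait=True)

bot = NotifyBot('[email protected]', 'app-password',
                '[email protected]', 'Disk failure on server X')
if bot.connect(('talk.google.com', 5222)):
    bot.process(block=True)
```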
0
9,293
0
7
2013-05-23T11:32:00.000
python,notifications,google-plus,hangout
send google hangout notification using python
1
1
3
16,721,015
0
0
0
I am trying to work with Python 2.7 in Eclipse on my Mac. I don't believe that I have ever messed with the source files, but when I try to import urllib, urllib2 or random it tells me that it can't find them. I used the Eclipse auto-configured 2.7 interpreter, so I have no idea what happened to the modules. How can I find them so that I can include them?
true
16,719,307
1.2
0
0
3
Please check that Eclipse has the right PYTHONPATH environmental variables. Open a python interactive interpreter in a shell and try importing the same urllib, urllib2 and random modules. If that works, then Eclipse might be configured wrong. If you can't access those modules, then you should consider fixing your PYTHONPATH.
1
130
0
0
2013-05-23T16:32:00.000
python,python-2.7
Python2.7 Modules Missing
1
1
1
16,719,393
0
0
0
I'm using bottle to write a very simple backend API that will allow me to check up on an ongoing process remotely. I'm the only person who will ever use this service—or rather, I would very much like to be the only person to use this service. It's a very simple RESTish API that will accept GET requests on a few endpoints. I've never really done any development for the web, and I wanted to do something as simple as is reasonable. Even this is probably an undue level of caution. Anyway, my very-very-basic idea was to use https on the server, and to authenticate with basically a hard-coded passkey. It would be stored in plaintext on the server and the client, but if anyone has access to either of those then I have a different problem. Is there anything glaringly wrong about this approach?
false
16,723,782
0
0
0
0
If you are using password authentication you need to store the password on the server so you can validate that the password you send from the client is OK. In your particular case you will be using basic authentication, as you want the simplest option. Basic authentication over HTTP/HTTPS encodes the password with Base64, but that's not a protection measure. Base64 is a two-way encoding: you can encode and decode a chunk of data and you need no secret to do it. The purpose of Base64 encoding is to encode any kind of data, even binary data, as a string. When you enter the password and send it over HTTPS, the HTTPS tunnel prevents anyone from seeing your password. The other problem comes if someone gets access to your server and reads the password "copy" that you are using to check whether the entered password is valid. The best way to protect it is by hashing it. A hash is a one-way encoding scheme: anyone can hash a password, but you cannot un-hash a chunk of data to discover the password. The only way to break a hashed password is by brute force. I'd recommend using MD5 or SHA hashes. So, to make it simple: the client uses HTTP/HTTPS basic authentication, so you'll encode your password in Base64. Pass it through a header, not the URL. The server will contain a hashed copy of the password, either in a database or wherever you want. The backend code will receive the HTTP request, get the password, Base64-decode it and then hash it. Once hashed, you will check whether it is equal to the copy stored on the server. That is it. Hope it helps!
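A minimal sketch of the decode-then-hash check described above (SHA-256 shown; the header value and stored digest are illustrative):

```python
import base64
import hashlib

def check_password(auth_header, stored_digest):
    # "Basic dXNlcjpzZWNyZXQ=" -> "user:secret"
    encoded = auth_header.split(' ', 1)[1]
    user, _, password = base64.b64decode(encoded).partition(':')
    return hashlib.sha256(password).hexdigest() == stored_digest

# What the server keeps instead of the plaintext password:
stored = hashlib.sha256('secret').hexdigest()
print(check_password('Basic ' + base64.b64encode('user:secret'), stored))
```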
0
385
0
2
2013-05-23T20:59:00.000
python,rest,ssl,bottle
very, very simple web authentication for personal use
1
1
2
16,724,489
0
1
0
I've developed a basic custom browser with CEF (Chromium Embedded Framework) Python. This browser is meant to run in an interactive kiosk with Windows 8. It has a multi-touch screen for all user interactions. If I run Google Chrome on the machine, the multi-touch gestures (scroll and virtual keyboard) are supported. Unfortunately, my CEF browser doesn't detect any multi-touch events. How can I fix it? Any pointers are welcome.
true
16,737,962
1.2
0
0
0
The problem was fixed by using CEF3 rather than CEF1
0
652
0
1
2013-05-24T14:58:00.000
python,windows-8,multi-touch,chromium-embedded
How to add multitouch support for ChromeEmbeddedFramework browser on windows 8?
1
1
1
16,913,113
0
0
0
I am using the Python unit testing library (unittest) with Selenium WebDriver. I am trying to find an element by its name. About half of the time, the tests throw a NoSuchElementException, and the other half they do not throw the exception. I was wondering if it has to do with the Selenium WebDriver not waiting long enough for the page to load.
false
16,739,319
0
0
0
0
You need to wait until your element loads. If you are sure that your element will eventually appear on the page, this ensures that whatever validations you run occur only after your expected element has loaded.
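A common way to express that wait in the Python bindings, assuming driver is an existing WebDriver instance and the element name and 10-second timeout are placeholders:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Blocks until the element is present, or raises TimeoutException.
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.NAME, 'element_name'))
)
```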
0
9,188
0
4
2013-05-24T16:11:00.000
python,selenium,webdriver
Selenium Webdriver - NoSuchElementExceptions
1
1
7
34,998,411
0
0
0
There is a strange API I need to work with. I want to make an HTTP call to the API and the API will return success, but I need to wait for a request from this API before I respond to the client. What is the best way to accomplish that?
false
16,749,084
0
0
0
0
Is it an option to make your API RESTful? An example flow: have the client POST to a URL to create a new resource, then GET/HEAD for the state of that resource; that way you don't need to block your client while you do any blocking stuff.
0
104
0
0
2013-05-25T11:30:00.000
python,tornado
Listen for http request in the body of RequestHandler
1
1
1
16,749,274
0
0
0
I frequently use the lxml module in Python to scrape data from some web sites, and I'm comfortable with the module generally. However, when I try to scrape, at times I encounter an lxml.etree.XMLSyntaxError: AttValue: " or ' expected error on the etree.fromstring() call, but not usually. I can't say exactly how often I see that error, but I think one out of thousands or even tens of thousands of times. When I run exactly the same script immediately after the error occurred and the script stopped, I don't see the error and the script runs as expected. Why does it spit out an occasional error? Is there any way to deal with the issue? I have a similar problem when I call urllib2.urlopen(), but since I haven't seen the error from urllib2 recently, I can't write down the exact error message coming from it right now. Thanks.
false
16,765,257
0.099668
0
0
1
I also had the problem that lxml's iterparse() would occasionally throw an AttValue: ' expected in a very unpredictable pattern. I knew that the XML I'm sending in is valid and rerunning the same script would often make it work (or fail at an entirely different point). In the end, I managed to create a test case that I could rerun and it would immediately either complete or raise an AttValue error in a seemingly random outcome. Here's what I did wrong: My input to iterparse() was a file-like object I wrote myself (I'm processing an HTTP response stream from requests, but it has to be ungzipped first). When writing the read() method, I cheated and ignored the size argument. Instead, I would just unzip a chunk of compressed bytes of a fixed size and return whatever byte sequence this decompressed to—often much more than the 32k lxml requests! I suspect that this caused buffer overflows somewhere inside lxml, which led to the above issues. As soon as I stopped returning more bytes than lxml requested, these random errors would go away.
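A sketch of the corrected wrapper idea, buffering decompressed chunks so read() never returns more than the size lxml asked for:

```python
class BufferedStream(object):
    """File-like wrapper over an iterator of decompressed byte chunks."""

    def __init__(self, chunks):
        self._chunks = iter(chunks)
        self._buf = b''

    def read(self, size=-1):
        # Accumulate chunks until we can honour the requested size.
        while size < 0 or len(self._buf) < size:
            try:
                self._buf += next(self._chunks)
            except StopIteration:
                break
        if size < 0:
            data, self._buf = self._buf, b''
        else:
            data, self._buf = self._buf[:size], self._buf[size:]
        return data
```

An instance of this can then be passed to lxml's iterparse() in place of the raw response.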
0
1,316
0
2
2013-05-27T01:23:00.000
python,web-scraping,urllib2,lxml,elementtree
Why does lxml spit out an error at times (but not usual) in Python?
1
1
2
24,978,950
0
0
0
My question I guess is: Is this possible without shelling out to command line and without 3rd party Python packages? I can't seem to find any native Python commands to manipulate or configure a wireless network connection. I know there are already built-in 'netsh wlan' commands in Windows 7, but would rather this all be in python. I am also confused by the logistics of this operation. With the netsh stuff, you still are required to have a wireless profile xml file specified in the command. My current image doesn't have any wireless profiles and I do not really understand the purpose of that if you are connecting to a brand new network. Why is this not automatically generated when you connect? A little bit about the network Network type: Infrastructure Authentication: WPA2-Enterprise Encryption: CCMP The ultimate goal is to have a script that my users can just launch, put in their credentials, and never see the multiple Windows dialogues while doing so. I'm not asking for someone to write this for me. That's what I'm suppose to do. I just need to know if anyone has successfully done something like this in strictly Python or point me in the right direction. Thanks!
true
16,794,850
1.2
0
0
1
No. The Python standard library doesn't ship with any functionality to control platform-specific features like wireless adapters. You have to invoke the tools shipped with the platform, find some 3rd-party libraries that control this functionality, or write your own such libraries.
0
1,246
0
0
2013-05-28T14:38:00.000
python,security,networking,windows-7,wifi
Simple Python program to connect to a secure wifi network with user input credentials
1
1
1
16,797,510
0
1
0
Has anyone found a method for executing their .py files from the Robot Framework like you can for JS? RobotFramework: Executes the given JavaScript code. code may contain multiple statements and the return value of last statement is returned by this keyword. code may be divided into multiple cells in the test data. In that case, the parts are catenated together without adding spaces. If code is an absolute path to an existing file, the JavaScript to execute will be read from that file. Forward slashes work as a path separator on all operating systems. The functionality to read the code from a file was added in SeleniumLibrary 2.5. Note that, by default, the code will be executed in the context of the Selenium object itself, so this will refer to the Selenium object. Use window to refer to the window of your application, e.g. window.document.getElementById('foo'). Example: Execute JavaScript window.my_js_function('arg1', 'arg2') Execute JavaScript ${CURDIR}/js_to_execute.js It's bs that I can't run my .py files this way...
false
16,826,304
0.197375
0
0
1
The Execute Javascript extension isn't a part of RobotFramework, it's something added by the Selenium integration, it would therefore follow that you can't use Selenium to execute a .py file. That said, RobotFramework is written in Python and can obviously be extended with a Python script. Can you clear up what you're actually trying to achieve here though? My concern is that if you're using a .py file in your test state to validate your code, isn't that introducing an uncertainty that means that what you're testing is not the same as the code that gets executed when you release your project? A bit more detail would help a lot here!
0
635
0
1
2013-05-30T00:59:00.000
python,robotframework
Can Not execute Python .py file using RobotFramework like Javascript
1
1
1
16,847,883
0
0
0
Is there a way to parse a SOAP response (similar to XML) in Python? I have browsed through most of the Stack Overflow solutions and they usually use the minidom or ET functions parse() or parseString(). parse() takes a filename as input, while parseString() takes a string as input. However, the SOAP response type is HTTPResponse, and hence I always get a type mismatch error when using parse() or parseString(), so I am not sure how to parse the SOAP response in Python. I have also tried converting the HTTPResponse to a string (failed), using the XML function (failed), and using the response.read() function (failed). I have checked that the SOAP response is correct, with valid XML. I am using SUDS to call the SOAP service.
true
16,838,198
1.2
0
0
0
OK, the solution was just to cast it to a string using str(response).
0
1,989
0
0
2013-05-30T13:58:00.000
python,xml,soap,httpresponse,minidom
Parse SOAP Response in Python
1
1
1
16,850,655
0
0
0
I am currently involved in a project where we perform some computer vision object recognition and classification using Python. It is necessary to use photos taken from an Android phone (photos should be automatically sent from Android to Python). I actually don't know how I should connect the two applications (Android and Python) together. I thought of using a TCP server, but I am new to socket programming and don't know how and where to start. Can anyone help?
false
16,867,749
0
0
0
0
poof's answer is a good overview, but here are some notes on the various options: Option 1: Have your Android get the picture and do an HTTP POST to a Python application running a framework such as Django. It should be 1 line of code on Android, and only a few lines of code in Python (around your existing code). The upside is that the low-level "glue" is written for you, and it scales easily. The downside is that HTTP has some overhead. Option 2: Have your Android talk a custom TCP protocol to a custom TCP application. This is more work, so you should avoid it unless you need it. It will be more efficient and have lower latency, especially if you're sending multiple pictures. The response can also be much smaller without HTTP headers. In either option, you don't have to send a JPEG; you could send any custom format you want (there is a trade-off between compression on Android and size of file). I thought of using TCP Server, but I am new to socket programming and don't know how and where to start. Start where everyone else started - by reading a lot and playing a lot. You can find plenty of introductions to socket programming in Python on the web. Go through the tutorials, and start modifying them to see how they work. Read up on TCP/IP itself -- there are a lot of dark corners (Nagle, fragmentation, slow start) that can affect how you write the app when you get to a low level.
0
780
0
0
2013-06-01T00:15:00.000
android,python,sockets,tcp
Tcp/IP socket programming in python
1
1
2
16,868,676
0
0
0
I have created a Python script that uses Selenium to automate an online task. The script works perfectly on my local machine (Windows 7) and gives the output I am looking for. I am now trying to get it up and running from PHP on my HostMonster shared server, which is running Linux, and having no luck. I have installed this version of Selenium on both my Win7 computer and the server: pypi.python.org/pypi/selenium Python version: 2.7.5 The script I wrote gets the following error at "import selenium": ImportError: No module named selenium. When I log into the server through an SSH shell, I can type in "import selenium" and receive no errors. I can also type in "from selenium import webdriver" in the SSH shell and receive no errors. Any help/guidance would be greatly appreciated.
false
16,881,335
0
1
0
0
When I enter import sys and then print sys.path in the SSH shell, I receive the following: ['', '/home2/klickste/python/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/home2/klickste/python/lib/python2.7/site-packages/mechanize-0.2.5-py2.7.egg', '/home2/klickste/python/lib/python2.7/site-packages/html2text-3.200.3-py2.7.egg', '/home2/klickste/python/lib/python2.7/site-packages/pip-1.3.1-py2.7.egg', '/home2/klickste/python/lib/python27.zip', '/home2/klickste/python/lib/python2.7', '/home2/klickste/python/lib/python2.7/plat-linux2', '/home2/klickste/python/lib/python2.7/lib-tk', '/home2/klickste/python/lib/python2.7/lib-old', '/home2/klickste/python/lib/python2.7/lib-dynload', '/home2/klickste/python/lib/python2.7/site-packages']
0
238
0
0
2013-06-02T09:08:00.000
php,python,selenium,hostmonster
Import selenium error on hostmonster shared linux server
1
2
2
16,975,487
0
0
0
I have created a Python script that uses Selenium to automate an online task. The script works perfectly on my local machine (Windows 7) and gives the output I am looking for. I am now trying to get it up and running from PHP on my HostMonster shared server, which is running Linux, and having no luck. I have installed this version of Selenium on both my Win7 computer and the server: pypi.python.org/pypi/selenium Python version: 2.7.5 The script I wrote gets the following error at "import selenium": ImportError: No module named selenium. When I log into the server through an SSH shell, I can type in "import selenium" and receive no errors. I can also type in "from selenium import webdriver" in the SSH shell and receive no errors. Any help/guidance would be greatly appreciated.
true
16,881,335
1.2
1
0
1
I have resolved the issue. I used the following command to install selenium outside of the Python folder: easy_install --prefix=$HOME/.local/ selenium I also added these lines at the bottom of my .bashrc file located in my home directory: export PYTHONPATH=$HOME/.local/lib/python/site-packages:$PYTHONPATH export PYTHONPATH=$HOME/.local/lib/python2.7/site-packages:$PYTHONPATH export PATH=$HOME/.local/bin:$PATH
0
238
0
0
2013-06-02T09:08:00.000
php,python,selenium,hostmonster
Import selenium error on hostmonster shared linux server
1
2
2
16,991,512
0
0
0
I am building a back end that will handle requests from web apps and mobile device apps. I am trying to decide if a TCP server is appropriate for this vs. regular HTTP GET and POST requests. Use case: 1. A client on a mobile device executes a search on the device for the word "red". The word is sent to the server (unclear whether as JSON or over TCP somehow). 2. The word red goes to the server and the server pulls all rows from a MySQL db that have red as their color (this could be ~5000 results). Alternate step 2 (maybe TCP would make more sense here): there is a hashmap built with the word red as the key and the value a pointer to an array of all the objects with the word red (I think this will give a faster lookup time). 3. Data is sent to the phone (either JSON or some other way, not sure); I am unclear on this step. 4. The phone parses, etc... There is a possibility that I may want to keep the array alive on the server until the user finishes the query (since they could continue to filter down results). Based on this example, what is the architecture I should be looking at? Any different way is highly appreciated. Thank you
true
16,889,768
1.2
1
0
1
In your case I would use HTTP because: Your service is stateless. If you use TCP you will have trouble scaling the service up (since every request will be directed to the server that established the TCP connection); this relates to your service being stateless. With HTTP you just add more servers behind a load balancer. For TCP you will need to pick some port, which can be blocked by firewalls, etc.; you can use port 80/8080, but I don't think that is good practice. If your service were more interactive, like suggestions that change as the user types each word, you might want a TCP/HTTP socket. TCP is used for longer-term connections, like a security system that reports the state of the system every X seconds, which is not your case.
0
110
0
0
2013-06-03T03:51:00.000
python,http,ftp
Should I build a TCP server or use simple http messages for a back-end?
1
1
1
16,889,799
0
1
0
I have some webpages where I'm collecting data over time. I don't care about the content itself, just whether the page has changed. Currently, I use Python's requests.get to fetch a page, hash the page (md5), and store that hash value to compare in the future. Is there a computationally cheaper or smaller-storage strategy for this? Things work now; I just wanted to check if there's a better/cheaper way. :)
false
16,890,209
0.197375
0
0
2
You can keep track of the date of the last version you got and use the If-Modified-Since header in your request. However, some resources ignore that header. (In general, it's difficult to handle for dynamically-generated content.) In that case you'll have to fall back to a less efficient method.
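A sketch of that header-based check with requests (servers that ignore the header will just return 200 with the full body):

```python
import requests

def changed_since(url, last_modified):
    headers = {'If-Modified-Since': last_modified} if last_modified else {}
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:
        return False, last_modified               # unchanged
    return True, resp.headers.get('Last-Modified')
```

Only the Last-Modified string needs to be stored between runs.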
0
89
0
0
2013-06-03T04:54:00.000
python,hash,web-scraping
Efficient way to check whether a page changed (while storing as little info as possible)?
1
2
2
16,890,397
0
1
0
I have some webpages where I'm collecting data over time. I don't care about the content itself, just whether the page has changed. Currently, I use Python's requests.get to fetch a page, hash the page (md5), and store that hash value to compare in the future. Is there a computationally cheaper or smaller-storage strategy for this? Things work now; I just wanted to check if there's a better/cheaper way. :)
true
16,890,209
1.2
0
0
0
A hash would be the most trustworthy source of change detection. I would use CRC32. It's only 32 bits, as opposed to 128 bits for MD5. Also, even in browser JavaScript it can be very fast. I have personal experience improving the speed of a JS implementation of CRC32 for very large datasets.
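For illustration, a CRC32 checksum of a page body via the standard-library zlib module:

```python
import zlib

def page_checksum(content):
    # Mask to normalise the signed result on Python 2.
    return zlib.crc32(content) & 0xffffffff
```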
0
89
0
0
2013-06-03T04:54:00.000
python,hash,web-scraping
Efficient way to check whether a page changed (while storing as little info as possible)?
1
2
2
16,890,487
0
0
0
I am trying to use the Dropbox SDK for Python. I installed a virtual environment and used pip install to install dropbox-sdk. When I try to run the example code (see below) I get an ImportError: client cannot be found, but if I try to do it from the interactive interpreter it works. So what am I doing wrong? App key, secret key and access_type omitted.
true
16,901,130
1.2
0
0
1
Update: I found the problem. I called my own file dropbox.py and in that file imported dropbox, so I accidentally imported my own file. Renamed my file and now it works.
1
216
0
2
2013-06-03T16:07:00.000
python,api,sdk,dropbox,importerror
Dropbox python sdk import error
1
1
1
17,679,874
0
0
0
I have a script that downloads a lot of fairly large (20MB+) files. I would like to be able to check if the copy I have locally is identical to the remote version. I realize I can just use a combination of date modified and length, but is there something even more accurate I can use (that is also available via paramiko) that I can use to ensure this? Ideally some sort of checksum? I should add that the remote system is Windows and I have SFTP access only, no shell access.
false
16,901,650
0
1
0
0
I came across a similar scenario. The solution I currently take is to compare the remote file's size, using item.st_size for item in sftp.listdir_attr(remote_dir), with the local file's size, using os.path.getsize(local_file). When the two files are around 1MB or smaller, this solution is fine. However, a weird thing might happen: when the files are around 10MB or larger, the two sizes might differ slightly, e.g., one is 10000 bytes, another is 10003 bytes.
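A sketch of that size comparison, assuming sftp is an existing paramiko SFTPClient and the paths are placeholders:

```python
import os

def sizes_match(sftp, remote_path, local_path):
    remote_size = sftp.stat(remote_path).st_size
    local_size = os.path.getsize(local_path)
    return remote_size == local_size
```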
0
614
0
2
2013-06-03T16:41:00.000
python,sftp,checksum,paramiko
Python + Paramiko - Checking whether two files are identical without downloading
1
1
1
68,450,855
0
0
0
I am trying to build a fast web crawler, and as a result, I need an efficient way to locate all the links on a page. What is the performance comparison between a fast XML/HTML parser like lxml and using regex matching?
false
16,929,149
-0.197375
0
0
-2
You can use pyquery, a library for Python that brings you functions from jQuery.
0
1,263
0
1
2013-06-04T23:31:00.000
python,regex,html-parsing,web-crawler,lxml
Finding links fast: regex vs. lxml
1
1
2
16,933,336
0
1
0
I'm using the latest dropzone.js, version 3.2.0. I downloaded the folder and have all the files needed. Using the latest Chrome. When I drop a file, Dropzone sends it to the server and I successfully save it, but nothing visual happens on the front end. I guess I'm missing something trivial. How do I make Dropzone show the upload progress animation? Another issue I have is that Dropzone doesn't hide the div.fallback that contains the fallback form. I thought those features were supposed to work automatically.
false
16,946,659
0
0
0
0
This sounds as if you didn't include the CSS files that come along with Dropzone. Or you didn't add the dropzone or dropzone-previews class to your form.
0
1,443
0
1
2013-06-05T17:53:00.000
python,django,dropzone.js
Dropzonejs - Doesn't show progress bar / complete status and doesnt hide fallback form
1
1
2
17,276,082
0
0
0
If this is a stupid question, please don't mind me. I spent some time trying to find the answer but couldn't get anything solid. Maybe this is a hardware question, but I figured I'd try here first. Does serial communication only work one-to-one? The reason this came up is because I had an Arduino board listening for communication on its serial port. I had a Python script feed bytes to the port as well. However, whenever I opened up the Arduino's serial monitor, the connection with the Python script failed. The serial monitor also connects to the serial port for communication, for its little text input field. So what's the deal? Does serial communication only work between a single client and a single server? Is there a way to get multiple clients writing to the server? I appreciate your suggestions.
false
16,949,369
0.066568
1
0
1
Edit: I forgot about RS-485, which 'jdr5ca' was smart enough to recommend. My explanation below is restricted to RS-232, the more "garden variety" serial port. As 'jdr5ca' points out, RS-485 is a much better alternative for the described problem. Original: To expand on zmo's answer a bit, it is possible to share serial at the hardware level, and it has been done before, but it is rarely done in practice. Likewise, at the software driver level, it is again theoretically possible to share, but you run into similar problems as the hardware level, i.e. how to "share" the link to prevent collisions, etc. A "typical" setup would be two serial (hardware) devices attached to each other 1:1. Each would run a single software process that would manage sending/receiving data on the link. If it is desired to share the serial link amongst multiple processes (on either side), the software process that manages the link would also need to manage passing the received data to each reading process (keeping track of which data each process had read) and also arbitrate which sending process gets access to the link during "writes". If there are multiple read/write processes on each end of the link, the handshaking/coordination of all this gets deep as some sort of meta-signaling arrangement may be needed to coordinate the comms between the process on each end. Either a real mess or a fun challenge, depending on your needs and how you view such things.
0
1,057
0
1
2013-06-05T20:35:00.000
python,serial-port,arduino,pyserial
Serial Communication one to one
1
1
3
16,951,886
0
0
0
Is there a way to scrape more than 64 results from Google with Python without getting my IP address instantly blocked?
false
16,955,325
0
0
0
0
64 results is the limit? Sounds weird to me! Even with a browser, I can navigate to the 100th page with no problem. I'm very curious about how you reached this limit. Anyway, the classic solutions are: proxying (e.g. Tor), delaying requests, and randomly switching the user agent.
0
241
0
2
2013-06-06T06:40:00.000
python
Scraping Google Results with Python
1
2
2
16,955,453
0
0
0
Is there a way to scrape more than 64 results from Google with Python without getting my IP address instantly blocked?
false
16,955,325
0.099668
0
0
1
I use tsocks and ssh-tunnels to machines with other ip addresses to achieve this.
0
241
0
2
2013-06-06T06:40:00.000
python
Scraping Google Results with Python
1
2
2
16,955,450
0
1
0
I have >100,000 URLs (different domains) in a list that I want to download and save in a database for further processing and tinkering. Would it be wise to use Scrapy instead of Python's multiprocessing/multithreading? If yes, how do I write a standalone script to do the same? Also, feel free to suggest other awesome approaches that come to mind.
false
16,957,276
0
0
0
0
Scrapy is still an option. Speed/performance/efficiency: Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it's implemented using non-blocking (aka asynchronous) code for concurrency. Database pipelining: You mentioned that you want your data to be pipelined into the database - as you may know, Scrapy has an Item Pipelines feature: after an item has been scraped by a spider, it is sent to the Item Pipeline, which processes it through several components that are executed sequentially. So, each page can be written to the database immediately after it has been downloaded. Code organization: Scrapy offers you a nice and clear project structure, where you have settings, spiders, items, pipelines, etc. separated logically. That makes your code clearer and easier to support and understand. Time to code: Scrapy does a lot of work for you behind the scenes. This lets you focus on the actual code and logic itself and not think about the "metal" part: creating processes, threads, etc. But, at the same time, Scrapy might be overhead. Remember that Scrapy was designed for (and is great at) crawling and scraping data from web pages. If you just want to download a bunch of pages without looking into them - then yes, grequests is a good alternative.
0
1,273
0
5
2013-06-06T08:32:00.000
python,multithreading,multiprocessing,scrapy,web-crawler
What is the best way to download number of pages from a list of urls?
1
1
4
16,961,651
0
0
0
Is this now possible using the Google Drive API, or should I just send multiple requests to accomplish this task? By the way, I'm using Python 2.7.
false
16,967,509
0
0
0
0
You can batch your multiple deletes into a single HTTP request.
0
259
0
0
2013-06-06T16:35:00.000
python,google-drive-api
How to do a multiple folder removal in Google Drive API?
1
1
3
20,088,100
0
0
0
I am using Dropbox for one of my applications, where a user can connect their Dropbox folders. Usage is such that a user can create links among the files of a folder, and more. But the problem is that the moment I store the file information in my application, the file media information is stored with an expires key, so obviously I won't be able to use the link next time once the expiry time is met. One way is to generate the media information every time the user selects a thumbnail from my application, as I already have the file's metadata. But is there any other way (i.e. by using the Python client or API) that I can make a folder public when a user selects it to connect with my application? Any help would be really appreciated. Thanks in advance for your precious time.
false
16,978,169
0.379949
0
0
2
I think the right thing to do is to generate a media link each time you need it. Is there a reason you don't like that solution?
0
130
0
0
2013-06-07T07:03:00.000
python-2.7,dropbox,dropbox-api
Can I make a dropbox folder public by using python client?
1
1
1
16,988,885
0
0
0
I'd like to avoid running into "Max number of clients reached" errors with interfacing with a 3rd party Redis host from my Heroku app by limiting the number of connections held in the pool to an arbitrary amount of my choosing. Is that possible?
false
16,994,514
0
0
0
0
I think you should keep your Redis instance at global scope and let all requests share the same instance; this should not cause too many connections anymore. The Redis instance will have its own connection pool; you can limit your connection count by passing the max_connections parameter to redis.ConnectionPool. If max_connections is set, then this object raises redis.ConnectionError when the pool's limit is reached.
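A minimal sketch of that setup (the host details and the 20-connection cap are placeholders):

```python
import redis

# One module-level pool shared by all requests.
POOL = redis.ConnectionPool(host='localhost', port=6379, db=0,
                            max_connections=20)

def get_redis():
    # Raises redis.ConnectionError once the cap is reached.
    return redis.Redis(connection_pool=POOL)
```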
0
1,957
0
2
2013-06-07T23:48:00.000
python,heroku,redis
limit number of connections to redis in py-redis
1
1
2
20,607,006
0
0
0
I'm trying to implement a UDP traceroute solution in Python 2.6, but I'm having trouble understanding why I need root privileges to perform the same-ish action as the traceroute utility that comes with my operating system. The environment that this code will run in will very doubtfully have root privileges, so is it more likely that I will have to forego a python implementation and write something to parse the output of the OS traceroute in UDP mode? Or is there something I'm missing about opening a socket configured like self.rx = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_UDP). It seems that socket.SOCK_RAW is inaccessible without root privileges which is effectively preventing me from consuming the data I need to implement this in python.
true
17,027,970
1.2
0
0
0
The conclusion I've come to is that I'm restricted to parsing the output of the traceroute using subprocess. traceroute is able to overcome the root-requirement by using setuid for portions of the code effectively allowing that portion of the code to run as root. Since I cannot establish those rights without root privileges I'm forced to rely on the existence of traceroute since that is the more probable of the two situations.
0
1,380
1
2
2013-06-10T15:54:00.000
python,sockets,permissions,udp,traceroute
Implementing UDP traceroute in Python without root
1
1
1
17,045,881
0
1
0
I am building a chat application that consists of a Django web backend with a Node.js/socket.io powered chat server. There will be instances when changes made via the web interface (e.g. banning a user) need to be pushed immediately to the chat server. I can think of the following options: Use a Python-based socket.io client to interface directly with the server (what are some good Python clients?) Use redis or a message queue to do pub/sub of events (seems like overkill) implement a simple TCP wire protocol on a secondary localhost-only port (this could be done using the built-in Node and Python TCP libraries) What would be the best option?
false
17,039,873
0.379949
0
0
2
Expose a Restful API on the chat server. Then your Django web application can easily make API calls to modify state in the chat server. Doing anything else is more complicated and most likely unnecessary.
0
361
0
1
2013-06-11T08:44:00.000
python,node.js,socket.io
Communicating with a Node.js server application
1
1
1
17,039,955
0
0
0
I am trying to create a server in Python 2.7.3 which sends data to all client connections whenever one client connection sends data to the server. For instance, if client c3 sent "Hello, world!" to my server, I would like to then have my server send "Hello, world!" to client connections c1 and c2. By client connections, I mean the communications sockets returned by socket.accept(). Note that I have tried using the asyncore and twisted modules, but AFAIK they do not support this. Does anybody know any way to accomplish this? EDIT: I have seen Twisted, but I would much rather use the socket module. Is there a way (possibly multithreading, possibly using select) that I can do this using the socket module?
false
17,087,905
0.197375
0
0
1
You can absolutely do this using Twisted Python. You just accept the connections and set up your own handling logic (of course the library does not include built-in support for your particular communication pattern exactly, but you can't expect that).
0
108
0
0
2013-06-13T13:09:00.000
python,sockets,tcp
How would I handle multiple sockets and send data between them in Python 2.7.3?
1
1
1
17,088,117
0
0
0
I'm using Selenium with Python to test a web application. The app has a Flash component that I'd like to test. The only references I've seen to using Selenium with Flash refer to Flash-Selenium which hasn't been updated in several years. Is testing Flash with Selenium even possible?
false
17,094,940
0
1
0
0
As long as you have access to the Flash source code it is possible (although it requires some work). You have to expose the Flash actions you want to test, making those methods callable from JavaScript. Once you can do that, you should be able to automate the process using Selenium's ability to execute JavaScript.
0
3,667
0
2
2013-06-13T19:00:00.000
python,flash,selenium
Selenium/Python/Flash - How?
1
1
2
17,096,754
0
0
0
I want to create a website that extracts information from other websites and prints it on my website. I am at the research step, so I would like to hear some opinions on the best solution for this project. I have heard that Python with a parser can do this. I just want to know what path I should take and which language I should use.
false
17,099,364
0.158649
0
0
4
Python with BeautifulSoup and urllib2 will probably serve you well. Of course, it is questionable whether or not you should be scraping data from other websites, and you might find yourself in a constant struggle if those websites change layouts.
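A minimal fetch-and-parse sketch with urllib2 and BeautifulSoup 4 (the URL is a placeholder):

```python
import urllib2
from bs4 import BeautifulSoup

html = urllib2.urlopen('http://example.com').read()
soup = BeautifulSoup(html)
for link in soup.find_all('a'):
    print(link.get('href'))
```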
0
314
0
4
2013-06-14T00:30:00.000
python,database,parsing,web-scraping
How can I get data from other websites?
1
2
5
17,099,503
0
0
0
I want to create a website that extracts information from other websites and prints it on my website. I am at the research step, so I would like to hear some opinions on the best solution for this project. I have heard that Python with a parser can do this. I just want to know what path I should take and which language I should use.
false
17,099,364
0
0
0
0
You can write web spiders to gather data from other websites. Using urllib2 or requests can help you download the HTML from the website. BeautifulSoup or PyQuery can help you parse the HTML and get the data you want.
0
314
0
4
2013-06-14T00:30:00.000
python,database,parsing,web-scraping
How can I get data from other websites?
1
2
5
17,100,130
0
0
0
I'm designing a Python SSL server that needs to be able to handle up to a few thousand connections per minute. If a client doesn't send any data in a given period of time, the server should close their connection in order to free up resources. Since I need to check if each connection has expired, would it be more efficient to make the sockets non-blocking and check all the sockets for data in a loop while simultaneously checking if they've timed out, or would it be better to use select() to get sockets that have data and maintain some kind of priority queue ordered by the time data was received on a socket to handle connection timeout? Alternatively, is there a better method of doing this I haven't thought of, or are there existing libraries I could use that have the features I need?
true
17,100,293
1.2
0
0
1
I'd use a priority queue to keep track of who's been dormant. Notice, however, that you don't actually need a full-fledged priority queue if you only want to time out connections that have been inactive for a certain fixed amount of time. You can use a linked list instead: The linked list stores all of the sockets in sorted order by the last time activity was seen. When a socket receives data, you update a per-socket "data last seen at" member and move its list entry to the back of the list. Pass select() the time until the head of the list expires. At the end of an iteration of your select() loop, you pop off all of the expired list nodes (they're in sorted order) and close the connections. It's important, if you want sockets to expire at the right time, to use a monotonic clock. The list might lose its sorted order if the clock happens to go backward at some point.
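A sketch of that bookkeeping using an OrderedDict as the sorted list (time.time() is used for brevity; per the note above, a monotonic clock is preferable):

```python
import time
from collections import OrderedDict

TIMEOUT = 30.0
last_seen = OrderedDict()            # sock -> last activity timestamp

def touch(sock):
    # Move the socket to the back of the order on any activity.
    last_seen.pop(sock, None)
    last_seen[sock] = time.time()

def next_select_timeout():
    if not last_seen:
        return None                  # nothing to expire; block in select()
    oldest = next(iter(last_seen.values()))
    return max(0.0, oldest + TIMEOUT - time.time())

def expired_sockets():
    now = time.time()
    return [s for s, t in last_seen.items() if now - t > TIMEOUT]
```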
0
235
0
0
2013-06-14T02:38:00.000
python,performance,sockets,loops,select
Connections with timeout: better to set sockets to non-blocking and loop through them or use select() and maintain a priority queue?
1
1
1
17,100,500
0
0
0
I am using Python and XMLBuilder, a module I downloaded off the internet (PyPI). It returns an object that works like a string (I can do print(x)), but when I use file.write(x) it crashes and throws an error in the XMLBuilder module. I am just wondering how I can convert the object it returns into a string. I have confirmed that I am writing to the file correctly. I have already tried, for example, x = y, although, as I thought, it just creates a pointer, and also x = x + " ", but I still get an error. It also returns a string-like object with "\n". Any help on the matter would be greatly appreciated.
false
17,136,085
0
0
0
0
I'm not quite sure what your question is, but print automatically calls str on all of its arguments ... So if you want the same output as print to be put into your file, then myfile.write(str(whatever)) will put the same text in myfile that print(x) would have put into the file (minus a trailing newline that print adds).
0
165
0
1
2013-06-16T17:51:00.000
python,string,types,type-conversion
Converting a string-like object into a string in python
1
1
4
17,136,094
0
0
0
I am writing my first Yum plugin, which I hope to use to display some info about the packages to be downloaded on an update or an install. I have successfully gotten the plugin to run and have it all set up properly. My problem is getting a list of packages that will be downloaded before the user accepts or cancels the transaction. There is a method available in a certain conduit, the one provided to predownload_hook(conduit) and postdownload_hook(conduit), that can be called with conduit.getDownloadPackages() to do exactly what I want. However, both of these hooks are called after the user accepts or declines the transaction. According to the yum Python API docs, getDownloadPackages() is not available anywhere else. I have asked about this in #yum on Freenode a couple of times but haven't gotten an answer. A solution or any help is greatly appreciated. Have a good one.
true
17,139,217
1.2
0
0
2
You want to use the postresolve_hook(), and walk the transaction list. To see a fairly simple copy and paste example look at the changelog plugin (displays the rpm changelog for everything to be installed/upgraded in the transaction).
0
498
0
1
2013-06-17T01:07:00.000
python,yum
How can I use the yum Python module to get a list of packages that will be downloaded before accepting the transaction?
1
1
1
17,140,290
0
1
0
How can I serve arbitrary paths the way zope.browserresource does for @@ and ++resource++ URIs in Zope?
false
17,151,693
0
0
0
0
There are two adapters needed for this. One adapts the ZODB context one wishes to use and zope.publisher.interfaces.IRequest, while providing zope.traversing.interfaces.ITraversable (the view). The second adapts the view instantiated by the previous adapter and zope.publisher.interfaces.browser.IBrowserRequest, while providing zope.publisher.interfaces.IPublishTraverse (the traverser). I subclassed BrowserView for both adapters. Inside the traverser, the publishTraverse method will be called successively for each URL part being traversed, and it returns a view for that URL part.
0
47
0
1
2013-06-17T15:50:00.000
python,zope
How can I serve arbitrary request paths?
1
1
1
17,305,631
0
0
0
I'd like to write a script (preferably in Python, but other languages are not a problem) that can parse what you type into a Google search. Suppose I search 'cats'; then I'd like to be able to parse the string cats and, for example, append it to a .txt file on my computer. So if my searches were 'cats', 'dogs', 'cows' then I could have a .txt file like so: cats dogs cows. Anyone know any APIs that can parse the search bar and return the string inputted? Or some object that I can cast into a string? EDIT: I don't want to make a Chrome extension or anything, but preferably a Python (or Bash or Ruby) script I can run in the terminal that can do this. Thanks
false
17,156,844
0
0
0
0
A few options you might consider, with their advantages and disadvantages. URL: advantage: as Chris mentioned, accessing the URL and manually changing it is an option; it should be easy to write a script for this, and I can send you my Perl script if you want. Disadvantage: I am not sure if you can do it; I made a Perl script for that before, but it didn't work because Google states that you can't use its services outside the Google interface, and you might face the same problem. Google's search API: advantage: popular choice with good documentation; it should be a safe choice. Disadvantage: Google's restrictions. Research other search engines: advantage: they might not have the same restrictions as Google, and you might find some search engines that let you play around more and have more freedom in general. Disadvantage: you're not going to get results that are as good as Google's.
0
205
1
0
2013-06-17T21:03:00.000
python,google-chrome
Parse what you google search
1
1
3
24,497,812
0
1
0
I am very much new to selenium WebDriver and I am trying to automate a page which has a button named "Delete Log File". Using FireBug I got to know that, the HTML is described as and also the css selector is defined as "#DeleteLogButton" using firepath hence I used browser.find_element_by_css_selector("#DeleteLogButton").click() in webdriver to click on that button but its now working and also, I tried, browser.find_element_by_id("DeleteLogButton").click() to click on that button. Even this did not find the solution for my problem... Please help me out in resolving the issue.
false
17,183,068
0
0
0
0
Most of the time I'm using By.xpath, and it works especially well if you use contains() in your XPath. For example: //*[contains(text(),'ABC')] will look for all the elements that contain the string 'ABC'. In your case you can replace ABC with Delete Log File.
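The same locator in the Python bindings, assuming driver is an existing WebDriver instance:

```python
button = driver.find_element_by_xpath(
    "//*[contains(text(), 'Delete Log File')]")
button.click()
```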
0
157
0
0
2013-06-19T04:49:00.000
python,selenium-webdriver
Unable to locate the element while using selenium-webdriver
1
1
2
17,183,255
0
1
0
Here is the situation: we use Flask for website application development. Also, on the website server, we host a RESTful service, and we use Flask-Login as the authentication tool for BOTH the web application and the RESTful service (accessing the RESTful service from browsers). Later, we found that we also need to access the RESTful service from client calls (Python), so NO sessions, cookies, etc. This gives us a headache regarding the current authentication of the RESTful service. On the web there exist a whole bunch of ways to secure a RESTful service from client calls, but there seems to be no easy way for them to live together with our current Flask-Login tool such that we do not need to change our web application a lot. So here are the questions: Is there an easy way (framework) for the RESTful services to support multiple authentication methods (protocols) at the same time? Is this even good practice? Many thanks!
false
17,219,512
0
0
0
0
So, you've officially bumped into one of the most difficult questions in modern web development (in my humble opinion): web authentication. Here's the theory behind it (I'll answer your question in a moment). When you're building complicated apps with more than a few users, particularly if you're building apps that have both a website AND an API service, you're always going to bump into authentication issues no matter what you're doing. The ideal way to solve these problems is to have an independent auth service on your network. Some sort of internal API that EXCLUSIVELY handles user creation, editing, and deletion. There are a number of benefits to doing this: You have a single authentication source that all of your application components can use: your website can use it to log people in behind the scenes, your API service can use it to authenticate API requests, etc. You have a single service which can smartly manage user caching -- it's pretty dangerous to implement user caching all over the place (which is what typically happens when you're dealing with multiple authentication methods: you might cache users for the API service, but fail to cache them with the website; stuff like this causes problems). You have a single service which can be scaled INDEPENDENTLY of your other components. Think about it this way: what piece of application data is accessed more than any other? In most applications, it's the user data. For every request user data will be needed, and this puts a strain on your database / cache / whatever you're doing. Having a single service which manages users makes it a lot nicer for you to scale this part of the application stack easily. Overall, authentication is really hard. For the past two years I've been the CTO at OpenCNAM, and we had the same issue (a website and API service). For us to handle authentication properly, we ended up building an internal authentication service as described above, then using Flask-Login to handle authenticating users via the website, and a custom method to authenticate users via the API (just an HTTP call to our auth service). This worked really well for us, and allowed us to scale from thousands of requests to billions (by isolating each component in our stack, and focusing on user auth as a separate service). Now, I wouldn't recommend this for apps that are very simple, or apps that don't have many users, because it's more hassle than it's worth. If you're looking for a third party solution, Stormpath looks pretty promising (just google it). Anyhow, hope that helps! Good luck.
0
527
0
0
2013-06-20T16:59:00.000
python,authentication,client,restful-authentication
Flask login together with client authentication methods for RESTful service
1
1
1
21,565,425
0
1
0
I am using Selenium with PhantomJS as the webdriver in order to render webpages using Python. The pages are on my local drive. I need to save a screenshot of the webpages. Right now, the pages all render completely black. The code works perfect on non-local webpages. Is there a way to specify that the page is local? I tried this: driver.get("file://... but it did not work. Thanks!
false
17,242,929
0.197375
0
0
1
I feel silly now. I needed another forward slash driver.get("file:///
0
630
0
0
2013-06-21T19:29:00.000
selenium,python-3.x,local,phantomjs,webpage-screenshot
Screenshot local page with Selenium and PhantomJS
1
1
1
17,243,970
0
1
0
How would I scrape a domain to find all web pages and content? For example: www.example.com, www.example.com/index.html, www.example.com/about/index.html and so on... I would like to do this in Python, and preferably with Beautiful Soup if possible.
true
17,265,027
1.2
0
0
0
You can't. Not only can pages be dynamically generated based on backend database data and search queries or other input that your program supplies to the website, but there is a nearly infinite list of possible pages, and the only way to know which ones exist is to test and see. The closest you can get is to scrape a website based on hyperlinks between pages in the page content itself.
0
1,831
0
4
2013-06-19T20:39:00.000
python,http,dns
How can I find (and scrape) all web pages on a given domain using Python?
1
1
2
17,265,028
0
0
0
I'm having a pretty unique problem. I'm using the python module urllib2 in order to get http responses from a local terminal. At first, urllib2 would only work with non-local addresses (i.e. google.com, etc.) and not local webservers. I eventually deduced that urllib2 was not respecting the no_proxy environment variable. If I manually erased the other proxy env variables in the code (i.e. set http_proxy to ''), then it seemed to fix it for my CentOS 6 box. However, I have a second machine running Fedora 12 that needs to run the same python script, and I cannot for the life of me get urllib2 to connect to the local terminal. If I set http_proxy to '' then I can't access anything at all - not google, not the local terminal. However, I have a third machine running Fedora 12 and the fix that I found for CentOS 6 works with that one. This leads me to my question. Is there an easy way to tell the difference between Fedora 12 Box#1 (which doesn't work) and Fedora 12 Box#2 which does? Maybe there's a list of linux config files that could conceivably affect the functionality of urllib2? I know /etc/environment can affect it with proxy-related environment variables and I know the routing tables could affect it. What else am I missing? Note: - Pinging the terminal with both boxes works. Urllib2 can only fetch http responses from the CentOS box and Fedora 12 Box#2, currently. Info: I've tested this with Python 2.6.2 Python 2.6.6 Python 2.7.5 on all three boxes. Same results each time.
false
17,279,906
0
0
0
0
Permanent network settings are stored in various files in /etc/networking and /etc/network-scripts. You could use diff to compare what's in those files between the system. However, that's just the network stuff (static v.s. dynamic, routes, gateways, iptables firewalls, blah blah blah). If there's no differences there, you'll have to start expanding the scope of your search.
0
73
1
0
2013-06-24T16:02:00.000
python,linux,http,proxy,urllib2
Is there an easy way to tell the difference in network settings between two systems running Fedora 12?
1
1
1
17,279,971
0
1
0
I've been charged with migrating a large number of simple web pages into MediaWiki articles. I've been researching the API and PyWikiBot, but it seems that all they allow you to do is edit and retrieve what is already there. Can these tools be used to create a brand new article with content, a title, links to itself, etc.? If not, can anyone suggest a way to make large-scale automated entries to MediaWiki?
true
17,280,297
1.2
0
0
1
You can create a new article simply by editing a page that doesn't exist yet.
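For example, a sketch with Pywikibot (it assumes a configured user-config.py; the title and body are placeholders):

```python
import pywikibot

site = pywikibot.Site()
page = pywikibot.Page(site, 'My new article')
# Saving text to a page that does not exist yet creates the article.
page.text = "Article body with a [[Wikilink]]."
page.save('Automated import')
```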
0
94
0
1
2013-06-24T16:24:00.000
python,bots,mediawiki-api
MediaWiki API: can it be used to create new articles programatically?
1
1
1
17,282,177
0
1
0
I'd like to scrape contact info from about 1000-2000 different restaurant websites. Almost all of them have contact information either on the homepage or on some kind of "contact" page, but no two websites are exactly alike (i.e., there's no common pattern to exploit). How can I reliably scrape email/phone # info from sites like these without specifically pointing the Python script to a particular element on the page (i.e., the script needs to be structure agnostic, since each site has a unique HTML structure, they don't all have, e.g., their contact info in a "contact" div). I know there's no way to write a program that will be 100% effective, I'd just like to maximize my hit rate. Any guidance on this—where to start, what to read—would be much appreciated. Thanks.
false
17,366,528
0.099668
1
0
1
In most countries the telephone number follows one of a very few well-defined patterns that can be matched with a simple regexp; likewise, email addresses have an internationally recognised format. Simply scrape the homepage or the contact/contact-us page and then parse with regular expressions; you should easily achieve better than 90% accuracy. Alternatively, of course, you could simply submit the restaurant name and town to the local equivalent of the Yellow Pages web site.
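Rough patterns of the kind described, to be tuned per country and format:

```python
import re

EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.-]+')
# North-American style numbers, e.g. (555) 123-4567 or 555.123.4567
PHONE_RE = re.compile(r'\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}')

def extract_contacts(page_text):
    return EMAIL_RE.findall(page_text), PHONE_RE.findall(page_text)
```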
0
2,766
0
3
2013-06-28T14:03:00.000
python,web-scraping,beautifulsoup,screen-scraping
Scraping Contact Information from Several Unique Sites with Python
1
1
2
17,366,729
0
0
0
Intro: I have a Python application using a Cassandra 1.2.4 cluster with a replication factor of 3, all reads and writes are done with a consistency level of 2. To access the cluster I use the CQL library. The Cassandra cluster is running on rackspace's virtual servers. The problem: From time to time one of the nodes can become slower than usual, in this case I want to be able to detect this situation and prevent making requests to the slow node and if possible to stop using it at all (this should theoretically be possible since the RF is 3 and the CL is 2 for every single request). So far the solution I came up with involves timing the requests to each of the nodes and preventing future connections to the slow node. But still this doesn't solves all the problem because even connecting to another node a particular query may end up being served by the slow node after the coordinator node routes the query. The questions: What's the best way of detecting the slow node from a Python application? Is there a way to stop using one of the Cassandra nodes from Python in this scenario without human intervention? Thanks in advance!
true
17,415,024
1.2
0
0
3
Your manual solution of timing the requests is enough, if nodes that are slow to respond are also ones that are slow to process the query. Internally Cassandra will avoid slow nodes if it can by using the dynamic snitch. This orders nodes by recent latency statistics and will avoid reading from the slowest nodes if the consistency level allows. NB writes go to all available nodes, but you don't have to wait for them to all respond if your consistency level allows. There may be some client support for what you want in a python client - Astyanax in Java uses something very like the dynamic snitch in the client to avoid sending requests to slow nodes.
0
968
0
3
2013-07-01T23:04:00.000
python,cassandra,cql
How to prevent traffic to/from a slow Cassandra node using Python
1
1
1
17,419,541
0
0
0
I'm creating a desktop application that is interfaced with using a mobile app or mobile communications (Twitter, SMS). I already have the mechanisms in place to share media (YouTube, Instagram) with the desktop app from a mobile device. But I would like to add a WebSocket chatbox to the desktop interface, so that users can add messages using a webview or WebSocket client within the mobile app. BUT how do I combine WebSockets with PyQt? I've found very few examples online... just looking for some insight on this problem.
true
17,416,893
1.2
0
0
1
A way to do this is to use a QWebView, insert that into your App and then load a HTML5 page in the WebView and use that to communicate with the server. This way you can probably even reuse the code for the mobile client as the code for the desktop chat interface.
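A minimal sketch of that embedding with PyQt4's QtWebKit (the chat-page URL is a placeholder):

```python
import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)
view = QWebView()
# The HTML5 page holds the WebSocket client code shared with mobile.
view.load(QUrl('http://localhost:8000/chat.html'))
view.show()
app.exec_()
```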
0
1,153
0
0
2013-07-02T03:20:00.000
python,user-interface,websocket,pyqt
PyQt and WebSockets
1
1
1
17,417,077
1
1
0
Hi, I have been working on OpenERP 7 (Windows 7) custom module creation. I have been loading the OpenERP server through localhost:8069, but today the application failed to start and it's generating the error "Oops! Google Chrome could not connect to localhost:8069". What should I do now to fix this issue? Please help; hoping for suggestions.
false
17,419,724
0
0
0
0
Verify that the OpenERP service is running on your computer. You can verify this by clicking on the Taskbar -> Task Manager -> Services. Look for the OpenERP service and start it if it is not running; a problem might have made it fail to start. There might be errors with your custom module. I tell you, developing custom modules on Windows is more tedious than on Linux, because on Linux you can run the server in terminal mode and view output logged directly to the console.
0
1,240
0
0
2013-07-02T07:21:00.000
python,eclipse,openerp
openerp not loading on localhost
1
1
1
17,486,874
0
0
0
I want to "establish" a TCP connection to a server with a fake sender IP address by using Python 2.6 on Windows 7. Is this possible without a proxy/Tor? I understand that I won't get a response, I'm only interested in sending an HTTP GET request to mimic a DDOS attack on my web server.
false
17,434,390
0
0
0
0
You can send TCP SYN packets to the server to initiate a handshake and start a connection, but as you are using fake IPs, the SYN-ACK packets sent from the server will go somewhere else, so you won't be able to acknowledge the server and actually establish the connection. You'd better read more about the TCP handshake and SYN cookies.
0
4,544
0
1
2013-07-02T19:40:00.000
python,tcp-ip
TCP connection with a fake source IP address
1
1
2
47,622,960
0