Column                               Type           Min .. Max
Web Development                      int64          0 .. 1
Data Science and Machine Learning    int64          0 .. 1
Question                             stringlengths  28 .. 6.1k
is_accepted                          bool           2 classes
Q_Id                                 int64          337 .. 51.9M
Score                                float64        -1 .. 1.2
Other                                int64          0 .. 1
Database and SQL                     int64          0 .. 1
Users Score                          int64          -8 .. 412
Answer                               stringlengths  14 .. 7k
Python Basics and Environment        int64          0 .. 1
ViewCount                            int64          13 .. 1.34M
System Administration and DevOps     int64          0 .. 1
Q_Score                              int64          0 .. 1.53k
CreationDate                         stringlengths  23 .. 23
Tags                                 stringlengths  6 .. 90
Title                                stringlengths  15 .. 149
Networking and APIs                  int64          1 .. 1
Available Count                      int64          1 .. 12
AnswerCount                          int64          1 .. 28
A_Id                                 int64          635 .. 72.5M
GUI and Desktop Applications         int64          0 .. 1
0
0
I started writing some Python code that adds events to a spreadsheet in Google Sheets using gdata. Everything went well and it works, but now I want to add these same events to a calendar, and I can't figure out what I'm supposed to use. I see that the Google Calendar API is at v3 and they suggest I install google-api-python-client. gdata seems to be almost abandoned, and I feel lost in the middle. I would just like to be able to use one Python API to add data to both a calendar and a spreadsheet, keeping things simple if possible, as I'm really not very good at this yet. Any help would be greatly appreciated. Thanks
true
28,517,789
1.2
0
0
0
You'll need to continue to use gdata for Sheets until a new version that supports the Google API is released.
0
303
0
2
2015-02-14T16:43:00.000
python,gdata,google-api-python-client
Python API for Google Calendar and Spreadsheets
1
1
1
28,519,427
0
0
0
I came across the term "opinionated API" when reading about the ssl.create_default_context() function introduced in Python 3.4. What does it mean? What is the style of such an API? Why do we call it an "opinionated" one? Thanks a lot.
true
28,535,671
1.2
0
0
17
It means that the creator of the API makes some choices for you that are, in their opinion, the best. For example, a web application framework could choose to work best with (or even bundle, or work exclusively with) a selection of lower-level libraries (for things like logging, database access, session management) instead of letting you choose, and then have to configure, your own.

In the case of ssl.create_default_context(), some security experts have thought about reasonably secure defaults for configuring SSL connections. In particular, it limits the available algorithms to those that are still considered secure, at the expense of complete compatibility with legacy systems, a trade-off that is beneficial in their (and my) opinion. Essentially they are saying: "we have a lot of experience in this domain, and we really think you should do things in the following way."

I suppose this is a response to "enterprise" APIs that claim to work with every implementation of as many standard interfaces as possible (at the expense of complexity in configuration and combination, requiring costly consultants to set everything up). Or a natural extension of "convention over configuration": things should work very well out of the box, so that you only have to twiddle with expert settings in special cases (by which point you should know what you are doing), as opposed to even a beginner having to make informed decisions about every aspect of the application (which can end in disaster).
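For concreteness, here is what those opinionated defaults look like in the standard library; nothing below is specific to any application, it is just the stdlib call itself:

```python
import ssl

# The "opinionated" defaults chosen for you by create_default_context():
ctx = ssl.create_default_context()

# Certificate verification and hostname checking are on by default,
# and insecure protocol versions/ciphers are already disabled.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

Compare this with constructing an SSLContext by hand, where every one of those decisions is yours to get wrong.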
1
2,047
0
8
2015-02-16T06:27:00.000
api,python-3.x
What does "Opinionated API" mean?
1
1
1
28,535,731
0
0
0
Which is the best method for sending data using sockets? Method 1: creating a new socket every time data needs to be sent, and closing it when the transfer is complete. Method 2: using the same socket instead of creating a new one, maintaining the connection even while waiting for new data.
false
28,541,889
0.197375
0
0
2
That depends. Creating a new socket means the two computers have to discover each other, so there is name lookup, TCP/IP routing and resource allocation involved. Not really cheap, but not that expensive either; unless you send data more than 10 times per second, you won't notice.

If you keep a socket open and don't send data for some time, a firewall between the two computers will eventually decide that the connection is stale and forget about it. The next data packet you send will fail with a timeout. So the main difference between the two methods is whether you can handle the timeout case properly in your code, and you will have to handle it every time you write data to the socket. In many cases, the writing code is hidden very deep somewhere and doesn't actually know that it's writing to a socket; plus you will have more than one place where you write data, so the error handling will leak into your design. That's why most people prefer to create a new socket every time, even if it's somewhat expensive.
0
2,330
0
4
2015-02-16T12:55:00.000
python,sockets,networking
Which is a larger overhead: Creating a new socket each time or maintaining a single socket for data transfer
1
2
2
28,542,041
0
0
0
Which is the best method for sending data using sockets? Method 1: creating a new socket every time data needs to be sent, and closing it when the transfer is complete. Method 2: using the same socket instead of creating a new one, maintaining the connection even while waiting for new data.
true
28,541,889
1.2
0
0
5
It depends on the kind of socket, but in the usual cases it is better to keep the socket unless you have very limited resources.

UDP is connectionless: you create the socket and there is no delay from connection setup when sending a packet. But there are still system calls, memory allocation etc. involved, so it is cheap but not free.

TCP instead needs to establish the connection before you can even start sending data. How fast that is depends on the latency: fast on the local machine, slower in the local network, and even slower on the Internet. Connections also start out slow because the available bandwidth is not yet known. With SSL/TLS on top of TCP, connection setup is even more expensive because it needs more round trips between client and server.

In summary: if you are using TCP, you are almost always better off keeping the socket open and closing it only if you lack the resources to keep it open. A good compromise is to keep the socket open as long as there is enough activity on it, and close it after some period of inactivity. This is the approach usually taken with HTTP persistent connections.
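A tiny sketch of the keep-the-socket-open approach, using socket.socketpair() as a local stand-in for a real client/server TCP connection (so it runs without any network):

```python
import socket

# A connected pair stands in for a client/server TCP connection:
# the setup cost is paid once, then the socket is reused for many sends.
client, server = socket.socketpair()

for payload in (b"tick", b"tock"):
    client.sendall(payload)
client.close()  # close only when done; this signals EOF to the peer

received = b""
while chunk := server.recv(1024):  # read until EOF
    received += chunk
server.close()
print(received)  # b"ticktock"
```

Note that TCP is a byte stream, not a message stream: the two sends arrive as one run of bytes, so a real protocol needs its own framing.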
0
2,330
0
4
2015-02-16T12:55:00.000
python,sockets,networking
Which is a larger overhead: Creating a new socket each time or maintaining a single socket for data transfer
1
2
2
28,543,387
0
0
0
I am exploring Python socket support with Linux network namespaces. I see there is pyroute2, which handles only network-namespace (netns) creation etc. but does not seem to have any APIs for socket I/O (say, UDP). And the Python socket library also does not seem to have any methods for picking a specific network namespace. Am I missing something, or is it not yet implemented?
true
28,556,452
1.2
0
0
1
Recently a setns() call was introduced in pyroute2, which allows you to set the network namespace for the current process. You can then spawn processes with multiprocessing, set the namespace for each, and use multiprocessing.Pipe to communicate between the spawned processes. If anything else is still missing, you're welcome to file an issue at GitHub; we'll try to fix it asap.
0
291
0
0
2015-02-17T07:05:00.000
python,sockets,network-programming
socket IO between netns
1
1
1
28,564,544
0
0
0
I want to verify the stickiness policy of a load balancer. Hence I want to verify that when I make two or three subsequent HTTP requests (wget), the request is served by the same server every time.
false
28,559,048
0
0
0
0
Send a unique header from each backend server, e.g. "X-Node-ID: 1" for server #1.
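For illustration, a self-contained sketch of checking such a header from Python. A throwaway stdlib HTTP server stands in for one backend node; the header name follows the suggestion above and is otherwise arbitrary:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BackendHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("X-Node-ID", "1")  # identifies this backend node
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), BackendHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
node = urllib.request.urlopen(url).headers["X-Node-ID"]
server.shutdown()
print(node)  # "1"
```

Against a real load balancer you would point the URL at the balancer and record which X-Node-ID comes back on each request.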
0
178
0
0
2015-02-17T09:50:00.000
python,linux,wget
How to check if the wget command request was served by a particular server when load balancer is in place?
1
1
1
28,559,080
0
0
0
I am using pythonanywhere.com and trying to run an app that I made for Twitter that uses tweepy, but it keeps saying "connection refused" or "failed to send request". Is there any way to easily run a Python app online that sends requests?
false
28,600,076
0
1
0
0
PythonAnywhere requires a premium account for unrestricted web access. You can create a limited free account with one web app at your-username.pythonanywhere.com, restricted Internet access from your apps, and low CPU/bandwidth. It works and it's a great way to get started!

A premium account ($5/month) gets you:
- Run your Python code in the cloud from one web app and the console
- A Python IDE in your browser with unlimited Python/bash consoles
- One web app with free SSL at your-username.pythonanywhere.com
- Enough power to run a typical 50,000 hit/day website
- 3,000 CPU-seconds per day
- 512MB disk space

That said, I'd just set it up locally if it's for personal use and go from there.
0
214
0
0
2015-02-19T06:38:00.000
python,tweepy
Trying to use python on a server
1
2
2
28,600,163
0
0
0
I am using pythonanywhere.com and trying to run an app that I made for Twitter that uses tweepy, but it keeps saying "connection refused" or "failed to send request". Is there any way to easily run a Python app online that sends requests?
false
28,600,076
0
1
0
0
You will need a server that has a public IP. You have a few options here:
- Use a platform-as-a-service provider like Heroku or AWS Elastic Beanstalk.
- Get a server online on AWS, install your dependencies and use it instead.
As long as you keep your usage low, you can stay within the free quotas for these services.
0
214
0
0
2015-02-19T06:38:00.000
python,tweepy
Trying to use python on a server
1
2
2
28,600,193
0
0
0
I'm trying to get the hostmask for a user, to allow some authentication in my IRCClient bot. However, it seems to be removed from all responses? I've tried 'whois', but it only gives me the username and the channels the user is in, not the hostmask. Any hint on how to do this?
false
28,606,809
0.197375
1
0
1
Found it, when I override RPL_WHOISUSER, I can get the information after issuing an IRCClient.whois. (And yes, did search for it before I posted my question, but had an epiphany right after I posted my question...)
0
37
0
1
2015-02-19T12:46:00.000
python,twisted,irc,twisted.internet,twisted.words
How to get a user's hostmask with Twisted IRCClient
1
1
1
28,607,141
0
0
0
I need to constantly access a server to get real-time data on financial instruments. The price is constantly changing, so I need to request new prices every 0.5 seconds. The brokers' REST APIs let me do this; however, I have noticed there is quite some delay when connecting to the server. I just noticed that they also have a websocket API. According to what I have read, they both have pros and cons, but for what I want to do, and because speed is especially important here, which kind of API would you recommend? Are websockets really faster? Thank you!
true
28,613,399
1.2
0
0
84
The most efficient approach for what you're describing is a webSocket connection between client and server, with the server sending updated price information directly to the client ONLY when the price changes by some meaningful amount, or when some minimum amount of time has elapsed and the price has changed. This can be much more efficient than having the client constantly ask for new prices, and the new information reaches the client sooner.

So, if you're interested in how quickly a new price level gets to the client, a webSocket can deliver it much more quickly, because the server can send the new pricing information the very moment it changes on the server. With a REST call, the client has to poll on some fixed time interval and will only ever get new data at its polling interval.

A webSocket can also be faster and easier on your networking infrastructure simply because fewer network operations are involved: sending a packet over an already-open webSocket connection, versus creating a new connection for each REST/Ajax call, sending the data, then closing the connection. How much of a difference this makes in your particular application is something you'd have to measure to really know. But webSockets were designed for exactly your scenario, where a client wants to know (as close to real-time as practical) when something changes on the server, so I would definitely consider them the preferred design pattern for this use.

Here's a comparison of the networking operations involved in sending a price change over an already-open webSocket vs. making a REST call.

webSocket:
1. Server sees that a price has changed and immediately sends a message to each client.
2. Client receives the message about the new price.

REST/Ajax:
1. Client sets up a polling interval.
2. On the next polling-interval trigger, the client creates a socket connection to the server.
3. Server receives the request to open a new socket.
4. Once the connection is made, the client sends a request for new pricing info.
5. Server receives the request and sends a reply with new data (if any).
6. Client receives the new pricing data.
7. Client closes the socket.
8. Server receives the socket close.

As you can see, a lot more is going on in the REST/Ajax call from a networking point of view, because a new connection has to be established for every call, whereas the webSocket uses an already-open connection. In addition, in the webSocket case the server just sends the client new data when new data is available; the client doesn't have to regularly request it. If the pricing information doesn't change very often, the REST/Ajax scenario will also frequently make "do-nothing" calls, where the client requests an update but there is no new data. The webSocket case never has that waste, since the server only sends new data when it is available.
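The push-only-on-change behaviour can be sketched in-process, with an asyncio.Queue standing in for the open webSocket and made-up prices standing in for the feed:

```python
import asyncio

async def feed(queue, prices):
    """Server side: push a message only when the price actually changes."""
    last = None
    for price in prices:
        if price != last:          # unchanged ticks produce no traffic
            await queue.put(price)
            last = price
    await queue.put(None)          # sentinel: feed finished

async def client(queue):
    """Client side: just react to pushed updates; no polling loop."""
    updates = []
    while (price := await queue.get()) is not None:
        updates.append(price)
    return updates

async def main():
    q = asyncio.Queue()
    results = await asyncio.gather(feed(q, [100.0, 100.0, 101.5, 101.5, 99.0]),
                                   client(q))
    return results[1]

updates = asyncio.run(main())
print(updates)  # [100.0, 101.5, 99.0]
```

Five ticks produce only three messages; a REST poller would have made five round trips regardless.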
0
21,513
0
40
2015-02-19T17:51:00.000
python,rest,websocket,httprequest
websocket vs rest API for real time data?
1
1
1
28,618,369
0
0
1
I have a CSV file in S3 and I'm trying to read the header line to get the size (these files are created by our users, so they could be almost any size). Is there a way to do this using boto? I thought maybe I could use a Python BufferedReader, but I can't figure out how to open a stream from an S3 key. Any suggestions would be great. Thanks!
false
28,618,468
1
0
0
20
I know it's a very old question. But as for now, we can just use s3_conn.get_object(Bucket=bucket, Key=key)['Body'].iter_lines()
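A hedged sketch of that approach: the helper below just parses the first byte line and is demonstrated on an in-memory stand-in; the boto3 call in the comment is the assumed way to get the streaming body (bucket and key names are hypothetical):

```python
import io

def header_columns(line_iter, encoding="utf-8"):
    """Return the CSV column names from the first line of an iterable of byte lines."""
    first_line = next(iter(line_iter))
    return first_line.decode(encoding).rstrip("\r\n").split(",")

# Demonstrated on an in-memory stand-in for the streaming body:
cols = header_columns(io.BytesIO(b"name,size,owner\nreport.csv,1024,alice\n"))
print(cols)  # ['name', 'size', 'owner']

# With boto3, the same helper reads just the header without downloading
# the whole object:
#   body = s3_conn.get_object(Bucket="my-bucket", Key="uploads/data.csv")["Body"]
#   cols = header_columns(body.iter_lines())
```

Because iter_lines() is lazy, only the first chunk of the object is actually fetched before you stop iterating.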
0
83,468
0
35
2015-02-19T22:42:00.000
python,amazon-web-services,amazon-s3,boto
Read a file line by line from S3 using boto?
1
1
10
58,636,713
0
1
0
I'm planning to deliver an API as a web-service. When I update my API's (interpreted) code-base, however, I anticipate possibly needing to restart the API service or even just have a period where the code is being overwritten. This introduces the possibility that incoming API requests may be dropped or, even worse, that processes triggered by the API may be interrupted. The flask library for python appears to offer something of a solution; by enabling debug mode it will check the modified flag on all python files and, for each request, it will reload any modules that have changed. It's not the performance penalty that puts me off this approach - it's the idea that it looks slightly jilted. Surely there is an elegant, high-availability approach to what must be a common issue? Edit: As @piotrek answered below, "there is no silver bullet". One briefly visible comment suggested using DNS to switch to a new API server after an update.
true
28,625,474
1.2
0
0
1
Actually, there is no silver bullet. You mention two different things: one is availability, which depends on how many nines you want in your 99.99...% availability; the second is API change.

Availability:
- Some technologies allow hot changes/deployments: pending requests go down the old path, new requests go down the new path. If your technology doesn't support this, or you can't use it for other reasons, there are other options.
- In small-scale intranet applications you may simply not care: you stop the world (stop the application, upload the new version, start it again). On application stop, many web frameworks stop accepting new connections and wait until all pending requests have finished. If yours doesn't support that, you have two options: ignore it (the DB will roll back the current transaction and the user will get an error), or implement it yourself (which may be challenging). Either way, you do your best to shorten the inactivity period.
- If you can't afford to stop everything, you can use clustering: restart services one by one, so that some server is available at all times. That's not always possible, because sometimes you have to change your database, which you can't always do on a live system, and you can't afford to lose any data if the update fails.
- Microservices: if you split your application into many independent components connected by persistent queues, you can turn off only some parts of the system (graceful degradation). For example, you can disable the component that writes changes to the database but still allow reads. If you have the infrastructure to do this quickly, the update may go unnoticed: requests are put into queues and picked up by the new version.

API change:
- You version your API; each request says which version it requires.
- If you control all your clients, are at a small scale, or decide not to support old versions, you don't care: everyone has to update their client.
- If not, then again microservices may help: you split your public API from your internal API, keep all public API versions running, and announce that some of them are deprecated. You monitor their usage, and when usage of some version is low enough, you announce end-of-life and later shut that version down.

That's the best I've got for the moment.
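The API-versioning point can be sketched as a version-keyed dispatch table; all names here are illustrative, not from any real framework:

```python
def get_user_v1(user_id):
    return {"id": user_id}                      # old response shape

def get_user_v2(user_id):
    return {"id": user_id, "active": True}      # new shape; v1 still served

HANDLERS = {"v1": get_user_v1, "v2": get_user_v2}

def dispatch(version, user_id):
    """Route a request to the handler for the version it asked for."""
    try:
        handler = HANDLERS[version]
    except KeyError:
        raise ValueError(f"unsupported API version: {version}") from None
    return handler(user_id)

old = dispatch("v1", 7)   # clients pinned to v1 keep working
new = dispatch("v2", 7)
```

Retiring a version is then just deleting its entry from the table once monitoring shows its usage has dropped.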
0
33
0
0
2015-02-20T09:23:00.000
python,api,high-availability
How do I design an API such that code updates don't cause interruptions?
1
1
1
28,629,932
0
0
0
I have a Python script that uploads videos to my YouTube channel. Now I have more than one YouTube channel under my account and want to upload to the second channel once I'm done with the first. But OAuth 2 at no point asks me which account I want to upload to, and keeps uploading to the first channel that I authorised. How should I fix this ? Thanks :)
true
28,653,507
1.2
0
0
0
You will need an access token for each account. There are various ways you can get them, for example request offline access for each account and store the two refresh tokens. There are other ways too. You'll need to detail the behaviour you want to achieve.
0
170
0
0
2015-02-22T01:36:00.000
python,youtube,oauth-2.0,youtube-api,google-oauth
How to Google OAuth2 to upload videos to 2 different YouTube account with a Python script
1
1
1
28,657,784
0
0
0
I have a server on which I want to build a script to log in to a page that uses JavaScript, and I want to use Python Selenium to achieve this. We have a shared drive which contains all the installed binaries, and those have to be included. So when running a Python program I won't be using #!/usr/bin/python but efs/path../python, and similarly all the packages are included this way: sys.path.append("/efs/path.../selenium-egg-info"). This works fine, but since Selenium needs Firefox included, I can see mozilla in the path; where are its binaries, i.e. exactly which folder inside mozilla do I include?
true
28,661,207
1.2
0
0
0
You can think of Selenium as launching Firefox behind the scenes. You won't see it, but it's there, opening up the webpage and manipulating things; that's how it does all that cool stuff without your writing explicit URL headers etc. For that you need Firefox installed, along with a physical display (monitor). You can fake a physical terminal (it's just input/output), but AFAIK you still need Firefox installed. Sad news, but that's the way it is.
1
100
0
1
2015-02-22T17:56:00.000
python,firefox,selenium,mozilla
Selenium include mozilla instance
1
1
2
32,930,931
0
1
0
I need to get data (JSON) into my HTML page with the help of Ajax. I have a Node.js server serving requests, and the JSON output is produced by Python code. So:
- Should I save the JSON in a DB and access it there? (Seems complicated for one single use.)
- Should I run a Python server to serve the requests with the JSON result (calling it directly from the HTML via Ajax)?
- Should I serve requests with Node.js alone, calling the Python method from Node.js? If so, how do I call the Python method? If calling Python requires running a server, which one is preferred (zerorpc, or some kind of web framework)?
Which is the best solution, or which is preferred over the others in what scenario and by what factors?
false
28,679,250
0
0
0
0
In your application, if you have a requirement to process the results of Python server requests in your Node.js application, then call the Python server from the Node.js app with the request library and process the result there. Otherwise, simply call the Python server's resources through client-side Ajax requests. Thanks
0
932
0
1
2015-02-23T17:06:00.000
python,ajax,json,node.js
Nodejs Server, get JSON data from Python in html client with Ajax
1
1
2
28,687,176
0
1
0
I'm trying to use BrowserMob to proxy pages with Selenium WebDriver. When the initial page request is made, many elements of the page fail to load (e.g., css, jquery includes). If I manually refresh the page everything loads as expected. Has anyone else seen this behavior? Is there a solution? Thanks!
false
28,704,465
0.197375
0
0
1
When you configure the WebDriver in your test code, set the proxy address not as localhost:8080 but as 127.0.0.1:8080. I think that Firefox has some problems resolving the proxy localhost:8080 that it does not have with the explicit form 127.0.0.1:8080.
0
90
0
1
2015-02-24T19:24:00.000
python-2.7,selenium-webdriver,browsermob
BrowserMob only partially loading page on initial load; fine afterwards
1
1
1
29,100,263
0
1
0
The boto config has a num_retries parameter for uploads. num_retries The number of times to retry failed requests to an AWS server. If boto receives an error from AWS, it will attempt to recover and retry the request. The default number of retries is 5 but you can change the default with this option. My understanding is that this parameter governs how many times to retry on commands like set_content_from_string. According to the documentation, the same command will fail if the md5 checksum does not match upon upload. My question is, will boto also retry upon checksum failure, or does num_retry apply to a separate class of failures?
true
28,732,751
1.2
0
0
1
When boto uploads a file to S3 it calculates the MD5 checksum locally, sends that checksum to S3 as the Content-MD5 header and then checks the value of the ETag header returned by the S3 service against the previously computed MD5 checksum. If the ETag header does not match the MD5 it raises an S3DataError exception. This exception is a subclass of ClientError and client errors are not retried by boto. It is also possible for the S3 service to return a BadDigest error if the Content-MD5 header we provide does not match the MD5 checksum computed by the service. This is a 400 response from S3 and is also considered a client error and would not be retried.
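For reference, the Content-MD5 value described above is the base64-encoded binary MD5 digest (not the hex digest). A minimal stand-alone computation:

```python
import base64
import hashlib

def content_md5(data: bytes) -> str:
    """Base64 of the binary MD5 digest, as used in the Content-MD5 header."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

print(content_md5(b"hello"))  # XUFAKrxLKna5cZ2REBfFkg==
```

S3's ETag for a simple (non-multipart) upload is the hex form of the same digest, which is what boto compares against.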
0
329
0
1
2015-02-26T01:06:00.000
python,amazon-s3,boto
Does Boto retry on failed md5 checks?
1
1
1
28,745,850
0
0
0
I'd like to give to each of my customers access to their own bucket under my GCS enabled app. I also need to make sure that a user's bucket is safe from other users' actions. Last but not least, the customer will be a client application, so the whole process needs to be done transparently without asking the user to login. If I apply an ACL on each bucket, granting access only to the user I want, can I create an API key only for that bucket and hand that API key to the client app to perform GCS API calls?
false
28,753,061
0.197375
0
0
1
Unfortunately you only have two good options here:
- Have a service which authenticates the individual app according to whatever scheme you like (an installation license, a random GUID assigned at creation time, whatever) and vends GCS signed URLs, which the end user can then use for a single operation, like uploading an object or listing a bucket's contents. The downside is that all requests must involve your service, and all resources would belong entirely to your application.
- Abandon the "without asking the user to login" requirement and require a single Google login at install time.
0
56
1
1
2015-02-26T21:26:00.000
google-app-engine,google-cloud-storage,google-app-engine-python
Can I have GCS private isolated buckets with a unique api key per bucket?
1
1
1
28,754,390
0
0
0
I am planning to build an API automation framework on top of Python + the Requests library. Expected flow: 1) read the request specification from an input file (csv/xml); 2) make the API request, get the response, and analyse it; 3) store the test results; 4) communicate them. An initial smoke test is to be performed with basic cases, then the detailed ones. There will be n APIs, each with its respective cases.
false
28,765,243
0.379949
1
0
2
I have built an API automation framework using Java + TestNG + HTTP Client. It's a hybrid framework consisting of: a data-driven model (reads data from JSON/XML files); a method-driven part (I have written POJOs for reading and writing JSON objects and arrays); reporting (TestNG's customized report format); and dependency management (Maven). I have integrated this framework with Jenkins for continuous integration, and use Git for SCM.
0
1,451
0
3
2015-02-27T12:33:00.000
python,api,automation,web-api-testing
API Test Automation Framework Structure
1
1
1
38,866,398
0
1
0
I want to fetch a few data values from a website. I have used BeautifulSoup for this, but the fields are blank when I try to fetch them from my Python script, whereas when I inspect the elements of the webpage I can clearly see the values in the table row data. When I viewed the HTML source I noticed it is blank there too. The reason I came up with is that the website is using JavaScript to populate the values in their corresponding fields from its own database. If so, then how can I fetch them using Python?
false
28,765,398
0.099668
0
0
1
The Python binding for Selenium and phantomjs (if you want to use a headless browser as backend) are the appropriate tools for this job.
0
272
0
0
2015-02-27T12:42:00.000
javascript,python,html,web-scraping,beautifulsoup
How to fetch data from a website using Python that is being populated by Javascript?
1
2
2
28,852,463
0
1
0
I want to fetch a few data values from a website. I have used BeautifulSoup for this, but the fields are blank when I try to fetch them from my Python script, whereas when I inspect the elements of the webpage I can clearly see the values in the table row data. When I viewed the HTML source I noticed it is blank there too. The reason I came up with is that the website is using JavaScript to populate the values in their corresponding fields from its own database. If so, then how can I fetch them using Python?
false
28,765,398
0
0
0
0
Yes, you can scrape JS-populated data; it just takes a bit more hacking. Anything a browser can do, Python can do. If you're using Firebug, look at the Net tab to see which particular request your data comes from. In Chrome's element inspector you can find this information in a tab named Network, too. Just hit Ctrl-F to search the response content of the requests. Once you've found the right request, the data might be embedded in JS code, in which case you'll have some regex parsing to do. If you're lucky, the format is XML or JSON, in which case you can just use the associated built-in parser.
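A sketch of the lucky case: once you've found the XHR request in the network tab, fetch its URL directly and parse the JSON. The URL in the comment is hypothetical, and a stand-in payload is used here so the snippet is self-contained:

```python
import json

# In practice you copy the endpoint from the request found in the browser's
# network tab (hypothetical URL), e.g.:
#   raw = urllib.request.urlopen("https://example.com/api/table-data").read()
raw = b'{"rows": [{"name": "widget", "price": 9.99}]}'  # stand-in payload

data = json.loads(raw)
prices = [row["price"] for row in data["rows"]]
print(prices)  # [9.99]
```

This skips the browser entirely, which is far lighter than driving Selenium when the site exposes a clean JSON endpoint.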
0
272
0
0
2015-02-27T12:42:00.000
javascript,python,html,web-scraping,beautifulsoup
How to fetch data from a website using Python that is being populated by Javascript?
1
2
2
28,850,142
0
0
0
What is the best way to create a menuitem (for the Gtk.MenuBar) that should open the default browser with a new tab and loading an URL? Is it possible to do that in Glade directly or do I need to create that function in the program code itself? Is there a preferred way in Python 3 to do that?
true
28,775,534
1.2
0
0
0
After a lot of searching for a Glade-only solution, I think Gtk.MenuItem doesn't have a URL-open option. I now just defined an on_menuitem_clicked function that uses webbrowser.open_new_tab() from the standard library.
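A minimal sketch of such a handler; the URL and handler name are placeholders, and wiring it to the menu item's activate signal in Glade (via the signal handler name) is left out:

```python
import webbrowser

PROJECT_URL = "https://example.com"  # hypothetical target URL

def on_menuitem_clicked(menuitem=None):
    """Connected to the Gtk.MenuItem's 'activate' signal in Glade."""
    webbrowser.open_new_tab(PROJECT_URL)
```

webbrowser picks the user's default browser, so no browser-specific code is needed.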
0
178
0
0
2015-02-27T22:37:00.000
python-3.x,gtk,glade
Link to website in Gtk.MenuBar using Glade
1
1
1
28,878,731
0
0
0
I have 46000 xml files that all share a common structure but there are variations and the number of files makes it impossible to figure out a common xml schema for all of them. Is there a tool that may help me to extract a schema from all these files or at least something that may give me a close enough idea of what is mandatory and what is optional? Preferably, of course, a schema according to some standard or a DTD. And, as I work entirely in Linux, a Linux tool or a program that works in Linux is OK. I am quite fluent in C, Java, Javascript, Groovy, Python (2.7 and 3) and a number of other languages.
false
28,794,872
0
0
0
0
Looking deeper into the problem and after scanning other posts I found out that inst2xsd is the tool for this kind of tasks.
0
76
0
0
2015-03-01T14:49:00.000
java,python,xml,linux,groovy
extract an XML schema (or equivalent) from a large set of xml files
1
1
1
28,805,524
0
1
0
I am trying to do some web scraping, but I have problems joining relative and root URLs. For example, the root URL is http://www.jmlr.org/proceedings/papers/v2 and the relative URL is ../v2/meila07a/meila07a.pdf. When I use urljoin from urlparse, the result is odd: http://www.jmlr.org/proceedings/v2/meila07a/meila07a.pdf, which is not a valid link. Can anybody help me with that?
false
28,818,559
0
0
0
0
Two dots (..) mean going up one level in the hierarchy. Change the relative link to ./v2/meila07a/meila07a.pdf and it should work fine. Or change the root one to http://www.jmlr.org/proceedings/papers/v2/ (with a trailing slash); with this change it will no longer discard v2 at the end, because the root now refers to a proper directory.
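Both fixes can be checked directly with urljoin (urlparse.urljoin in Python 2, urllib.parse.urljoin in Python 3); the results follow the standard relative-reference resolution rules:

```python
from urllib.parse import urljoin

base = "http://www.jmlr.org/proceedings/papers/v2"

# Without a trailing slash, "v2" is treated as a file and dropped first,
# so ".." climbs one level too high for this layout:
bad = urljoin(base, "../v2/meila07a/meila07a.pdf")
# -> 'http://www.jmlr.org/proceedings/v2/meila07a/meila07a.pdf'

# Fix 1: use "./" so only the dropped "v2" segment is re-added:
fix1 = urljoin(base, "./v2/meila07a/meila07a.pdf")

# Fix 2: end the base with "/" so "v2" counts as a directory:
fix2 = urljoin(base + "/", "../v2/meila07a/meila07a.pdf")
# both -> 'http://www.jmlr.org/proceedings/papers/v2/meila07a/meila07a.pdf'
```

The rule of thumb: urljoin always discards everything after the last "/" in the base before resolving the relative part.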
0
399
0
0
2015-03-02T20:06:00.000
python,urllib,urlparse
joining urls with urljoin in python
1
1
1
30,655,631
0
0
0
I am running some application in multiple network namespace. And I need to create socket connection to the loopback address + a specific port in each of the name space. Note that the "specific port" is the same across all network namespaces. Is there a way I can create a socket connection like this in python? Appreciate any pointer!
false
28,846,059
0
0
0
0
I just came across this post while looking into network namespaces and using Python to interact with them. Regarding your question about non-root users running setns() or similar functions, I believe that is achievable. In a small script that creates the red and blue namespaces mentioned in this post, you could also set a Linux capability inside the new namespace that would allow non-root users to attach and bind. Directly from the man page, we see this description: "call setns(2) (requires CAP_SYS_ADMIN in the target namespace)". Capabilities can be added to binaries such as the python2.7 executable, or to systemd processes. For example, if you look at the default openvpn-server service file on CentOS 7 or RHEL 7, you can see capabilities added so that it can run without root privileges: CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN..... I know this isn't an answer to the original question, but I do not have enough reputation to reply to comments at the moment. I would suggest looking at capabilities and all of the options they provide if you are security conscious and looking to run code as non-root users.
0
6,844
0
8
2015-03-04T03:12:00.000
python,sockets,namespaces
Can I open sockets in multiple network namespaces from my Python code?
1
1
2
67,091,745
0
1
0
When querying just for tasks that are marked for today in python: client.tasks.find_all({ 'assignee_status':'upcoming','workspace': 000000000,'assignee':'me' ,'completed_since':'now'}, page_size=100) I get a response of all tasks, the same as if I had not included assignee_status: client.tasks.find_all({'workspace': 000000000,'assignee':'me' ,'completed_since':'now'}, page_size=100) The workspace has around 5 tasks that are marked for today. Thank you, Greg
false
28,947,894
0.197375
0
0
1
You actually can't filter by assignee_status at all - if you pass the parameter it is silently ignored. We could change it so that unrecognized parameters result in errors, which would help make this clearer.
0
269
0
2
2015-03-09T17:11:00.000
python,asana
Asana API querying by assignee_status
1
1
1
28,948,207
0
1
0
I have a situation where a msg fails and I would like to replay that msg with the highest priority using the python boto package so it will be taken first. If I'm not wrong, SQS queues do not support priority, so I would like to implement something simple. Important note: when a msg fails I no longer have the message object, I only persist the receipt_handle so I can delete the message (if there were more than x retries) / change the timeout visibility in order to push it back to the queue. Thanks!
false
28,970,289
0.099668
0
0
2
By "when a msg fails", if you mean "processing failure", then you could look into the Dead Letter Queue (DLQ) feature that comes with SQS. You can set the receive count threshold to move the failed messages to the DLQ. Each DLQ is associated with an SQS queue. In your case, you could make "max receive count" = 1 and deal with that message separately.
0
27,088
0
14
2015-03-10T17:29:00.000
python,boto,priority-queue,amazon-sqs
How to implement a priority queue using SQS(Amazon simple queue service)
1
2
4
57,454,595
0
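A rough sketch of the DLQ setup this answer describes. The queue ARN is hypothetical, and since the question mentions boto, the actual AWS call (shown here in boto3 style) is left as a comment; what matters is the shape of the RedrivePolicy attribute:

```python
import json

# Hypothetical ARN for the dead-letter queue -- substitute your own.
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:my-queue-dlq"

# With maxReceiveCount set to 1, a message whose processing fails once
# is moved to the DLQ instead of reappearing on the main queue.
redrive_policy = json.dumps({
    "deadLetterTargetArn": dlq_arn,
    "maxReceiveCount": "1",
})

# With boto3 this attribute would be applied to the main queue roughly like:
#   sqs = boto3.client("sqs")
#   sqs.set_queue_attributes(QueueUrl=main_queue_url,
#                            Attributes={"RedrivePolicy": redrive_policy})
```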
1
0
I have a situation where a msg fails and I would like to replay that msg with the highest priority using the python boto package so it will be taken first. If I'm not wrong, SQS queues do not support priority, so I would like to implement something simple. Important note: when a msg fails I no longer have the message object, I only persist the receipt_handle so I can delete the message (if there were more than x retries) / change the timeout visibility in order to push it back to the queue. Thanks!
true
28,970,289
1.2
0
0
20
I don't think there is any way to do this with a single SQS queue. You have no control over delivery of messages and, therefore, no way to impose a priority on messages. If you find a way, I would love to hear about it. I think you could possibly use two queues (or more generally N queues where N is the number of levels of priority) but even this seems impossible if you don't actually have the message object at the time you determine that it has failed. You would need the message object so that the data could be written to the high-priority queue. I'm not sure this actually qualifies as an answer 8^)
0
27,088
0
14
2015-03-10T17:29:00.000
python,boto,priority-queue,amazon-sqs
How to implement a priority queue using SQS(Amazon simple queue service)
1
2
4
28,973,859
0
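The accepted answer's N-queues idea can be sketched without AWS at all; here each in-memory deque stands in for one SQS queue (with boto each would be a separate queue polled via its own receive call), and the consumer always drains higher-priority queues first:

```python
from collections import deque

# In-memory stand-ins for N queues, highest priority first.
queues = [deque(["urgent-1", "urgent-2"]), deque(["normal-1"])]

def receive_next(queues):
    """Return the next message, always draining higher-priority queues first."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # all queues empty

order = [receive_next(queues) for _ in range(4)]
```

The failed message from the question would simply be re-enqueued (by body, since the original message object is gone) onto the front-most queue.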
0
0
I am writing a little script which picks the best machine out of a few dozen to connect to. It gets a users name and password, and then picks the best machine and gets a hostname. Right now all the script does is print the hostname. What I want is for the script to find a good machine, and open an ssh connection to it with the users provided credentials. So my question is how do I get the script to open the connection when it exits, so that when the user runs the script, it ends with an open ssh connection. I am using sshpass.
true
28,971,180
1.2
1
0
0
If you want the python script to exit, I think your best bet would be to continue doing a similar thing to what you're doing; print the credentials in the form of arguments to the ssh command and run python myscript.py | xargs ssh. As tdelaney pointed out, though, subprocess.call(['ssh', args]) will let you run the ssh shell as a child of your python process, causing python to exit when the connection is closed.
0
1,186
1
1
2015-03-10T18:17:00.000
python,ssh
Open SSH connection on exit Python
1
1
3
28,971,821
0
1
0
I wrote a script that scrapes various things from around the web and stores them in a python list, and I have a few questions about the best way to get it into an HTML table to display on a web page. First off, should my data be in a list? It will at most be a 25 by 9 list. I'm assuming I should write the list to a file for the web site to import? Is a text file preferred, or something like a CSV or XML file? What's the standard way to import a file into a table? In my quick look around the web I didn't see an obvious answer (major web design beginner). Is JavaScript the best thing to use? Or can python write out something that can easily be read by HTML? Thanks
true
28,972,157
1.2
0
0
2
Store everything in a database (e.g. sqlite, mysql, mongodb, redis ...) and query the db every time you want to display the data; this is good for changing it later from multiple sources. Or store everything in a "flat file" (sqlite, xml, json, msgpack); again, open and read the file whenever you want to use the data, or read it in completely on startup; simple and often fast enough. Or generate an html file from your list with a template engine (e.g. Jinja) and save it as an html file; good for simple hosters. There are some good python webframeworks out there; some I used: Flask, Bottle, Django, Twisted, Tornado. They all more or less output html. Feel free to use HTML5/DHTML/JavaScript. You could also use a webframework to create/use an "api" on the backend which serves json or xml; then your JavaScript callback will display it on your site.
0
1,914
0
1
2015-03-10T19:11:00.000
python,html,python-2.7
Best way to import a Python list into an HTML table?
1
1
2
28,973,712
0
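A tiny stand-in for what a template engine like the Jinja mentioned in this answer would do, using only the stdlib: turn the python list into an HTML table string that can be written to a file. The sample rows are made up:

```python
def to_html_table(rows):
    """Render a list of rows (each a list of cells) as a simple HTML table."""
    body = "\n".join(
        "  <tr>" + "".join("<td>%s</td>" % cell for cell in row) + "</tr>"
        for row in rows
    )
    return "<table>\n%s\n</table>" % body

rows = [["Title", "Score"], ["Example post", "42"]]  # made-up sample data
html = to_html_table(rows)
```

For a real site, cell values should additionally be escaped (e.g. with `html.escape`), which a template engine handles for you.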
0
0
I have a python script that scans for new tweets containing specified #hashtags, then it posts them to my "python bot's" twitter account as new tweets. I tested it from the python console and let it run for 5 minutes. It managed to pick up 10 tweets matching my criteria. It works flawlessly, but I'm concerned about performance issues and leaving the script running for extended amounts of time. What are the negative effects of leaving this script running on my personal computer for a whole day or more? Should I be running this on a digital ocean VPS instead? Twitter offers the API for bot creation, but do they care how much a bot tweets? I don't see how this is any different from retweeting.
true
28,974,896
1.2
1
0
1
Twitter probably has limits on their api and will most likely block your api key if they feel that you are spamming. In fact I would bet there is a maximum number of tweets per day depending on the type of developer account. For stability and up time concerns running on a 'personal' computer is not a good idea. You probably want to do other things on your personal comp that may interrupt your bot's service (like install programs/updates and restart). As far as load on the cpu, well if its only picking up 10 tweets per 5 min that doesn't seem like any kind of load that you need to worry about. To be sure you could run the top command and check out the cpu and memory usage. If you have a server somewhere like at digital ocean then I would run it there just to reduce the interruption the program experiences. I ran a similar program using twitters stream api and collected tweets using a personal computer and the interruptions were annoying and I eventually stopped collecting data....
0
399
0
0
2015-03-10T22:00:00.000
python,twitter,tweepy
Running python tweepy listener
1
1
1
28,977,051
0
0
0
Openstack-Swift is using eventlet.green.httplib for BufferedHttpConnections. When I do a performance benchmark of it for write operations, I can observe that write throughput drops even when only one replica node is overloaded. As I know, the write quorum is 2 out of 3 replicas, therefore overloading only one replica should not affect the throughput. When I dug deeper, what I observed was that subsequent requests are blocked until the responses to the previous requests have arrived. It's mainly because of the BufferedHttpConnection, which stops issuing new requests until the previous response is read. Why does Openstack-swift use such a method? Is this the usual behaviour of eventlet.green.httplib.HttpConnection? This does not make sense from a write quorum point of view, because it is like waiting for all the responses, not a quorum. Any ideas, any workaround to stop this behaviour using the same library?
false
29,008,403
0
0
0
0
It's not a problem of the library but a limitation of the Openstack Swift configuration: the "Workers" setting in all the Account/Container/Object configs of Openstack Swift was set to 1. Regarding the library: when new connections are made using eventlet.green.httplib.HttpConnection it does not block, but if requests use the same connection, subsequent requests are blocked until the previous response is fully read.
0
181
0
0
2015-03-12T11:20:00.000
python,python-2.7,openstack,openstack-swift
Why are Openstack Swift requests blocked by eventlet.green.httplib?
1
1
1
29,079,748
0
1
0
I have two applications: one has the web api, and the other application uses it to authenticate itself. The way 2FA is implemented in my application is: first get the username and password, then authenticate them. After authenticating, I send the username and session key. If I get the correct mobile passcode, username and session key back, the application authenticates a second time. Now the problem: it works when I use the Postman chrome plugin to test the 2FA; however if I use the second application to authenticate, it fails. When I debug through the code I find it breaks at the session variables; I get a KeyError. I assume that the session is empty when I try to authenticate the second time from the application. I am confused why it works from the Postman plugin but not from the second application.
false
29,018,927
0
0
0
0
Fact is that I forgot REST is stateless. You can't use a session across two web service calls.
0
88
0
0
2015-03-12T19:45:00.000
python,session,authentication,two-factor-authentication
two factor authentication doesn't work when it is access from applicaiton
1
1
1
29,021,959
0
0
0
I'm trying to load a CSV file to Amazon S3 with Python. I need to know CSV file's modification time. I'm using ftplib to connect FTP with Python (2.7).
false
29,026,709
-1
1
0
-4
When I want to change a file's modification time, I use an FTP client on the console. Log on to the remote FTP server (ftp ftp.dic.com), cd to the correct directory, then use the SITE command to send the extended command UTIME. For example, UTIME somefile.txt 20050101123000 20050101123000 20050101123000 UTC sets the access time, modification time and creation time of somefile.txt to 2005-01-01 12:30:00. Complete example: site UTIME somefile.txt 20150331122000 20150331122000 20150331122000 UTC. Please sit back and enjoy a pleasant journey in time :)
0
24,718
0
22
2015-03-13T07:13:00.000
python,python-2.7,datetime,ftp,ftplib
How to get FTP file's modify time using Python ftplib
1
1
2
29,468,092
0
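The answer above covers setting a timestamp; for reading one, which is what the question asks, a common approach (assuming the server supports the widespread but non-standard MDTM command) looks like this sketch. The hostname, credentials and the canned server reply are all hypothetical:

```python
from datetime import datetime

# ftplib has no dedicated modification-time call, but most servers
# answer the MDTM command; over a live connection it would be roughly:
#   from ftplib import FTP
#   ftp = FTP("ftp.example.com")          # hypothetical host
#   ftp.login("user", "password")
#   resp = ftp.sendcmd("MDTM data.csv")   # -> "213 20150313071500"
resp = "213 20150313071500"  # canned reply, not from a live server

def parse_mdtm(reply):
    """Parse a '213 YYYYMMDDHHMMSS' MDTM reply into a datetime
    (usually UTC, but that is server-dependent)."""
    return datetime.strptime(reply.split()[1], "%Y%m%d%H%M%S")

mtime = parse_mdtm(resp)
```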
0
0
I want to check whether a given email id really exists on the SMTP server. Is it possible to check this or not? If it is possible, please give me a suggestion on how we can do it.
false
29,073,907
0
1
0
0
Short of sending an email and having someone respond to it, it is impossible to verify that an email address exists. You can verify the SMTP server has a whois address, but that's it.
0
1,440
0
0
2015-03-16T09:57:00.000
python,email,smtp
How to Check is email exists or not in python
1
1
1
29,073,968
0
1
0
I am in the middle of my personal website development and I am using python to create a "Comment section" where my visitors can leave comments in public (which means everybody can see them, so don't worry about the user name registration things). I already set up the sql database to store that data, but the one thing I haven't figured out yet is how to get the user input (their comments) from the browser. So, are there any modules in python that can do that? (Like the "CharField" things in django, but unfortunately I don't use django.)
false
29,088,344
0
0
0
0
For that you would need a web framework like Bottle or Flask. Bottle is a simple WSGI based web framework for Python. Using either of these you may write simple REST based APIs, one for set and the other for get. The "set" one could accept data from your client side and store it in your database, whereas your "get" api should return the data by reading it from your DB. Hope it helps.
0
754
0
0
2015-03-16T22:50:00.000
python,web-development-server
How I can get user input from browser using python
1
1
1
37,997,045
0
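To make the "set" side of this answer concrete without assuming Flask or Bottle is installed, here is a minimal WSGI app (stdlib only) showing how a browser's form POST reaches Python; frameworks wrap this same mechanism. The form field name and the simulated request body are made up:

```python
import io
from urllib.parse import parse_qs

comments = []

def app(environ, start_response):
    """Minimal WSGI app: store a POSTed 'comment' field, report the count."""
    if environ["REQUEST_METHOD"] == "POST":
        size = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(size).decode("utf-8")
        comments.append(parse_qs(body).get("comment", [""])[0])
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("%d comment(s) stored" % len(comments)).encode("utf-8")]

# Simulate what a browser form POST would deliver to the app:
body = b"comment=Nice+site%21"
reply = app({"REQUEST_METHOD": "POST",
             "CONTENT_LENGTH": str(len(body)),
             "wsgi.input": io.BytesIO(body)},
            lambda status, headers: None)
```

In Flask the same thing collapses to reading `request.form["comment"]` inside a route handler.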
1
0
I am working on a script where I need to crawl websites, but only the base_url site. Does anyone have a good idea how I can launch scrapy from a custom python script and get the crawled url links as a list?
false
29,092,291
0
0
0
0
You can use a file to pass the urls from scrapy to your python script. Or you can print the urls with a marker in your scrapy spider, and use your python script to catch the stdout of scrapy. Then parse it into a list.
0
343
0
0
2015-03-17T06:01:00.000
python,python-2.7,web-crawler,scrapy
How can we get a list of urls after crawling a website with scrapy in a custom python script?
1
1
2
29,092,568
0
0
0
I am getting the following error while running gclient runhooks for building chromium. running '/usr/bin/python src/tools/clang/scripts/update.py --if-needed' in '/media/usrname/!!ChiLL out!!' Traceback (most recent call last): File "src/tools/clang/scripts/update.py", line 283, in sys.exit(main()) File "src/tools/clang/scripts/update.py", line 269, in main stderr=os.fdopen(os.dup(sys.stdin.fileno()))) File "/usr/lib/python2.7/subprocess.py", line 522, in call return Popen(*popenargs, **kwargs).wait() File "/usr/lib/python2.7/subprocess.py", line 710, in init errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 13] Permission denied Error: Command /usr/bin/python src/tools/clang/scripts/update.py --if-needed returned non-zero exit status 1 in /media/usrname/!!ChiLL out!! In order to get permission of the directory "/usr/bin/python src/tools/clang/scripts" I tried chown and chmod but it returned the same error.
true
29,105,684
1.2
0
0
0
Actually the directory was not mounted with execution permission. So I remounted the directory with execution permission using mount -o exec /dev/sda5 /media/usrname and it worked fine.
0
769
1
0
2015-03-17T17:22:00.000
python,build,permissions,chromium
Chromium build gclient runhooks error number 13
1
1
2
29,161,669
0
0
0
Hi. So I installed python 2.7.9 and the twitter follow bot from github. I don't really know what I'm doing wrong, but when I try to use a command I get an error. Using from twitter_follow_bot import auto_follow_followers_for_user results in this: Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> from twitter_follow_bot import auto_follow File "twitter_follow_bot.py", line 21, in <module> from twitter import Twitter, OAuth, TwitterHTTPError ImportError: cannot import name Twitter Any idea what I did wrong? I never used python before, so if you could explain it to me step by step it would be great. Thanks
false
29,111,388
0
1
0
0
In from twitter import Twitter, OAuth, TwitterHTTPError, the name "Twitter" does not exist in the "twitter" module. Try re-downloading, or double check whether that name is even defined within "twitter".
0
249
0
0
2015-03-17T23:06:00.000
python,twitter,bots,twitter-follow
twitter python script generates error
1
1
1
29,111,422
0
0
0
I recently started searching for socket programming, and decided to use python for testing. I have the following question: As I read, you can only listen for a limited number of connections in a server-side socket, thus you can only have such a number of connections operating at a time. Is there a way to be able to hold as many sockets open as the system can tolerate? That is e.g. in the case of a chat server (you would not want to only have 5 active users at a time, for example). What's the solution to that? Should one create more sockets to achieve that goal? But then, would the number of ports available to the system be the next limitation?
true
29,149,000
1.2
0
0
0
If you are asking about the function 'listen': the 'backlog' argument is the maximum length to which the queue of pending connections for the socket may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds. This does not set any limit on the number of socket connections, only on the ones which have not been 'accept'-ed yet (i.e. have not yet become connections). When a client tries to connect, the backlog grows by 1. When you call 'accept', the backlog decreases by 1. So if you call 'accept' periodically, you can keep any number of connections open.
0
103
0
0
2015-03-19T15:42:00.000
python,sockets,server-side
Arbitrarily large number of sockets - Python
1
1
1
29,149,290
0
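A small self-contained sketch of the point this answer makes: the backlog only bounds the not-yet-accepted queue, and each `accept()` pops one connection off it, so the total number of open sockets is limited by OS resources, not by the backlog value. Everything runs on loopback with an ephemeral port:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(5)                # modest backlog; many connections still fine
server.settimeout(5)            # keep the demo from hanging forever
port = server.getsockname()[1]

accepted = []

def serve(n):
    # Each accept() removes one pending connection from the backlog queue.
    for _ in range(n):
        conn, _addr = server.accept()
        accepted.append(conn)

t = threading.Thread(target=serve, args=(3,))
t.start()
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(3)]
t.join()

for s in clients + accepted:
    s.close()
server.close()
```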
0
0
I have a program written in Perl, and I want to execute this program from MATLAB. In this program I am loading an XML file, but on execution I am getting an XML::DOM error: Error using perl (line 80) System error: Can't locate XML/DOM.pm in @INC, etc. How can I get rid of this error? The program runs in Perl itself very well...
false
29,149,900
0
1
0
0
Matlab brings its own perl installation, located at fullfile(matlabroot, 'sys\perl\win32\bin\'). The additional Perl modules are probably missing there. Navigate to this folder and install the requirements using ppm.
0
56
0
0
2015-03-19T16:22:00.000
python,xml,matlab,xml-parsing
XML DOM error when executing a Perl program from MATLAB
1
1
1
29,150,179
0
0
0
What would be the best way of downloading / uploading files / directories to / from a remote windows server and a local windows machine using the python scripting language? Modules I have heard of are paramiko and fabric... Apart from these, any other good option/choice?
false
29,204,576
0.379949
0
0
2
It depends on the protocol you are using: if the file is big, use UDP; if the file is small, use TCP or SSH. You don't necessarily need paramiko or fabric to communicate with another computer, since they are for ssh connections. If you know the protocol, then it is easier to communicate.
0
608
0
0
2015-03-23T06:44:00.000
python,windows,paramiko
How to download files from remote windows server
1
1
1
29,214,110
0
0
0
Many search engines track clicked URLs by adding the result's URL to the query string which can take a format like: http://www.example.com/result?track=http://www.stackoverflow.com/questions/ask In the above example the result URL is part of the query string but in some cases it takes the form http://www.example.com/http://www.stackoverflow.com/questions/ask or URL encoding is used. The approach I tried first is to split searchengineurl.split("http://"). Some obvious problems with this: it would return all parts of the query string that follow the result URL and not just the result URL. This would be a problem with an URL like this: http://www.example.com/result?track=http://www.stackoverflow.com/questions/ask&showauthor=False&display=None it does not distinguish between any additional parts of the search engine tracking URL's query string and the result URL's query string. This would be a problem with an URL like this: http://www.example.com/result?track=http://www.stackoverflow.com/questions/ask?showauthor=False&display=None it fails if the "http://" is ommitted in the result URL What is the most reliable, general and non-hacky way in Python to extract URLs contained in other URLs?
false
29,235,535
0.066568
0
0
1
I would try using urlparse.urlparse; it will probably get you most of the way there, and a little extra work on your end will get you what you want.
0
99
0
1
2015-03-24T14:42:00.000
python,html,parsing,url,urlencode
How to reliably extract URLs contained in URLs with Python?
1
1
3
29,235,977
0
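A sketch of the urlparse suggestion applied to the question's second problem case, where extra tracking parameters follow the embedded URL. Splitting on the query string keeps the `track` value separate from the other parameters:

```python
# Python 2 spells this "from urlparse import urlparse, parse_qs".
from urllib.parse import urlparse, parse_qs

tracking = ("http://www.example.com/result?track="
            "http://www.stackoverflow.com/questions/ask&showauthor=False")

# parse_qs splits the query string on '&' and '=', so the embedded URL
# comes back as the 'track' value with other parameters separated out.
params = parse_qs(urlparse(tracking).query)
embedded = params["track"][0]
```

This does not help with the harder cases from the question (the embedded URL carrying its own `?`, or path-style embedding without a query string); those are only unambiguous if the search engine URL-encodes the result URL.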
1
0
For the IE webdriver, it opens the IE browser, but it starts to load the local host and then stops (i.e. it never starts loading the page). When the browser stops loading, it shows the message 'Initial start page for webdriver server'. The problem is that this does not occur every time I execute the test case, making it difficult to identify the cause of the issue. What I have noticed is that when this issue occurs, the url takes ~25 secs to load manually on the same machine. When the issue does not occur, the URL loads within 3 secs. All security settings are the same (Protected Mode enabled across all zones), Enhanced Protected Mode is disabled, the IE version is 11, and the URL is added as a trusted site. Any clue why it does not load the URL sometimes?
false
29,241,353
0
0
0
0
Use a remote driver with the desired capability (pageLoadStrategy). Release notes from seleniumhq.org follow. Note that we had to use version 2.46 for the jar, iedriverserver.exe and the python client driver in order to have things work correctly; it is unclear why 2.45 does not work given the release notes below. v2.45.0.2 Updates to JavaScript automation atoms. Added pageLoadStrategy to IE driver. Setting a capability named pageLoadStrategy when creating a session with the IE driver will now change the wait behavior when navigating to a new page. The valid values are: "normal" - Waits for document.readyState to be 'complete'. This is the default, and is the same behavior as all previous versions of the IE driver. "eager" - Will abort the wait when document.readyState is 'interactive' instead of waiting for 'complete'. "none" - Will abort the wait immediately, without waiting for any of the page to load. Setting the capability to an invalid value will result in use of the "normal" page load strategy.
0
2,136
0
0
2015-03-24T19:35:00.000
python,selenium,selenium-webdriver,webdriver,internet-explorer-11
IE11 stuck at initial start page for webdriver server only when connection to the URL is slow
1
2
3
39,231,646
0
1
0
For the IE webdriver, it opens the IE browser, but it starts to load the local host and then stops (i.e. it never starts loading the page). When the browser stops loading, it shows the message 'Initial start page for webdriver server'. The problem is that this does not occur every time I execute the test case, making it difficult to identify the cause of the issue. What I have noticed is that when this issue occurs, the url takes ~25 secs to load manually on the same machine. When the issue does not occur, the URL loads within 3 secs. All security settings are the same (Protected Mode enabled across all zones), Enhanced Protected Mode is disabled, the IE version is 11, and the URL is added as a trusted site. Any clue why it does not load the URL sometimes?
false
29,241,353
0
0
0
0
It hasn't been updated for a while, but recently I had a very similar issue: IEDriverServer was eventually opening the page under test, but in most cases just got stuck on the initial page of WebDriver. What I found was that the root cause (in my case) was the startup setting of IE. I had "Start with tabs from the last session" enabled; when I changed it back to "Start with home page", the driver started to work like a charm, opening the page under test in 100% of tries.
0
2,136
0
0
2015-03-24T19:35:00.000
python,selenium,selenium-webdriver,webdriver,internet-explorer-11
IE11 stuck at initial start page for webdriver server only when connection to the URL is slow
1
2
3
62,954,013
0
0
0
I am trying to build a proxy to buffer some packets according to some schedules. I have two TCP connections, one from host A to the proxy and the other one from the proxy to host B. The proxy forwards the packets between A and B. The proxy will buffer the packets according to the scheduled instructions. At certain time, it will buffer the packets. After the buffering period is over, it will forward the packets in the buffer and also do its normal forwarding work. I am using python. Which module would be the best in this situation? I tried pickle but it is difficult to remove and append elements in the file. Any suggestions? Thanks!
false
29,306,962
0
0
0
0
I recommend you join the two scripts into one and just use memory. If you can't join the scripts for some reason, create a unix-domain socket to pass the raw, binary data directly from one to the other. These fifos have limited size, so you'll still have to do in-memory buffering on one side or the other, probably the B side. If the data is too big for memory, you can write it out to a temporary file and re-read it when it's time to pass it on. It'll be easiest if the same script both writes and reads the file, as then you won't have to guess when to write, advance to a new file, or deal with separate readers and writers.
0
261
1
0
2015-03-27T17:45:00.000
python,tcp,proxy,buffer
Build a buffer in file for stream of TCP packets
1
1
1
29,309,592
0
0
0
I want to create my own safe connection for a VOIP app. Now I am looking into key exchange, which seems to be much more tricky than encrypting/decrypting. Are there any better approaches than Diffie-Hellman in practice? I understand the concept of Diffie-Hellman, but I think it needs the right values to be safe, since with small natural numbers it could easily be guessed. How can I get those values using python, what are they, and is it really safe from key guessing? Please help me with some background information / inspiration.
false
29,315,955
0
0
0
0
DH is fine for this purpose, just make sure to use 2048 bit keys or more. However, for VoIP the standards are TLS with SRTP/ZRTP, so it would be better if you implemented these. With DH you lose compatibility and will introduce a lot of complications. Also note that DH is only for key exchange, so you will also need something for the encryption itself. With TLS you can handle all of this in one step by using a well-known implementation, instead of writing your own encryption stack from scratch.
0
474
0
0
2015-03-28T10:16:00.000
python,diffie-hellman
Diffie-Hellman Parameters and safety
1
1
3
29,883,118
0
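To illustrate the asker's concern about "the right values": the DH mechanics work with any numbers, but security comes entirely from the group size. This toy run uses textbook-sized parameters ONLY to show the arithmetic; real deployments need a 2048-bit-plus MODP group (e.g. the RFC 3526 primes) and randomly generated secrets, and the answer's recommendation of TLS still stands:

```python
p, g = 23, 5        # tiny prime modulus and generator -- NOT safe

a = 6               # Alice's secret exponent (would be random in practice)
b = 15              # Bob's secret exponent (would be random in practice)

A = pow(g, a, p)    # Alice sends A to Bob in the clear
B = pow(g, b, p)    # Bob sends B to Alice in the clear

# Both sides arrive at the same shared secret g**(a*b) mod p without
# ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
```

With a 23-element group an eavesdropper can brute-force the secret instantly, which is exactly why the parameter size, not the algorithm, is what makes DH safe from key guessing.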
0
0
Here's the idea -- there's a website that I want to scrape. It updates every 10 minutes, but sometimes gets out of sync. It's important that the information I scrape is from just before it updates. Each time I check the site, I can scrape the 'time remaining' until the next update. Is there a way to make a cron job where, after each iteration, I can specifically set the time to wait before running the time (t+1) iteration, based on some variable from the time (t) iteration? I'm not particularly familiar with cron jobs -- my current super rough implementation just uses sleep. Not ideal.
false
29,383,893
0
0
0
0
You could use the 'at' command to set a new job for the next time you need to run it. So if your scraper tells you the next update is in 7 minutes you can set the 'at' command to run 'now + 6 minutes'
0
633
0
0
2015-04-01T06:05:00.000
python-2.7,cron,crontab,cron-task
Variable time between cron jobs (or similar implementation)
1
1
2
29,386,729
0
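Besides shelling out to `at` as this answer suggests, the same self-rescheduling pattern can be sketched in pure python with the stdlib `sched` module; the "time remaining" value here is a placeholder for what would really be scraped from the page:

```python
import sched
import time

# A stdlib alternative to cron + at: after each scrape, schedule the
# next run based on the 'time remaining' just scraped from the page.
scheduler = sched.scheduler(time.time, time.sleep)

def scrape_and_reschedule():
    # ... scrape the page here ...
    seconds_left = 7 * 60   # placeholder; really parsed from the page
    # Fire again one minute before the site's next update.
    scheduler.enter(max(seconds_left - 60, 0), 1, scrape_and_reschedule)

scrape_and_reschedule()      # seed the first event
# scheduler.run() would now block, firing each scrape at its own time.
```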
1
0
I have a very basic python script which uses boto to query the state of my EC2 instances. When I run it from the console, it works fine and I'm happy. The problem is when I want to add some automation and run the script via crond. I noticed that the script hangs and waits indefinitely for the connection. I saw that boto has this problem and that some people suggested adding a timeout value to the boto config file. I couldn't understand how and where, so I manually added an /etc/boto.cfg file with the suggested timeout value (5), but it didn't help. With strace you can see that this configuration file is never being accessed. Any suggestions how to resolve this issue?
false
29,395,946
0.099668
0
0
1
The entire issue turned out to be the HTTP_PROXY environment variable. The variable was set in /etc/bashrc and all users got it this way, but when cron jobs ran (as root) /etc/bashrc wasn't read and the variable wasn't set. By adding the variable to the configuration of crond (via crontab -e) the issue was solved.
0
774
0
1
2015-04-01T16:23:00.000
python,amazon-web-services,amazon-ec2,cron,boto
Connection with boto to AWS hangs when running with crond
1
1
2
29,407,280
0
0
0
This is hardly a programming question, so please don't laugh me out of here. On the Google bigquery Web UI, is there a way to make the "New Query" box taller? I have been getting into fairly length queries and I would like to be able to see them all at once. Or should I be graduating to a different mechanism (python?) for writing and running queries? Suggestions appreciated.
false
29,405,412
0.379949
0
0
4
Try dragging the grey line just under the big red "Run Query" button.
0
91
0
4
2015-04-02T05:13:00.000
python,google-bigquery
Google bigquery Web UI: change height of the "New Query" box?
1
1
2
29,405,486
0
0
0
My friend and I are trying to communicate with each other using a simple python client - server. When we were on the same LAN, the communication was great. Now each of us is in his own house and we can't connect because of error 10060. We read about the firewall problem and tried to turn the firewall off, but it's still not working. What should we do? Thanks in advance.
false
29,454,902
0.197375
0
0
1
Your problem could be due to port forwarding, to fix this you would need to enable port forwarding on your router. Each router is different, but this is usually done by opening the router's webpage and setting port forwarding to the IP of your computer
0
77
0
0
2015-04-05T06:50:00.000
python,network-programming,client-server,port
Networking programming - client - server not in the same LAN
1
1
1
29,454,929
0
0
0
I want to help my friend analyze posts on social networks (Facebook, Twitter, LinkedIn, etc.) as well as several weblogs and websites. When it comes to storing the data, I have no experience with huge data. Which one is the best for a few thousand posts, tweets and articles per day: a database, XML files, or plain text? If a database, which one? P.S. The language that I am going to start programming with is Python.
false
29,457,275
0.379949
1
0
2
That depends on the way you want to work with the data. If you have structured data and want to exchange it between different programs, xml might be a good choice. If you do mass processing, plain text might be a good choice. If you want to filter the data, a database might be a good choice.
0
124
0
0
2015-04-05T12:29:00.000
python,mysql,xml,database
Storing Huge Data; Database, XML or Plain text?
1
1
1
29,457,336
0
0
0
I am doing my bachelor's thesis, for which I wrote a program that is distributed over many servers and exchanges messages via IPv6 multicast and unicast. The network usage is relatively high, but I think it is not too high when I have 15 servers in my test, where there are 2 requests every second that go like this: Server 1 requests information from servers 3-15 via multicast; each of 3-15 must respond. If one response is missing after 0.5 sec, the multicast is resent, but only the missing servers must respond (so in most cases this is only one server). Server 2 does exactly the same. If there are still missing results after 5 retries, the missing servers are marked as dead and the change is synced with the other server (1/2). So there are 2 multicasts every second and 26 unicasts every second. I think this should not be too much? Servers 1 and 2 are running python web servers which I use to do the request every second on each server (via a web client). The whole scenario is running in a mininet environment, which is running in a VirtualBox Ubuntu that has 2 cores (max 2.8 GHz) and 1 GB RAM. While running the test, I see via htop that the CPUs are at 100% while the RAM is at 50%, so the CPU is the bottleneck here. I noticed that after 2-5 minutes (1 minute = 60 * (2+26) messages = 1680 messages) there are too many missing results, causing too many sending repetitions while new requests are already coming in, so that the "management server" thinks the client servers (3-15) are down and deregisters them. After syncing this with the other management server, all client servers are marked as dead on both management servers, which is not true... I am wondering if the problem could be my debug output? I am printing 3-5 messages for every message that is sent and received, so that is about (let's guess 5 messages per sent/received msg) (26 + 2)*5 = 140 lines printed on the console. I use python 2.6 for the servers. So the question here is: can the console output slow down the whole system so much that simple requests take more than 0.5 seconds to complete 5 times in a row? The request processing in my test is simple, no complex calculations or anything like that; basically it is something like "return request_param in ["bla", "blaaaa", ...] (small list of 5 items)". If yes, how can I disable the output completely without having to comment out every print statement? Or is there even the possibility to output only lines that contain "Error" or "Warning"? (Not via grep, because by the time grep becomes active all the prints have already finished... I mean directly in python.) What else could cause my application to be this slow? I know this is a very generic question, but maybe someone already has some experience with mininet and network applications...
false
29,461,480
0
1
0
0
I finally found the real problem. It was not because of the prints (removing them improved performance a bit, but not significantly) but because of a thread that was using a shared lock. This lock was shared over multiple CPU cores, causing the whole thing to be very slow. It even got slower the more cores I added to the executing VM, which was very strange... Now the new bottleneck seems to be APScheduler... I always get messages like "event missed" because there is too much load on the scheduler. So that's the next thing to speed up... :)
1
102
0
0
2015-04-05T19:44:00.000
python,performance,networking,cpu,mininet
Console output consuming much CPU? (about 140 lines per second)
1
1
1
29,502,719
0
1
0
I'm using a pre-put hook to fetch some data from an API before each put. If that API does not respond, or is offline, I want the request to fail. Do I have to write a wrapper around the put() call, or is there some way that we can still type My_model.put() and just make it fail?
true
29,470,767
1.2
0
0
5
_pre_put_hook is called immediately before NDB does the actual put, so if an exception is raised inside _pre_put_hook, then the entire put will fail.
0
247
0
1
2015-04-06T11:55:00.000
python,google-app-engine,google-cloud-datastore,app-engine-ndb
Can I cause a put to fail from the _pre_put_hook?
1
1
1
29,473,764
0
0
0
I just bought a new Macbook Pro, but I forgot to write down my own wifi password. I tried contacting my ISPs (or whatever you call them) but no one responded. I don't think I will ever get an answer from them. Using Python 2.7.9, is a program able to hack into my own wifi and retrieve the password?
false
29,482,125
0
0
0
0
The password for the wifi will be stored in the keychain => /Applications/Utilities/. It will be in either the Login keychain or the System keychain. Just double-click the keychain entry with the wifi name and tick "Show password"; once you have entered your password, it should show you the password used to connect to the network.
0
1,948
0
1
2015-04-07T01:11:00.000
python-2.7,passwords,wifi
Recovering Wifi Password Using Python
1
2
3
29,482,182
0
0
0
I just bought a new Macbook Pro, but I forgot to write down my own wifi password. I tried contacting my ISPs (or whatever you call them) but no one responded. I don't think I will ever get an answer from them. Using Python 2.7.9, is a program able to hack into my own wifi and retrieve the password?
false
29,482,125
0
0
0
0
Connect to your router via ethernet. You should then be able to set the wifi password to whatever you want
0
1,948
0
1
2015-04-07T01:11:00.000
python-2.7,passwords,wifi
Recovering Wifi Password Using Python
1
2
3
36,777,109
0
1
0
I'm using the python package to move the mouse in some specified pattern or just random motions. The first thing I tried is to get the size of the //html element and use that to make the boundaries for mouse movement. However, when I do this the MoveTargetOutOfBoundsException rears its head and displays some "given" coordinates (which were nowhere near the input). The code I used: origin = driver.find_element_by_xpath('//html') bounds = origin.size print bounds ActionChains(driver).move_to_element(origin).move_by_offset(bounds['width'] - 10, bounds['height'] - 10).perform() So I subtract 10 from each boundary to test it and move to that position (apparently the move_to_element_by_offset method is dodgy). MoveTargetOutOfBoundsException: Message: Given coordinates (1919, 2766) are outside the document. Error: MoveTargetOutOfBoundsError: The target scroll location (17, 1798) is not on the page. Stacktrace: at FirefoxDriver.prototype.mouseMoveTo (file://... The actual given coordinates were (1903-10=1893, 969-10=989). Any ideas?
true
29,488,957
1.2
0
0
0
The problem in my case was that I was not waiting for the element to load. At least I assume that's what the problem is because if I allow selenium to wait for the element instead and then click on it, it works.
0
1,318
0
3
2015-04-07T10:05:00.000
python,html,selenium
Selenium moving to absolute positions
1
2
2
29,564,991
0
1
0
I'm using the python package to move the mouse in some specified pattern or just random motions. The first thing I tried is to get the size of the //html element and use that to make the boundaries for mouse movement. However, when I do this the MoveTargetOutOfBoundsException rears its head and displays some "given" coordinates (which were nowhere near the input). The code I used: origin = driver.find_element_by_xpath('//html') bounds = origin.size print bounds ActionChains(driver).move_to_element(origin).move_by_offset(bounds['width'] - 10, bounds['height'] - 10).perform() So I subtract 10 from each boundary to test it and move to that position (apparently the move_to_element_by_offset method is dodgy). MoveTargetOutOfBoundsException: Message: Given coordinates (1919, 2766) are outside the document. Error: MoveTargetOutOfBoundsError: The target scroll location (17, 1798) is not on the page. Stacktrace: at FirefoxDriver.prototype.mouseMoveTo (file://... The actual given coordinates were (1903-10=1893, 969-10=989). Any ideas?
false
29,488,957
0
0
0
0
Two possible problems: 1) There could be scroll on the page, so before clicking you should have scrolled the element into view. 2) The size is given without respect to browser elements, and in the real world you should subtract about 20 or 30 to get the original size (you could test those values).
0
1,318
0
3
2015-04-07T10:05:00.000
python,html,selenium
Selenium moving to absolute positions
1
2
2
29,489,027
0
1
0
I have a python script that extracts product data from an ecommerce website. However, one essential piece of information is missing from the page - delivery cost. This is not provided on any of the product pages, and is only available when you add the product to the shopping basket in order to test how much the product costs to deliver. Complexity is also added due to different delivery rules - e.g free delivery on orders over £100, different delivery prices for different items, or a flat rate of shipping for multiple products. Is there a way that I can easily obtain this delivery cost data? Are there any services that anyone knows of through which I can obtain this data more easily, or suggestions on a script that I could use? Thanks in advance.
false
29,539,555
0.197375
0
0
1
You shouldn't try to fetch information about the delivery price from a cart or any other page, because as you see it depends on the cart amount or other conditions on the e-commerce site. It means the only right way here is to emulate these rules/conditions when you calculate the total price of an order on your side. Do it like this and you'll avoid many problems with the correct calculation of delivery prices.
0
363
0
0
2015-04-09T13:14:00.000
python,web-scraping,data-extraction
Scraping / Data extraction of shipping price not on product page (only available on trolley)
1
1
1
29,540,245
0
0
0
I would like to write a script to access data on a website, such as: 1) automatically searching a youtuber's profile for a new posting, and printing the title of it to stdout. 2) automatically posting a new video, question, or comment to a website at a specified time. For a lot of sites, there is a required login, so that is something that would need to be automated as well. I would like to be able to do all this stuff from the command line. What set of tools should I use for this? I was intending to use Bash, mostly because I am in the process of learning it, but if there are other options, like Python or Javascript, please let me know. In a more general sense, it would be nice to know how to read and directly interact with a website's JS; I've tried looking at the browser console, but I can't make much sense of it.
false
29,541,619
0.197375
0
0
1
Python or Node (JS) will probably be a lot easier for this task than Bash, primarily because you're going to have to do OAuth to get access to the social network. Or, if you're willing to get a bit "hacky", you could issue scripts to PhantomJS, and automate the interaction with the sites in question...
0
52
0
0
2015-04-09T14:40:00.000
javascript,python,bash,youtube,command-line-interface
How to interact with social websites (auto youtube posting, finding titles of new videos etc.) from the command line
1
1
1
29,541,704
0
0
0
I was able to set up a simple socket server and client connection between two devices, with the ability to send and receive values. My issue is with setting up the remote server to accept two clients from the same device and differentiating the data being received from them. Specifically, each client will be running similar code to accept encoder/decoder values from its respective motor. My main program, attached to the server, needs to use the data from each client separately, in order to carry out the appropriate calculations. How do I differentiate the incoming signals coming from both clients?
false
29,554,475
0
0
0
0
When the communication isn't heavy between clients and the server, one way to do this is to have clients do a handshake with the server and have the server enumerate the clients and send back ids for communication. Then the client sends its id along with any communication it has with the server in order for the server to identify it. At least that is what I did.
0
117
0
0
2015-04-10T06:07:00.000
python,sockets
Python server to receive specific data from two clients on same remote device
1
1
1
29,554,763
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0
0
0
0
This is a fresh install. It works with the firewall disabled. As this was a fresh install, any answer that deals with AppData does not apply. Nor does any answer dealing with deleting *.py files. If you are using a third party firewall - uninstall it and use Windows firewall. The main offender here is Avast/AVG. If you really want to, you can set such software to "ask" you for rule creation; AVG ignores these rules in this case and blocks what it thinks is an external "Tcp/Udp" (sic) public connection. Using Process Explorer from SysInternals reveals the successful connection between the two processes. AVG cannot seem to deal with a "phone home" situation originating on the same host. The problem should be more widespread, as many debuggers operate in the same way, so there may be some contribution to this problem by the IDLE developers. Change your firewall provider.
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
67,039,753
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0
0
0
0
Just to note my particular issue: this happens for me when my RAM gets full and my CPU gets busy. The problem is caused by a network socket timeout on the IPC pipes between IDLE and the RPC subprocess. It's a poor design (insecure and prone to failure) that's commonly used for IPC instead of process pipes. The fix is to clear out some RAM and CPU usage and wait a minute before trying again. And for developers, the fix is to stop using sockets for IPC and use proper process pipes. Yes, it's the same exact socket timeout issue you experience with your browser, though on modern browsers the page just stops loading instead of displaying a timeout error screen. (Note this assumes the case of a good WAN connection, with a local timeout.)
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
56,508,508
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0
0
0
0
My problem was that the .py file wasn't on my local machine. It was on a shared directory. After moving the file to my local machine, I quit getting the error.
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
56,134,431
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0.049958
0
0
3
Simple: rename your .py file to some name different from any module name like 'random.py' that already exists in the Python package. E.g. I named one file "random.py" and the same error popped up. I renamed it to "random_demo.py" and it worked. The different naming avoids the problem of ambiguity between an already existing file and a newly created file with the same name.
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
36,233,988
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0.049958
0
0
3
I fixed it, I needed to run IDLE with admin privileges. (I am using Windows 7 x64). Hope this helps.
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
33,713,257
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0
0
0
0
Just had the same issue, so I uninstalled and reinstalled, which fixed it and took 10 minutes. The key with Windows machines is to delete the old directory (C:\Python27\, because Windows doesn't seem to actually delete things) and when reinstalling specify a new directory (C:\Python279\ or whatever you choose to call it). I am using Win 10 with Python 2.7.9.
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
33,092,815
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0
0
0
0
I had a similar problem with a file called "test.py" and Python 2.7.9 - renaming the file to something else solved my issue. After checking, I noticed that there is a file with the same name under the Python27\Lib folder. Seems to be a bug in IDLE.
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
32,435,420
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
false
29,567,051
0
0
0
0
Go to C:/Users/[your user]/AppData/Local/Programs/Python/Python35-32 and delete or rename every *.py file in this directory which is named after a certain method, function, module or library. Then run IDLE. Should work. Hope I could help
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
36,227,282
0
0
0
I am new to programming and I decided to learn Python first, so I installed Python, latest version 3.4. When I try to open Python IDLE (GUI) mode, I get the message "IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection." My firewall is not the problem because I put Python through it. I also tried to reinstall it and it didn't make a difference. So please, if someone can help! Thank you for your time :D
true
29,567,051
1.2
0
0
29
Delete all newly created .py files in the directory with Python, for example random.py, end.py - that was my problem that caused the same notification window. The reason is filename conflicts.
0
111,264
0
20
2015-04-10T17:10:00.000
python,user-interface,runtime-error,subprocess,python-idle
Python error - IDLE's subprocess didn't make connection. Either IDLE can't start or personal firewall software is blocking connection
1
9
12
32,142,364
0
1
0
I am busy developing a Python system that uses web-sockets to send/received data from a serial port. For this to work I need to react to data from the serial port as it is received. Problem is to detect incoming data the serial port needs to queried continuously looking for incoming data. Most likely a continuous loop. From previous experiences(Slow disk access + heavy traffic) using Flask this sounds like it could cause the web-sockets to be blocked. Will this be the case or is there a work around? I have looked at how NodeJS interact with serial ports and it seems much nicer. It raises an event when there is incoming data instead of querying it all the time. Is this an option in Python? Extra Details: For now it will only be run on Linux.(Raspbian) Flask was my first selection but I am open to other Python Frameworks. pyserial for serial connection.(Is the only option I know of)
false
29,577,287
0.099668
0
0
1
Simply start a subprocess that listens to the serial socket and raises an event when it has a message. Have a separate sub-process for each web port that does the same.
0
1,800
0
1
2015-04-11T11:21:00.000
python,websocket,event-handling,serial-port,blocking
Python: how to host a websocket and interact with a serial port without blocking?
1
1
2
29,577,603
0
1
0
I am building a Python Flask webpage that uses websockets to connect to a single serial port(pySerial). The webpage will collect a list of commands to be executed(user input) and send that to the serial port via websockets. The problem I am facing is that as soon as the webpage has been opened multiple times commands can be sent at any time and might get run out of order.
false
29,596,604
0.197375
0
0
1
Specify a variable like serial_usage which has an initial value of False. When a new client connects to your WebSocket server, check the serial_usage variable. If the serial port is not being used at that moment (serial_usage == False), let the connection happen and set serial_usage to True. When the client disconnects, set the serial_usage variable to False. If the serial port is being used by another client (serial_usage == True), you can show an error page and prevent the new connection.
0
124
0
0
2015-04-13T01:24:00.000
python,session,websocket,serial-port
How to restrict a webpage to only one user(Browser Tab)
1
1
1
29,596,782
0
0
0
I am using Python 2.7 on Linux. I need to get local IP address (here it is 172.16.x.x). I can get IP address and it's corresponding netmask address but I'm not able to get broadcast IP address for the same IP.
false
29,669,838
0
0
0
0
If you are able to get the IP and the subnet mask, you could simply calculate the broadcast address by bitwise ORing the IP address and the inverted subnet mask: BC = ip_address | ~subnet_mask
0
1,652
0
0
2015-04-16T08:55:00.000
python-2.7,network-programming,ip-address
How to find Broadcast IP address from local IP address?
1
1
1
29,670,007
0
1
0
Is there a way to GET the metafields for a particular product if I have the product ID? Couldn't find it in the docs.
false
29,683,036
0
0
0
0
product = shopify.Product.find(pr_id); metafields = product.metafields()
0
1,518
0
3
2015-04-16T18:25:00.000
python,shopify
Shopify Python API GET Metafields for a Product
1
1
4
43,496,363
0
0
0
I need to write a python script which performs several tasks: read commands from the console and send them to a server over TCP/IP; receive the server response, process it, and write output to the console. What is the best way to create such a script? Do I have to create a separate thread to listen for the server response, while interacting with the user in the main thread? Are there any good examples?
false
29,694,344
0.197375
0
0
1
Calling for a best way or code examples is rather off topic, but this is too long to be a comment. There are three general ways to build those terminal-emulator-like applications: multiple processes - the way the good old Unix cu worked, with a fork; multiple threads - a variant of the above using lightweight threads instead of processes; or using the select system call with multiplexed IO. Generally, the first two methods are considered more straightforward to code, with one thread (or process) processing upward communication while the other processes the downward one, while the third, though trickier to code, is generally considered more efficient. As Python supports multithreading, multiprocessing and the select call, you can choose any method, with a slight preference for multithreading over multiprocessing because threads are lighter than processes and I cannot see a reason to use processes here. The following is just my opinion: unless you are writing a model for rewriting it later in a lower level language, I assume that performance is not the key issue, and my advice would be to use threads here.
0
78
1
0
2015-04-17T08:41:00.000
python,multithreading,tcpclient
Whats the best way to implement python TCP client?
1
1
1
29,695,958
0
1
0
I am currently working on an HTML presentation, which works well, but I need the presentation to be followed simultaneously by a NAO robot who reads a special html tag. I somehow need to let him know which slide I am on, so that he can choose the correct tag. I use Beautiful Soup for scraping the HTML, but it does so from a file and not from a browser. The problem is, there is javascript running behind, assigning various classes to specific slides that tell the current state of the presentation. I need to be able to access those, but in the default state of the presentation they are not present and are added asynchronously throughout the presentation. Hopefully, my request is clear. Thank you for your time
false
29,733,653
0
0
0
0
There's wget on the robot; you could use it... (though I'm not sure I understand where the problem really is...)
0
307
0
0
2015-04-19T17:58:00.000
javascript,python,html,web-scraping
Is there a way to get current HTML from browser in Python?
1
1
2
29,791,337
0
1
0
I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled. Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it?
true
29,758,554
1.2
0
0
1
It appears that the primary problem was not having cookies enabled. Having enabled cookies, I'm having more success now. Thanks.
0
3,256
0
1
2015-04-20T21:15:00.000
python,web-scraping,scrapy
Scrapy crawl blocked with 403/503
1
2
4
30,086,257
0
1
0
I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled. Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it?
false
29,758,554
0
0
0
0
I simply set AUTOTHROTTLE_ENABLED to True and my script was able to run.
0
3,256
0
1
2015-04-20T21:15:00.000
python,web-scraping,scrapy
Scrapy crawl blocked with 403/503
1
2
4
72,128,238
0
0
0
I'm trying to install selenium library for python on my Ubuntu machine using pip installer. I receive the following error: pip install selenium Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 304, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1230, in prepare_files req_to_install.run_egg_info() File "/usr/lib/python2.7/dist-packages/pip/req.py", line 293, in run_egg_info logger.notify('Running setup.py (path:%s) egg_info for package %s' % (self.setup_py, self.name)) File "/usr/lib/python2.7/dist-packages/pip/req.py", line 285, in setup_py if six.PY2 and isinstance(setup_py, six.text_type): AttributeError: 'module' object has no attribute 'PY2' I am currently using Python 2.7.9 python --version Python 2.7.9
true
29,772,530
1.2
0
0
0
Solved by upgrading six: pip install --upgrade six
0
361
1
0
2015-04-21T12:41:00.000
python,ubuntu,selenium
Can't install python selenium on Ubuntu 14.10/15.04(pre-release)
1
1
1
29,772,531
0
0
0
How do I execute a file after it gets downloaded on the client side? The file is a Python script, and the user doesn't know how to change the permissions of a file. How do I solve this issue?
false
29,778,405
0.099668
0
0
1
Change file permissions to make it executable: sudo chmod +x file.py
0
48
0
0
2015-04-21T16:46:00.000
python,linux,apache,file-permissions
File permission gets changed after file gets downloaded on client machine
1
2
2
29,778,427
0
0
0
How do I execute a file after it gets downloaded on the client side? The file is a Python script, and the user doesn't know how to change the permissions of a file. How do I solve this issue?
false
29,778,405
0
0
0
0
Maybe you can try teaching them how to use the chmod +x command? Or actually, even simpler, it would be to change it using the GUI: right click -> Properties -> Permissions -> pick what is needed
0
48
0
0
2015-04-21T16:46:00.000
python,linux,apache,file-permissions
File permission gets changed after file gets downloaded on client machine
1
2
2
29,778,455
0
0
0
I'm wondering if there is a way to use the copy_tree module in Python to copy a directory tree over a network. Has anyone done this or seen something close to this?
false
29,803,463
0
0
0
0
Do you mean the "copy_tree" function in the distutils.dir_util module? If so, the answer is no, with a caveat. The code requires a directory name for both the source and destination directories. The caveat is that if you can mount the remote drive onto your local machine then it would be doable.
0
90
0
0
2015-04-22T16:20:00.000
python,networking,directory,sftp,distutils
Python : copy_tree over network?
1
1
1
29,803,781
0
0
0
I am trying to invalidate the session using the jira-python REST client. How can I achieve this feature - is it built in or does it need to be implemented? I tried looking at all the APIs available in client.py and there seems to be no way to destroy or invalidate a session. Another question that follows is: do I have to authenticate on every REST call made by the client? Currently that is what I am doing.
false
29,810,878
0
0
0
0
Just to be clear, one of the reasons it is called a REST API is that you do not have to do anything with the session.
0
1,174
0
0
2015-04-22T23:36:00.000
python,rest,python-jira
Destroy session using jira-python REST client
1
1
2
29,880,808
0
0
0
I am trying to run a sample BigQuery query using a Python client that I downloaded from the Google site (modified with my client secrets, project info, etc.), but am unable to get past the browser page that is requesting access. I've tried several browsers including Chrome and Firefox. I am on a Mac if that matters. I've tried both the native Google client sample as well as the Pandas GBQ API. When I execute either of the API samples, a page is rendered in the browser basically saying that the client API is requesting permission to "View and manage your data in Google BigQuery". When I click Accept, a new page is rendered with an error that indicates no data was returned from the server. I cannot tell if this is an issue with Google or my local network blocking. I would like to know what might be going on or how I can troubleshoot this issue so I can authenticate/authorize and run queries through my python client. Thanks, J.D.
false
29,832,656
0.197375
0
0
1
The issue was environmental. My browser was not recognizing localhost, so when I manually modified the URL to reference the IP, 127.0.0.1, the authorization succeeded. Thanks for the responses.
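A quick way to check whether this is the problem on your machine (a diagnostic sketch; it only prints, it does not fix anything):

```python
import socket

# OAuth redirect flows typically point the browser at http://localhost:<port>;
# if "localhost" does not resolve, that step fails even though 127.0.0.1 works.
try:
    print("localhost ->", socket.gethostbyname("localhost"))
except socket.gaierror:
    print("localhost does not resolve; check your hosts file")
```

On a typical machine this prints 127.0.0.1; if it raises, the hosts file is missing the localhost entry.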
0
442
0
2
2015-04-23T19:28:00.000
python,authentication,authorization,google-bigquery
Unable to Authenticate with Google BigQuery
1
1
1
29,853,676
0
0
0
I've been using Scapy to craft packets and test my network, but the programmer inside me is itching to know how to do this without Scapy. For example, how do I craft a DNS query using sockets (I assume it's sockets that would be used)? Thanks
false
29,852,509
0.197375
0
0
1
To open a UDP socket you'd use: s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) (note SOCK_DGRAM, not SOCK_STREAM, since DNS queries normally go over UDP). To send, use: query = craft_dns_query() # you do this part s.sendto(query, ("8.8.8.8", 53)) (sendto takes the raw bytes and a (host, port) tuple; no inet_aton call is needed). To receive the response, use: response = s.recv(1024) You'll have to refer to documentation on DNS for actually crafting the messages and handling the responses.
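Going one step further than the socket setup, here is a sketch of crafting the DNS message itself by hand: an A-record query in the wire format from RFC 1035. The resolver address 8.8.8.8 is just an example, and resolve() performs real network I/O, so it is shown but not called:

```python
import socket
import struct

def build_dns_query(hostname, query_id=0x1234):
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question section: QNAME as length-prefixed labels, terminating root
    # byte, then QTYPE=1 (A record) and QCLASS=1 (IN)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    )
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def resolve(hostname, server="8.8.8.8"):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(5)
    s.sendto(build_dns_query(hostname), (server, 53))
    response = s.recv(512)  # parsing the answer section is the next exercise
    s.close()
    return response
```

The response comes back in the same wire format, so decoding it means walking the header counts and the (possibly compressed) resource records.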
0
2,153
0
2
2015-04-24T16:14:00.000
python,tcp,tcp-ip
Crafting a DNS Query Message in Python
1
1
1
29,898,151
0
0
0
I want to parse a large XML file (25 GB) in Python and change some of its elements. I tried ElementTree from xml.etree but it takes too much time at the first step (ElementTree.parse). I read somewhere that SAX is fast and does not load the entire file into memory, but it is just for parsing, not modifying. 'iterparse' should also be just for parsing, not modifying. Is there any other option which is fast and memory efficient?
true
29,853,949
1.2
0
0
2
What is important for you here is that you need a streaming parser, which is what SAX is. (There is a built-in SAX implementation in Python, and lxml provides one.) The problem is that since you are trying to modify the XML file, you will have to rewrite the XML file as you read it. An XML file is a text file; you can't go and change some data in the middle of a text file without rewriting the entire text file (unless the new data is the exact same size, which is unlikely). You can use SAX to read in each element and register an event to write back each element after it has been read and modified. If your changes are really simple, it may be even faster to not even bother with the XML parsing and just match text for what you are looking for. If you are doing any significant work with an XML file this large, then I would say you shouldn't be using an XML file; you should be using a database. The problem you have run into here is the same issue that COBOL programmers on mainframes had when they were working with file-based data.
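The read-register-write idea can be sketched with the stdlib's SAX machinery: a ContentHandler that echoes every event back out through an XMLGenerator, transforming text on the way. The element name (price) and the transformation (upper-casing) are made up for illustration, and for a 25 GB file you would parse from the input file and write to a new output file rather than in-memory buffers:

```python
import io
import xml.sax
from xml.sax.saxutils import XMLGenerator

class Rewriter(xml.sax.ContentHandler):
    """Echo SAX events back out, transforming text inside <price> elements."""

    def __init__(self, out):
        super().__init__()
        self._gen = XMLGenerator(out)
        self._in_price = False

    def startElement(self, name, attrs):
        self._in_price = (name == "price")
        self._gen.startElement(name, attrs)

    def endElement(self, name):
        self._in_price = False
        self._gen.endElement(name)

    def characters(self, content):
        if self._in_price:
            content = content.upper()  # placeholder transformation
        self._gen.characters(content)

buf = io.StringIO()
xml.sax.parseString(b"<items><price>cheap</price><name>x</name></items>",
                    Rewriter(buf))
print(buf.getvalue())
```

Memory use stays bounded because only one event's worth of data is held at a time, no matter how large the input is.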
0
1,129
0
1
2015-04-24T17:33:00.000
python,xml,parsing,sax,elementtree
memory efficient way to change and parse a large XML file in python
1
1
1
29,855,367
0
0
0
I am implementing a small distributed system (in Python) with nodes behind firewalls. What is the easiest way to pass messages between the nodes under the following restrictions: I don't want to open any ports or punch holes in the firewall Also, I don't want to export/forward any internal ports outside my network Time delay less than, say 5 minutes, is acceptable, but closer to real time would be nice, if possible. 1+2 → I need to use a third party, accessible by all my nodes. From this it follows that I probably also want to use encryption Solutions considered: Email - by setting up separate or shared free email accounts (e.g. Gmail) which each client connects to using IMAP/SMTP Google Docs - using a shared online spreadsheet (e.g. Google Docs) and some Python library for accessing/changing cells using a polling mechanism XMPP using connections to a third-party server IRC Renting a cheap $5 VPS and setting up a ZeroMQ publish-subscribe node (or any other protocol) forwarded over SSH and having all nodes connect to it Are there any other publicly accessible (free) message queues available (or platforms that can be misused as a message queue)? I am aware of the solution of setting up my own message broker (RabbitMQ, Mosquitto) etc. and making it accessible to my nodes somehow (ssh-forwarding to a third host etc.). But my question is primarily about any solution that doesn't require me to do that, i.e. any solution that utilizes already available/accessible third-party infrastructure. (i.e. are there any public message brokers I can use?)
false
29,902,069
0
1
0
0
I would recommend RabbitMQ or Redis (RabbitMQ preferred, because it is a very mature technology and insanely reliable). ZMQ is an option if you want a single-hop messaging system instead of a brokered messaging system such as RabbitMQ, but ZMQ is harder to use than RabbitMQ. It also depends on how you want to utilize the message passing: if it is task dispatch, you can use Celery; if you need slightly more low-level access, use Kombu with the librabbitmq transport.
0
1,745
1
2
2015-04-27T17:15:00.000
python,message-queue,messaging,distributed,distributed-system
Simple way for message passing in distributed system
1
1
3
29,904,422
0
0
0
Amazon is sunsetting SSLv3 support soon, and I am trying to verify that boto is utilizing TLS. Is there a good way to verify this? Or is there a good test to show TLS utilization?
true
29,903,119
1.2
0
0
1
At a high level, the client and the server negotiate the protocol version as part of the SSL/TLS handshake: the highest version supported by both the client and the server wins. If the client supports the latest and greatest, which is TLS 1.2, and the server supports it as well, they will decide to use TLS 1.2. You can sniff the traffic using Wireshark or other similar packet-capture tools to determine whether the encrypted traffic is using SSLv3 or TLS.
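If you'd rather check from Python than from a packet capture, the stdlib ssl module can report the negotiated protocol version directly. A sketch (it performs a real connection, so it is wrapped in a function rather than run here):

```python
import socket
import ssl

def negotiated_tls_version(host, port=443):
    """Connect with default (secure) settings and report the protocol used."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

# Example (requires network access):
#   print(negotiated_tls_version("s3.amazonaws.com"))
```

Note this tells you what *your* Python/OpenSSL negotiates with the endpoint; to see what boto itself sends, sniffing its traffic is still the direct check.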
0
3,258
0
10
2015-04-27T18:17:00.000
python,ssl,amazon-s3,boto,sslv3
How to tell if boto is using SSLv3 or TLS?
1
2
2
30,356,292
0
0
0
Amazon is sunsetting SSLv3 support soon, and I am trying to verify that boto is utilizing TLS. Is there a good way to verify this? Or is there a good test to show TLS utilization?
false
29,903,119
0.291313
0
0
3
As stated above, you can use a packet sniffer to determine if SSLv3 connections are being made: # sudo tcpdump -i eth0 'tcp[((tcp[12]>>4)*4)+9:2]=0x0300' Replace 'eth0' with the correct interface. Then test that it's working by performing an SSLv3 connection with openssl: # openssl s_client -connect s3.amazonaws.com:443 -ssl3 That activity should be captured by tcpdump, if the network interface is correct. Finally, test your app. If it's using SSLv3, it should be visible as well. You can also change the capture filter to see which protocol is being used: TLSv1 - 0x0301 TLSv1.1 - 0x0302 TLSv1.2 - 0x0303
0
3,258
0
10
2015-04-27T18:17:00.000
python,ssl,amazon-s3,boto,sslv3
How to tell if boto is using SSLv3 or TLS?
1
2
2
30,388,720
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0
0
0
0
The best way to resolve this is, while creating your interpreter, to select your system's global Python path (/usr/local/bin/python3.7). Make sure that in the PyCharm shell, python --version shows 3.7. It shouldn't show 2.7.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
57,754,599
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0.013333
0
0
1
One possible reason: if you have more than one Python version installed and, say, you installed beautifulsoup4 using pip3, it will only be available for import when you run it in a Python 3 shell.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
54,489,766
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0.039979
0
0
3
Copy bs4 and beautifulsoup4-4.6.0.dist-info from C:\python\Lib\site-packages to your local project directory. It worked for me. Here, Python actually looks for the library in the local directory rather than the place where the library was installed!
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
53,284,293
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0.039979
0
0
3
I experienced a variation of this problem and am posting for others' benefit. I named my Python example script bs4.py. Inside this script, whenever I tried to import bs4 using the command from bs4 import BeautifulSoup, an ImportError was thrown, but confusingly (for me) the import worked perfectly from an interactive shell within the same venv environment. After renaming the Python script, imports work as expected. The error was caused because Python tried to import the script itself from the local directory rather than using the system copy of bs4.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
52,847,210
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0
0
0
0
For me it was a permissions issue. The directory "/usr/local/lib/python#.#/site-packages/bs4" was 'rwx' only for root, with no access for other groups/users. Please check the permissions on that directory.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
72,153,534
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0.066568
0
0
5
Make sure the directory from which you are running your script does not contain a filename called bs4.py.
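One way to diagnose this kind of shadowing is to ask Python where it would load a module from. This is a stdlib-only sketch; the stdlib module json is used as a stand-in so the example runs even without beautifulsoup4 installed:

```python
import importlib.util

def module_location(name):
    """Return the file Python would import `name` from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# For the real check you would call module_location("bs4"): if it points
# into your project directory instead of site-packages, a local bs4.py
# (or similar) is shadowing the installed package.
print(module_location("json"))
```

Because the current directory sits at the front of sys.path when running a script, any local file with a package's name wins the lookup, which is exactly the failure mode described above.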
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
59,960,168
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0
0
0
0
I had the same problem. The error was that the file in which I was importing BeautifulSoup from bs4 was in another folder. Moving the file out of the internal folder made it work.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
69,835,809
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0
0
0
0
There is no problem with the package; you just need to copy bs4 and beautifulsoup4-4.6.0.dist-info into your project directory.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
61,982,622
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0
0
0
0
I was also facing this type of error in the beginning, even after installing all the required modules, including pip install bs4 (if you have installed this, there is no need to install beautifulsoup4 through pip or anywhere else; it comes with bs4 itself). Solution: go to the directory where your Python is installed, C:\python\Lib\site-packages, then copy the bs4 and beautifulsoup4-4.6.0.dist-info folders and paste them into the project folder where you have saved your working project.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
51,398,896
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
false
29,907,405
0
0
0
0
For anyone else who might have the same issue as me: I tried all the above, but it still didn't work. The issue was that I was using a virtual environment, so I needed to run pip install in the PyCharm terminal instead of a command prompt to install it there. Secondly, I had typed import Beautifulsoup with the 's' not capitalized; changing it to BeautifulSoup made it work.
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
70,643,924
0
1
0
I am trying to use BeautifulSoup, and despite using the import statement: from bs4 import BeautifulSoup I am getting the error: ImportError: cannot import name BeautifulSoup import bs4 does not give any errors. I have also tried import bs4.BeautifulSoup and just importing bs4 and creating a BeautifulSoup object with: bs4.BeautifulSoup() Any guidance would be appreciated.
true
29,907,405
1.2
0
0
17
The issue was that I named the file HTMLParser.py, and that name is already used somewhere in the bs4 module. Thanks to everyone who helped!
0
46,868
0
18
2015-04-27T22:57:00.000
python,beautifulsoup
Cannot import Beautiful Soup
1
12
15
29,924,863
0