Web Development (int64) | Data Science and Machine Learning (int64) | Question (string) | is_accepted (bool) | Q_Id (int64) | Score (float64) | Other (int64) | Database and SQL (int64) | Users Score (int64) | Answer (string) | Python Basics and Environment (int64) | ViewCount (int64) | System Administration and DevOps (int64) | Q_Score (int64) | CreationDate (string) | Tags (string) | Title (string) | Networking and APIs (int64) | Available Count (int64) | AnswerCount (int64) | A_Id (int64) | GUI and Desktop Applications (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I have a Python script for SSH which helps to run various Linux commands on a remote server using the paramiko module. All the outputs are saved in a text file, and the script runs properly. Now I want to run this script automatically twice a day, at 11am and 5pm, every day.
How can I run this script automatically every day at the given times without starting it manually each time? Is there any software or module for this?
Thanks for your help. | false | 37,080,703 | 0 | 1 | 0 | 0 | Assuming you are running on a *nix system, cron is definitely a good option. If you are running a Linux system that uses systemd, you could try creating a timer unit. It is probably more work than cron, but it has some advantages.
I won't go through all the details here, but basically:
Create a service unit that runs your program.
Create a timer unit that activates the service unit at the prescribed times.
Start and enable the timer unit. | 0 | 1,567 | 0 | 1 | 2016-05-06T20:14:00.000 | python | Automatically run python script twice a day | 1 | 1 | 2 | 37,081,699 | 0 |
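To make the answer above concrete, here is a minimal sketch of both options; the script path and unit names are assumptions, not taken from the question.

```ini
# Option 1: a crontab entry (crontab -e) -- run at 11:00 and 17:00 every day
# 0 11,17 * * * /usr/bin/python /home/user/ssh_report.py

# Option 2: systemd -- /etc/systemd/system/ssh-report.service
[Unit]
Description=Run the paramiko SSH report script

[Service]
Type=oneshot
ExecStart=/usr/bin/python /home/user/ssh_report.py

# /etc/systemd/system/ssh-report.timer -- enable with: systemctl enable --now ssh-report.timer
[Unit]
Description=Run ssh-report.service at 11:00 and 17:00

[Timer]
OnCalendar=*-*-* 11:00:00
OnCalendar=*-*-* 17:00:00
Persistent=true

[Install]
WantedBy=timers.target
```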
0 | 0 | I have created an xml file using element tree and want to include the checksum of the file in an xml tag. Is this possible? | true | 37,129,821 | 1.2 | 0 | 0 | 0 | You could calculate a checksum of the XML before saving, but including it in the file will change the checksum. You end up with a recursive problem where the checksum changes every time you update the file with the new checksum. So no, it's not possible. | 0 | 301 | 0 | 0 | 2016-05-10T05:18:00.000 | python,xml,checksum | Checksum in xml tag python | 1 | 1 | 1 | 37,129,870 | 0 |
0 | 0 | I have zmq version 4.1.3 and pyzmq version 15.2.0 installed on my machine (I assume through pip, but I don't remember now). I need to connect to a UDP epgm socket but get the error "protocol not supported". I have searched the vast expanses of the internet and have found the answer: "build zeromq with the --with-pgm option".
Does anyone know how to do that?
I searched around the hard drive and found the zeromq library in pkgs in my Python directory and found some .so files, but I don't see any setup.py or anything to recompile with the mysterious --with-pgm option. | false | 37,177,322 | 0.197375 | 0 | 0 | 1 | Here is the general procedure which works for me:
1. download zeromq package (using zeromq-4.1.5.tar.gz as example)
2. tar zxvf zeromq-4.1.5.tar.gz
3. cd zeromq-4.1.5
4. apt-get install libpgm-dev
5. ./configure --with-pgm && make && make install
6. pip install --no-binary :all: pyzmq
Then you can use pgm/epgm as you want. | 0 | 1,035 | 0 | 1 | 2016-05-12T04:31:00.000 | python,zeromq,multicast,pyzmq | How to install pyzmq "--with-pgm" | 1 | 1 | 1 | 45,479,765 | 0 |
0 | 0 | I am getting a query result from the couchdb emit function in Python as follows:
<Row id=u'c0cc622ca2d877432a5ccd8cbc002432', key=u'eric', value={u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}>
How do I parse the content of value item as:
{u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}
using json? | false | 37,199,342 | 0.099668 | 0 | 0 | 1 | I'm not sure what you mean by "parsing the contents using json". The data should already be parsed and you can refer to any attributes by doing something like row.value["_id"] where row is the name of the variable referencing the Row object. | 0 | 335 | 0 | 1 | 2016-05-13T00:10:00.000 | python,couchdb | Parsing couchdb query result using json in Python | 1 | 2 | 2 | 37,200,473 | 0 |
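As a concrete illustration of the accepted answer above, here is a minimal sketch using the couchdb-python package; the server URL, database name, and design/view name are assumptions.

```python
import couchdb

server = couchdb.Server("http://localhost:5984/")   # server URL is an assumption
db = server["users"]                                # database name is an assumption

# Each Row returned by db.view() already has its value parsed into a Python dict,
# so no explicit json handling is needed.
for row in db.view("app/by_name", key="eric"):      # design doc/view name is an assumption
    print(row.id, row.key)
    print(row.value["email"], row.value["name"])
```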
0 | 0 | I am getting a query result from the couchdb emit function in Python as follows:
<Row id=u'c0cc622ca2d877432a5ccd8cbc002432', key=u'eric', value={u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}>
How do I parse the content of value item as:
{u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}
using json? | false | 37,199,342 | 0 | 0 | 0 | 0 | The value is already converted to a Python dictionary. So, if you need to turn it into a JSON string, use json.dumps; otherwise you can access the key with row.key and value attributes with row.value['some_attribute']. | 0 | 335 | 0 | 1 | 2016-05-13T00:10:00.000 | python,couchdb | Parsing couchdb query result using json in Python | 1 | 2 | 2 | 50,189,612 | 0
0 | 0 | I am using python and boto2 for an s3 project.
There is a file in s3, I want to get its contents by path name.
Correct me if I'm wrong but I think it can't be done with one API call.
First I need to call bucket.get_key and then key.get_contents_as_string.
I would like to download the file contents with just one API call (the file is not big and should comfortably fit in an in-memory string) | false | 37,215,765 | 0 | 0 | 0 | 0 | Solved: first call bucket.new_key, which locally creates a key object and does not incur an API call.
Then use key.get_contents_as_string. | 0 | 337 | 0 | 0 | 2016-05-13T17:13:00.000 | python,amazon-web-services,amazon-s3,boto | boto2: download object from s3 with one API call | 1 | 1 | 1 | 37,215,966 | 0
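A minimal sketch of that pattern with boto2; the bucket and key names are assumptions.

```python
import boto

conn = boto.connect_s3()
# validate=False avoids the extra round trip get_bucket() makes by default
bucket = conn.get_bucket("my-bucket", validate=False)   # bucket name is an assumption

key = bucket.new_key("path/to/object.txt")   # purely local, no API call yet
data = key.get_contents_as_string()          # a single GET request
print(len(data))
```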
0 | 0 | I'm wondering if anyone can offer me any guidance on this.. hope it's being posted in the right place :|.
I want to have a web server that can initiate a download stream via HTTP and serve it out the other side via the FTP protocol.
So a user of the program would request the file and the server would initiate an http stream from another source and pass this transfer on via FTP back to the user.
Thoughts? | true | 37,275,831 | 1.2 | 0 | 0 | 0 | In my opinion, you could get this data stream by any protocol you desire (set up your own server using sockets, rest api and so forth), save it as a file and offer it to the customer via FTP.
This is the high-level design. What exactly are you struggling with?
(By the way, keep in mind you should prefer sFTP over FTP.)
Hope this helps. | 0 | 302 | 0 | 0 | 2016-05-17T12:11:00.000 | python,http,ftp,twisted | HTTP to FTP data stream conversion? | 1 | 1 | 1 | 37,276,337 | 0 |
0 | 0 | I'm new to Python. Currently I'm using it to connect to remote servers via ssh, do some stuff (including copying files), then close the session.
After each session I do ssh.close() and sftp.close(). I'm just doing this because that's the typical way I found on the internet.
I'm wondering what would happen if I just finish my script without closing the session. Will that affect the server? Will it put some kind of (even very small) load on the server? I mean, why are we doing this in the first place? | false | 37,291,961 | 0.462117 | 1 | 0 | 5 | The (local) operating system closes any pending TCP/IP connection opened by the process when the process closes (even if it merely crashes).
So in the end the SSH session is closed, even if you do not close it. Obviously, it's closed abruptly, without proper SSH cleanup. So it may trigger some warning in the server log.
Closing the session is particularly important, when the process is long running, but the session itself is used only shortly.
Anyway, it's a good practice to close the session no matter what. | 0 | 1,588 | 0 | 4 | 2016-05-18T06:36:00.000 | python,session,ssh,sftp,conceptual | What would happen if I don't close an ssh session? | 1 | 2 | 2 | 37,292,169 | 0 |
0 | 0 | I'm new to Python. Currently I'm using it to connect to remote servers via ssh, do some stuff (including copying files), then close the session.
After each session I do ssh.close() and sftp.close(). I'm just doing this because that's the typical way I found on the internet.
I'm wondering what would happen if I just finish my script without closing the session. Will that affect the server? Will it put some kind of (even very small) load on the server? I mean, why are we doing this in the first place? | false | 37,291,961 | 0.291313 | 0 | 0 | 3 | We close the session after use so that the cleanup (closing of all running processes associated with it) is done correctly/easily.
When you ssh.close() it generates a SIGHUP signal. This signal kills all the tasks/processes under the terminal automatically/instantly.
When you abruptly end the session, that is, without the close(), the OS eventually gets to know that the connection is lost/disconnected and initiates the same SIGHUP signal, which closes most open processes/sub-processes.
Even with all that, there are possible issues, like a few processes continuing to run even after SIGHUP because they were started with the nohup option (or have somehow been disassociated from the current session). | 0 | 1,588 | 0 | 4 | 2016-05-18T06:36:00.000 | python,session,ssh,sftp,conceptual | What would happen if I don't close an ssh session? | 1 | 2 | 2 | 37,292,405 | 0
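A minimal paramiko sketch of closing the session cleanly even when the script fails part-way; the host, credentials, and paths are placeholders.

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="user", password="secret")  # placeholders

sftp = None
try:
    sftp = client.open_sftp()
    sftp.get("/remote/output.txt", "output.txt")   # placeholder paths
finally:
    # runs even on errors, so the SFTP channel and SSH session are always closed cleanly
    if sftp is not None:
        sftp.close()
    client.close()
```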
0 | 0 | I want to extract UUID from urls.
for example:
/posts/eb8c6d25-8784-4cdf-b016-4d8f6df64a62?mc_cid=37387dcb5f&mc_eid=787bbeceb2
/posts/d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid/7034
/posts/5ff0021c-16cd-4f66-8881-ee28197ed1cf
I have thousands of this kind of string.
My regex now is ".*\/posts\/(.*)[/?]+.*"
which gives me the result like this:
d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid
84ba0472-926d-4f50-b3c6-46376b2fe9de/uid
6f3c97c1-b877-40e0-9479-6bdb826b7b8f/uid
f5e5dc6a-f42b-47d1-8ab1-6ae533415d24
f5e5dc6a-f42b-47d1-8ab1-6ae533415d24
f7842dce-73a3-4984-bbb0-21d7ebce1749
fdc6c48f-b124-447d-b4fc-bb528abb8e24
As you can see, my regex can't get rid of /uid, but it handles the ?xxxx query parameters fine.
What did I miss? How to make it right?
Thanks | false | 37,310,576 | 0.197375 | 0 | 0 | 2 | Regular expressions try to match as many characters as possible (informally called "maximal munch").
A plain-English description of your regex .*\/posts\/(.*)[/?]+.* would be something like:
Match anything, followed by /posts/, followed by anything, followed by one or more /?, followed by anything.
When we apply that regex to this text:
.../posts/d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid/7034
... the maximal munch rule demands that the second "anything" match be as long as possible, therefore it ends up matching more than you wanted:
d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid
... because there is still the /7034 part remaining, which matches the remainder of the regex.
The best way to fix it is to use a regex which only matches characters that can actually occur in a UID (as suggested by @alecxe). | 0 | 3,800 | 0 | 1 | 2016-05-18T21:42:00.000 | python,regex | extract uuid from url | 1 | 1 | 2 | 37,310,823 | 0 |
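Following that suggestion, a pattern that matches only the characters a UUID can contain avoids the greedy-match problem entirely; a minimal sketch using the sample URLs from the question:

```python
import re

# canonical UUID shape: 8-4-4-4-12 hexadecimal digits
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

urls = [
    "/posts/eb8c6d25-8784-4cdf-b016-4d8f6df64a62?mc_cid=37387dcb5f&mc_eid=787bbeceb2",
    "/posts/d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid/7034",
    "/posts/5ff0021c-16cd-4f66-8881-ee28197ed1cf",
]
for url in urls:
    m = UUID_RE.search(url)
    if m:
        print(m.group(0))   # only the UUID, with /uid suffixes and query strings dropped
```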
1 | 0 | Using the form I create several strings that look like XML data. One part of these strings I need to send to several servers using urllib, and another part to a SOAP server, for which I use the suds library. When I receive the responses, I need to compare all of this data and show it to the user. There are nine servers in total and the number of servers can grow. When I make these requests successively, it takes a lot of time. Hence my question: is there some Python library that can make different requests at the same time? Thank you for your answer. | true | 37,311,172 | 1.2 | 0 | 0 | 3 | You might want to consider using PycURL or Twisted. These should have the asynchronous capabilities you're looking for. | 0 | 112 | 0 | 1 | 2016-05-18T22:33:00.000 | python,xml,django | Django sending xml requests at the same time | 1 | 1 | 1 | 37,311,205 | 0
0 | 0 | I know that in the API users must allow the app permissions to use their specific user data.
But as a user, I have friends who have their own likes. Is there a way to query another user's or my friend's profile (using their user id) to get a list of their likes via the Graph API?
I.e., because they've decided to friend me, they've given me their permission on a user basis to access their data on Facebook, so can I query this via the Graph API as myself instead of as an external app? | true | 37,327,132 | 1.2 | 0 | 0 | 1 | No, that is not possible.
What is visible/accessible to you as a user on facebook.com or in their official apps, has litte correlation to what you can see via API.
If you want to access anyone’s likes via API – then they have to login to your app first, and grant it appropriate permission. | 0 | 38 | 0 | 0 | 2016-05-19T15:01:00.000 | python,facebook,facebook-graph-api | How to use Facebook graph api as myself instead of external app, to query friend's data? | 1 | 1 | 1 | 37,327,933 | 0 |
0 | 0 | I need to transmit full byte packets in my custom format via TCP. But if I understand correctly, TCP is a streaming protocol, so when I call the send method on the sender side, there is no guarantee that it will be received with the same size on the receiver side when calling recv (it can be merged together by Nagle's algorithm and then split when it does not fit into a frame or into the buffer).
UDP provides full datagrams, so there is no such issue.
So the question is: what is the best and correct way to recv the same packets as were sent, with the same size and with no glue? I develop using Python.
I think I can use something like HDLC, but I am not sure that iterating through each byte would be the best choice.
Maybe there are some small open-source examples for this situation, or is it described in books? | true | 37,379,793 | 1.2 | 0 | 0 | 2 | Since TCP is only an octet stream this is not possible without glue, either around your data (i.e. framing) or inside your data (structure with clear end).
The way this is typically done is either by having a delimiter (like \r\n\r\n between HTTP header and body) or just prefix your message with the size. In the latter case just read the size (fixed number of bytes) and then read this number of bytes for the actual message. | 0 | 54 | 1 | 0 | 2016-05-22T21:21:00.000 | python,sockets,tcp,packet,datagram | Correct architectual way to send "datagrams" via TCP | 1 | 1 | 1 | 37,380,443 | 0 |
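A minimal Python 3 sketch of the length-prefix approach the answer describes: a 4-byte big-endian size followed by the payload.

```python
import struct

def send_msg(sock, payload):
    # prepend a 4-byte big-endian length so the receiver knows where the message ends
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```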
1 | 0 | I want to build a Python script to submit a form on an internet website, such as a form to automatically publish an item on a site like eBay.
Is it possible to do it with BeautifulSoup, or is that only for parsing websites?
Is it possible to do it with Selenium, but quickly, without actually opening the browser?
Are there any other ways to do it? | false | 37,420,756 | 0 | 0 | 0 | 0 | You can use selenium with PhantomJS to do this without the browser opening. You have to use the Keys portion of selenium to send data to the form to be submitted. It is also worth noting that this method will not work if there are captcha's on the form. | 0 | 55 | 0 | 0 | 2016-05-24T18:01:00.000 | python | Submit form on internet website | 1 | 1 | 3 | 37,421,286 | 0 |
0 | 0 | I am using proxmoxer to manipulate machines on ProxMox (create, delete etc).
Every time I am creating a machine, I provide a description which is being written in ProxMox UI in section "Notes".
I am wondering how I can retrieve that information.
Best would be if it could be done with proxmoxer, but if there is no way to do it with that Python module, I would also be satisfied with a plain ProxMox API call. | false | 37,463,782 | 0 | 0 | 0 | 0 | The description parameter is only a message shown in the Proxmox UI, and it's not related to any function | 0 | 256 | 0 | 1 | 2016-05-26T14:29:00.000 | python,virtual-machine,virtualization,proxmox | How can I retrieve Proxmox node notes? | 1 | 1 | 2 | 37,565,566 | 0
0 | 0 | I am building a command line tool using python that interfaces with an RESTful api. The API uses oauth2 for authentication. Rather than asking for access_token every time user runs the python tool. Can I store the access_token in some way so that I can use it till its lifespan? If it is then how safe it is. | false | 37,471,681 | 0.099668 | 0 | 0 | 1 | Do you want to store it on the service side or locally?
Since your tool interfaces RESTful API, which is stateless, meaning that no information is stored between different requests to API, you actually need to provide access token every time your client accesses any of the REST endpoints. I am maybe missing some of the details in your design, but access tokens should be used only for authorization, since your user is already authenticated if he has a token. This is why tokens are valid only for a certain amount of time, usually 1 hour.
So you need to provide a state either by using cookie (web interface) or storing the token locally (Which is what you meant). However, you should trigger the entire oauth flow every time a user logs in to your client (authenticating user and providing a new auth token) otherwise you are not utilizing the benefits of oauth. | 0 | 4,615 | 0 | 3 | 2016-05-26T21:56:00.000 | python,oauth2 | Is it possible to store authentication token in a separate file? | 1 | 1 | 2 | 44,904,547 | 0 |
1 | 0 | I'm having a bit of a problem figuring out how to generate user friendly links to products for sharing.
I'm currently using /product/{uuid4_of_said_product}
Which is working quite fine - but it's a bit user unfriendly - it's kind of long and ugly.
And I do not wish to use an id as it would allow users to "guess" products. Not that that is too much of an issue - I would just like to avoid it.
Do you have any hints on how to generate unique, user friendly, short sharing urls based on the unique item id or uuid? | true | 37,568,811 | 1.2 | 0 | 0 | 1 | As Seluck suggested I decided to go with base64 encoding and decoding:
In the model my "link" property is now built from the standard url + base64.urlsafe_b64encode(str(media_id))
The url pattern I use to match the base64 pattern:
base64_pattern = r'(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$'
And finally in the view we decode the id to load the proper data:
media_id = base64.urlsafe_b64decode(str(media_id))
media = Media.objects.get(pk=media_id) | 0 | 491 | 0 | 0 | 2016-06-01T12:36:00.000 | python,django,url,sharing | Sharing URL generation from uuid4? | 1 | 1 | 2 | 37,651,587 | 0 |
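A self-contained Python 3 sketch of the same encode/decode round trip (the snippets in the answer above are Python 2 style, where str and bytes are interchangeable):

```python
import base64

def encode_id(media_id):
    # urlsafe variant avoids '+' and '/' so the token can live in a URL path
    return base64.urlsafe_b64encode(str(media_id).encode()).decode()

def decode_id(token):
    return int(base64.urlsafe_b64decode(token.encode()).decode())

token = encode_id(42)          # 'NDI='
assert decode_id(token) == 42
```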
1 | 0 | I'm trying to get the clone URL of a pull request. For example in Ruby using the Octokit library, I can fetch it from the head and base like so, where pr is a PullRequest object: pr.head.repo.clone_url or pr.base.repo.clone_url.
How can I achieve the same thing using github3.py? | true | 37,578,259 | 1.2 | 0 | 0 | 0 | github3.pull_request(owner, repo, number).as_dict()['head']['repo']['clone_url'] | 0 | 107 | 0 | 0 | 2016-06-01T20:32:00.000 | python,github,github3.py | clone url of a pull request | 1 | 1 | 1 | 37,578,664 | 0 |
0 | 0 | I am making a Slack bot. I have been using the python slackclient library to develop the bot. It's working great with one team. I am using the Flask web framework.
As many people add the app to Slack via the "Add to Slack" button, I get their bot_access_token.
Now
how should I run the code with so many Slack tokens? Should I store them in a list and then traverse it with a for loop over all tokens? But that way does not seem good, as I may not be able to handle the simultaneous messages or events I receive. Or is it a good way?
Is there any other way if it's not? | false | 37,603,634 | 0 | 0 | 0 | 0 | You do indeed need to
Store each team token. Please remember to encrypt it
When a team installs your app, create a new RTM connection. When your app/server restarts, loop across all your teams, open a RTM connection for each of them
each connection will receive events from that team, and that team only. You will not receive all notifications on the same connection
(maybe you are coming from Facebook Messenger bots background, where all notifications arrive at the same webhook ? That's not the case with Slack) | 0 | 904 | 0 | 0 | 2016-06-02T23:20:00.000 | python,oauth,slack-api,slack | How to handle many users in slack app for in Python? How to use the multiple tokens? | 1 | 1 | 2 | 37,614,761 | 0 |
0 | 0 | Today I actually needed to retrieve data from the HTTP response header. But since I've never done it before, and there is not much you can find on Google about this, I decided to ask my question here.
So the actual question: How does one print the HTTP response header data in Python? I'm working in Python 3.5 with the requests module and have yet to find a way to do this. | false | 37,616,460 | 0 | 0 | 0 | 0 | Use the .headers attribute of the response object and that's all. You will get the response headers ;) | 0 | 68,238 | 0 | 21 | 2016-06-03T14:02:00.000 | python,http,header,response | How to print out http-response header in Python | 1 | 1 | 7 | 48,498,013 | 0
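A minimal example: the Response object returned by requests exposes the response headers as a case-insensitive dictionary (the URL here is only an example).

```python
import requests

resp = requests.get("https://httpbin.org/get")
print(resp.status_code)
for name, value in resp.headers.items():        # case-insensitive dict of headers
    print("{}: {}".format(name, value))
print(resp.headers.get("Content-Type"))
```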
0 | 0 | My client asked me to build a tool that would let him and his partners to upload video to youtube to his channel automatically .
For example, let's say that my client is A and he has some business partners. A wants to be able to upload videos to his channel; that is easy to do. But the problem here is to let other partners B and C upload their videos to his channel (the channel of person A).
In this case I would need "A" to authorize my app so he can upload videos to his own channel, but how can I handle that for other users? How can other users use the access token of person "A" to upload videos to his channel?
What I've done so far ?
I've got the YouTube upload Python sample from the Google API docs and played with it a bit. I tried subprocess.Popen(cmd) where cmd is the following command: python upload.py --file "video name" --title "title of the vid".
This will lead the user to authorize my app once; that's only fine for person "A". The others won't be able to do that, since they need to upload the video to A's account. | true | 37,646,652 | 1.2 | 0 | 0 | 2 | You can create a server-side script in which you use Google OAuth to upload videos to A's account.
Then you can create a client-side app which allows your clients B and C to upload their videos to the server; on completion, the server can then upload them to A's account.
Alternatively, to avoid uploading twice, if you trust the clients and would like them to be able to upload directly, you can pass them an OAuth access token to A's account. | 0 | 45 | 1 | 0 | 2016-06-05T20:53:00.000 | python,google-oauth | How to handle google api oauth in this app? | 1 | 1 | 1 | 37,647,153 | 0 |
1 | 0 | I was getting this facebook login error:
URL Blocked
This redirect failed because the redirect URI is not
whitelisted in the app’s Client OAuth Settings. Make sure Client and
Web OAuth Login are on and add all your app domains as Valid OAuth
Redirect URIs.
Facebook login requires whitelisting of the call-back url.
what is the call back url for django-social-auth or python-social-auth ? | true | 37,662,390 | 1.2 | 0 | 0 | 1 | include a url to your website that is the absolute url version of this relative url:
/complete/facebook/
how to find this out?
use Chrome browser dev tools, enable preserve log, try to login to your app.
This question / answer is for django-social-auth but likely applies to python-social-auth too. | 0 | 654 | 0 | 0 | 2016-06-06T16:24:00.000 | python-social-auth,django-socialauth | python-social-auth and facebook login: what is the whitelist redirect url to include in fb configuration? | 1 | 1 | 1 | 37,662,391 | 0 |
0 | 0 | My goal is to create a small script that finds all the results of a Google search, but in "raw" form.
I don't speak English very well, so I prefer to give an example to show you what I would like:
I type: elephant
The script returns:
www.elephant.com
www.bluelephant.com
www.ebay.com/elephant
.....
I was thinking about urllib.request, but the return value is not directly usable for that!
I found some tutorials, but none adapted to what I want!
Like I told you, my goal is to have a .txt file as output which contains all the websites that match my query!
Thanks all | true | 37,754,771 | 1.2 | 0 | 0 | 4 | One simple way is to make a request to Google search, then parse the HTML result. You can use a Python library such as Beautiful Soup to parse the HTML content easily and finally get the URL links you need. | 0 | 10,567 | 0 | 2 | 2016-06-10T18:13:00.000 | python | Python - Get Result of Google Search | 1 | 1 | 4 | 37,754,845 | 0
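A rough sketch of that approach with requests and Beautiful Soup. Note that Google's markup changes frequently and automated queries may be blocked, so treat the link-extraction logic as an assumption rather than a stable recipe.

```python
import requests
from bs4 import BeautifulSoup

query = "elephant"
resp = requests.get(
    "https://www.google.com/search",
    params={"q": query},
    headers={"User-Agent": "Mozilla/5.0"},   # plain requests are often rejected
)
soup = BeautifulSoup(resp.text, "html.parser")

with open("results.txt", "w") as out:
    for a in soup.find_all("a", href=True):
        href = a["href"]
        if href.startswith("http"):
            out.write(href + "\n")
```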
0 | 0 | When I try to run tcpServer and tcpClient on the same local network, it works, but I can't run them on the external network. The OS refuses the connection.
Main builtins.ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
I checked whether tcpServer is running or not using netstat, and it is in the listening state.
What am I supposed to do? | true | 37,773,568 | 1.2 | 0 | 0 | 0 | There are most likely two reasons for that:
1.) Your server application is not listening on that particular ip/port
2.) A firewall is blocking that ip/port
I would recommend checking your firewall settings. You could start with turning your firewall off to determine if it really is a firewall issue.
If so, just add an accept rule for your webservice (ip:port).
edit: And check your routing configuration if you are in a more or less complex network. Make sure that both networks can reach each other (e.g. ping the hosts or try to connect via telnet). | 0 | 331 | 1 | 0 | 2016-06-12T11:12:00.000 | python,sockets,tcpclient,tcpserver | Python, tcpServer tcpClient, [WinError 10061] | 1 | 1 | 1 | 37,773,623 | 0 |
0 | 0 | I'm writing a script that makes a post request to a url, I'd then like to open the response page in the browser of the system. I'm having trouble finding out how. | true | 37,793,923 | 1.2 | 0 | 0 | 1 | You could save the content to a local file and use webbrowser.open_new("file://yourlocalfile.html") but this has one major flaw:
Because of the browsers same origin policy this site could not load any external js, css or pictures. | 0 | 2,708 | 0 | 0 | 2016-06-13T15:40:00.000 | python,bots | Open a url response in the browser | 1 | 1 | 2 | 37,794,163 | 0 |
0 | 0 | I am using Selenium with Python and I would like to speed up my tests, let's say by running 5 tests simultaneously. How can I achieve that on a single machine with the help of Selenium Grid? | false | 37,826,093 | 0 | 1 | 0 | 0 | You won't need a Selenium Grid for this. The Grid is used to distribute the test execution across multiple machines. Since you're only using one machine you don't need to use it.
You are running tests so I'm assuming you are using a test framework. You should do some research on how you can run tests in parallel using this framework.
There will probably also be a way to execute a function before test execution. In this function you can start the driver.
I'd be happy to give you a more detailed answer but your question is lacking the framework you are using to run the tests. | 0 | 303 | 0 | 0 | 2016-06-15T04:21:00.000 | python,selenium | How to Speed up Test Execution Using Selenium Grid on single machine | 1 | 1 | 2 | 37,828,643 | 0 |
0 | 0 | I'm currently writinng a script to interact with a live stream, mainly taking screenshots.
I'm using Selenium Webdriver for Python to open Chromedriver and go from there.
However, I want to build this behavior into a bigger program and hide the whole process of opening chromedriver, waiting for the stream to load and then taking a screenshot, so the user only gets the screenshot once it's done.
From what I've found online, it's not possible to hide the command-line console within my script with something like setVisible and I'm okay with the console showing up, but I really have to hide the website popup, so the screenshot will be taken in the background.
Is it possible to do so in Python/Selenium or do I have to switch to another language? | false | 37,857,927 | 0 | 0 | 0 | 0 | You need to create a script with vb or python which will close the popups on the basis of their titles.
Even you can minimise them also.
Code in vb
Set wshShell = CreateObject("WScript.Shell")
Do
ret = wshShell.AppActivate("title of the popup")
If ret = True Then
wshShell.SendKeys "%N"
Exit Do
End If
WScript.Sleep 500
Loop | 0 | 1,090 | 0 | 1 | 2016-06-16T11:31:00.000 | python,selenium,webdriver,video-streaming,selenium-chromedriver | Webdriver without window popup | 1 | 1 | 2 | 37,862,602 | 0 |
I'm using the most basic service, running Ubuntu (the standard config). I have developed some Python scripts on my own PC that use bs4; when I upload them I get the classic error:
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: xml. Do you need to install a parser library?
So I try pip install lxml, and it asks that libxml2 should be installed, and so on, and so on...
I'm not a Linux person, I'm more of a Windows guy. I know I may have to compile something, but I have no idea what or how. I've been looking for tutorials all afternoon, but I can't find anything helpful. | true | 37,871,418 | 1.2 | 0 | 0 | 2 | If you're using Ubuntu, it's way easier to install the pre-packaged version using apt-get install python-bs4 or apt-get install python3-bs4 | 0 | 105 | 0 | 0 | 2016-06-17T00:38:00.000 | python,beautifulsoup,digital-ocean | How can I install BeautifulSoup in a Digital Ocean droplet? | 1 | 1 | 1 | 37,871,468 | 0
0 | 0 | So I've been doing a lot of work with Tweepy and Twitter data mining, and one of the things I want to do is to be able to get all Tweets that are replies to a particular Tweet. I've seen the Search API, but I'm not sure how to use it nor how to search specifically for Tweets in reply to a specific Tweet. Anyone have any ideas? Thanks all. | true | 37,897,064 | 1.2 | 1 | 0 | 0 | I've created a workaround that kind of works. The best way to do it is to search for mentions of a user, then filter those mentions by in_reply_to_status_id. | 0 | 3,634 | 0 | 2 | 2016-06-18T12:39:00.000 | python,search,twitter,tweepy | Tweepy Get Tweets in reply to a particular tweet | 1 | 1 | 2 | 37,902,045 | 0
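A sketch of that workaround; the credentials and ids are placeholders, and it uses API.search as it existed in the tweepy versions current at the time (renamed search_tweets in tweepy 4.x).

```python
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholders
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

target_user = "some_user"     # author of the tweet we care about (placeholder)
target_id = 1234567890        # id of that tweet (placeholder)

replies = []
for status in tweepy.Cursor(api.search, q="to:" + target_user,
                            since_id=target_id).items(500):
    # keep only the mentions that are direct replies to the target tweet
    if status.in_reply_to_status_id == target_id:
        replies.append(status)

print(len(replies))
```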
0 | 0 | I have stored a pyspark sql dataframe in parquet format. Now I want to save it as xml format also. How can I do this? Solution for directly saving the pyspark sql dataframe in xml or converting the parquet to xml anything will work for me. Thanks in advance. | false | 37,945,725 | -0.099668 | 0 | 1 | -1 | You can map each row to a string with xml separators, then save as text file | 0 | 992 | 0 | 0 | 2016-06-21T13:24:00.000 | xml,python-2.7,pyspark,spark-dataframe,parquet | How to save a pyspark sql DataFrame in xml format | 1 | 1 | 2 | 37,989,050 | 0 |
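A minimal sketch of that row-by-row mapping; df is assumed to be an existing pyspark DataFrame (for example, the one read back from the parquet file), and the output path is an assumption.

```python
from xml.sax.saxutils import escape

def row_to_xml(row):
    # one XML element per column, escaped so special characters stay valid
    cells = "".join(
        "<{0}>{1}</{0}>".format(name, escape(str(row[name]))) for name in row.__fields__
    )
    return "<record>{}</record>".format(cells)

df.rdd.map(row_to_xml).saveAsTextFile("/tmp/df_as_xml")
```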
1 | 0 | I'm using DynamoDB. I have a simple Employee table with fields like id, name, salary, doj, etc. What is the equivalent of the query select max(salary) from employee in DynamoDB? | false | 37,959,515 | 0.066568 | 0 | 0 | 1 | There is no cheap way to achieve this in DynamoDB. There is no built-in function to determine the max value of an attribute without retrieving all items and calculating it programmatically. | 0 | 9,688 | 0 | 11 | 2016-06-22T05:47:00.000 | python,amazon-dynamodb,boto3 | Dynamodb max value | 1 | 1 | 3 | 37,960,297 | 0
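A sketch of that scan-and-compute approach with boto3; the table and attribute names are assumptions taken from the question's description.

```python
import boto3

table = boto3.resource("dynamodb").Table("Employee")      # table name is an assumption

max_salary = None
scan_kwargs = {
    "ProjectionExpression": "#s",
    "ExpressionAttributeNames": {"#s": "salary"},         # attribute name is an assumption
}
while True:
    page = table.scan(**scan_kwargs)
    for item in page["Items"]:
        if max_salary is None or item["salary"] > max_salary:
            max_salary = item["salary"]
    if "LastEvaluatedKey" not in page:       # keep paginating until the scan is exhausted
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

print(max_salary)
```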
0 | 0 | User A shares a folder link.
I want to use that shared link to copy that folder to my business dropbox account.
The catch is I don't want a method which downloads the folder to my server and uploads it to my Dropbox account. I want a method by which I can pass that shared link as a parameter, make the API call, and have Dropbox copy the folder to my Dropbox account at their end.
Is there a way using dropbox-api to copy directly to my dropbox account.
Thanks | true | 37,966,600 | 1.2 | 0 | 0 | 1 | Unfortunately, the Dropbox API doesn't offer a way to add a folder from a Dropbox shared link directly to an account via the API, without downloading and re-uploading it. We'll consider this a feature request for a way to do so. | 0 | 79 | 0 | 1 | 2016-06-22T11:23:00.000 | python-2.7,dropbox,dropbox-api | Copy folder using dropbox shared link to a dropbox account without downloading and uploading again | 1 | 1 | 1 | 37,974,715 | 0 |
0 | 0 | I have set up a Python server on a server machine, which is an AWS instance, and I am trying to access it using public_IP:80 from a client machine which is in a different network.
It is not able to load the data from the server. | false | 37,989,566 | 0.197375 | 0 | 0 | 1 | There can be multiple network blockers in this client/server communication.
One of those, which is highly probable, is the security group or NACLs in this AWS-based communication.
If you are running your instance in EC2-Classic, then you need to check the security-group inbound rules to allow the client on port 80; if it is running in an AWS VPC, then check the security-group inbound rules as well as the Network ACLs' inbound and outbound rules.
In the Security Group, allow:
Type HTTP, Protocol TCP, and the source IP should be your client IP or
0.0.0.0/0 (less secure).
And in case of NACLs adjust it as below:
INBOUND Rule 100: HTTP (80), TCP (6), port 80, ALLOW
OUTBOUND Rule 100: Custom TCP Rule, TCP (6), ports 1024-65535, ALLOW
Ephemeral port range can be adjusted here depending upon OS and distribution.
Apart from these adjustments you need to check if Firewall on client/server is blocking any such communication or not. | 0 | 900 | 0 | 0 | 2016-06-23T10:53:00.000 | http,python-3.5 | Setting http.server in python3 | 1 | 1 | 1 | 38,712,590 | 0 |
0 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | false | 37,997,715 | 0.039979 | 0 | 0 | 1 | In Windows 10
1. Type in "Controlled folder Access"
2. Select "Allow an app through Controlled folder access" Select yes to "UAC"
3. Click on "+ Add an allowed app"
4. Select "recently blocked apps"
5. Find the executable for the C:\Python27
6. Click the + to add it.
7. Select Close
Then try running the Python Shell again. This worked for me 100%
Also, add an exception for Python27 through Windows Firewall and select both Private and Public. | 0 | 15,720 | 1 | 2 | 2016-06-23T17:02:00.000 | python,python-idle | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 1 | 4 | 5 | 59,004,916 | 0
0 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | false | 37,997,715 | 0 | 0 | 0 | 0 | First uninstall the application.Then reinstall it BUT at the time of reinstallation try -n at the end of location adress. It worked for me, you can copy the below text and paste it at the location while installing it.
“C:\Program Files\Python32\pythonw.exe” lib\idlelib\idle.py -n | 0 | 15,720 | 1 | 2 | 2016-06-23T17:02:00.000 | python,python-idle | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 1 | 4 | 5 | 49,338,940 | 0 |
0 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | false | 37,997,715 | 0 | 0 | 0 | 0 | IDLE's subprocess didn't make a connection. Either IDLE can't start or a personal firewall software is blocking the connection.
Having had this problem myself I did an uninstall and created a new directory in the C drive and reinstalled in that folder, which worked for me. | 0 | 15,720 | 1 | 2 | 2016-06-23T17:02:00.000 | python,python-idle | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 1 | 4 | 5 | 46,725,394 | 0 |
0 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | false | 37,997,715 | 0 | 0 | 0 | 0 | If you are in a network environment, check the security group (SG) to see if the user is listed under that group.
Otherwise, as others have suggested, right-click the program and run it with Admin rights to enable IDLE to run. | 0 | 15,720 | 1 | 2 | 2016-06-23T17:02:00.000 | python,python-idle | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 1 | 4 | 5 | 44,459,007 | 0
0 | 0 | I am building a web crawler which has to crawl hundreds of websites. My crawler keeps a list of urls already crawled. Whenever crawler is going to crawl a new page, it first searches the list of urls already crawled and if it is already listed the crawler skips to the next url and so on. Once the url has been crawled, it is added to the list.
Currently, I am using binary search to search the url list, but the problem is that once the list grows large, searching becomes very slow. So, my question is that what algorithm can I use in order to search a list of urls (size of list grows to about 20k to 100k daily).
Crawler is currently coded in Python. But I am going to port it to C++ or other better languages. | true | 37,998,013 | 1.2 | 1 | 0 | 3 | You have to decide at some point just how large you want your crawled list to become. Up to a few tens of millions of items, you can probably just store the URLs in a hash map or dictionary, which gives you O(1) lookup.
In any case, with an average URL length of about 80 characters (that was my experience five years ago when I was running a distributed crawler), you're only going to get about 10 million URLs per gigabyte. So you have to start thinking about either compressing the data or allowing re-crawl after some amount of time. If you're only adding 100,000 URLs per day, then it would take you 100 days to crawl 10 million URLs. That's probably enough time to allow re-crawl.
If those are your limitations, then I would suggest a simple dictionary or hash map that's keyed by URL. The value should contain the last crawl date and any other information that you think is pertinent to keep. Limit that data structure to 10 million URLs. It'll probably eat up close to 2 GB of space, what with dictionary overhead and such.
You will have to prune it periodically. My suggestion would be to have a timer that runs once per day and cleans out any URLs that were crawled more than X days ago. In this case, you'd probably set X to 100. That gives you 100 days of 100,000 URLs per day.
If you start talking about high capacity crawlers that do millions of URLs per day, then you get into much more involved data structures and creative ways to manage the complexity. But from the tone of your question, that's not what you're interested in. | 0 | 361 | 0 | 0 | 2016-06-23T17:19:00.000 | python,c++,algorithm,search | Efficiently searching a large list of URLs | 1 | 2 | 2 | 37,998,220 | 0 |
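A minimal sketch of the dictionary-keyed-by-URL idea described above, including the periodic pruning the answer suggests; the 100-day figure comes from the answer, everything else is illustrative.

```python
import time

RECRAWL_AFTER = 100 * 24 * 3600        # seconds; 100 days, as in the answer
crawled = {}                            # url -> timestamp of last crawl

def should_crawl(url):
    last = crawled.get(url)             # O(1) average-case lookup
    return last is None or time.time() - last > RECRAWL_AFTER

def mark_crawled(url):
    crawled[url] = time.time()

def prune():
    # called once a day to drop entries older than the re-crawl window
    cutoff = time.time() - RECRAWL_AFTER
    for url in [u for u, t in crawled.items() if t < cutoff]:
        del crawled[url]
```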
0 | 0 | I am building a web crawler which has to crawl hundreds of websites. My crawler keeps a list of urls already crawled. Whenever crawler is going to crawl a new page, it first searches the list of urls already crawled and if it is already listed the crawler skips to the next url and so on. Once the url has been crawled, it is added to the list.
Currently, I am using binary search to search the url list, but the problem is that once the list grows large, searching becomes very slow. So, my question is that what algorithm can I use in order to search a list of urls (size of list grows to about 20k to 100k daily).
Crawler is currently coded in Python. But I am going to port it to C++ or other better languages. | false | 37,998,013 | -0.099668 | 1 | 0 | -1 | I think hashing your values before putting them into your binary searched list- this will get rid of the probable bottleneck of string comparisons, swapping to int equality checks. It also keeps the O(log2(n)) binary search time- you may not get consistent results if you use python's builtin hash() between runs, however- it is implementation-specific. Within a run, it will be consistent. There's always the option to implement your own hash which can be consistent between sessions as well. | 0 | 361 | 0 | 0 | 2016-06-23T17:19:00.000 | python,c++,algorithm,search | Efficiently searching a large list of URLs | 1 | 2 | 2 | 37,998,279 | 0 |
0 | 0 | When I try to import passlib.hash in my python script I get a 502 error
502 - Web server received an invalid response while acting as a gateway or proxy server.
There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
The only modules I'm importing are:
import cgi, cgitb
import passlib.hash
passlib.hash works fine when I try in a normal python script or if I try importing in python interactive shell
using python 2.7, iis 8
when I browse on the localhost I get this
HTTP Error 502.2 - Bad Gateway
The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are "Traceback (most recent call last): File "C:##path remove##\test.py", line 2, in import passlib.hash ImportError: No module named passlib.hash ". | false | 38,024,133 | 0 | 1 | 0 | 0 | I fixed the issue by uninstalling activePython which was installing modules under the users profile in the appdata folder.
This caused an issue where the anonymous user of the website no longer had access to the installed modules.
I uninstalled ActivePython, returned to the normal Windows Python install, and re-installed the modules using pip.
All scripts are working as expected, happy days. | 0 | 116 | 0 | 0 | 2016-06-25T01:33:00.000 | python-2.7,cgi,iis-8 | Importing passlib.hash with CGI | 1 | 1 | 1 | 38,183,797 | 0 |
0 | 0 | My requirement is to communicate socketio with nodejs server to Raspberry Pi running a local Python app. Please help me. I can find ways of communication with web app on google but is there any way to communicate with Python local app with above mentioned requirements. | true | 38,032,608 | 1.2 | 1 | 0 | 1 | It's unclear exactly which part you need help with. To make a socket.io connection work, you do the following:
Run a socket.io server on one of your two computers. Make sure it is listening on a known port (it can share a port with a web server if desired).
On the other computer, get a socket.io client library and use that to make a socket.io connection to the other computer.
Register message handlers on both computers for whatever custom messages you intend to send each way and write the code to process those incoming messages.
Write the code to send messages to the other computer at the appropriate time.
Socket.io client and server libraries exist for both node.js and python so you can either type of library for either type of system.
The important things to understand are that you must have a socket.io server up and running. The other endpoint then must connect to that server. Once the connection is up and running, you can then send message from either end to the other end.
For example, you could set up a socket.io server on node.js. Then, use a socket.io client library for python to make a socket.io connection to the node.js server. Then, once the connection is up and running, you are free to send messages from either end to the other and, if you have, message handlers listening for those specific messages, they will be received by the other end. | 0 | 885 | 0 | 0 | 2016-06-25T20:17:00.000 | python,node.js,socket.io,raspberry-pi | Raspberry Pi python app and nodejs socketio communication | 1 | 1 | 1 | 38,032,700 | 0 |
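A sketch of the Python (Raspberry Pi) side of that setup using the python-socketio client package; the server address and event names are assumptions, and the node.js side must register matching handlers.

```python
import socketio   # pip install "python-socketio[client]"

sio = socketio.Client()

@sio.on("greeting")                          # custom event name is an assumption
def on_greeting(data):
    print("server says:", data)

sio.connect("http://192.168.1.10:3000")      # node.js socket.io server address (assumption)
sio.emit("sensor_reading", {"temp": 21.5})   # event name and payload are assumptions
sio.wait()                                   # keep the connection open for incoming events
```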
0 | 0 | How can I use TCP to interact with a server and download its HTML file to my computer? I know that first you need to perform the 3-way handshake and then send a GET request. But what then? Thank you | false | 38,075,251 | 0 | 0 | 0 | 0 | HTML text is found in the packet's Raw layer in scapy. In order to save an HTML file, simply write pkt[Raw].load into a text file. | 0 | 30 | 0 | 0 | 2016-06-28T11:41:00.000 | python-2.7,scapy | HTML download using tcp | 1 | 1 | 1 | 38,086,938 | 0
0 | 0 | I need to send email with same content to 1 million users.
Is there any way to do so by writing script or something?
Email Id's are stored in Excel format. | false | 38,090,643 | 0.099668 | 1 | 0 | 1 | It is absolutely possible for a bot to be made that creates gmail accounts, in fact many already exist. The main problem is how to solve the captcha that is required for each new account, however there are services already built to handle this. The only problem then is being willing to violate googles terms of services, as I'm sure this does in one way or another. | 0 | 113 | 0 | 0 | 2016-06-29T04:50:00.000 | python,python-2.7,smtp | Automatic email sending from a gmail account using script | 1 | 1 | 2 | 38,090,686 | 0 |
0 | 0 | I'm new to the WAMP protocol and the Crossbar.io servers that are based on it. The problem is: I have an Arduino Uno + Ethernet Shield and I want to send the information to the Crossbar server.
The Arduino Uno has no support for Autobahn, WAMP, or Crossbar. I can only send normal packets via UDP and websocket with an Uno + Ethernet.
Is there some way that I can read this UDP packet in the Crossbar server from the Arduino?
We are working on a C library for lower-end devices, but as far as I can tell (I'm not directly involved) something with the specs of the Uno will remain out of the scope of WAMP even then since the initial plan is that the library itself will consume about 8k of RAM. | 0 | 130 | 1 | 0 | 2016-06-30T03:47:00.000 | python-2.7,sockets,arduino-uno,crossbar,wamp-protocol | Receiving an UDP Packet from Arduino in a CrossbarServer in Python | 1 | 1 | 1 | 38,119,608 | 0 |
0 | 0 | If I do something like "import selenium" (or any other kind of third party library) in a .py file and then run it from the terminal, it works just fine. But if I make a new file in PyCharm CE and do the same thing, it can't find the library / module.
How can I fix this or get it to point in the right location? I use a Macbook Pro. | true | 38,134,361 | 1.2 | 0 | 0 | 5 | You need to setup your project in PyCharm to use the Python interpreter that has your libraries:
Go to: file->settings->project->project interpreter
And select the appropriate interpreter from the dropdown. After selecting an interpreter, the window displays a list of libraries installed on that interpreter; this should further help you make the right selection. | 0 | 2,916 | 0 | 2 | 2016-06-30T22:38:00.000 | python,macos,pycharm,libraries | Why won't PyCharm see my libraries? | 1 | 2 | 2 | 38,134,383 | 0 |
0 | 0 | If I do something like "import selenium" (or any other kind of third party library) in a .py file and then run it from the terminal, it works just fine. But if I make a new file in PyCharm CE and do the same thing, it can't find the library / module.
How can I fix this or get it to point in the right location? I use a Macbook Pro. | false | 38,134,361 | 0.099668 | 0 | 0 | 1 | I've faced a similar issue on Pop!_OS after installing PyCharm via Flatpak. I think the installation is somehow incomplete, as I've had these issues (among others):
Installer could not create the menu shortcut due to the lack of credentials. Unlike during a typical installation, it wouldn't ask for the password and instead I had to uncheck that option altogether.
Built-in terminal defaulted to sh. Even after changing to bash, it would not read my .bashrc and many commands were missing.
After changing the interpreter into a local virtualenv, it would just default to Python 3.7 (even though the version was actually 3.8) and it didn't see any of my installed libraries.
When I've tried to use a Docker Compose environment, IDE failed to detect Docker Compose installation.
I've eventually uninstalled PyCharm and downloaded it directly from Jetbrains website to make it work correctly. | 0 | 2,916 | 0 | 2 | 2016-06-30T22:38:00.000 | python,macos,pycharm,libraries | Why won't PyCharm see my libraries? | 1 | 2 | 2 | 64,860,059 | 0 |
0 | 0 | I've been trying to extract the domain names from a list of urls, so that http://supremecosts.com/contact-us/ would become http://supremecosts.com. I'm trying to find a clean way of doing it that will be adaptable to various gtlds and cctlds. | false | 38,157,567 | 0 | 1 | 0 | 0 | Probably a silly, yet valid way of doing this is:
Save the URL in a string and scan it from back to front. As soon as you come across a full stop, scrap everything from 3 spaces ahead. I believe urls do not have full stops after the domain names. Please correct me if I am wrong. | 0 | 282 | 0 | 1 | 2016-07-02T07:15:00.000 | python | Extract domain name only from url, getting rid of the path (Python) | 1 | 1 | 4 | 38,157,658 | 0 |
1 | 0 | I'm using Knime 3.1.2 on OSX and Linux for OPENMS analysis (Mass Spectrometry).
Currently, it uses static filename.mzML files manually put in a directory. It usually has more than one file pressed in at a time ('Input FileS' module not 'Input File' module) using a ZipLoopStart.
I want these files to be downloaded dynamically and then pressed into the workflow...but I'm not sure the best way to do that.
Currently, I have a Python script that downloads .gz files (from AWS S3) and then unzips them. I already have variations that can unzip the files into memory using StringIO (and maybe pass them into the workflow from there as data??).
It can also download them to a directory...which maybe can them be used as the source? But I don't know how to tell the ZipLoop to wait and check the directory after the python script is run.
I also could have the python script run as a separate entity (outside of knime) and then, once the directory is populated, call knime...HOWEVER there will always be a different number of files (maybe 1, maybe three)...and I don't know how to make the 'Input Files' knime node to handle an unknown number of input files.
I hope this makes sense.
Thanks! | false | 38,160,597 | 0.099668 | 0 | 0 | 1 | There are multiple options to let things work:
Convert the files in-memory to a Binary Object cells using Python, later you can use that in KNIME. (This one, I am not sure is supported, but as I remember it was demoed in one of the last KNIME gatherings.)
Save the files to a temporary folder (Create Temp Dir) using Python and connect the Python node using a flow variable connection to a file reader node in KNIME (which should work in a loop: List Files, check the Iterate List of Files metanode).
Maybe there is already S3 Remote File Handling support in KNIME, so you can do the downloading, unzipping within KNIME. (Not that I know of, but it would be nice.)
I would go with option 2, but I am not so familiar with Python, so for you, probably option 1 is the best. (In case option 3 is supported, that is the best in my opinion.) | 0 | 1,032 | 0 | 1 | 2016-07-02T13:17:00.000 | python-2.7,file-io,knime | Python in Knime: Downloading files and dynamically pressing them into workflow | 1 | 1 | 2 | 38,161,395 | 0 |
1 | 0 | So I've been working on a scraper that goes over 10k+ pages and scrapes data from them.
The issue is that over time, memory consumption rises drastically. So to overcome this - instead of closing the driver instance only at the end of the scrape - the scraper is updated so that it closes the instance after every page is loaded and the data extracted.
But ram memory still gets populated for some reason.
I tried using PhantomJS but it doesn't load data properly for some reason.
I also tried with the initial version of the scraper to limit cache in Firefox to 100mb, but that also did not work.
Note: I run tests with both chromedriver and firefox, and unfortunately I can't use libraries such as requests, mechanize, etc... instead of selenium.
Any help is appreciated since I've been trying to figure this out for a week now. Thanks. | false | 38,164,635 | 0 | 0 | 0 | 0 | I have experienced similar issue and destroying that driver my self (i.e setting driver to None) prevent those memory leaks for me | 0 | 10,312 | 0 | 11 | 2016-07-02T21:29:00.000 | python,selenium,firefox,selenium-webdriver,selenium-chromedriver | Selenium not freeing up memory even after calling close/quit | 1 | 2 | 4 | 53,867,150 | 0 |
1 | 0 | So I've been working on a scraper that goes over 10k+ pages and scrapes data from them.
The issue is that over time, memory consumption rises drastically. So to overcome this - instead of closing the driver instance only at the end of the scrape - the scraper is updated so that it closes the instance after every page is loaded and the data extracted.
But ram memory still gets populated for some reason.
I tried using PhantomJS but it doesn't load data properly for some reason.
I also tried with the initial version of the scraper to limit cache in Firefox to 100mb, but that also did not work.
Note: I run tests with both chromedriver and firefox, and unfortunately I can't use libraries such as requests, mechanize, etc... instead of selenium.
Any help is appreciated since I've been trying to figure this out for a week now. Thanks. | false | 38,164,635 | 0.099668 | 0 | 0 | 2 | Are you trying to say that your drivers are what's filling up your memory? How are you closing them? If you're extracting your data, do you still have references to some collection that's storing them in memory?
You mentioned that you were already running out of memory when you closed the driver instance at the end of scraping, which makes it seem like you're keeping extra references. | 0 | 10,312 | 0 | 11 | 2016-07-02T21:29:00.000 | python,selenium,firefox,selenium-webdriver,selenium-chromedriver | Selenium not freeing up memory even after calling close/quit | 1 | 2 | 4 | 38,164,741 | 0 |
0 | 0 | I wrote a python app that manage gcm messaging for an android chat app, where could I host this app to be able to work 24/7, it's not a web app, Is it safe and reliable to use PythonAnywhere consoles? | true | 38,166,331 | 1.2 | 1 | 0 | 1 | PythonAnywhere dev here: I wouldn't recommend our consoles as a place to run an XMPP server -- they're meant more for exploratory programming. AWS (like Adam Barnes suggests) or a VPS somewhere like Digital Ocean would probably be a better option. | 0 | 239 | 0 | 0 | 2016-07-03T02:55:00.000 | python,server,xmpp,host | Where to host a python chat server? | 1 | 1 | 2 | 38,188,784 | 0 |
0 | 0 | Looking at Graphite's latest documentation, I see that I can feed data into Graphite via plaintext. But I can't seem to find a way in Python 3 to send plaintext via the server ip address and port 2003. All I can seem to do is send bytes via sock.sendall(message.encode()) and Graphite does not seem to read that. Is there a way for Python 3 to feed data into Graphite? | true | 38,235,295 | 1.2 | 0 | 0 | 0 | My code actually worked. For some reason, the graph itself did not update. So sock.sendall(message.encode()) actually does work for the plaintext protocol. | 0 | 740 | 0 | 0 | 2016-07-06T23:14:00.000 | sockets,python-3.x,graphite,plaintext | How do I send plaintext via Python 3 socket library? | 1 | 1 | 1 | 38,251,602 | 0 |
0 | 0 | I have a hypothesis that you could increase your chances of getting tickets for sell-out events by attempting to access the website from multiple locations. Just to be clear, I'm not trying to be that guy who buys ALL of the tickets for events and then sells them on at 10X the price; incidentally, I'm talking specifically about one event, Glastonbury festival, for which I have tried for many years to buy a ticket and never been successful.
The problem is that you literally can't get on the site when the tickets get released.
So I guess there are a few qualifying questions to work out whether I even need to ask the main question.
What is actually happening on the website's server(s) at these times? Does the sheer volume of traffic cause some users to get 'rejected'?
Is it down to chance who gets through to the site?
Would trying to access the site multiple times increase your chances?
If so, would you have to try to access it from multiple locations? I.e. as opposed to just opening multiple tabs in the same browser.
Which brings me to the actual question:
Could this be achieved as simply as using Python to open multiple instances of Tor? | true | 38,247,070 | 1.2 | 0 | 0 | 1 | This has very little to do with your connection. The server is simply drowning in requests. More requests from different locations won't help you. A faster connection might help you get into the queue before anyone else, but multiple connections won't help. If you really want tickets, figure out how to move through the website in an automated way such that you submit a request to move through the menus faster than any human could. | 1 | 206 | 0 | 0 | 2016-07-07T13:42:00.000 | python,vpn,tor | Programmatically access one website from multiple locations? | 1 | 1 | 1 | 38,247,424 | 0 |
1 | 0 | I am working on a small project where I have to submit a form to a website.
The website is, however, using onclick event to submit the form (using javascript).
How can the onclick event be simulated in python?
Which modules can be used? I have heard about selenium and mechanize modules. But, which module can be used or in case of both, which one is better?
I am new to web scraping and automation, so it would be very helpful.
Thanks in advance. | false | 38,298,459 | 1 | 0 | 0 | 7 | Ideally you don't even need to clicks buttons in these kind of cases.
All you need is to see which web service the form sends a request to when you click the submit button.
For that, open the developer tools in your browser, go to the Network tab and select 'preserve log'. Now submit the form manually and look for the first XHR GET/POST request sent. It will be a POST request 90% of the time.
Now when you select that request, the request parameters will show the values that you entered while submitting the form. Bingo!!
Now all you need to do is mimic this request with the relevant request headers and parameters in your Python code using requests. And Wooshh!!
Hope it helps.. | 0 | 13,226 | 0 | 5 | 2016-07-11T02:47:00.000 | javascript,python,selenium,web-scraping | How can I simulate onclick event in python? | 1 | 1 | 2 | 38,298,895 | 0 |
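A hedged sketch of the replay step described in the answer above; the URL, form fields and headers are placeholders and must be copied from the request captured in the browser's Network tab.
import requests

url = "http://example.com/submit"                       # placeholder endpoint from the captured XHR
payload = {"username": "alice", "comment": "hello"}     # placeholder form fields
headers = {
    "X-Requested-With": "XMLHttpRequest",               # often required by AJAX-style endpoints
    "User-Agent": "Mozilla/5.0",
}

response = requests.post(url, data=payload, headers=headers)
response.raise_for_status()                             # fail loudly on HTTP errors
print(response.text[:500])                              # inspect the server's reply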
1 | 0 | What is the difference between making a field required in the Python file and in the XML file in OpenERP?
In xml file :field name="employee_id" required="1"
In python file: 'employee_id' : fields.char('Employee Name',required=True), | false | 38,367,206 | 0 | 0 | 0 | 0 | The difference is that when you set a field's required argument to True in the Python .py file, it creates a NOT NULL constraint directly on the database. This means that, no matter what happens (provided data didn't already exist in the table), you can never insert data into that table without that field containing a value; if you try to do so directly from psql or Odoo's XML-RPC or JSON-RPC API you'll get an SQL NOT NULL error, with something like this:
ERROR: null value in column "xxx" violates not-null constraint
On the other hand, if you set a field to be required only on the view (XML), then no constraint is set on the database. This means that the only restriction is the view, and you can bypass that and write to the database directly; or, if you're making an external web service, you can use Odoo's ORM methods to write to the database directly.
If you really want to make sure a column is not null and is required, then it's better to set that in the python code itself instead of the view. | 0 | 227 | 0 | 0 | 2016-07-14T06:44:00.000 | python,openerp | required field difference in python file and xml file | 1 | 1 | 1 | 38,368,546 | 0 |
0 | 0 | I have a network shared folder with file path :C\\Local_Reports. I would like to use os.listdir(":C\\Local_Reports"), but the output is ['desktop.ini', 'target.lnk']. This is not the correct output obviously. The correct output would be [Daemons, Reports, SQL]. How do I successfully access this? | false | 38,399,901 | 0.197375 | 0 | 0 | 1 | I'm silly. I figured it out. I just took the target of the Local_Reports folder and wrote os.listdir(r"\\03vs-cmpt04\Local_Reports"). This just searched the network for the folder and listed the correct output: [Daemons, Reports, SQL] | 0 | 792 | 0 | 1 | 2016-07-15T15:28:00.000 | python,networking,operating-system,share,listdir | listdir of a network shared folder in Python | 1 | 1 | 1 | 38,400,861 | 0 |
1 | 0 | Is there a simple way to get a PDF from a xml with an xsl-fo?
I would like to do it in python.
I know how to produce HTML from an XML & XSL, but I haven't found a code example to get a PDF.
Thanks | false | 38,412,298 | 0.099668 | 0 | 0 | 1 | XSL FO requires a formatting engine to create print output like PDF from XSL FO input. Freely available one is Apache FOP. There are several other commercial products also. I know of no XSL FO engines written in Python though some have Python interfaces. | 0 | 1,430 | 0 | 1 | 2016-07-16T14:45:00.000 | xml,python-2.7,xslt,pdf-generation,xsl-fo | xml + xslfo to PDF python | 1 | 1 | 2 | 38,413,882 | 0 |
1 | 0 | For a college project I'm tasked with getting a Raspberry Pi to control an RC car over WiFi; the best way to do this would be through a web interface for the sake of accessibility (one of the key reqs for the module). However, I keep hitting walls: I can make a Python script control the car, but doing this through a web interface has proven to be difficult, to say the least.
I'm using an Adafruit PWM Pi Hat to control the servo and ESC within the RC car, and it only has Python libraries as far as I'm aware, so it has to be within Python. If there is some method of passing variables from JavaScript to Python that may work, but in a live environment I don't know how reliable it would be.
Any help on the matter would prove most valuable, thanks in advance. | false | 38,418,140 | 0 | 1 | 0 | 0 | I can suggest a way to handle that situation but I'm not sure how much will it suit for your scenario.
Since you are trying to use a WiFi network, I think it would be better to use an SQL server to store the commands the vehicle needs to follow, written sequentially from the web interface. Make the vehicle read the database to check whether there are new commands to be executed, and if there are, execute them sequentially.
That way you can divide the work into two parts and handle the project easily: handling user inputs via the web interface to control the vehicle, and then making the vehicle read the requests and execute them.
Hope this will help you in someway. Cheers! | 0 | 229 | 0 | 0 | 2016-07-17T05:18:00.000 | javascript,python,html,raspberry-pi2 | How do I control a python script through a web interface? | 1 | 1 | 1 | 38,418,381 | 0 |
1 | 0 | This is my first attempt at scraping. There is a website with a search function that I would like to use.
When I do a search, the search details aren't shown in the website url. When I inspect the element and look at the Network tab, the request url stays the same (method:post), but when I looked at the bottom, in the Form Data section, I clicked view source and there were my search details in url form.
My question is:
If the request url = http://somewebsite.com/search
and the form data source = startDate=09.07.2016&endDate=10.07.2016
How can I connect the two to pull data for scraping? I'm new to scraping, so if I'm going about this wrong, please tell me.
Thanks! | false | 38,419,528 | 0.066568 | 0 | 0 | 1 | Ethics
Using a bot to get at the content of sites can be beneficial to you and the site you're scraping. You can use the data to refer to content of the site, like search engines do. Sometimes you might want to provide a service to user that the original website doesn't offer.
However, sometimes scraping is used for nefarious purposes. Stealing content, using the computer resources of others, or worse.
It is not clear what intention you have. Helping you might be unethical. I'm not saying it is, but it could be. I don't understand 'AucT' saying it is bad practice and then giving an answer. What is that all about?
Two notes:
Search results take more resources to generate than most other webpages. They are especially vulnerable to denial-of-service attacks.
I run several sites, and I have noticed that a large amount of traffic is caused by bots. It is literally costing me money. Some sites have more traffic from bots than from people. It is getting out of hand, and I had to invest quite a bit of time to get the problem under control. Bots that don't respect bandwidth limits are blocked by me, permanently. I do, of course, allow friendly bots. | 0 | 167 | 0 | 0 | 2016-07-17T08:57:00.000 | php,python,web-scraping | Web-scraping advice/suggestions | 1 | 1 | 3 | 38,419,736 | 0
0 | 0 | In a test case the keyword sleep 2s is used,
and this is obviously too slow, so I would like to replace it with a wait keyword.
Thing is, it is used for a download: the user downloads a file, and then sleep 2s is used in order to give Robot Framework some time to complete the download.
But I cannot use
wait until element is visible,
wait until page contains,
or wait until page contains element
because nothing changes on the page :/
Any ideas? How do you handle this?
Thank you in advance! | true | 38,433,102 | 1.2 | 0 | 0 | 1 | You could use the keyword Wait Until Keyword Succeeds and just keep repeating the next keyword you want to use until the download is done. Or you could set the implicit wait time to be higher, so the webdriver waits for an implicit amount of time before it executes another keyword. | 0 | 5,431 | 0 | 0 | 2016-07-18T09:26:00.000 | python-2.7,robotframework | Replace sleep with wait keyword on RobotFramework | 1 | 1 | 1 | 46,287,269 | 0
0 | 0 | I'm writing a file cache server to hold copies of static files for a web server. Whenever a thread in the web server needs a static file, it opens a socket connection to the cache server and sends it the result of socket.share() + the name of the file it wants. The cache server uses the result of socket.share to gain access to the http client via socket.fromshare and sends the contents of a static file. Then it closes its copy of the http client socket, and the thread's connection to it.
I'm wondering if using socket.detach instead of socket.close will automagically improve performance? The documentation for socket.detach says this:
Put the socket object into closed state without actually closing the underlying file descriptor. The file descriptor is returned, and can be reused for other purposes.
Do I have to explicitly use the returned file descriptors somehow when new sockets are created by the cache server, or does the socket module know about existing reusable file descriptors? | false | 38,447,361 | 0 | 0 | 0 | 0 | I'm not sure why:
You're using socket.share and
You think it would improve performance.
You say that you're already threading. A web server is going to be IO bound. Most of your time will be spent:
negotiating a TCP/IP connection between client and server
finding the information on disk (memory? Sweet, faster!)
reading from the disk (memory?) and writing to the socket
You should also profile your code before you go about making improvements. What is the actual holdup? Making the connection? Reading from disk? Sending back to the client?
Unless you've already done some tremendous improvements, I'm pretty sure that the actual TCP/IP negotiation is a few orders of magnitude faster than getting your information from the disk. | 1 | 543 | 0 | 2 | 2016-07-18T23:21:00.000 | python,windows,sockets,python-3.x | Can I reuse a socket file handle with socket.fromshare? | 1 | 1 | 2 | 38,458,160 | 0 |
1 | 0 | I want to create a python script that will allow me to upload files to OneNote via command line. I have it working perfectly and it authenticates fine. However, everytime it goes to authenticate, it has to open a browser window. (This is because authentication tokens only last an hour with OneNote, and it has to use a refresh token to get a new one.) While I don't have to interact with the browser window at all, the fact that it needs to open one is problematic because the program has to run exclusively in a terminal environment. (E.g. the OneNote authentication code tries to open a browser, but it can't because there isn't a browser to open).
How can I get around this problem? Please assume it's not possible to change the environment setup.
UPDATE:
You have to get a code in order to generate an access token. This is the part that launches the browser. It is only required the first time though, for that initial token. Afterwards, refresh token requests don't need the code. (I was calling it for both, which was the issue).
That solves the problem of the browser opening each time I run my program. However, it still leaves the problem of the browser having to open that initial time. I can't do that in a terminal environment. Is there a way around that?
E.g. Can I save the code and call it later to get the access token (how long until it expires)? Will the code work for any user, or will it only work for me? | false | 38,515,700 | 0 | 0 | 0 | 0 | If this is always with the same account - you can make the "browser opening and password typing" a one time setup process. Once you've authenticated, you have the "access token" and the "refresh token". You can keep using the access token for ~1hr. Once it expires, you can use the "refresh token" to exchange it for an "access token" without any user interaction. You should always keep the refresh token so you can get new access tokens later.
This is how "background" apps like "IFTTT" keep access to your account for a longer period of time.
Answer to your updated question:
The initial setup has to be through UI in a browser. If you want to automate this, you'll have to write some UI automation. | 0 | 505 | 0 | 2 | 2016-07-21T23:01:00.000 | python,authentication,terminal,onenote,onenote-api | How Do I Authenticate OneNote Without Opening Browser? | 1 | 1 | 2 | 38,515,902 | 0 |
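For the refresh-token exchange described above, here is a hedged sketch with no browser involved; the token endpoint and parameter names follow the Microsoft Live Connect OAuth 2.0 flow used by the OneNote API at the time and should be verified against current documentation before relying on them.
import requests

TOKEN_URL = "https://login.live.com/oauth20_token.srf"   # assumed Live Connect token endpoint

def refresh_access_token(client_id, client_secret, redirect_uri, refresh_token):
    # Exchange the cached refresh token for a fresh access token.
    payload = {
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    }
    resp = requests.post(TOKEN_URL, data=payload)
    resp.raise_for_status()
    data = resp.json()
    # Persist the returned refresh token again; it may be rotated on each exchange.
    return data["access_token"], data.get("refresh_token", refresh_token)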
0 | 0 | I'm looking to use this API for Google Flights to gather some flight data for project I hope to complete. I have one question tho. Can anyone see a way to request multiple dates for the same route in just one call? Or does it have to multiple requests?
Thanks so much I have seen it suggested that it is possible but haven't found any evidence:) | false | 38,575,369 | 0 | 0 | 0 | 0 | It is possible to add more than one flight to the request, see the google developer tutorial for qpx. I am not sure though how many flights fit in one request. | 0 | 54 | 0 | 0 | 2016-07-25T18:50:00.000 | python,json,r,api,web-scraping | QPX. How many returns per query? | 1 | 1 | 1 | 39,146,729 | 0 |
0 | 0 | So I am currently working on a project that involves the google maps API. In order to display data on this, the file needs to be in a geojson format. So far in order to accomplish this, I have been using two programs, 1 in javascript that converts a .json to a CSV, and another that converts a CSV to a geojson file, which can then be dropped on the map. However, I need to make both processes seamless, therefore I am trying to write a python script that checks the format of the file, and then converts it using the above programs and outputs the file. I tried to use many javascript to python converters to convert the javascript file to a python file, and even though the files were converted, I kept getting multiple errors for the past week that show the converted program not working at all and have not been able to find a way around it. I have only seen articles that discuss how to call a javascript function from within a python script, which I understand, but this program has a lot of functions and therefore I was wondering how to call the entire javascript program from within python and pass it the filename in order to achieve the end result. Any help is greatly appreciated. | false | 38,586,396 | 0 | 0 | 0 | 0 | I was able to write a conversion script, and it's working now, thanks! | 1 | 196 | 0 | 0 | 2016-07-26T09:43:00.000 | javascript,python,json,csv,geojson | How to execute an entire Javascript program from within a Python script | 1 | 1 | 2 | 38,633,221 | 0 |
1 | 0 | I am using python boto library to implement SWF.
We are simulating a workflow where we want to execute same task 10 times in a workflow. After the 10th time, the workflow will be marked complete.
The problem is, we want to specify an interval for execution which varies based on the execution count. For example: 5 minutes for 1st execution, 10 minutes for 2nd execution, and so on.
How do I schedule a task by specifying time to execute? | false | 38,588,000 | 0.197375 | 0 | 0 | 1 | There is no delay option when scheduling an activity. The solution is to schedule a timer with delay based on activity execution count and when the timer fires schedule an activity execution. | 0 | 218 | 0 | 0 | 2016-07-26T10:57:00.000 | python,amazon-web-services,boto,amazon-swf | Amazon SWF to schedule task | 1 | 1 | 1 | 38,601,627 | 0 |
0 | 0 | I'm busy trying to use socket.getaddrinfo() to resolve a domain name. When I pass in:
host = 'www.google.com', port = 80, family = socket.AF_INET, type = 0, proto = 0, flags = 0
I get a pair of socket infos like you'd expect, one with SocketKind.SOCK_DGRAM (for UDP) and and the other with SocketKind.SOCK_STREAM (TCP).
When I set proto to socket.IPPROTO_TCP I narrow it to only TCP as expected.
However, when I use proto = socket.SOCK_STREAM (which shouldn't work) I get back a SocketKind.SOCK_RAW.
Also, Python won't let me use proto = socket.IPPROTO_RAW - I get 'Bad hints'.
Any thoughts on what's going on here? | false | 38,593,744 | 0 | 0 | 0 | 0 | socket.SOCK_STREAM should be passed in the type field. Using it in the proto field probably has a very random effect, which is what you're seeing. Proto only takes the IPPROTO constants. For a raw socket, you should use type = socket.SOCK_RAW. I'm not sure getaddrinfo supports that though, it's mostly for TCP and UDP.
It's probably better to have some actual code in your questions. It's much easier to see what's going on then. | 0 | 175 | 1 | 0 | 2016-07-26T15:14:00.000 | python,sockets,python-3.x,getaddrinfo | Unexpected socket.getaddrinfo behavior in Python using SOCK_STREAM | 1 | 1 | 1 | 38,660,201 | 0 |
1 | 0 | In the first place, I could not come up with the correct search terms for this.
Secondly, I couldn't really make it work with the standard smtplib or email packages in Python.
The question is: I have a normal HTML page (basically it contains a script that is generated by the bokeh package in Python); all it does is generate an HTML page whose embedded JavaScript renders a nice zoomable plot when viewed in a browser.
My aim is to send that report (the html basically) over to recipients in a mail. | false | 38,650,665 | 0.379949 | 1 | 0 | 2 | Sorry, but you'll not be able to send an email with JavaScript embedded. That is a security risk. If you're lucky, an email provider will strip it before rendering, if you're unlucky, you'll be sent directly to spam and the provider will distrust your domain.
You're better off sending an email with a link to the chart. | 0 | 516 | 0 | 0 | 2016-07-29T04:43:00.000 | javascript,python,email,bokeh,smtplib | sending dynamic html email containing javascript via a python script | 1 | 1 | 1 | 38,650,801 | 0 |
0 | 0 | I am scraping data of football player statistics from the web using python and Beautiful Soup. I will be scraping from multiple sources, and each source will have a variety of variables about each player which include strings, integers, and booleans. For example player name, position drafted, pro bowl pick (y/n).
Eventually I would like to put this data into a data mining tool or an analysis tool in order to find trends. This will need to be searchable and I will need to be able to add data to a player's info when I am scraping from a new source in a different order.
What techniques should I use to store the data so that I will best be able to add to it and then analyze it later? | false | 38,661,144 | 1 | 0 | 0 | 6 | Use a layered approach: downloading, parsing, storage, analysis.
Separate the layers. Most importantly, don't just download data and then store it in the final parsed format. You will inevitably realise you missed something and need to scrape it all over again. Use something like requests + requests_cache (I found that extending requests_cache.backends.BaseCache and storing it on the filesystem is more convenient for examining scraped HTML than the default sqlite storage backend).
For parsing you're already using beautiful soup which works fine.
For storage & analysis use a database. Avoid the temptation to go with NoSQL -- as soon as you need to run aggregate queries you'll regret it. | 0 | 2,197 | 0 | 1 | 2016-07-29T14:20:00.000 | python,database,numpy,web-scraping | Best way to store scraped data in Python for analysis | 1 | 1 | 1 | 38,661,572 | 0 |
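A hedged sketch of the layered approach the answer recommends, using requests plus requests_cache so raw downloads are cached separately from parsing; the URL and CSS selector are placeholders for the football-statistics pages.
import requests
import requests_cache
from bs4 import BeautifulSoup

# Cache every GET on disk so a parsing mistake never forces a re-scrape.
requests_cache.install_cache("scrape_cache", backend="sqlite", expire_after=None)

def download(url):
    resp = requests.get(url)        # served from the cache after the first hit
    resp.raise_for_status()
    return resp.text

def parse_players(html):
    soup = BeautifulSoup(html, "html.parser")
    for row in soup.select("table.stats tr"):                     # assumed table structure
        cells = [c.get_text(strip=True) for c in row.find_all("td")]
        if cells:
            yield cells

for record in parse_players(download("http://example.com/player-stats")):   # placeholder source
    print(record)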
0 | 0 | Boto3 Mavens,
What is the functional difference, if any, between Clients and Resources?
Are they functionally equivalent?
Under what conditions would you elect to invoke a Boto3 Resource vs. a Client (and vice-versa)?
Although I've endeavored to answer this question by RTM... regrettably, understanding the functional difference between the two still eludes me.
Your thoughts?
Many, many thanks!
Plane Wryter | false | 38,670,372 | 1 | 0 | 0 | 24 | Resources are just a resource-based abstraction over the clients. They can't do anything the clients can't do, but in many cases they are nicer to use. They actually have an embedded client that they use to make requests. The downside is that they don't always support 100% of the features of a service. | 0 | 6,208 | 0 | 50 | 2016-07-30T04:36:00.000 | python,amazon-web-services,boto3 | Are Boto3 Resources and Clients Equivalent? When Use One or Other? | 1 | 1 | 2 | 38,707,084 | 0 |
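To make the client/resource distinction concrete, here is a small S3 comparison; the bucket name is a placeholder and both interfaces are part of the public boto3 API.
import boto3

# Low-level client: calls map 1:1 to API actions and return plain dicts.
s3_client = boto3.client("s3")
response = s3_client.list_objects_v2(Bucket="my-example-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Higher-level resource: the same listing expressed as Python objects.
s3_resource = boto3.resource("s3")
bucket = s3_resource.Bucket("my-example-bucket")
for obj in bucket.objects.all():
    print(obj.key, obj.size)

# The embedded client is still reachable when a resource lacks an operation.
s3_resource.meta.client.head_bucket(Bucket="my-example-bucket")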
1 | 0 | I'm just starting to use Web2PY.
My basic one page app authenticates users to an AD-based LDAP service.
I need to collect other data via REST API calls on behalf of the user from the server side of the app.
I'd like to cache the username and password of the user for a session so the user doesn't have to be prompted for credentials multiple times.
Is there an easy way to do this ? | false | 38,699,035 | 0 | 0 | 0 | 0 | Just wanted to close out on this in case someone in the future is looking at this as well.
I was able to capture the password used to login by adding the following to my db.py
def on_ldap_connect(form):
    username = request.vars.username
    password = request.vars.password
    # Save the username/password to a session variable or secure file to
    # use for authenticating to other services.

auth.settings.login_onaccept.append(on_ldap_connect) | 0 | 71 | 0 | 0 | 2016-08-01T12:33:00.000 | python | Web2PY caching password | 1 | 1 | 1 | 38,837,806 | 0
1 | 0 | With Seafile one is able to create a public upload link (e.g. https://cloud.seafile.com/u/d/98233edf89/) to upload files via Browser w/o authentication.
Seafile webapi does not support any upload w/o authentication token.
How can I use such a link from the command line with curl or from a Python script? | false | 38,742,893 | 1 | 0 | 0 | 9 | It took me 2 hours to find a solution with curl; it needs two steps:
Make a GET request to the public upload-link URL with the repo id as a query parameter, as follows:
curl 'https://cloud.seafile.com/ajax/u/d/98233edf89/upload/?r=f3e30b25-aad7-4e92-b6fd-4665760dd6f5' -H 'Accept: application/json' -H 'X-Requested-With: XMLHttpRequest'
The answer is (JSON) an id-link to use in the next upload POST, e.g.:
{"url": "https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680"}
Use this link to initiate the upload post:
curl 'https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680' -F file=@./tmp/index.html -F filename=index.html -F parent_dir="/my-repo-dir/"
The answer is json again, e.g.
[{"name": "index.html", "id": "0a0742facf24226a2901d258a1c95e369210bcf3", "size": 10521}]
done ;) | 0 | 2,271 | 1 | 5 | 2016-08-03T11:54:00.000 | python,curl,urllib2,http-upload,seafile-server | How to use a Seafile generated upload-link w/o authentication token from command line | 1 | 1 | 2 | 38,743,242 | 0 |
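The same two steps translated to Python with requests, as a hedged sketch; the share token, repo id, file path and target directory are the example values from the curl commands above and must be replaced.
import requests

BASE = "https://cloud.seafile.com"
SHARE_TOKEN = "98233edf89"                              # from the public upload link
REPO_ID = "f3e30b25-aad7-4e92-b6fd-4665760dd6f5"

# Step 1: ask for the one-shot upload URL.
resp = requests.get(
    "{}/ajax/u/d/{}/upload/".format(BASE, SHARE_TOKEN),
    params={"r": REPO_ID},
    headers={"Accept": "application/json", "X-Requested-With": "XMLHttpRequest"},
)
resp.raise_for_status()
upload_url = resp.json()["url"]

# Step 2: post the file to that URL.
with open("./tmp/index.html", "rb") as fh:
    resp = requests.post(
        upload_url,
        files={"file": ("index.html", fh)},
        data={"filename": "index.html", "parent_dir": "/my-repo-dir/"},
    )
resp.raise_for_status()
print(resp.json())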
1 | 0 | I am scraping data from multiple websites.
To do that I have written multiple web scrapers using Selenium and PhantomJS.
Those scrapers return values.
My question is: is there a way I can feed those values to a single Python program that will sort through that data in real time?
What I want to do is not save that data to analyze it later; I want to send it to a program that will analyze it in real time.
what i have tried: i have no idea where to even start | true | 38,751,024 | 1.2 | 0 | 0 | -1 | You can try writing the data you want to share to a file and have the other script read and interpret it. Have the other script run in a loop to check if there is a new file or the file has been changed. | 0 | 87 | 0 | 0 | 2016-08-03T18:22:00.000 | python,macos,os.system | Sharing data with multiple python programms | 1 | 2 | 3 | 38,751,112 | 0 |
1 | 0 | I am scraping data from multiple websites.
To do that I have written multiple web scrapers using Selenium and PhantomJS.
Those scrapers return values.
My question is: is there a way I can feed those values to a single Python program that will sort through that data in real time?
What I want to do is not save that data to analyze it later; I want to send it to a program that will analyze it in real time.
what i have tried: i have no idea where to even start | false | 38,751,024 | -0.066568 | 0 | 0 | -1 | Simply use files for data exchange and a trivial locking mechanism.
Each writer or reader (only one reader, it seems) gets a unique number.
If a writer or reader wants to write to the file, it renames it to its original name + the number and then writes or reads, renaming it back after that.
The others wait until the file is available again under its own name and then access it by locking it in a similar way.
Of course there are shared memory and such, or memory-mapped files and semaphores. But this mechanism has worked flawlessly for me for over 30 years, on any OS, over any network, since it's trivially simple.
It is in fact a poor man's mutex semaphore.
To find out if a file has changed, look at its modification timestamp.
But the locking is necessary too, otherwise you'll land into a mess. | 0 | 87 | 0 | 0 | 2016-08-03T18:22:00.000 | python,macos,os.system | Sharing data with multiple python programms | 1 | 2 | 3 | 38,751,162 | 0 |
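A rough Python sketch of the rename-based locking just described, under the assumption that all programs share one data file on the same filesystem; names and timings are illustrative.
import os
import time

DATA_FILE = "shared_data.txt"          # the file all scrapers and the analyzer share
open(DATA_FILE, "a").close()           # make sure the shared file exists

def claim(my_id, timeout=10.0):
    # Renaming the file is atomic, so whoever succeeds owns it until they rename it back.
    claimed = "{}.{}".format(DATA_FILE, my_id)
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            os.rename(DATA_FILE, claimed)
            return claimed
        except OSError:
            time.sleep(0.05)           # someone else holds it; retry shortly
    raise TimeoutError("could not claim " + DATA_FILE)

def release(claimed):
    os.rename(claimed, DATA_FILE)

# Writer side (e.g. one of the scrapers):
path = claim("scraper1")
with open(path, "a") as fh:
    fh.write("new scraped value\n")
release(path)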
0 | 0 | I've got a network application written in Python3.5 which takes advantage of pythons Asyncio which concurrently handles each incoming connection.
On every concurrent connection, I want to store the connected clients data in a list. I'm worried that if two clients connect at the same time (which is a possibility) then both tasks will attempt to write to the list at the same time, which will surely raise an issue. How would I solve this? | false | 38,787,989 | 0.197375 | 0 | 0 | 2 | There is lots of info that is missing in your question.
Is your app threaded? If yes, then you have to wrap your list in a threading.Lock.
Do you switch context (e.g. use await) between writes (to the list) in the request handler? If yes, then you have to wrap your list in a asyncio.Lock.
Do you do multiprocessing? If yes then you have to use multiprocessing.Lock
Is your app divided onto multiple machines? Then you have to use some external shared database (e.g. Redis).
If the answer to all of those questions is no, then you don't have to do anything, since a single-threaded async app cannot update a shared resource in parallel. | 1 | 7,818 | 0 | 7 | 2016-08-05T11:20:00.000 | python,python-3.x,concurrency,python-asyncio,shared-resource | Python3 Asyncio shared resources between concurrent tasks | 1 | 1 | 2 | 38,791,950 | 0
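A minimal sketch of the asyncio.Lock case from the list above, assuming a single-threaded asyncio application; the lock only matters when a handler awaits between the operations that must stay consistent.
import asyncio

clients = []
clients_lock = asyncio.Lock()

async def register_client(name):
    async with clients_lock:
        clients.append(name)
        await asyncio.sleep(0)     # an await here is why the lock is needed at all
    print("registered", name, "->", clients)

async def main():
    await asyncio.gather(*(register_client("client-{}".format(i)) for i in range(3)))

asyncio.get_event_loop().run_until_complete(main())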
0 | 0 | I want to connect a Tun to a socket so that whatever data is stored in the Tun file will then end up being pushed out to a socket which will receive the data. I am struggling with the higher level conceptual understanding of how I am supposed to connect the socket and the Tun. Does the Tun get a dedicated socket that then communicates with another socket (the receive socket)? Or does the Tun directly communicate with the receive socket? Or am I way off all together? Thanks! | false | 38,796,252 | 0 | 0 | 0 | 0 | If I am understanding your problem, you should be able to write an application that connects to the tun device and also maintains another network socket. You will need some sort of multiplexing such as epoll or select. But, basically, whenever you see data on the tun interface, you can receive the data into a buffer and then provide this buffer (with the correct number of received octets) to the send call of the other socket. Typically you use such a setup when you insert some custom header or something to e.g., implement a custom VPN solution. | 0 | 420 | 0 | 0 | 2016-08-05T19:08:00.000 | python,sockets,networking,tun | Connecting a Tun to a socket | 1 | 1 | 1 | 38,833,280 | 0 |
0 | 1 | I'm facing a problem like this: I used tweepy to collect 10000+ tweets, then used NLTK naive-bayes classification and filtered the tweets down to 5000+.
I want to generate a graph of user friendship from those classified 5000 tweets. The problem is that I am able to check it with tweepy.api.show_friendship(), but it takes far too much time and sometimes ends up in endless rate-limit errors.
Is there any way I can check the friendship more efficiently? | false | 38,818,981 | 0 | 1 | 0 | 0 | I don't know much about the limits with Tweepy, but you can always write a basic web scraper with urllib and BeautifulSoup to do so.
You could take a website such as www.doesfollow.com which accomplishes what you are trying to do. (not sure about request limits with this page, but there are dozens of other websites that do the same thing) This website is interesting because the url is super simple.
For example, in order to check if Google and Twitter are "friends" on Twitter, the link is simply www.doesfollow.com/google/twitter.
This would make it very easy for you to run through the users as you can just append the users to the url such as 'www.doesfollow.com/'+ user1 + '/' + user2
The results page of doesfollow has this tag if the users are friends on Twitter:
<div class="yup">yup</div>,
and this tag if the users are not friends on Twitter:
<div class="nope">nope</div>
So you could parse the page source code and search to find which of those tags exist to determine if the users are friends on Twitter.
This might not be the way that you wanted to approach the problem, but it's a possibility. I'm not entirely sure how to approach the graphing part of your question though. I'd have to look into that. | 0 | 592 | 0 | 3 | 2016-08-07T22:03:00.000 | python,twitter,tweepy | Most efficient way to check twitter friendship? (over 5000 check) | 1 | 1 | 1 | 38,819,049 | 0 |
0 | 0 | I am trying to run pcapy_sniffer.py but i get this
pcapy.PcapError: eth1: You don't have permission to capture on that device (socket: Operation not permitted) | true | 38,832,347 | 1.2 | 0 | 0 | 0 | If you're running on linux or OS X try running as root or with sudo, otherwise if you're on windows try running as administrator. | 0 | 1,739 | 1 | 0 | 2016-08-08T14:50:00.000 | python,scapy,pcap | pcapy.PcapError: eth1: You don't have permission to capture on that device | 1 | 1 | 1 | 38,832,386 | 0 |
0 | 0 | If our network has a proxy, then some sites cannot be opened.
I want to check iteratively , how many sites can be accessed through our network. | false | 38,834,104 | 0 | 0 | 0 | 0 | Find out what the source code of the Proxy Block page is.
Use urllib and BeautifulSoup to try and scrape the page and parse the page's source code to see if you can find something unique that can tell you if the site is accessible or not.
For example, in my office, when a page is blocked by our proxy the title tag of the source code is <title>Network Error</title>. Something such as that could be an identifier for you.
Just a quick idea.
So for example you could have the URL's to test in a list and iterate through the list in a loop and try and scrape each site. | 0 | 243 | 0 | 0 | 2016-08-08T16:19:00.000 | python,python-3.x,url | Python : How to check if a given site is accessible through a proxy network? | 1 | 1 | 1 | 38,834,146 | 0 |
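A hedged sketch of that loop; the proxy address, URL list and the "Network Error" block-page marker are assumptions taken from the example above and should be adapted to your own proxy.
import requests
from bs4 import BeautifulSoup

PROXIES = {"http": "http://proxy.local:8080", "https": "http://proxy.local:8080"}  # assumed proxy
BLOCK_MARKER = "Network Error"                    # text that only appears on the proxy's block page
urls = ["http://example.com", "http://example.org"]

for url in urls:
    try:
        resp = requests.get(url, proxies=PROXIES, timeout=10)
        title = BeautifulSoup(resp.text, "html.parser").title
        blocked = title is not None and BLOCK_MARKER in title.get_text()
        print(url, "BLOCKED" if blocked else "accessible ({})".format(resp.status_code))
    except requests.RequestException as exc:
        print(url, "not reachable:", exc)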
0 | 0 | TCP flows by their own nature will grow until they fill the maximum capacity of the links used from src to dst (if all those links are empty).
Is there an easy way to limit that ? I want to be able to send TCP flows with a maximum X mbps rate.
I thought about just sending X bytes per second using the socket.send() function and then sleeping the rest of the time. However if the link gets congested and the rate gets reduced, once the link gets uncongested again it will need to recover what it could not send previously and the rate will increase. | false | 38,836,898 | 0.197375 | 0 | 0 | 1 | At the TCP level, the only control you have is how many bytes you pass off to send(), and how often you call it. Once send() has handed over some bytes to the networking stack, it's entirely up to the networking stack how fast (or slow) it wants to send them.
Given the above, you can roughly limit your transmission rate by monitoring how many bytes you have sent, and how much time has elapsed since you started sending, and holding off subsequent calls to send() (and/or the number of data bytes your pass to send()) to keep the average rate from going higher than your target rate.
If you want any finer control than that, you'll need to use UDP instead of TCP. With UDP you have direct control of exactly when each packet gets sent. (Whereas with TCP it's the networking stack that decides when to send each packet, what will be in the packet, when to resend a dropped packet, etc) | 0 | 1,845 | 0 | 0 | 2016-08-08T19:15:00.000 | python,linux,sockets,unix | Limiting TCP sending rate | 1 | 1 | 1 | 38,838,673 | 0 |
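A sketch of the byte-counting approach from the answer: cap the long-run average by sleeping whenever the sender is ahead of schedule. The peer address and target rate are placeholders, and the kernel may still burst individual segments faster than this.
import socket
import time

TARGET_BPS = 10 * 1000 * 1000 / 8.0      # 10 Mbit/s expressed as bytes per second

def send_rate_limited(sock, data, chunk_size=16 * 1024):
    start = time.monotonic()
    sent = 0
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        sock.sendall(chunk)
        sent += len(chunk)
        ahead = sent / TARGET_BPS - (time.monotonic() - start)   # seconds ahead of schedule
        if ahead > 0:
            time.sleep(ahead)

with socket.create_connection(("example.com", 9000)) as sock:   # placeholder peer
    send_rate_limited(sock, b"x" * (5 * 1024 * 1024))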
1 | 0 | I'm trying to integrate ReactJS with Odoo, and have successfully created components. Now my problem is that I can't get the JSON via Odoo. The Odoo programmer has to write a special API request to make this happen. This is taking a lot of time and there is plenty of code repetition.
I tried many suggestions and none worked.
Is there a better way to convert the browse objects that Odoo generates to JSON?
Note: Entirely new to python and odoo, please forgive my mistakes, if any mentioned above. | false | 38,844,651 | 0.197375 | 0 | 0 | 1 | maybe this will help you:
Step 1: Create JS that sends a request to /example_link
Step 2: Create a controller that listens on that link: @http.route('/example_link', type="json")
Step 3: Return JSON from that function with return json.dumps(res), where res is a Python dictionary; also don't forget to import json.
Thats all, it's not very hard, hope I helped you, good luck. | 1 | 3,015 | 0 | 3 | 2016-08-09T07:31:00.000 | python,json,openerp,odoo-8,odoo-10 | Can I convert an Odoo browse object to JSON | 1 | 1 | 1 | 38,847,214 | 0 |
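A hedged sketch of the controller those steps describe, written against the Odoo 8 API; the model, fields and route are assumptions. Note that with type='json' Odoo serializes the returned value itself, so returning a list of dicts is enough; the json.dumps(res) pattern from step 3 fits a type='http' route instead.
from openerp import http
from openerp.http import request


class ReactDataController(http.Controller):

    @http.route('/example_link', type='json', auth='user')
    def example_link(self, **kwargs):
        # Convert browse records into plain dicts that serialize cleanly to JSON.
        partners = request.env['res.partner'].search([], limit=10)
        return [{'id': p.id, 'name': p.name, 'email': p.email} for p in partners]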
0 | 0 | I am using visual studio, when I run this code below I am getting this message and the program did not run correctly:
The thread 'MainThread' (0x339c) has exited with code 0 (0x0).
The program '[10996] python.exe' has exited with code 0 (0x0).
from selenium import webdriver
path = chrome_path = r'C:\Program Files (x86)\Google\Chrome\Application/chromedriver'
driver = webdriver.Chrome(path)
driver.get('https://google.com/') | false | 38,879,399 | 0 | 0 | 0 | 0 | Visual Studio is displaying the contents of stdout and stderr. When each thread and finally the entire program exits, it'll show that they exited and the code they returned when they exited.
Your program doesn't print anything to stdout or stderr, which is why no output appears before the program exits.
Your problem with Chrome could be caused by running 32-bit Visual Studio or 32-bit Python on a 64-bit machine, causing it not to find the 32-bit version of Chrome in the 32-bit folder (because the 32-bit folder is just ordinary Program Files as far as a 32-bit program is concerned). | 1 | 1,551 | 0 | 0 | 2016-08-10T16:56:00.000 | python,visual-studio | The thread 'MainThread' (0x339c) has exited with code 0 (0x0) | 1 | 1 | 1 | 51,743,400 | 0 |
1 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | false | 38,887,061 | 0 | 1 | 0 | 0 | My guess would be that you missed a step on setup. There's one where you have to set the "event source". IF you don't do that, I think you get that message.
But the debug options are limited. I wrote EchoSim (the original one on GitHub) before the service simulator was written and, although it is a bit out of date, it does a better job of giving diagnostics.
Lacking debug options, the best is to do what you've done. Partition and re-test. Do static replies until you can work out where the problem is. | 0 | 3,387 | 0 | 2 | 2016-08-11T04:02:00.000 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 1 | 3 | 4 | 38,895,935 | 0 |
1 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | true | 38,887,061 | 1.2 | 1 | 0 | 3 | tl;dr: The remote endpoint could not be called, or the response it returned was invalid. also means there may have been a timeout waiting for the endpoint.
I was able to narrow it down to a timeout.
Seems like the Alexa service simulator (and the Alexa itself) is less tolerant to long responses than the lambda testing console. During development I had increased the timeout of ARN:1 to 30 seconds (whereas I believe the default is 3 seconds). The DynamoDB table used by ARN:1 has more data and it takes slightly longer to process than ARN:3 which has an almost empty table. As soon as I commented out some of the data loading stuff it was running slightly faster and the Alexa service simulator was working again. I can't find the time budget documented anywhere, I'm guessing 3 seconds? I most likely need to move to another backend, DynamoDB+Python on lambda is too slow for very trivial requests. | 0 | 3,387 | 0 | 2 | 2016-08-11T04:02:00.000 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 1 | 3 | 4 | 38,902,127 | 0 |
1 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | false | 38,887,061 | 0.049958 | 1 | 0 | 1 | I think the problem you're having with ARN:1 is that you probably didn't set the Alexa Skills Kit trigger on your Lambda function.
Or it can be the alexa session timeout which is by default set to 8 seconds. | 0 | 3,387 | 0 | 2 | 2016-08-11T04:02:00.000 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 1 | 3 | 4 | 39,245,816 | 0 |
0 | 0 | It is weird to get an exception at about 7:30am (UTC+8) every day when calling the SoftLayer API.
TransportError: TransportError(0): HTTPSConnectionPool(host='api.softlayer.com', port=443): Max retries exceeded with url: /xmlrpc/v3.1/SoftLayer_Product_Package (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 503 Service Unavailable',)))
I use a proxy to forward HTTPS requests to SoftLayer's server. At first I thought it was caused by the proxy, but when I looked into the log, it showed every request had been forwarded successfully. So maybe it is caused by the server. Is the server so busy doing something at that moment every day that it fails to serve? | false | 38,890,319 | 0 | 0 | 0 | 0 | We don't have any reports about this kind of issue, nor any indication that the server is busy on SoftLayer's side, but regarding your issue, it looks like something network related. It seems that there is something happening with your proxy connection.
First we need to rule out the proxy as the reason for this issue; it would be very useful if you could verify that this issue is reproducible without using a proxy on your side. Let me know if you could test it.
If you could check this without proxy, I recommend to submit a ticket for further investigation about this issue. | 0 | 106 | 0 | 0 | 2016-08-11T07:47:00.000 | python,ibm-cloud-infrastructure | TransportError happened when calling softlayer-api | 1 | 1 | 1 | 38,898,571 | 0 |
0 | 0 | Purpose:
I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router.
Me and my friends have been port forwarding through conventional means for many years with mixed results. As such I am hoping to build a function that will forward a port on a router when given the internal ip of the router, the internal ip of the current computer,the port and the protocol. I have looked for solutions for similar problems, but I found the solutions difficult to understand since i'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on windows since I plan to have this function work on systems other than my own.
Approaches I have explored:
Creating a bat file that issues commands by means of netsh, then running the bat.
Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmatically).
(I'm aware programs such as GameRanger do this)
Using the Socket Module.
If anyone can shed some light how I can accomplish any of the above approaches or give me some insight on how I can approach this problem another way I would greatly appreciate it.
Thank you.
Edit: Purpose | false | 38,931,064 | 0 | 0 | 0 | 0 | I'm not sure if that's possible. As far as I know, ports aren't actually a physical thing; they're just an abstraction convention made by today's protocols and supported by your operating system that allows you to have multiple connections on one machine.
Sockets are basically objects provided to you by the operating system that implement a protocol stack and allow you to communicate with other systems; the OS exposes a very nice API, the socket API, which lets you use that functionality to talk to other computers. Port forwarding is not a separate mechanism either: it just means that when the router's operating system receives incoming packets destined for some port, it drops them if the port is not open. Think of your router as a doorman standing at the entrance of a building: the building is your LAN, your apartment is your machine, and rooms within your apartment are ports. Some package or mail arrives addressed to port X; a port rule means "on IP Y and port X of the router, forward to IP Z and port A of some computer within the LAN" (this is what NAT/PAT implements). Going back to the analogy, the doorman receives mail destined for some port, checks if that port is open, drops it if not, and otherwise lets it go to some room within some apartment (sounds complex, I know, apologies). My point is that every router implements port rules or port blocking a little differently and there is no standard protocol for doing it. A socket is an object that allows your program to communicate with others; you could create a client-server setup with sockets, but that would mean programming your router, and I'm not sure that's possible.
what you COULD do is:
every router provides some http client ( web client ) that is used to create and forward ports, maybe if you read about your router you could get access to that client and write some python http script that forwards ports automatically
Another point I forgot is that you need to make sure your own firewall isn't blocking the ports, but there's no need for sockets/Python to do that; just configure it manually. | 0 | 707 | 1 | 3 | 2016-08-13T09:00:00.000 | python,sockets,batch-file,portforwarding | How to make a port forward rule in Python 3 in windows? | 1 | 2 | 2 | 38,932,875 | 0
0 | 0 | Purpose:
I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router.
Me and my friends have been port forwarding through conventional means for many years with mixed results. As such I am hoping to build a function that will forward a port on a router when given the internal ip of the router, the internal ip of the current computer,the port and the protocol. I have looked for solutions for similar problems, but I found the solutions difficult to understand since i'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on windows since I plan to have this function work on systems other than my own.
Approaches I have explored:
Creating a bat file that issues commands by means of netsh, then running the bat.
Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmatically).
(I'm aware programs such as GameRanger do this)
Using the Socket Module.
If anyone can shed some light how I can accomplish any of the above approaches or give me some insight on how I can approach this problem another way I would greatly appreciate it.
Thank you.
Edit: Purpose | false | 38,931,064 | 0 | 0 | 0 | 0 | You should first read some information about UPnP (router port forwarding) and note that it's normally disabled.
Depending on your needs, you could also take a look at SSH reverse tunnels, and at SSH in general, as it can solve many problems.
But you will see that doing advanced network things like this on Windows is a bad idea.
At least you should use Cygwin.
And if you are really interested in network traffic at all, Wireshark should be installed. | 0 | 707 | 1 | 3 | 2016-08-13T09:00:00.000 | python,sockets,batch-file,portforwarding | How to make a port forward rule in Python 3 in windows? | 1 | 2 | 2 | 38,932,807 | 0
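Building on the UPnP pointer above, here is a hedged sketch using the third-party miniupnpc binding (pip install miniupnpc); this is an addition of mine rather than something from the answers, and it only works if UPnP is enabled on the router.
import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200          # milliseconds to wait for replies on the LAN
upnp.discover()                   # find UPnP devices
upnp.selectigd()                  # pick the Internet Gateway Device (the router)

external_port = 25565             # example game-server port
internal_port = 25565
upnp.addportmapping(external_port, 'TCP', upnp.lanaddr, internal_port,
                    'game server rule', '')
print('forwarded {}:{} -> {}:{}'.format(upnp.externalipaddress(), external_port,
                                        upnp.lanaddr, internal_port))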
0 | 0 | I can't seem to find a good explanation of how to use Python modules. Take for example the urllib module. It has commands such as
req = urllib.request.Request().
How would I find out what specific commands, like this one, are in certain Python modules?
For all the examples I've seen of people using specific Python modules, they just know what to type, and how to use them.
Any suggestions will be much appreciated. | false | 38,936,584 | 0.197375 | 0 | 0 | 1 | My flow chart looks something like this:
Reading the published documentation (or use help(moduleName) which gives you the same information without an internet connection in a harder to read format). This can be overly verbose if you're only looking for one tidbit of information, in which case I move on to...
Finding tutorials or similar stack overflow posts using specific keywords in your favorite search engine. This is generally the approach you will use 99% of the time.
Just recursively poking around with dir() and __doc__ if you think the answer for what you're looking for is going to be relatively obvious (usually if the module has relatively simple functions such as math that are obvious by the name)
Looking at the source of the module if you really want to see how things works. | 0 | 138 | 0 | 1 | 2016-08-13T20:12:00.000 | python,python-3.x,python-module | How to find good documentation for Python modules | 1 | 1 | 1 | 38,937,103 | 0 |
0 | 0 | I am testing my app using the terminal, which is quite handy in a pre-development phase.
So far, I have used spotipy.Spotify(client_credentials_manager=client_credentials_manager) within my Python scripts in order to access data.
SpotifyClientCredentials() requires client_id and client_secret as parameters.
Now I need to access analysis_url data, which requires an access token.
Is there a way to include this access token requirement via my Python script run at the command line, or do I have to build an app in the browser just to do a simple test?
many thanks on advance. | false | 38,939,085 | 0 | 0 | 0 | 0 | Copy and paste the entire redirect URI from your browser to the terminal (when prompted) after successful authentication. Your access token will be cached in the directory (look for .cache.<username>) | 0 | 422 | 1 | 0 | 2016-08-14T04:19:00.000 | python,command-line,spotify,spotipy | Spotify - access token from command line | 1 | 1 | 1 | 39,049,945 | 0 |
1 | 0 | I have a problem that I am trying to conceptualize whether possible or not. Nothing too fancy (i.e. remote login or anything etc.)
I have Website A and Website B.
On website A a user selects a few links from website B; I would like to then remotely click on the links on behalf of the user (as Website B creates a cookie with the clicked information), so when the user gets redirected to Website B, the cookie (and the links) are pre-selected and the user does not need to click on them one by one.
Can this be done? | false | 38,979,170 | 0 | 1 | 0 | 0 | If you want to interact with another web service, the solution is to send a POST/GET request and parse the response.
Question is what is your goal? | 0 | 38 | 0 | 0 | 2016-08-16T15:42:00.000 | javascript,jquery,python,cookies | How to remote click on links from a 3rd party website | 1 | 1 | 1 | 38,979,448 | 0 |
0 | 0 | I am using the Python AWS API. I would like to invoke a lambda function from client code, but I have not been able to find documentation on whether the payload sent during invocation is encrypted.
Can someone watching the network potentially snoop on the AWS invocation payload? Or is the payload transmitted over a secure channel? | false | 39,057,569 | 0 | 1 | 0 | 0 | If you turn on debug logging you should see exactly how the data is transmitted. Or try netstat or Wireshark to see if it makes a connection to port 443 rather than 80.
From my experience with boto3 and S3 (not Lambda) it uses HTTPS, which I would consider somewhat secure. I hope the certificates are verified... | 0 | 943 | 0 | 1 | 2016-08-20T18:43:00.000 | python,amazon-web-services,encryption,aws-lambda,boto3 | Does Boto3 encrypt the payload during transmission when invoking a lambda function? | 1 | 1 | 2 | 39,058,104 | 0 |
1 | 0 | Maybe this is a silly question: I just set up a free Amazon Linux instance according to the tutorial, and what I want to do is simply run Python scripts.
Then I googled AWS and Python, and Amazon mentioned Boto.
I don't know why I would use Boto, because if I type python, it's already installed.
What I want to do is run a script during the daytime.
Is there a need for me to read about Boto, or can I just run xx.py on AWS?
Any help is appreciated. | false | 39,086,388 | 0.197375 | 1 | 0 | 2 | Boto is a Python wrapper for AWS APIs. If you want to interact with AWS using its published APIs, you need the boto/boto3 library installed. Boto will not be supported for long, so if you are starting to use Boto, use Boto3, which is much simpler than Boto.
Boto3 supports (almost) all AWS services. | 0 | 561 | 0 | 1 | 2016-08-22T18:30:00.000 | python,amazon-web-services,boto3,boto | Running Python scripts on Amazon Web Services? Do I need to use Boto? | 1 | 2 | 2 | 39,086,478 | 0 |
1 | 0 | Maybe this is a silly question: I just set up a free Amazon Linux instance according to the tutorial, and what I want to do is simply run Python scripts.
Then I googled AWS and Python, and Amazon mentioned Boto.
I don't know why I would use Boto, because if I type python, it's already installed.
What I want to do is run a script during the daytime.
Is there a need for me to read about Boto, or can I just run xx.py on AWS?
Any help is appreciated. | true | 39,086,388 | 1.2 | 1 | 0 | 3 | Boto is a python interface to Amazon Services (like copying to S3, etc).
You don't need it to just run regular python as you would on any linux instance with python installed, except to access AWS services from your EC2 instance. | 0 | 561 | 0 | 1 | 2016-08-22T18:30:00.000 | python,amazon-web-services,boto3,boto | Running Python scripts on Amazon Web Services? Do I need to use Boto? | 1 | 2 | 2 | 39,086,443 | 0 |
0 | 0 | I have an application (spark based service), which when starts..works like following.
At localhost:9000
if I do nc -lk localhost 9000
and then start entering text, it takes the text entered in the terminal as input and does a simple word-count computation on it.
How do I use the requests library to programmatically send the text, instead of manually typing it in the terminal?
Not sure if my question is making sense.. | true | 39,086,420 | 1.2 | 0 | 0 | 1 | requests is a HTTP request library, while Spark's wordcount example provides a raw socket server, so no, requests is not the right package to communicate with your Spark app. | 0 | 32 | 1 | 0 | 2016-08-22T18:32:00.000 | python,python-requests | Using requests package to make request | 1 | 1 | 1 | 39,086,692 | 0 |
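Since the answer points away from requests and toward raw TCP, here is a hedged sketch using Python's standard socket module as a stand-in for `nc -lk`, assuming the usual Spark streaming setup where the job connects to this port as a client. The sample lines are placeholders.

```python
import socket
import time

# Stand-in for `nc -lk 9000`: listen on the port the Spark job reads from,
# accept its connection, and write newline-terminated lines to it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("localhost", 9000))
server.listen(1)

conn, addr = server.accept()  # the Spark job connects here when it starts
with conn:
    for line in ["hello world", "hello again", "goodbye world"]:
        # The word count example expects newline-terminated text lines.
        conn.sendall((line + "\n").encode("utf-8"))
        time.sleep(1)
server.close()
```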
1 | 0 | The homepage for the web application I'm testing has a loading screen when you first load it, then a username/password box appears. It is a dynamically generated UI element and the cursor defaults to being inside the username field.
I looked around and someone suggested using action chains. When I use action chains, I can immediately input text into the username and password fields and then press enter and the next page loads fine. Unfortunately, action chains are not a viable long-term answer for me due to my particular setup.
When I use the webdriver's find_element_by_id I am able to locate it, but I am not able to send_keys to the element because it is somehow not visible. I receive:
selenium.common.exceptions.ElementNotVisibleException: Message: element not visible.
I'm also not able to click the field or otherwise interact with it without getting this error.
I have also tried identifying and interacting with the elements via other means, such as "xpaths" and css, to no avail. They are always not visible.
Strangely, it works with dynamic page titles. When the page first loads it is Loading... and when finished it is Login. The driver will return the current title when driver.title is called.
Does anyone have a suggestion? | false | 39,090,768 | 0 | 0 | 0 | 0 | As suggested by saurabh, use
self.wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, OR.Sub_categories)))
Otherwise, add a sleep and see, although that is not advisable; maybe the XPath you have changes at the time of page load. | 0 | 69 | 0 | 1 | 2016-08-23T00:53:00.000 | python,selenium,testing | Webpage contained within one dynamic page, unable to use driver interactions with Selenium (Python) | 1 | 1 | 2 | 39,091,712 | 0
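A fuller, hedged sketch of the explicit-wait approach the answer suggests. The URL, element IDs and credentials are assumptions about the page under test, not details confirmed by the question.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("https://app.example.com/login")  # placeholder URL

# Wait until the dynamically generated field is actually visible,
# instead of interacting with it while the loading screen is still up.
wait = WebDriverWait(driver, 30)
username = wait.until(EC.visibility_of_element_located((By.ID, "username")))
username.send_keys("my_user")
driver.find_element_by_id("password").send_keys("my_password\n")
```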
I need to delete a large number of canvases, and they have filters and segments as dependencies.
I've built an application in Python which sends API calls to get the segments and filters using a search key, but I can't delete them because they are dependencies in canvases.
Is there a way to delete the segments and filters using the Eloqua REST API? The segments are also dependencies of some newer canvases that shouldn't be deleted.
Thanks for the help! | false | 39,151,732 | 0 | 0 | 0 | 0 | If the Campaign Canvas has dependencies, that means the Campaign Canvas is explicitly referenced. In example, the dependent Segment or Filter includes a "Clicked Emails from Campaigns" filer criteria that references the Campaign Canvas.
In this example, in order to remove dependencies, you must edit the criteria to not include the Campaign Canvas you want to delete. | 0 | 200 | 0 | 0 | 2016-08-25T17:48:00.000 | python,rest,api,eloqua | Eloqua delete canvas dependencies using REST API | 1 | 1 | 1 | 39,824,955 | 0 |
I am having trouble understanding what the advantage of using Celery is. I realize you can use Celery with Redis, RabbitMQ, etc., but why wouldn't I just use the client for those message queue services directly, rather than putting Celery in front of them? | true | 39,188,662 | 1.2 | 0 | 0 | 4 | The advantage of using Celery is that we mainly need to write the task processing code; the handling of task delivery to the task processors is taken care of by the Celery framework. Scaling out task processing is also easy: just run more Celery workers with higher concurrency (more processing threads/processes). We don't even need to write code for submitting tasks to queues and consuming tasks from the queues.
Also, it has a built-in facility for adding/removing consumers for any of the task queues. The framework supports retrying tasks, failure handling, accumulating results, etc. It has many features which help us concentrate solely on implementing the task processing logic.
As an analogy, implementing a map-reduce program to run on Hadoop is not a very complex task. If the data is small, we can write a simple Python script to implement the map-reduce logic which will outperform a Hadoop map-reduce job processing the same data. But when the data is very large, we have to divide the data across machines, run multiple processes across those machines, and coordinate their execution. The complexity lies in running multiple instances of mapper and then reducer tasks across multiple machines, collecting inputs and distributing them to mappers, transferring the outputs of mappers to the appropriate reducers, monitoring progress, relaunching failed tasks, detecting job completion, etc.
But because we have Hadoop, we don't need to care much about the underlying complexity of executing a distributed job. In the same way, Celery helps us concentrate mainly on task execution logic. | 0 | 1,102 | 1 | 0 | 2016-08-28T06:41:00.000 | java,python,rabbitmq,celery,messaging | Celery with Redis vs Redis Alone | 1 | 1 | 1 | 39,188,804 | 0
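A minimal sketch of what the answer describes: with Celery you write only the task body, and delivery through the Redis broker is handled by the framework. The broker URL, backend URL and task are placeholders.

```python
# tasks.py
from celery import Celery

# Redis acts as the broker (and, here, the result backend); Celery
# handles queueing, delivery, retries and worker management.
app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task(bind=True, max_retries=3)
def add(self, x, y):
    return x + y

# Caller side: no queue-handling code at all.
#   result = add.delay(2, 3)
#   print(result.get(timeout=10))
# Worker side (shell):
#   celery -A tasks worker --loglevel=info --concurrency=4
```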
Let's say my Python server has three different responses available, and one user sends three HTTP requests at the same time.
How can I make sure that each request gets one unique response out of my three different responses?
I'm using python and mysql.
The problem is that even though I store the already-responded status in MySQL, it's a bit too late by the time the next request comes in. | false | 39,290,234 | 0 | 0 | 0 | 0 | For starters, if MySQL isn't handling your performance requirements (and it rightly shouldn't; that doesn't sound like a very sane use case),
consider using something like in-memory caching, or for more flexibility, Redis:
It's built for stuff like this, and will likely respond much, much quicker.
As an added bonus, it has an even simpler implementation than SQL.
Second, consider hashing some user and request details and storing that hash with the response to be able to identify it.
Upon receiving a request, store an entry with a 'pending' status, and only handle 'pending' requests - never ones that are missing entirely. | 1 | 34 | 0 | 0 | 2016-09-02T10:31:00.000 | python,distributed-computing | How to response different responses to the same multiple requests based on whether it has been responded? | 1 | 1 | 1 | 39,290,454 | 0 |
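A hedged sketch of the pending-entry idea using Redis: SET with NX claims a response slot atomically, so concurrent requests from the same user cannot receive the same response. The key names and the three canned responses are placeholders.

```python
import hashlib

import redis

r = redis.Redis(host="localhost", port=6379, db=0)
RESPONSES = ["response-a", "response-b", "response-c"]  # placeholder payloads

def pick_response(user_id, request_id):
    """Atomically assign one of the available responses to this request."""
    fingerprint = hashlib.sha256(f"{user_id}:{request_id}".encode()).hexdigest()
    for index, payload in enumerate(RESPONSES):
        # SET ... NX succeeds only if the slot has not been claimed yet,
        # so two concurrent requests can never grab the same slot.
        claimed = r.set(f"slot:{user_id}:{index}", fingerprint, nx=True, ex=3600)
        if claimed:
            return payload
    return None  # all three responses already handed out
```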
In more detail, I would like to know:
what is the default SYN_RECEIVED timer,
how do I change it,
are SYN cookies or SYN caches implemented.
I'm about to create a simple special-purpose publicly accessible server. I must choose between using built-in TCP sockets, or raw sockets with a re-implemented TCP handshake if these security mechanisms are not present. | true | 39,438,003 | 1.2 | 0 | 0 | 1 | What you describe are internals of the operating system's TCP stack. Python just uses this stack via the socket interface. I doubt that any of these settings can be changed per application at all; they are system-wide settings which can only be changed with administrator privileges. | 0 | 216 | 0 | 1 | 2016-09-11T16:01:00.000 | python,sockets,tcp,ddos,python-sockets | what anti-ddos security systems python use for socket TCP connections? | 1 | 1 | 1 | 39,438,366 | 0
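Since the answer says these are system-wide kernel settings, here is a small Linux-only sketch that inspects them from Python by reading /proc. The sysctl paths assume a Linux TCP stack; changing them requires root via sysctl, not application code.

```python
from pathlib import Path

# Linux exposes these TCP-stack settings under /proc/sys;
# changing them requires root (e.g. via sysctl -w), not Python code.
SETTINGS = {
    "SYN cookies enabled": "/proc/sys/net/ipv4/tcp_syncookies",
    "SYN/ACK retries (controls SYN_RECEIVED timeout)": "/proc/sys/net/ipv4/tcp_synack_retries",
    "Max half-open connections (SYN backlog)": "/proc/sys/net/ipv4/tcp_max_syn_backlog",
}

for label, path in SETTINGS.items():
    value = Path(path).read_text().strip()
    print(f"{label}: {value}")
```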
A Python 3 function receives an SSH address like user@host:/random/file/path. I want to access this file with the paramiko lib, which needs the username, IP address, and file path separately.
How can I split this address into these 3 parts, knowing that the input will sometimes omit the username? | false | 39,497,199 | -0.291313 | 0 | 0 | -3 | use
not set(p).isdisjoint(set("0123456789$,")) where p is the SSH address string. | 0 | 247 | 0 | 2 | 2016-09-14T18:23:00.000 | python,python-3.x,parsing,ssh,ip-address | How to split an SSH address + path? | 1 | 1 | 2 | 39,500,132 | 0
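A hedged sketch of how such an address could be split with plain string methods; the example addresses are placeholders, and the username defaults to None when it is omitted.

```python
def split_ssh_address(address):
    """Split 'user@host:/path' or 'host:/path' into (user, host, path)."""
    user = None
    if "@" in address:
        user, _, address = address.partition("@")
    host, _, path = address.partition(":")
    return user, host, path

# Example (placeholder values):
print(split_ssh_address("alice@203.0.113.5:/random/file/path"))
# ('alice', '203.0.113.5', '/random/file/path')
print(split_ssh_address("203.0.113.5:/random/file/path"))
# (None, '203.0.113.5', '/random/file/path')
```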