url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request | issue_comments |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/psf/requests/issues/1629
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1629/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1629/comments
|
https://api.github.com/repos/psf/requests/issues/1629/events
|
https://github.com/psf/requests/pull/1629
| 20,099,553 |
MDExOlB1bGxSZXF1ZXN0ODYyMjI5Mw==
| 1,629 |
Made proxy support more clear in README.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/238652?v=4",
"events_url": "https://api.github.com/users/schlamar/events{/privacy}",
"followers_url": "https://api.github.com/users/schlamar/followers",
"following_url": "https://api.github.com/users/schlamar/following{/other_user}",
"gists_url": "https://api.github.com/users/schlamar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/schlamar",
"id": 238652,
"login": "schlamar",
"node_id": "MDQ6VXNlcjIzODY1Mg==",
"organizations_url": "https://api.github.com/users/schlamar/orgs",
"received_events_url": "https://api.github.com/users/schlamar/received_events",
"repos_url": "https://api.github.com/users/schlamar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/schlamar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schlamar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/schlamar",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 3 |
2013-09-26T09:45:06Z
|
2021-09-08T23:06:22Z
|
2013-09-26T19:01:25Z
|
CONTRIBUTOR
|
resolved
|
see #1622
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1629/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1629/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1629.diff",
"html_url": "https://github.com/psf/requests/pull/1629",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1629.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1629"
}
| true |
[
"This suits me fine, but the README is inside Kenneth's wheelhouse so I'm leaving it up to him. =)\n",
"i'm going to use slightly different wording, but thanks!\n",
"@kennethreitz Did you forget about this? Can't find a relevant commit ;)\n"
] |
https://api.github.com/repos/psf/requests/issues/1628
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1628/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1628/comments
|
https://api.github.com/repos/psf/requests/issues/1628/events
|
https://github.com/psf/requests/pull/1628
| 20,099,542 |
MDExOlB1bGxSZXF1ZXN0ODYyMjI4NA==
| 1,628 |
Update host header on redirect.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/157797?v=4",
"events_url": "https://api.github.com/users/Scorpil/events{/privacy}",
"followers_url": "https://api.github.com/users/Scorpil/followers",
"following_url": "https://api.github.com/users/Scorpil/following{/other_user}",
"gists_url": "https://api.github.com/users/Scorpil/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Scorpil",
"id": 157797,
"login": "Scorpil",
"node_id": "MDQ6VXNlcjE1Nzc5Nw==",
"organizations_url": "https://api.github.com/users/Scorpil/orgs",
"received_events_url": "https://api.github.com/users/Scorpil/received_events",
"repos_url": "https://api.github.com/users/Scorpil/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Scorpil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scorpil/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Scorpil",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 26 |
2013-09-26T09:44:58Z
|
2021-09-08T23:11:05Z
|
2013-12-05T22:39:12Z
|
NONE
|
resolved
|
If you manually specify the HTTP Host header, any redirect to a host different from the one specified will result in a TooManyRedirects exception.
Example:
```
>>> import requests
>>> requests.get('http://reddit.com', headers={"Host": "reddit.com"})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "requests/sessions.py", line 357, in request
resp = self.send(prep, **send_kwargs)
File "requests/sessions.py", line 480, in send
history = [resp for resp in gen] if allow_redirects else []
File "requests/sessions.py", line 82, in resolve_redirects
raise TooManyRedirects('Exceeded %s redirects.' % self.max_redirects)
requests.exceptions.TooManyRedirects: Exceeded 30 redirects.
```
After receiving the first 301 response, the requests library makes a request for the URL specified in its "Location" field (in the example above this was http://www.reddit.com/), but it also preserves all headers set by the user, so you end up sending the header "Host: reddit.com" while the host is now **www**.reddit.com. The server checks the "Host" header and responds with 301 again. And now we're in a loop.
I'd be glad to add tests for this fix, but httpbin does not support subdomains.
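For reference, a stdlib-only sketch of the hostname check later proposed in this thread's comments: rewrite an explicit `Host` header only when the redirect target's hostname actually differs. The function name and example hostnames are illustrative, not part of the patch.

``` python
from urllib.parse import urlparse

def updated_host_header(headers, original_url, redirect_url):
    """Return a copy of `headers` with Host rewritten only when the
    redirect target's hostname differs from the original request's."""
    headers = dict(headers)
    if 'Host' in headers:
        old_host = urlparse(original_url).hostname
        new_host = urlparse(redirect_url).hostname
        if old_host != new_host:
            headers['Host'] = new_host
    return headers

# A redirect to a different hostname updates Host:
print(updated_host_header({'Host': 'reddit.com'},
                          'http://reddit.com/',
                          'http://www.reddit.com/'))
# → {'Host': 'www.reddit.com'}

# A same-host redirect (or a relative one, once resolved) leaves it alone:
print(updated_host_header({'Host': 'httpbin.org'},
                          'http://54.225.138.124/redirect-to?url=/',
                          'http://54.225.138.124/'))
# → {'Host': 'httpbin.org'}
```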
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1628/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1628/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1628.diff",
"html_url": "https://github.com/psf/requests/pull/1628",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1628.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1628"
}
| true |
[
"Thanks for this! The Pull Request itself looks good.\n\nBefore we merge this, I want to nail down a few things because it's not clear to me this this fix won't cause people problems. In particular, if you've set `Host` explicitly you clearly feel that Requests is going to choose an incorrect value for that header. If you're then 302'ed to the _same_ domain, or with a relative URI, this will overwrite your previous header with the new one which is presumably not acceptable. Is that a real problem, or have I made that up?\n",
"> if you've set Host explicitly you clearly feel that Requests is going to choose an incorrect value for that header\n\nIt can also be used to work around missing DNS for that host, e.g.:\n\n``` python\n\nrequests.get('http://127.0.0.1/', headers={'Host': 'my-hostname.com'})\n\n```\n",
"@jkbr Agreed, but that falls into the problem I suggested: if that host does a relative redirect this change will wipe your previous Host header.\n",
"@Lukasa Right, It's quite tricky. \n\nI guess a solution could be to remember that a custom `Host` has been supplied and use it only as an alias of the one included in the request URL for any subsequent requests to the same host within a session. Let me try to demonstrate:\n\n``` python\n\n# Request a page that redirects and specify custom Host:\n>>> response = requests.get('http://127.0.0.1/', headers={'Host': 'my-hostname.com'})\n>>> response.headers['Location']\n'http://my-hostname.com/foo'\n\n\n# And this time follow:\n>>> response2 = requests.get('http://127.0.0.1/', headers={'Host': 'my-hostname.com'}, follow=True)\n>>> response2.url\n'http://my-hostname.com/foo' \n# We know that the Location hostname is an alias because it was \n# included in the custom Host header of the first request.\n# Therefore, we used the original URL host for the redirect location instead:\n>>> response2.request.url \n'http://127.0.0.1/foo' \n\n\n# Redirect handling (pseudocode)\nif 'Location' in response.headers:\n location = get_absolute_url(response.headers['Location'])\n if 'Host' in prepared_request.headers:\n # Custom Host provided\n if urlparse(location).hostname == prepared_request.headers['Host']:\n location = location.replace(\n prepared_request.headers['Host'], \n urlparse(prepared_request.url).hostname,\n 1\n )\n\n print('Redirect to', location)\n\n```\n",
"See, that feels like massive overkill to me. For stuff like this it's generally been expected that the user will just take over redirection themselves (e.g. set `allow_redirects=False`).\n",
"I agree. It's an edge case and handling it properly would add a lot of complexity. Throwing an exception with an explanatory message might be a good idea, though.\n",
"Seems like healthy choice here is to update \"Host\" only when hostname in original url and \"Location\" url is different.\n\n```\n# this code is now present in requesets/sessions.py\nif not urlparse(url).netloc: \n url = urljoin(resp.url, requote_uri(url)) # url gets \"reconstructed\" if we encountered relative Location \nelse: \n url = requote_uri(url)\n\n# this is new code\nis_same_hostname = urlparse(resp.request.url).hostname == urlparse(url).hostname\nif 'Host' in prepared_request.headers and not is_same_hostname:\n prepared_request.headers['Host'] = urlparse(url).hostname\n```\n\nThe only case when user can encounter redirection Loop in this case is when request is constructed like this:\n\n```\nrequests.get('http://www.reddit.com', headers={\"Host\": \"reddit.com\"})\n```\n\nand it's obvious programming error on the side of the user.\n\nEdit: How you can test this:\n\nFollowing requests return 200:\n\n```\n# 54.225.138.124 is current ip of httpbin.\nrequests.get('http://54.225.138.124/redirect-to?url=/', headers={\"Host\": \"httpbin.org\"}) # Host is not rewritten, since redirect is relative (hence the same)\nrequests.get('http://54.225.138.124/redirect-to?url=http://54.225.138.124/', headers={\"Host\": \"httpbin.org\"}) # Host is not rewritten since redirect points to the same hostname\nrequests.get('http://reddit.com', headers={\"Host\": \"reddit.com\"}) # host is rewritten to www.reddit.com since redirect points to another hostname\n```\n",
"@Scorpil That's not a bad idea, but I disagree about your final statement. When written like that it's an obvious programming error, but when hidden in variables and loops it's absolutely not clear. _Especially_ as it was the example you gave in your original post. =)\n\nWe have two options: we write this so that you cannot hit this redirection loop, or we say \"Hey, if you're playing with the `Host` header you want to worry about redirect loops. Maybe turn `allow_redirects` off?\".\n",
"Good point. Fortunately, it's not hard to detect situation when \"Host\" does not match the url before raising TooManyRedirects exception and correct error message accordingly. Kind of like this but should be nicer in final version:\n\n```\nif i >= self.max_redirects: \n message = 'Exceeded %s redirects.' % self.max_redirects \n if 'Host' in prepared_request.headers and prepared_request.headers['Host'] != urlparse(resp.url).hostname: \n message += ' Warning: HTTP Host header does not match URL domain' \n raise TooManyRedirects(message)\n```\n",
"I think that's probably the route we want to take with this.\n",
"@Scorpil _Ping_.\n\nWant to update this PR?\n",
"Whoops... Sorry. Update on it's way.\n",
"Any feedback on update, @Lukasa?\n",
"Looks good to me. One comment in the diff, otherwise I'm happy. =)\n",
"It's a little verbose, but i hope also helpful.\n",
"Cool, I'm happy. =) Add yourself to the AUTHORS file and your work here will be done. Thanks so much! ;cake:\n",
"Done. I'm glad I could help.\n",
"@Scorpil can you rebase your pull request? Something like this should work:\n\n```\ngit remote add upstream git://github.com/kennethreitz/master\ngit fetch upstream\ngit rebase upstream/master\ngit push -f\n```\n\nNote that there will be conflicts while rebasing so you should be very careful in resolving them and us in reviewing the resulting diff.\n",
"Done. The only conflict was in AUTHORS file. And i've checked manually requests/sessions.py: changes there are compatible with mine, fast-forward did the job.\n",
"Thanks @Scorpil !\n",
"My gut tells me to just recommend users sending their own host headers to just handle their own redirects.\n",
"@kennethreitz I have avoided being too negative about this, but I agree with you. I don't see anything wrong with the current behaviour. We're inconveniencing users who are doing something we strongly recommend against, that doesn't seem like a bad thing frankly.\n",
"@Scorpil keep the pull requests coming! This was a really great change! Sorry we couldn't accept it right now :)\n\nEverything about your contribution was perfect, though. Requests just is overly opinionated :)\n",
"Thanks @kennethreitz \nNo problem, if you think it's not appropriate change - then it's not. No hard feelings :)\n",
"@Scorpil Here's some :cake:. You deserve it. :)\n",
"Yeah, this was a really well-handled pull request, thanks. =) Exemplary work, you're a model contributor.\n"
] |
https://api.github.com/repos/psf/requests/issues/1627
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1627/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1627/comments
|
https://api.github.com/repos/psf/requests/issues/1627/events
|
https://github.com/psf/requests/pull/1627
| 20,092,700 |
MDExOlB1bGxSZXF1ZXN0ODYxODY3Mg==
| 1,627 |
Fixed #1623. Added 'MD5-sess' algorithm to HTTPDigestAuth
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3482660?v=4",
"events_url": "https://api.github.com/users/daftshady/events{/privacy}",
"followers_url": "https://api.github.com/users/daftshady/followers",
"following_url": "https://api.github.com/users/daftshady/following{/other_user}",
"gists_url": "https://api.github.com/users/daftshady/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daftshady",
"id": 3482660,
"login": "daftshady",
"node_id": "MDQ6VXNlcjM0ODI2NjA=",
"organizations_url": "https://api.github.com/users/daftshady/orgs",
"received_events_url": "https://api.github.com/users/daftshady/received_events",
"repos_url": "https://api.github.com/users/daftshady/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daftshady/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daftshady/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daftshady",
"user_view_type": "public"
}
|
[
{
"color": "e11d21",
"default": false,
"description": null,
"id": 44501305,
"name": "Not Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTMwNQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 2 |
2013-09-26T06:46:26Z
|
2021-09-08T23:05:12Z
|
2013-10-07T23:22:50Z
|
CONTRIBUTOR
|
resolved
|
See https://github.com/kennethreitz/requests/issues/1623
I've added the MD5-sess algorithm, referencing http://en.wikipedia.org/wiki/Digest_access_authentication
Any criticism is welcome :)
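For context, RFC 2617 defines the MD5-sess variant of A1 as a one-time session value derived from the server's challenge. A minimal stdlib sketch of that computation (the credential and nonce values below are illustrative, not from the pull request):

``` python
import hashlib

def md5_sess_a1(username, realm, password, nonce, cnonce):
    # RFC 2617 §3.2.2.2: for algorithm "MD5-sess",
    #   A1 = H(username ":" realm ":" passwd) ":" nonce ":" cnonce
    # and it is computed only once, on the first request after the
    # WWW-Authenticate challenge, not on every request.
    inner = hashlib.md5(
        f'{username}:{realm}:{password}'.encode('utf-8')
    ).hexdigest()
    return f'{inner}:{nonce}:{cnonce}'

a1 = md5_sess_a1('Mufasa', 'testrealm@host.com', 'Circle Of Life',
                 'dcd98b7102dd2f0e8b11d0f600bfb0c093', '0a4f113b')
print(a1)
```

The "computed only once" requirement is the part the review comment flags as hard to honor in the current per-request auth-handler design.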
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1627/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1627/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1627.diff",
"html_url": "https://github.com/psf/requests/pull/1627",
"merged_at": "2013-10-07T23:22:50Z",
"patch_url": "https://github.com/psf/requests/pull/1627.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1627"
}
| true |
[
"Thanks for this!\n\nI'm not in the best place to review this, having not contributed to this section of our codebase, but I can point out a few things. @sigmavirus24 would do a better job of reviewing this change.\n\nFirstly, according to RFC 2617, ['If the \"algorithm\" directive's value is \"MD5-sess\", then A1 is calculated only once - on the first request by the client following receipt of a WWW-Authenticate challenge from the server. It uses the server nonce from that challenge...'](http://tools.ietf.org/html/rfc2617#page-13). This is something that is very difficult for our current authentication handler system to do, but appears to mean that we are severely limited in our ability to properly execute 'MD5-sess'.\n\nSecondly, the `build_digest_header` method is now _massive_ and quite difficult to follow. I think we should be aiming to refactor this method, either as part of this pull-request or more generally.\n",
"I'll take a look at this later today\n"
] |
https://api.github.com/repos/psf/requests/issues/1626
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1626/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1626/comments
|
https://api.github.com/repos/psf/requests/issues/1626/events
|
https://github.com/psf/requests/issues/1626
| 20,091,051 |
MDU6SXNzdWUyMDA5MTA1MQ==
| 1,626 |
Cann't run request in crontab
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/317622?v=4",
"events_url": "https://api.github.com/users/massifor/events{/privacy}",
"followers_url": "https://api.github.com/users/massifor/followers",
"following_url": "https://api.github.com/users/massifor/following{/other_user}",
"gists_url": "https://api.github.com/users/massifor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/massifor",
"id": 317622,
"login": "massifor",
"node_id": "MDQ6VXNlcjMxNzYyMg==",
"organizations_url": "https://api.github.com/users/massifor/orgs",
"received_events_url": "https://api.github.com/users/massifor/received_events",
"repos_url": "https://api.github.com/users/massifor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/massifor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/massifor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/massifor",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-09-26T05:31:21Z
|
2021-09-09T00:34:30Z
|
2013-10-11T11:04:15Z
|
NONE
|
resolved
|
I wrote a Python script that sends POST requests and added it to crontab.
The code looks like this:
``` python
for i in range(10):
    requests.post(url, data=data, files=files)
```
When I run the script from a shell, it works well.
But when it runs from crontab, the loop executes only once, not 10 times.
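One way to see what cron is actually doing, as the first comment below suggests, is to capture the script's stdout and stderr. A crontab entry along these lines (schedule and paths are illustrative):

```
*/5 * * * * /usr/bin/python /path/to/script.py >> /tmp/script.log 2>&1
```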
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1626/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1626/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nUh..that doesn't seem at all possible. Can you redirect stdout and stderr to a file and see if anything is being outputted?\n",
"@massifor do you have any more details for us?\n",
"Closing due to inactivity.\n"
] |
https://api.github.com/repos/psf/requests/issues/1625
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1625/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1625/comments
|
https://api.github.com/repos/psf/requests/issues/1625/events
|
https://github.com/psf/requests/pull/1625
| 20,045,156 |
MDExOlB1bGxSZXF1ZXN0ODU5NTAyMA==
| 1,625 |
kbutton.org button added to README file to clone requests in Koding cloud.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4077495?v=4",
"events_url": "https://api.github.com/users/fka/events{/privacy}",
"followers_url": "https://api.github.com/users/fka/followers",
"following_url": "https://api.github.com/users/fka/following{/other_user}",
"gists_url": "https://api.github.com/users/fka/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fka",
"id": 4077495,
"login": "fka",
"node_id": "MDEyOk9yZ2FuaXphdGlvbjQwNzc0OTU=",
"organizations_url": "https://api.github.com/users/fka/orgs",
"received_events_url": "https://api.github.com/users/fka/received_events",
"repos_url": "https://api.github.com/users/fka/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fka/subscriptions",
"type": "Organization",
"url": "https://api.github.com/users/fka",
"user_view_type": "public"
}
|
[
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
}
] |
closed
| true | null |
[] | null | 5 |
2013-09-25T14:33:23Z
|
2021-09-08T21:01:12Z
|
2013-09-25T15:34:22Z
|
NONE
|
resolved
|
It adds the kbutton to clone requests repository into Koding easily. You can find the details in http://kbutton.org
Thank you so much! :)
github: @f
twitter: @fkadev
Fatih Kadir Akin
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1625/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1625/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1625.diff",
"html_url": "https://github.com/psf/requests/pull/1625",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1625.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1625"
}
| true |
[
"I'm totally +0 on this, it's up to @kennethreitz. =)\n",
"@Lukasa Sure. It's just a request. :)\n",
"Is this necessary for users to use that site or is this just a way of promoting that site? If the former then I'm +0.5 otherwise -1\n",
"@fka This is really cool! I think I'll hold off on it for now, though. :)\n",
"Thanks @kennethreitz :) It's up to you.\n"
] |
https://api.github.com/repos/psf/requests/issues/1624
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1624/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1624/comments
|
https://api.github.com/repos/psf/requests/issues/1624/events
|
https://github.com/psf/requests/issues/1624
| 20,039,390 |
MDU6SXNzdWUyMDAzOTM5MA==
| 1,624 |
bad authentication with Twitter and update_with_media
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/752122?v=4",
"events_url": "https://api.github.com/users/monocasual/events{/privacy}",
"followers_url": "https://api.github.com/users/monocasual/followers",
"following_url": "https://api.github.com/users/monocasual/following{/other_user}",
"gists_url": "https://api.github.com/users/monocasual/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/monocasual",
"id": 752122,
"login": "monocasual",
"node_id": "MDEyOk9yZ2FuaXphdGlvbjc1MjEyMg==",
"organizations_url": "https://api.github.com/users/monocasual/orgs",
"received_events_url": "https://api.github.com/users/monocasual/received_events",
"repos_url": "https://api.github.com/users/monocasual/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/monocasual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monocasual/subscriptions",
"type": "Organization",
"url": "https://api.github.com/users/monocasual",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 4 |
2013-09-25T12:51:17Z
|
2021-09-09T01:22:20Z
|
2013-09-25T14:54:17Z
|
NONE
|
resolved
|
Hi guys, thank you for this great library!
I'm trying to publish a tweet with a picture in it, by calling update_with_media, without success. This is my code so far:
``` python
import requests_oauthlib
import requests
auth = requests_oauthlib.OAuth1(YOUR_APP_KEY, YOUR_APP_SECRET, USER_OAUTH_TOKEN, USER_OAUTH_TOKEN_SECRET)
req = requests.request('POST', 'https://api.twitter.com/1.1/statuses/update_with_media.json', data={'status':'test'}, files={'media[]' : open('/tmp/tmp.jpg', 'rb').read()})
print req.json() # error!
```
and the result is:
```
{u'errors': [{u'message': u'Bad Authentication data', u'code': 215}]}
```
That's not true, because I can call any other API's end point, both via POST and GET. For example, I can tweet a text-only message:
``` python
import requests_oauthlib
import requests
auth = requests_oauthlib.OAuth1(YOUR_APP_KEY, YOUR_APP_SECRET, USER_OAUTH_TOKEN, USER_OAUTH_TOKEN_SECRET)
req = requests.request('POST', 'https://api.twitter.com/1.1/statuses/update.json', data={'status':'test'})
print req.json() # ok!
```
This is what I found reading some Twitter docs and support pages:
```
Parameters being sent to the API that are not the oauth_* named parameters do not become part of the OAuth signature base string when the HTTP method being used is POST but the post body is not of the "Content-Type: x-www-form-urlencoded" variety.
Since you're using multipart POST, you're not sending a POST body with the Content-Type set as x-www-form-urlencoded and it's therefore erroneous to include parameters like "status" or "media[]" in the OAuth signature base string.
Some implementations of OAuth do not account for this gotcha.
```
I sense there's something wrong with the headers or the body: maybe a missing parameter in my request() call?
Requests version: 1.2.3
Python version: 2.7.3
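As the comments go on to note, the failing snippet builds the OAuth1 object but never passes it to the request, so it goes out unsigned and Twitter answers with error 215. A sketch of the corrected call, factored into a helper so the fix is visible (the helper name and dummy values are illustrative):

``` python
def build_media_request(auth, status, media_bytes):
    # The fix identified in this thread: the OAuth1 instance must
    # actually be passed via the `auth` keyword argument.
    return dict(
        method='POST',
        url='https://api.twitter.com/1.1/statuses/update_with_media.json',
        data={'status': status},
        files={'media[]': media_bytes},
        auth=auth,  # <- this was missing from the original call
    )

# Usage, with a real requests_oauthlib.OAuth1 instance:
#   requests.request(**build_media_request(
#       auth, 'test', open('/tmp/tmp.jpg', 'rb').read()))
```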
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/752122?v=4",
"events_url": "https://api.github.com/users/monocasual/events{/privacy}",
"followers_url": "https://api.github.com/users/monocasual/followers",
"following_url": "https://api.github.com/users/monocasual/following{/other_user}",
"gists_url": "https://api.github.com/users/monocasual/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/monocasual",
"id": 752122,
"login": "monocasual",
"node_id": "MDEyOk9yZ2FuaXphdGlvbjc1MjEyMg==",
"organizations_url": "https://api.github.com/users/monocasual/orgs",
"received_events_url": "https://api.github.com/users/monocasual/received_events",
"repos_url": "https://api.github.com/users/monocasual/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/monocasual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monocasual/subscriptions",
"type": "Organization",
"url": "https://api.github.com/users/monocasual",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1624/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1624/timeline
| null |
completed
| null | null | false |
[
"One thing I noticed is that you're never using the auth instance you create. Is this just a mistake in your example code or are you actually never using it? On top of that, OAuth is only supposed to sign certain kinds of data if I remember correctly so and multipart form-data is not one of those kinds.\n\nCan I ask why you aren't using something like Twython for this? It should handle properly setting up the authentication for you and posting the data correctly.\n",
"Twython do exactly this and use Requests + requests-oauthlib to do it. I imagine the problem is not using the auth object. =)\n",
"OK this is an absolute facepalm: I left out the auth object since the beginning... Twython: I will definitely check it out. I'm very sorry, please forgive the noise :-)\n",
"No worries @monocasual \n\nAlso, @Lukasa sometimes you have to ask the obvious question\n"
] |
https://api.github.com/repos/psf/requests/issues/1623
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1623/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1623/comments
|
https://api.github.com/repos/psf/requests/issues/1623/events
|
https://github.com/psf/requests/issues/1623
| 20,030,222 |
MDU6SXNzdWUyMDAzMDIyMg==
| 1,623 |
Use MD5-sess authentication got "UnboundLocalError: local variable 'hash_utf8' referenced before assignment"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/537911?v=4",
"events_url": "https://api.github.com/users/linuxhngg/events{/privacy}",
"followers_url": "https://api.github.com/users/linuxhngg/followers",
"following_url": "https://api.github.com/users/linuxhngg/following{/other_user}",
"gists_url": "https://api.github.com/users/linuxhngg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/linuxhngg",
"id": 537911,
"login": "linuxhngg",
"node_id": "MDQ6VXNlcjUzNzkxMQ==",
"organizations_url": "https://api.github.com/users/linuxhngg/orgs",
"received_events_url": "https://api.github.com/users/linuxhngg/received_events",
"repos_url": "https://api.github.com/users/linuxhngg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/linuxhngg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linuxhngg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/linuxhngg",
"user_view_type": "public"
}
|
[
{
"color": "02e10c",
"default": false,
"description": null,
"id": 76800,
"name": "Feature Request",
"node_id": "MDU6TGFiZWw3NjgwMA==",
"url": "https://api.github.com/repos/psf/requests/labels/Feature%20Request"
},
{
"color": "0b02e1",
"default": false,
"description": null,
"id": 191274,
"name": "Contributor Friendly",
"node_id": "MDU6TGFiZWwxOTEyNzQ=",
"url": "https://api.github.com/repos/psf/requests/labels/Contributor%20Friendly"
}
] |
closed
| true | null |
[] | null | 7 |
2013-09-25T09:13:46Z
|
2021-09-09T01:22:21Z
|
2013-09-26T07:54:08Z
|
NONE
|
resolved
|
Error Message:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/api.py", line 55, in get
    return request('get', url, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/sessions.py", line 360, in request
    resp = self.send(prep, **send_kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/sessions.py", line 468, in send
    r = dispatch_hook('response', hooks, r, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/hooks.py", line 41, in dispatch_hook
    _hook_data = hook(hook_data, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/auth.py", line 163, in handle_401
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/auth.py", line 96, in build_digest_header
    KD = lambda s, d: hash_utf8("%s:%s" % (s, d))
UnboundLocalError: local variable 'hash_utf8' referenced before assignment
```
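For context on what `algorithm=MD5-sess` (as advertised in the server's `WWW-Authenticate` header) means: RFC 2617 defines it as a second hash over the plain-MD5 HA1 together with the server nonce and client cnonce. A minimal stand-alone sketch, with illustrative names rather than the requests internals:

``` python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode('utf-8')).hexdigest()

def make_ha1(algorithm, username, realm, password, nonce, cnonce):
    # RFC 2617: HA1 = MD5(user:realm:password) for plain MD5; for
    # MD5-sess that digest is hashed again with nonce and cnonce, so a
    # session key can be reused without re-hashing the password.
    ha1 = md5_hex('%s:%s:%s' % (username, realm, password))
    if algorithm.lower() == 'md5-sess':
        ha1 = md5_hex('%s:%s:%s' % (ha1, nonce, cnonce))
    return ha1

plain = make_ha1('MD5', 'alice', 'iptv-portal', 's3cret', 'nonce1', 'cnonce1')
sess = make_ha1('MD5-sess', 'alice', 'iptv-portal', 's3cret', 'nonce1', 'cnonce1')
print(plain != sess)  # True: the two variants yield different session keys
```

A digest implementation that only recognises the literal strings "MD5" and "SHA" never binds its hash helper for "MD5-sess", which is exactly the `UnboundLocalError` shown in the traceback above.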
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/537911?v=4",
"events_url": "https://api.github.com/users/linuxhngg/events{/privacy}",
"followers_url": "https://api.github.com/users/linuxhngg/followers",
"following_url": "https://api.github.com/users/linuxhngg/following{/other_user}",
"gists_url": "https://api.github.com/users/linuxhngg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/linuxhngg",
"id": 537911,
"login": "linuxhngg",
"node_id": "MDQ6VXNlcjUzNzkxMQ==",
"organizations_url": "https://api.github.com/users/linuxhngg/orgs",
"received_events_url": "https://api.github.com/users/linuxhngg/received_events",
"repos_url": "https://api.github.com/users/linuxhngg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/linuxhngg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linuxhngg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/linuxhngg",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1623/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1623/timeline
| null |
completed
| null | null | false |
[
"I just saw this is not implemented yet, so please ignore.\n",
"I don't think we should ignore this, I think we should fix it. =) I'll try to take a run at it sometime soon.\n",
"I think it's working correctly when i tested with md5.\nIn my opinion, that UnboundLocalError only happens when 'www-authenticate' header has another hashing algorithm (not md5, sha1)\n@linuxhngg Are you trying to use 'MD5' digest authentication?\n",
"Here is the header I got as below:\n(Pdb) res.headers\nCaseInsensitiveDict({'transfer-encoding': 'chunked', 'set-cookie': 'MIEP_digest_session=186eec93ce4429853ca5e68e4a546b63; Path=/', 'vary': 'Accept-Encoding', 'www-authenticate': 'Digest realm=\"iptv-portal\", qop=\"auth\", nonce=\"888c6a6441d8791165887559aaf96b5f\", opaque=\"186eec93ce4429853ca5e68e4a546b63\", algorithm=MD5-sess, domain=\"private domains now showing here\"'})\n\nI removed the domain information. \n\nWhat I did in curl as below and it is working:\n\ncurl -vvv -c 1tmpcookies.txt --digest -u xxxxx:yyyyy http://172.16.73.49(private_network)/iap-dataapi/private/bc/channels?X-Claimed-UserId=xxxxx -o channels.json\n",
"@linuxhngg \noh.. there is no way to parse 'MD5-sess' in current DigestAuthenticator.\nAccording to http standard, 'MD5-sess' is valid algorithm name.\nI will attach quick fix on it.\n",
"wow ~ sweet! must try it now.\nthanks a lot for your help on this.\n",
"it is working great! thanks a lot for this quick fix.\n"
] |
https://api.github.com/repos/psf/requests/issues/1622
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1622/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1622/comments
|
https://api.github.com/repos/psf/requests/issues/1622/events
|
https://github.com/psf/requests/issues/1622
| 20,020,060 |
MDU6SXNzdWUyMDAyMDA2MA==
| 1,622 |
Requests 2.0.0 breaks SSL proxying via https_port of Squid.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/507637?v=4",
"events_url": "https://api.github.com/users/GrahamDumpleton/events{/privacy}",
"followers_url": "https://api.github.com/users/GrahamDumpleton/followers",
"following_url": "https://api.github.com/users/GrahamDumpleton/following{/other_user}",
"gists_url": "https://api.github.com/users/GrahamDumpleton/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/GrahamDumpleton",
"id": 507637,
"login": "GrahamDumpleton",
"node_id": "MDQ6VXNlcjUwNzYzNw==",
"organizations_url": "https://api.github.com/users/GrahamDumpleton/orgs",
"received_events_url": "https://api.github.com/users/GrahamDumpleton/received_events",
"repos_url": "https://api.github.com/users/GrahamDumpleton/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/GrahamDumpleton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GrahamDumpleton/subscriptions",
"type": "User",
"url": "https://api.github.com/users/GrahamDumpleton",
"user_view_type": "public"
}
|
[
{
"color": "e10c02",
"default": false,
"description": null,
"id": 117744,
"name": "Bug",
"node_id": "MDU6TGFiZWwxMTc3NDQ=",
"url": "https://api.github.com/repos/psf/requests/labels/Bug"
},
{
"color": "f7c6c7",
"default": false,
"description": null,
"id": 167537670,
"name": "Propose Close",
"node_id": "MDU6TGFiZWwxNjc1Mzc2NzA=",
"url": "https://api.github.com/repos/psf/requests/labels/Propose%20Close"
}
] |
closed
| true | null |
[] | null | 40 |
2013-09-25T03:16:36Z
|
2021-09-08T23:06:05Z
|
2015-01-19T09:25:49Z
|
CONTRIBUTOR
|
resolved
|
For a Squid configuration of:
```
http_port 3128
https_port 3129 cert=/usr/local/opt/squid/etc/ssl/squid.crt key=/usr/local/opt/squid/etc/ssl/squid.key
```
And the test names were as follows:
The 'http_port' annotation indicates that the Squid HTTP port was used.
The 'https_port' annotation indicates that the Squid HTTPS port was used.
The 'scheme' annotation refers to whether a scheme was part of the proxy definition in the proxies dictionary passed to requests:
```
no_scheme --> { 'https': 'localhost:3129' }
http_scheme --> { 'https': 'http://localhost:3129' }
https_scheme --> { 'https': 'https://localhost:3129' }
```
Results for different requests versions are:
```
requests 0.9.3 1.2.3 2.0.0
no_proxy PASS PASS PASS
http_port_no_scheme PASS FAIL FAIL
http_port_with_http_scheme PASS PASS PASS
http_port_with_https_scheme FAIL FAIL PASS
https_port_no_scheme FAIL PASS FAIL
https_port_with_http_scheme FAIL FAIL FAIL
https_port_with_https_scheme PASS PASS FAIL
```
The one I am concerned about is https_port_with_https_scheme, as this no longer works.
I fully realise that http_port_with_https_scheme now works, and that this presumably uses CONNECT, which is the safest option. But we would have to notify all customers relying on https_port_with_https_scheme that what they used before no longer works and that they will need to change their configuration. If they don't heed that instruction, then we will start to see failed connections and complaints.
BTW, it would be really good if the documentation explained the differences between all the combinations, what mechanisms they use and what represents best practice for being the safest/best to use.
Test used for https_port_with_https_scheme is:
```
import unittest
import requests
PROXY_HOST = 'localhost'
PROXY_HTTP_PORT = 3128
PROXY_HTTPS_PORT = 3129
REMOTE_URL = 'https://pypi.python.org/pypi'
class TestProxyingOfSSLRequests(unittest.TestCase):
def test_proxy_via_squid_https_port_with_https_scheme(self):
proxies = { 'https': 'https://%s:%s' % (PROXY_HOST, PROXY_HTTPS_PORT) }
response = requests.get(REMOTE_URL, proxies=proxies)
self.assertTrue(len(response.content) != 0)
if __name__ == '__main__':
unittest.main()
```
For full set of tests and results see:
- https://dl.dropboxusercontent.com/u/22571016/requests-ssl-proxy.tar
Our cheat sheet for setting up Squid on Mac OS X with SSL support:
To test proxying via SSL, the easiest thing to do is install 'squid' via 'brew' under Mac OS X, but avoid the standard recipe and instead use:
```
brew install https://raw.github.com/mxcl/homebrew/a7bf4c381f4e38c24fb23493a92851ea8339493e/Library/Formula/squid.rb
```
This will install 'squid' with SSL support.
You can then generate a self signed certificate to use:
```
cd /usr/local/opt/squid/etc
mkdir ssl
cd ssl
openssl genrsa -des3 -out squid.key 1024
openssl req -new -key squid.key -out squid.csr
cp squid.key squid.key.org
openssl rsa -in squid.key.org -out squid.key
openssl x509 -req -days 365 -in squid.csr -signkey squid.key -out squid.crt
```
Then edit the squid configuration file at '/usr/local/opt/squid/etc/squid.conf', adding:
```
https_port 3129 cert=/usr/local/opt/squid/etc/ssl/squid.crt key=/usr/local/opt/squid/etc/ssl/squid.key
```
Then use configuration of:
```
proxy_host = localhost
proxy_port = 3129
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1622/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1622/timeline
| null |
completed
| null | null | false |
[
"Unless someone else beats me to it, I'll probably take a look at duplicating this weekend. I have pretty much no context on the proxy work though so if anyone with some background on that wants to pair on this let me know.\n",
"To explain more about what has been broken. Consider the following;\n\nrequests - 1.2.3\n\nhttp_port_no_scheme (FAIL) - As no scheme is provided, requests will use the scheme of the target URL, which in these tests is a 'https' request. It therefore attempts to connect to http_port using a SSL connection, which will fail as Squid isn't expecting that on that port. Expected result.\n\nhttp_port_with_http_scheme (PASS) - Scheme provided was 'http', which means that requests connects to Squid over a non SSL connection. It then issues a 'GET https//..'. Squid will then create a SSL connection for the outgoing request. Both the request and response content between requests and Squid aren't encrypted and can be sniffed. This is undesirable and why a change to CONNECT was being attempted. Expected result.\n\nhttp_port_with_https_scheme (FAIL) - Scheme provided was 'https' which means that requests connects to http_port using a SSL connection, which will fail as Squid isn't expecting that on that port. Expected result.\n\nhttps_port_no_scheme (PASS) - As no scheme is provided, requests will use the scheme of the target URL, which in these tests is a 'https' request. It therefore attempts to connect to https_port using a SSL connection, which is all fine because that is what it is expecting. Within that encrypted connection, it then issues a 'GET https://..'. Squid will then create a SSL connection for the outgoing request. Both the request and response content between requests and Squid aren't encrypted within the SSL connection, so cannot be sniffed. Expected result.\n\nhttps_port_with_http_scheme (FAIL) - Scheme provided was 'http', which means that requests connects to Squid over a non SSL connection, which will fails as Squid was expecting a SSL connection on https_port. Expected result.\n\nhttps_port_with_https_scheme (PASS) - Scheme provided was 'https'. 
It therefore attempts to connect to https_port using a SSL connection, which is all fine because that is what it is expecting. Within that encrypted connection, it then issues a 'GET https://..'. Squid will then create a SSL connection for the outgoing request. Both the request and response content between requests and Squid aren't encrypted within the SSL connection, so cannot be sniffed. Expected result.\n\nrequests - 2.0.0\n\nhttp_port_with_http_scheme (PASS) - Scheme provided was 'http', which means that requests connects to Squid over a non SSL connection. It then issues a CONNECT instead of GET and tunnels the encrypted request over that connection. Expected result.\n\nhttp_port_with_https_scheme (PASS) - Scheme provided was 'https'. The requests module though ignores what the scheme is and connects using a non SSL connection. It then issues a CONNECT instead of GET and tunnels the encrypted request over that connection. It works, but breaks the one way that used to sort of work with old requests versions. See below.\n\nhttps_port_with_http_scheme (FAIL) - Scheme provided was 'http', which means that requests connects to Squid over a non SSL connection, which will fails as Squid was expecting a SSL connection on https_port. Expected result.\n\nhttps_port_with_https_scheme (FAIL) - Scheme provided was 'https'. The requests module though ignores what the scheme is and connects using a non SSL connection. This fails because Squid was expecting a SSL connection on https_port. Unfortunately this was the closest there was to getting things working prior to requests 2.0.0, so breaking this may break some existing setups and they will be forced to change configurations.\n\nSummary:\n\nPrior to requests 2.0.0 the only way of having encryption traffic over the network was by using https_port_with_https_scheme. There were two issues with this. 
It meant that if you wanted host SSL certification validation, then Squid had to do it because there were actually two SSL connections. The first between requests and Squid and the other between Squid and the target server. The second is that since Squid decrypted in the middle, then technically a modified Squid could capture the traffic that was being sent.\n\nUsing CONNECT resolves this as a SSL connection is tunnelled the whole way. CONNECT though is done over a connection which is initially non SSL, so when doing https_port_with_https_scheme it will not actually be a SSL connection.\n\nWhat you break therefore is those people who may have been getting by by using https_port_with_https_scheme to encrypt traffic over local network as well as public network beyond Squid. There isn't likely though any way of dealing with that automatically and still wouldn't be as secure.\n\nEnd result is that you are unlikely to make any changes as you obviously want the best level of security. What you do need though is a dedicated page in the documentation about SSL security issues and what all the combinations are in regard to proxies so that users have a clue, especially since the majority of users have no idea about this stuff.\n",
"This is a good catch, and thanks so much for the bug report. The problem is almost certainly in the underlying urllib3 implementation of the proxy behaviour though. Let's not raise an issue there just yet until we can isolate the problem, but I think the fix will have to be in that library.\n\nHey @schlamar, can you take a look at this?\n",
"@GrahamDumpleton Could you please run these tests against plain urllib3, too (see https://gist.github.com/schlamar/5080598#file-test_proxy-py for example code)?\n",
"It is most likely due to the introduction of HTTPSConnectionPool._prepare_conn() in 'requests/packages/urllib3/connectionpool.py'. This is called at the end of HTTPSConnectionPool._new_conn(). It is called whenever a proxy is being used for a HTTPS connection, but doesn't take into consideration the scheme that was specified for talking to the proxy. Thus it tries to connect to a SSL port as non SSL and then create a tunnel. It fails as SSL connection expected.\n\nWhat appears to work, is to change:\n\n```\n return self._prepare_conn(connection)\n```\n\nat the end of _new_conn() to:\n\n```\n if self.proxy.scheme != 'https':\n return self._prepare_conn(connection)\n else:\n return connection\n```\n\nI need to retest properly all cases, but am guessing that result of everything would then be:\n\n```\n no_proxy PASS\n http_port_no_scheme FAIL\n http_port_with_http_scheme PASS\n http_port_with_https_scheme FAIL\n https_port_no_scheme FAIL\n https_port_with_http_scheme FAIL\nhttps_port_with_https_scheme PASS\n```\n\nThis makes a bit more sense in that https scheme against http_port will fail and http against https_port will fail. Is only when scheme matches port type that will work.\n\nYou still have the issue that for https_port_with_https_scheme it is two separate SSL connections and a modified proxy could intercept any data being sent via the proxy, plus host certificate validation for target URL has to be done by the proxy, but does at least preserve how it worked before and less surprises.\n\nThe end result is that the safest option is a http_port on the proxy with http scheme. For a https target URL, then CONNECT is still used in this case and everything is encrypted end to end.\n",
"Out of interest, why can't we send CONNECT over the SSL connection to the squid proxy? That feels like it's probably the right thing to do.\n",
"You would be tunnelling SSL over a connection which is already SSL. That is a bit strange to start with. Am not even sure whether a proxy server would allow that. I don't know enough about urllib3/httplib to even work out how to make it even attempt it.\n\nAnyway, I have tested that change in our actual product and it appears to work in all the cases I can think of that need to work.\n",
"So to give some perspective, my worry is about security. Specifically, if you do this:\n\n``` python\nr = requests.get(some_https_url, proxies={'https': 'https://127.0.0.1:8888/'})\n```\n\nIt's not clear from any of that that the proxy will be terminating the SSL and starting a new connection. It seems bizarre to have this line be more secure:\n\n``` python\nr = requests.get(some_https_url, proxies={'https': 'http://127.0.0.1:8888/'})\n```\n",
"Technically tunnelling on top of SSL is possible:\n- http://stackoverflow.com/questions/6594604/connect-request-to-a-forward-http-proxy-over-an-ssl-connection\n\nHow widely it is supported by proxies may be an issue.\n\nAs I had already pointed out, the https_port_with_https_scheme does have the man in the middle problem if the proxy is compromised. But then am sure that some corporates probably set it up that way so they can see your data.\n\nThis is why I said back at the start that probably didn't expect a change as I figured that someone decided to make it as secure as possible and deliberately break this specific scenario so it didn't work.\n\nIn making that decision though, they have broken something that used to work, even though the level of security is questionable.\n\nThis is why I called out that what at least should be done is decent documentation on all the security implications of using proxies and SSL connections. The details in the proxies section is very scant and it isn't clear in that what is a good setup and what isn't. People are forced to look elsewhere for details, and I am not even sure where to go search out a good reference for the information. I only understand what is happening from putting a TCP proxy between the client and proxy and observing what happens.\n",
"I agree that we're going to have to document our decision-making process here, whatever we decide to do. I'm just trying to think about what the best procedure would be, both from a Requests and a urllib3 perspective. =)\n",
"@GrahamDumpleton I'm pretty sure the only issue is your setup. Please check first how curl behaves (I expect exactly like requests 2.x).\n\n> they have broken something that used to work\n\nThat's wrong. Proxy support was broken before, it is fixed with 2.x. \n\n`http_port_with_https_scheme` is now exactly how it should work. There is just a TCP connection to the proxy (doesn't matter if port 80 or port 443). All HTTP data is encrypted with the cert of the target. There is no security issue here (you can check with curl/wireshark).\n\n`https_port_with_https_scheme` should fail at cert validation because squid is configured as MITM proxy. Adding your custom cert to the request call should work.\n",
"I have supplied all the test scripts and log output from running the tests at the URL I gave. All the variations are laid out in the tests and as summarised in the tables above. This isn't a case of having a single specific setup wrong. All the possibilities were enumerated for connecting to a target web server with a https URL and what occurs simply described for comparison.\n\nComparing to other proxy aware clients is certainly worthwhile for comparison as well, but the point of the comparisons to this point was to illustrate how requests itself has changed compared to prior versions to make it clear for anyone who cares.\n\nYou can say that proxy support was broken before, but even so, some of us still had to find ways to make that work even though requests was broken, and https_port_with_https_scheme was the closest one could get to at least having stuff going over the network encrypted, even if the proxy itself was susceptible to a MITM as I quite clearly acknowledged.\n\nI did not say that http_port_with_https_scheme itself is now a security issue. I was inferring that the ignoring of the scheme for everything as was done in this case, also meant that https_port_with_https_scheme would now break because the proxy was still going to expect a SSL connection and requests would always use a non SSL connection.\n\nAs to https_port_with_https_scheme failing at cert validation, it doesn't even get that far as the proxy is listening on a SSL port and requests is connecting to it over a non SSL connection.\n\nThe main point of me pointing all this out is that the documentation is far from adequate. It doesn't explain well to an audience who have no clue about this stuff, what the implications are of different settings.\n\nIn the mean time I still need to work out how to deal with customers who were relying on the closest thing that worked from prior broken versions of requests. 
As I already said, we will likely just work out who they are tell them they are forced to change their configuration so they no longer use the https_port of their proxy as they were.\n\nThat I even indicated what code could be changed was only to illustrate where the issue arise. I said back at the beginning I wasn't expecting to have people agree to change anything as someone likely felt how it now works is better even though it breaks one variation of the configuration which used to work even though it wasn't fully secure.\n",
"So, let's just clarify what we expect `{'https': 'https://127.0.0.1/'}` to actually do.Should we be expecting an SSL connection to the proxy followed by CONNECT, or a non SSL connection followed by CONNECT?\n",
"It attempts a non SSL connection and then a CONNECT. This is all it can do, as you cannot know whether the other end is actually going to talk SSL or not based on the scheme, because it has to ignore the scheme for http_port_with_https_scheme to work. This is why https_port_with_https_scheme cannot ever work if the intent is to support CONNECT for http_port_with_https_scheme. So that is reality and it just needs to be documented which combinations will and won't work.\n",
"> This isn't a case of having a single specific setup wrong.\n\nNope, I assume that the _general_ squid setup does not work like intended. I think that squid does not allow CONNECT on your HTTPS port. Please verify that with curl/browser, e.g.:\n\n```\n$ export https_proxy=https://127.0.0.1:3129/\n$ curl -vv https://google.com\n```\n",
"> Technically tunnelling on top of SSL is possible: [...] How widely it is supported by proxies may be an issue.\n\nDefinitely not by squid (and by requests/urllib3).\n\n> The CONNECT method is a way to tunnel any kind of connection through an HTTP proxy\n\nfrom http://wiki.squid-cache.org/Features/HTTPS\n",
"One last comment\n\n> https_port_with_https_scheme (PASS) - Scheme provided was 'https'. It therefore attempts to connect to https_port using a SSL connection, which is all fine because that is what it is expecting. Within that encrypted connection, it then issues a 'GET https://..'. Squid will then create a SSL connection for the outgoing request. Both the request and response content between requests and Squid aren't encrypted within the SSL connection, so cannot be sniffed. Expected result.\n\nThis is **not** the expected result, because doing `GET` to an HTTPS resource via a proxy is just wrong (which was fixed in requests 2.x).\n\n> The main point of me pointing all this out is that the documentation is far from adequate. It doesn't explain well to an audience who have no clue about this stuff, what the implications are of different settings.\n\nThere are no different settings. Tunnel is tunnel no matter what scheme you have set on your proxy. This is mainly an issue for version 1.x. There is really an security issue because HTTPS connections via HTTP proxy are not encrypted (which I have pointed out [a long time ago](https://github.com/kennethreitz/requests/issues/655#issuecomment-10281268)).\n",
"You miss the point of those descriptions. It was simply giving the observed behaviour for that particular version of the requests module (1.2.3) and whether for how that version of requests/urllib3 was implemented, whether it behaved as it would be expected from that code at that time. It was not telling you how it should overall work in giving the description of what it did in 1.2.3. I did not say that the behaviour that occurred with 1.2.3 was correct and should supersede what 2.0.0 does. The descriptions explained why for old versions it was insecure. I already acknowledged back in the initial description that CONNECT is the better way and use of it would was the correct way, thus why I expected that no change would be made. Please stop confusing a description of the state of things as they were for the older version as somehow being a statement that I believe that that old behaviour should prevail, I have not said that.\n",
"> Please stop confusing a description of the state of things as they were for the older version as somehow being a statement that I believe that that old behaviour should prevail.\n\nI didn't think (nor said) that :) But it was not clear if you understood why your code was failing at https_port_with_https_scheme because in general it works if the TCP endpoint at https_port supports HTTP connect. This implies that you have not to document which combinations work because all combinations work if the TCP endpoint works.\n\nSo: what's your point after all? :)\n",
"I believe the point is that we should better document the behaviour of various proxy settings. =)\n",
"> I believe the point is that we should better document the behaviour of various proxy settings.\n\nWhich is always the same on 2.x, so there is IMO no need for it. :)\n",
"Let me quote again what I said before:\n\n\"\"\"\nhttps_port_with_https_scheme (FAIL) - Scheme provided was 'https'. The requests module though ignores what the scheme is and connects using a non SSL connection. This fails because Squid was expecting a SSL connection on https_port.\n\"\"\"\n\nSo in this case requests is going to contact the https_port on Squid. Squid is expecting a SSL connection to be established. The requests library IS NOT establishing a SSL connection, it is establishing a NON SSL connection. Within that NON SSL connection it is then sending:\n\n```\nCONNECT pypi.python.org:443 HTTP/1.0\n```\n\nThis is in the clear. It can be seen by monitoring the network traffic or using tcpwatch as an intermediary to capture traffic. It is not within an encrypted connection.\n\nIn the case of:\n\n\"\"\"\nhttp_port_with_https_scheme (PASS) - Scheme provided was 'https'. The requests module though ignores what the scheme is and connects using a non SSL connection. It then issues a CONNECT instead of GET and tunnels the encrypted request over that connection. It works, but breaks the one way that used to sort of work with old requests versions.\n\"\"\"\n\nSquid is expecting to receive a non SSL connection. Requests was told though that the scheme for the proxy was https. In prior versions of requests, and in any sane interpretation of what https means for connecting to the proxy that I have seen, it means that requests should have connected to the proxy using a SSL connection. It did not do this; requests used a NON SSL connection instead, which meant that this variation worked whereas it previously failed with older requests, because the older requests honoured the proxy scheme and connected using a SSL connection.\n\nIn summary, requests is now totally ignoring the proxy scheme and in all cases is connecting using a NON SSL connection and trying to issue a CONNECT within that. This effectively means there is no difference any more between http://proxy and https://proxy.\n\nSo it means absolutely nothing as to whether the proxy supports CONNECT after establishing the connection, because a connection cannot be established for https_port_with_https_scheme, because requests is not creating a SSL connection and fails on the initial connection setup.\n\nSince you seem to still believe there is no problem, please confirm the following statement as correct then.\n\nThat the proxy scheme, ie., the 'http' and 'https' in http://proxy:port and https://proxy:port, is totally irrelevant any more because requests 2.0.0 ignores it and always connects using a NON SSL connection anyway.\n\nIf you disagree with that statement, then requests is broken, because that is what it is doing as has been described.\n\nIf you agree with that statement, then it breaks what I have generally always seen as the meaning of having the scheme in the proxy URL. That is, that 'https' implies a SSL connection is used to the proxy. In that case, you need to document clearly that the scheme is no longer taken into consideration.\n",
"You have linked to a SO question above, maybe you should read all answers to that question, especially http://stackoverflow.com/a/14905773/851737\n\nPlus:\n\n> Unfortunately, popular modern browsers do not permit configuration of TLS/SSL encrypted proxy connections. There are open bug reports against most of those browsers now, waiting for support to appear. If you have any interest, please assist browser teams with getting that to happen.\n> \n> Meanwhile, tricks using stunnel or SSH tunnels are required to encrypt the browser-to-proxy connection before it leaves the client machine.\n\nfrom http://wiki.squid-cache.org/Features/HTTPS\n\n> Squid is expecting to receive a non SSL connection. Requests was told though that the scheme for the proxy was https. In prior versions of requests\n\nThis was implemented completely wrong in 1.x. It never really worked. You have a wrong assumption here.\n\n> and in any sane interpretation of what https means for connecting to the proxy that I have seen,\n\nAgain: have a look at curl. It behaves exactly like requests 2.x. No client except Chrome supports proxying via HTTPS. And even Chrome falls back to HTTP CONNECT if the proxy endpoint does not support SSL.\n\n> In summary, requests is now totally ignoring the proxy scheme and in all cases is connecting using a NON SSL connection and trying to issue a CONNECT within that. This effectively means there is no difference any more between http://proxy and https://proxy.\n\nThis is the behavior of every HTTP client around (see above).\n",
"Then why did you ever bother making the scheme mandatory in the proxies map when you ignore it? You could have still accepted host:port alone and just always made it use http anyway.\n\nAnyway, what it comes down to is the documentation needs to state then that the scheme is totally irrelevant and that http:// is the only one that makes sense, because https:// is a lie.\n",
"> Then why did you ever bother making the scheme mandatory in the proxies map when you ignore it? You could have still accepted host:port alone and just always made it use http anyway.\n\nI have asked that myself. Because curl and other HTTP clients do support `http_proxy=host:port` without a scheme. @Lukasa ?\n\n> Anyway, what it comes down to is the documentation needs to state then that the scheme is totally irrelevant and that http:// is the only one that makes sense, because https:// is a lie.\n\n[subjective opinion] The documentation shows only examples with http://, so this kind of implies that. I think you cannot expect CONNECT to work via SSL while it is not explicitly stated.\n",
"Ugh. The answer is because the work to make proxy schemes explicit was actually implemented before we had CONNECT support.\n",
"Would allowing a proxy without a scheme be an API-breaking change? If not, we should perhaps support it, so we are compatible with curl in terms of proxy handling. Probably in the form of an undocumented feature.\n",
"Uh, we made this change in [2.0](http://docs.python-requests.org/en/latest/community/updates/#id1):\n\n> Proxy URLs now must have an explicit scheme. A `MissingSchema` exception will be raised if they don’t.\n",
"Ah, it looks like requests 1.x added the proxy scheme based on the target URL (which is horribly wrong...): https://github.com/kennethreitz/requests/commit/840540b6b1f07ef87faab73392c03fbef0dcc9fe I guess this is the reason for this change. \n\nWould adding `http` when no scheme is given be an acceptable change for 2.x?\n",
"@schlamar Sorry I let this slip past me. The answer there is 'maybe'. @t-8ch was one of the people strongly pushing for explicit proxy schemes: do you have an opinion?\n"
] |
https://api.github.com/repos/psf/requests/issues/1621
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1621/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1621/comments
|
https://api.github.com/repos/psf/requests/issues/1621/events
|
https://github.com/psf/requests/pull/1621
| 20,017,801 |
MDExOlB1bGxSZXF1ZXN0ODU4MDMxNA==
| 1,621 |
Added migrating to 2.x docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/185043?v=4",
"events_url": "https://api.github.com/users/davidfischer/events{/privacy}",
"followers_url": "https://api.github.com/users/davidfischer/followers",
"following_url": "https://api.github.com/users/davidfischer/following{/other_user}",
"gists_url": "https://api.github.com/users/davidfischer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davidfischer",
"id": 185043,
"login": "davidfischer",
"node_id": "MDQ6VXNlcjE4NTA0Mw==",
"organizations_url": "https://api.github.com/users/davidfischer/orgs",
"received_events_url": "https://api.github.com/users/davidfischer/received_events",
"repos_url": "https://api.github.com/users/davidfischer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davidfischer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidfischer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davidfischer",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 5 |
2013-09-25T01:42:32Z
|
2021-09-08T21:01:11Z
|
2013-09-25T02:02:18Z
|
CONTRIBUTOR
|
resolved
|
In a similar vein to the migrating to 1.x docs, this section details changes that can break existing code in order to ease the migration to 2.x. I didn't talk about any of the added APIs since those don't really break existing code. Compared with the last major release, there's not much to say.
This is largely based on @Lukasa's [blog post](http://lukasa.co.uk/2013/09/Requests_20/) and the [changelog](https://github.com/kennethreitz/requests/blob/master/HISTORY.rst). I did not know how folks feel about linking to a blog post from the docs so I didn't add the link.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1621/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1621/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1621.diff",
"html_url": "https://github.com/psf/requests/pull/1621",
"merged_at": "2013-09-25T02:02:18Z",
"patch_url": "https://github.com/psf/requests/pull/1621.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1621"
}
| true |
[
":cake:\n",
"@davidfischer I think the link would be fantastic, actually. Want to add it?\n",
"@kennethreitz linked!\n",
":100: \n",
"This is awesome @davidfischer, thanks! =D :cake:\n"
] |
https://api.github.com/repos/psf/requests/issues/1620
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1620/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1620/comments
|
https://api.github.com/repos/psf/requests/issues/1620/events
|
https://github.com/psf/requests/issues/1620
| 20,006,697 |
MDU6SXNzdWUyMDAwNjY5Nw==
| 1,620 |
requests doesn't fill CookieJar
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-09-24T21:15:35Z
|
2021-09-09T01:22:22Z
|
2013-09-24T21:21:20Z
|
NONE
|
resolved
|
First of all, I cannot use a session to manage cookies across many requests.
Based on this answer http://stackoverflow.com/questions/6878418/putting-a-cookie-in-a-cookiejar, requests should fill the CookieJar object
jar = cookielib.CookieJar()
r = requests.get(URL, cookies=jar)
But it doesn't work for me.
Is this a bug or not?
> jar = cookielib.CookieJar()
> r = requests.get('http://192.168.226.141/test.php', cookies=jar)
> r.cookies
> <<class 'requests.cookies.RequestsCookieJar'>[Cookie(version=0, name='test1', value='1', port=None, port_specified=False, domain='192.168.226.141', domain_specified=False, domain_initial_dot=False, path='/', path_specified=False, secure=False, expires=1380060661, discard=False, comment=None, comment_url=None, rest={}, rfc2109=False)]>
> jar
> <cookielib.CookieJar[]>
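For later readers: the jar passed via `cookies=` is only read from here; the populated jar comes back as `r.cookies`. If a plain `cookielib`/`http.cookiejar` `CookieJar` is really needed, its contents can be copied over manually — a stdlib-only sketch (Python 3 module names; the `make_cookie` helper is illustrative, not part of requests):

```python
from http.cookiejar import Cookie, CookieJar  # 'cookielib' on Python 2

def make_cookie(name, value, domain, path="/"):
    # Build a plain http.cookiejar.Cookie; most fields are boilerplate
    # defaults matching what a simple Set-Cookie header would produce.
    return Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=False, domain_initial_dot=False,
        path=path, path_specified=False,
        secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={}, rfc2109=False,
    )

jar = CookieJar()
# In real code the (name, value, domain) triples would come from
# iterating over response.cookies after a requests.get(...) call.
jar.set_cookie(make_cookie("test1", "1", "192.168.226.141"))
print(len(jar))  # → 1: the jar is now populated
```

This sidesteps the issue entirely, since the response jar is the authoritative source of received cookies.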
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1620/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1620/timeline
| null |
completed
| null | null | false |
[
"Why can't you use a session?\n",
"To answer the question:\n\nThat answer is two years old. Today marked the release of the second major API breaking version release. Answers from two years prior are not even remotely applicable to today's versions of requests. In other words, no that is not still possible and it is an intentional design (if I remember correctly), i.e., not a bug.\n\nI'd also like to know why you can't use a session.\n\nThird, when asking questions, why not use [StackOverflow](http://stackoverflow.com/questsions/tagged/python-requests)?\n\nFinally why can you not use the cookies returned on the response?\n",
"Sorry. Thanks!\nI will use cookies returned on the response\n"
] |
https://api.github.com/repos/psf/requests/issues/1619
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1619/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1619/comments
|
https://api.github.com/repos/psf/requests/issues/1619/events
|
https://github.com/psf/requests/pull/1619
| 19,996,506 |
MDExOlB1bGxSZXF1ZXN0ODU2ODU3Mg==
| 1,619 |
Up to date CL.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 0 |
2013-09-24T18:27:07Z
|
2021-09-08T21:01:12Z
|
2013-09-24T18:31:13Z
|
MEMBER
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1619/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1619/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1619.diff",
"html_url": "https://github.com/psf/requests/pull/1619",
"merged_at": "2013-09-24T18:31:13Z",
"patch_url": "https://github.com/psf/requests/pull/1619.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1619"
}
| true |
[] |
|
https://api.github.com/repos/psf/requests/issues/1618
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1618/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1618/comments
|
https://api.github.com/repos/psf/requests/issues/1618/events
|
https://github.com/psf/requests/issues/1618
| 19,965,833 |
MDU6SXNzdWUxOTk2NTgzMw==
| 1,618 |
Encountering off-host redirects using allow_redirects=True doesn't handle authentication sanely
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/132824?v=4",
"events_url": "https://api.github.com/users/smokey42/events{/privacy}",
"followers_url": "https://api.github.com/users/smokey42/followers",
"following_url": "https://api.github.com/users/smokey42/following{/other_user}",
"gists_url": "https://api.github.com/users/smokey42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smokey42",
"id": 132824,
"login": "smokey42",
"node_id": "MDQ6VXNlcjEzMjgyNA==",
"organizations_url": "https://api.github.com/users/smokey42/orgs",
"received_events_url": "https://api.github.com/users/smokey42/received_events",
"repos_url": "https://api.github.com/users/smokey42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smokey42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smokey42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smokey42",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 5 |
2013-09-24T09:24:12Z
|
2021-09-09T01:22:22Z
|
2013-09-24T17:50:21Z
|
NONE
|
resolved
|
Problem:
A host which requires authentication redirects off-hostname. Using `allow_redirects=True` will then send the current authentication headers to the redirect target. This is almost certainly the wrong behaviour.
Possible solutions:
- Simple: Just reset the authentication when encountering an off-hostname redirect.
- Complex: Introduce a dict of authentication objects keyed on hostname. May break or introduce new API functionality though.
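The simple option could be sketched roughly like this (stdlib only; `should_strip_auth` is a hypothetical helper name, not the actual requests implementation):

```python
from urllib.parse import urlparse

def should_strip_auth(old_url, new_url):
    """Return True when a redirect leaves the original host; in that
    case the Authorization header should not be forwarded."""
    return urlparse(old_url).hostname != urlparse(new_url).hostname

# Credentials survive a same-host redirect but not an off-host one.
print(should_strip_auth("http://a.example.com/x", "http://a.example.com/y"))   # → False
print(should_strip_auth("http://a.example.com/x", "http://evil.example.net/y"))  # → True
```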
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1618/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1618/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nI seem to recall we discussed this in the past, and decided that what we do here is a user-level issue. Some users may absolutely want their authentication headers to redirect off-host. Does that sound right to you @kennethreitz?\n",
"Ah, here we go. Original discussion on #975, relevant bits start [here](https://github.com/kennethreitz/requests/issues/975#issuecomment-12616989). Never mind Kenneth, you can ignore this. =)\n",
"Yeah I was of a similar opinion @smokey42 but I agree with the decision in the issue linked by @Lukasa. It allows for the most degrees of freedom.\n",
"Fair enough, yet to quote PEP 20: \"Explicit is better than implicit.\"\n\nIt was undocumented, unintuitive behavior and also took me a while to figure out what was going wrong. I had to enable urllib3 debugging to see the request headers.\n\nI don't have any objections this being the default behavior, but it should be either documented or settable. Both would make it explicit.\n\nThanks.\n",
"Adding docs is easy and if you'd like you can send a PR with that update to make sure it gets done.\n"
] |
https://api.github.com/repos/psf/requests/issues/1617
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1617/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1617/comments
|
https://api.github.com/repos/psf/requests/issues/1617/events
|
https://github.com/psf/requests/issues/1617
| 19,952,471 |
MDU6SXNzdWUxOTk1MjQ3MQ==
| 1,617 |
Inconsistent ConnectionError Behavior
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1452980?v=4",
"events_url": "https://api.github.com/users/laruellef/events{/privacy}",
"followers_url": "https://api.github.com/users/laruellef/followers",
"following_url": "https://api.github.com/users/laruellef/following{/other_user}",
"gists_url": "https://api.github.com/users/laruellef/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/laruellef",
"id": 1452980,
"login": "laruellef",
"node_id": "MDQ6VXNlcjE0NTI5ODA=",
"organizations_url": "https://api.github.com/users/laruellef/orgs",
"received_events_url": "https://api.github.com/users/laruellef/received_events",
"repos_url": "https://api.github.com/users/laruellef/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/laruellef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laruellef/subscriptions",
"type": "User",
"url": "https://api.github.com/users/laruellef",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 10 |
2013-09-24T01:52:34Z
|
2015-03-10T08:12:42Z
|
2013-09-24T02:45:49Z
|
NONE
| null |
Looking for guidance to solve the following problem:
On dev env / MacOSX, the following:
``` python
url = "http://insurance-guide.netfirms.com"
response = requests.get(url, timeout=90, verify=False, stream=True)
```
results in:
``` python
ConnectionError: HTTPConnectionPool(host='insurance-guide.netfirms.com', port=80): Max retries exceeded with url: / (Caused by <class 'socket.gaierror'>: [Errno 8] nodename nor servname provided, or not known)
```
on prod env / Centos 6, the exact same code never times out and just stays stuck forever...
both env are python 2.7.5 and requests 1.2.3
what could be the cause of the diff behavior on prod vs dev?
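For reference, the `timeout` parameter bounds individual socket operations (the connect and each blocking read), not the total request time, so a server that accepts the connection but then stays silent can hang for up to `timeout` seconds on every read. A stdlib-only sketch of that read-timeout behaviour, using a hypothetical local "silent" server:

```python
import socket

# A local server that accepts but never sends anything, mimicking the
# stuck remote host described above.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.create_connection(server.getsockname())
conn, _ = server.accept()  # accept the connection but stay silent

client.settimeout(0.1)  # analogous to requests' per-read timeout
try:
    client.recv(1)
    timed_out = False
except socket.timeout:
    timed_out = True

print(timed_out)  # → True: each read is bounded, not the whole request

for s in (client, conn, server):
    s.close()
```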
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1617/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1617/timeline
| null |
completed
| null | null | false |
[
"It should be noted that the `ConnectionError` is raised without the parameters you specify:\n\n``` python\nresponse = requests.get('http://insurance-guide.netfirms.com')\n```\n\nproduces this exception as reliably as your example does. Oddly enough, I get a different reason for the exception though:\n\n```\nrequests.exceptions.ConnectionError: HTTPConnectionPool(host='insurance-guide.netfirms.com', port=80): Max retries exceeded with url: / (Caused by <class 'socket.gaierror'>: [Errno -3] Temporary failure in name resolution)\n[Errno -3] Temporary failure in name resolution)\n```\n\nI pulled the relevant bit into the second line. \n\nWhy this doesn't happen on Centos is an excellent question. We don't currently test on any Centos (or similar) systems so we can't really say. Were I you, I would set the timeout considerably lower. Try less than 1 second (say 0.1 or 0.01).\n\nAll of that advice aside, this seems more like you're looking for the answer to a question that I doubt Kenneth, Cory or I could answer without getting our hands on a Centos system. You might have better luck on [StackOverflow](http://stackoverflow.com/questions/tagged/python-requests). I would use both the Centos and Requests tags. I can also promote it from there if it doesn't receive enough attention.\n",
"One thought that just came to me, however, is that I'm having name resolution issues with that URL. On the Centos system (presuming you have shell access to it), what is the result of\n\n```\ndig netfirms.com\n```\n",
"dig result is consistent across both systems:\non centos:\n; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> netfirms.com\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46225\n;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0\n\n;; QUESTION SECTION:\n;netfirms.com. IN A\n\n;; ANSWER SECTION:\nnetfirms.com. 2756 IN A 65.254.227.16\n\n;; Query time: 10 msec\n;; SERVER: 8.8.8.8#53(8.8.8.8)\n;; WHEN: Tue Sep 24 02:26:25 2013\n;; MSG SIZE rcvd: 46\n\non macosx:\n; <<>> DiG 9.6-ESV-R4-P3 <<>> netfirms.com\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2977\n;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0\n\n;; QUESTION SECTION:\n;netfirms.com. IN A\n\n;; ANSWER SECTION:\nnetfirms.com. 3600 IN A 65.254.227.16\n\n;; Query time: 142 msec\n;; SERVER: 192.168.1.1#53(192.168.1.1)\n;; WHEN: Mon Sep 23 19:26:13 2013\n;; MSG SIZE rcvd: 46\n\nFred~\n\nOn Mon, Sep 23, 2013 at 7:14 PM, Ian Cordasco [email protected]:\n\n> One thought that just came to me, however, is that I'm having name\n> resolution issues with that URL. On the Centos system (presuming you have\n> shell access to it), what is the result of\n> \n> dig netfirms.com\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1617#issuecomment-24970370\n> .\n",
"tried with lower timeout:\non centos:\nurl = \"http://insurance-guide.netfirms.com\"\nresponse = requests.get(url, timeout=0.1, verify=False, stream=True)\n\n> > requests.exceptions.Timeout: HTTPConnectionPool(host='\n> > insurance-guide.netfirms.com', port=80): Request timed out. (timeout=0.1)\n\non macosx:\nsame code\n\n> > requests.exceptions.ConnectionError: HTTPConnectionPool(host='\n> > insurance-guide.netfirms.com', port=80): Max retries exceeded with url: /\n> > (Caused by <class 'socket.gaierror'>: [Errno 8] nodename nor servname\n> > provided, or not known)\n\nThe bottomline tho, is that I need this call to timeout properly on centos,\nwhy there is a diff is not that interesting to me... ;-)\nand I need the timeout to stay high, ie 90, coz I have sites that are\nsometimes backed up and I want to wait that long...\n\nis this related to?\nhttps://github.com/kennethreitz/requests/issues/1577\n\nFred~\n\nOn Mon, Sep 23, 2013 at 7:12 PM, Ian Cordasco [email protected]:\n\n> It should be noted that the ConnectionError is raised without the\n> parameters you specify:\n> \n> response = requests.get('http://insurance-guide.netfirms.com')\n> \n> produces this exception as reliably as your example does. Oddly enough, I\n> get a different reason for the exception though:\n> \n> requests.exceptions.ConnectionError: HTTPConnectionPool(host='insurance-guide.netfirms.com', port=80): Max retries exceeded with url: / (Caused by <class 'socket.gaierror'>: [Errno -3] Temporary failure in name resolution)\n> [Errno -3] Temporary failure in name resolution)\n> \n> I pulled the relevant bit into the second line.\n> \n> Why this doesn't happen on Centos is an excellent question. We don't\n> currently test on any Centos (or similar) systems so we can't really say.\n> Were I you, I would set the timeout considerably lower. 
Try less than 1\n> second (say 0.1 or 0.01).\n> \n> All of that advice aside, this seems more like you're looking for the\n> answer to a question that I doubt Kenneth, Cory or I could answer without\n> getting our hands on a Centos system. You might have better luck on\n> StackOverflow http://stackoverflow.com/questions/tagged/python-requests.\n> I would use both the Centos and Requests tags. I can also promote it from\n> there if it doesn't receive enough attention.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1617#issuecomment-24970344\n> .\n",
"@laruellef yeah I was just about to go looking for that issue. It almost certainly seems related to #1577. We're still waiting on the urllib3 PR that @Lukasa alluded to in that issue so I'm going to close this while we wait for that to work itself out so we can provide that interface to you.\n\nThat said, if you run into timeout issues again, and setting it lower doesn't resolve your issue, then open an issue. Otherwise it might be safe to assume it's related to #1577.\n",
"oh, but wait,\nthe proposed work around is not working here... :-(\nie\nhttps://github.com/kennethreitz/requests/issues/1577#issuecomment-23794576\n\nFred~\n\nOn Mon, Sep 23, 2013 at 7:46 PM, Ian Cordasco [email protected]:\n\n> @laruellef https://github.com/laruellef yeah I was just about to go\n> looking for that issue. It almost certainly seems related to #1577https://github.com/kennethreitz/requests/issues/1577.\n> We're still waiting on the urllib3 PR that @Lukasahttps://github.com/Lukasaalluded to in that issue so I'm going to close this while we wait for that\n> to work itself out so we can provide that interface to you.\n> \n> That said, if you run into timeout issues again, and setting it lower\n> doesn't resolve your issue, then open an issue. Otherwise it might be safe\n> to assume it's related to #1577https://github.com/kennethreitz/requests/issues/1577\n> .\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1617#issuecomment-24971338\n> .\n",
"I never said it would. One solution does not fit all related problems.\n\nIn this case, I expect that the connection is being made but the server is not sending any data. Until it starts, I think even with `stream=True` requests will not return. You're expecting `stream=True` to make requests behave asynchronously when in order to do so you need erequests or grequests or requests-futures.\n",
"Ok, so what is the work around for this then?\nThe timeout param should really be universal and more robust, no?\nOr add a param as suggested in other case,\nBut please capture all cases...\n\nSent from my iPad\n\nOn Sep 23, 2013, at 8:13 PM, Ian Cordasco [email protected] wrote:\n\n> I never said it would. One solution does not fit all related problems.\n> \n> In this case, I expect that the connection is being made but the server is not sending any data. Until it starts, I think even with stream=True requests will not return. You're expecting stream=True to make requests behave asynchronously when in order to do so you need erequests or grequests or requests-futures.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"Hum, we have prod processes that are currently on hold because of his problem,\nIs there any someone who could provide a work around?\nAre there other good choices to replace requests?, coz this timeout issue is becoming a real problem for us... :-(\n\nPer http://docs.python-requests.org/en/latest/\nrequests \"makes your integration with web services seamless\", \nthe number of work arounds we have to put in place to handle the lack of a robust timeout is not so seamless ... :-(\n\nSent from my iPad\n\nOn Sep 23, 2013, at 8:13 PM, Ian Cordasco [email protected] wrote:\n\n> I never said it would. One solution does not fit all related problems.\n> \n> In this case, I expect that the connection is being made but the server is not sending any data. Until it starts, I think even with stream=True requests will not return. You're expecting stream=True to make requests behave asynchronously when in order to do so you need erequests or grequests or requests-futures.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n",
"The urllib3 timeout parameter has been implemented. I'm waiting on Kenneth to provide an idea of what kind of API he wants before I go ahead and plumb some part of it through.\n"
] |
https://api.github.com/repos/psf/requests/issues/1616
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1616/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1616/comments
|
https://api.github.com/repos/psf/requests/issues/1616/events
|
https://github.com/psf/requests/issues/1616
| 19,857,212 |
MDU6SXNzdWUxOTg1NzIxMg==
| 1,616 |
Incorrect key types in request headers (python3)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/165325?v=4",
"events_url": "https://api.github.com/users/abn/events{/privacy}",
"followers_url": "https://api.github.com/users/abn/followers",
"following_url": "https://api.github.com/users/abn/following{/other_user}",
"gists_url": "https://api.github.com/users/abn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abn",
"id": 165325,
"login": "abn",
"node_id": "MDQ6VXNlcjE2NTMyNQ==",
"organizations_url": "https://api.github.com/users/abn/orgs",
"received_events_url": "https://api.github.com/users/abn/received_events",
"repos_url": "https://api.github.com/users/abn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abn",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-09-21T14:21:04Z
|
2021-09-09T01:22:25Z
|
2013-09-21T14:31:31Z
|
NONE
|
resolved
|
Request header keys are being encoded and then checked for later. In python 3 this is causing issues.
In an example use case, if `Content-Type` is explicitly set, when making a post request, you end up getting both the user-set type and the default type, eg: `application/x-www-form-urlencoded`.
https://github.com/kennethreitz/requests/blob/master/requests/models.py#L364
``` py
headers = dict((name.encode('ascii'), value) for name, value in headers.items())
```
https://github.com/kennethreitz/requests/blob/master/requests/models.py#L417
``` py
# Add content-type if it wasn't explicitly provided.
if (content_type) and (not 'content-type' in self.headers):
self.headers['Content-Type'] = content_type
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1616/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1616/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nThis is a well known bug. We've had a fix for this for a very long time (more than 7 months), which has been scheduled for our next major release (2.0). This is planned to be our next release. In the meantime, the current `2.0` branch includes that change and many others. =)\n\nAdditionally, in future can you please do a quick GitHub issue search for your issue? If you had, you'd have found the 5 or 6 duplicates of this issue that have been raised in the last two months as well as the 7-month-old pull request that fixed it. =)\n",
"@Lukasa my searches were bad! Thanks for the quick response.\n"
] |
https://api.github.com/repos/psf/requests/issues/1615
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1615/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1615/comments
|
https://api.github.com/repos/psf/requests/issues/1615/events
|
https://github.com/psf/requests/issues/1615
| 19,856,265 |
MDU6SXNzdWUxOTg1NjI2NQ==
| 1,615 |
ConnectionError on timeout=0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1592927?v=4",
"events_url": "https://api.github.com/users/Damgaard/events{/privacy}",
"followers_url": "https://api.github.com/users/Damgaard/followers",
"following_url": "https://api.github.com/users/Damgaard/following{/other_user}",
"gists_url": "https://api.github.com/users/Damgaard/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Damgaard",
"id": 1592927,
"login": "Damgaard",
"node_id": "MDQ6VXNlcjE1OTI5Mjc=",
"organizations_url": "https://api.github.com/users/Damgaard/orgs",
"received_events_url": "https://api.github.com/users/Damgaard/received_events",
"repos_url": "https://api.github.com/users/Damgaard/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Damgaard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damgaard/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Damgaard",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 8 |
2013-09-21T13:02:18Z
|
2021-09-09T01:22:23Z
|
2013-09-24T17:49:47Z
|
CONTRIBUTOR
|
resolved
|
I get the following exception when doing a request with `timeout=0`.
``` bash
>>> requests.get('http://github.com', timeout=0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 327, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='github.com', port=80): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 115] Operation now in progress)
```
I don't see any mention of this in the issues. Shouldn't the expected behavior of this be equivalent to `timeout=None`?
I'm running `requests` version 1.2.3 and python 2.7.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1615/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1615/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nI don't think I agree with your assessment of what the 'expected behaviour' of this should be. You've set a timeout of zero seconds. That implies that the connection process should time out after zero seconds, e.g. immediately.\n\nI'm a bit mixed here. On the one hand, we're doing exactly what the code asked for. On the other hand, I'd argue that falsy values for timeout should be treated the same as `None`. @sigmavirus24, thoughts?\n",
"If it times out immediately, then it should raise an `Timeout` exception. Which I'd also be fine with. But it should behave either as `None` or a very low number.\n",
"Unfortunately the problem here is in `httplib`. In `httplib`, `timeout=0` actually causes a call on the socket after the `connect()` call to throw EINPROGRESS, which is not strictly a timeout error. Still, I think we should say that `timeout=0` should be synonymous with `timeout=None`.\n",
"@Lukasa they are thoroughly distinct values in this context. I would be okay with seeing `timeout=0` and immediately raising a `Timeout` exception before doing anything. Since we don't return a response (or request) this won't affect the API.\n\nWhile I don't think we're doing anything wrong by passing this along and catching (and raising) the right exception, I think @Damgaard is justified in his confusion.\n\nRegardless, I'm not entirely convinced this justifies a change. Perhaps the documentation could be improved, but if we do change this it will be a breaking API change and will have to be put off until 2.0. There are probably plenty of people properly handling this and changing the exception raised would break their code.\n",
"Mm, maybe specialcasing `timeout=0` is the right thing to do. A bit weirder if we plumb through `urllib3`'s new timeout API though.\n",
"In all candor I'm of the opinion that we already special case too many things in requests. A fair number of them are untested and that means that they can regress at any time.\n",
"Related shazow/urllib3/pull/246\n",
"Good spot @piotr-dobrogost, that should solve our problems nicely.\n"
] |
https://api.github.com/repos/psf/requests/issues/1614
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1614/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1614/comments
|
https://api.github.com/repos/psf/requests/issues/1614/events
|
https://github.com/psf/requests/pull/1614
| 19,856,069 |
MDExOlB1bGxSZXF1ZXN0ODUwNDExNQ==
| 1,614 |
Making requests PEP 302 compatible, which makes it zip_safe for .egg distribution
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/735015?v=4",
"events_url": "https://api.github.com/users/rocktavious/events{/privacy}",
"followers_url": "https://api.github.com/users/rocktavious/followers",
"following_url": "https://api.github.com/users/rocktavious/following{/other_user}",
"gists_url": "https://api.github.com/users/rocktavious/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rocktavious",
"id": 735015,
"login": "rocktavious",
"node_id": "MDQ6VXNlcjczNTAxNQ==",
"organizations_url": "https://api.github.com/users/rocktavious/orgs",
"received_events_url": "https://api.github.com/users/rocktavious/received_events",
"repos_url": "https://api.github.com/users/rocktavious/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rocktavious/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rocktavious/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rocktavious",
"user_view_type": "public"
}
|
[
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
},
{
"color": "e11d21",
"default": false,
"description": null,
"id": 44501305,
"name": "Not Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTMwNQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 8 |
2013-09-21T12:41:51Z
|
2021-09-08T23:06:28Z
|
2013-09-24T17:40:52Z
|
NONE
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1614/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1614/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1614.diff",
"html_url": "https://github.com/psf/requests/pull/1614",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1614.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1614"
}
| true |
[
"Thanks for this!\n\nI don't know how much Kenneth will want/not want this. However, in its current form this PR can't be accepted. The reason is that our `requirements.txt` file actually defines our _testing_ requirements. For actual use, Requests has _no_ external dependencies. This is important to us, which means that adding setuptools as an external dependency is something we're very strongly resistant to.\n",
"Ahh, ok well since PEP 302 is accepted and part of the standards track, i believe the functionality of pkg_resources will eventually become part of the standard library.\n\nI personally try to use all my external modules as .egg because its much easier to version control a bunch of .egg files instead of a bunch of loose files that get .pyc files mixed among them when you try to upgrade to new versions of packages but only want the source checked into your version control and not the .pyc files.\n",
"I'm perfectly happy to do this in principle, but I have a fuzzy memory of Kenneth not liking `zip_safe`.\n",
"Ok, well i'll get this pull request into a good place with Travis CI, so if you guys decide to add this support it should be easily merged.\n",
"> I personally try to use all my external modules as .egg because its much easier to version control a bunch of .egg files instead of a bunch of loose files that get .pyc files mixed among them when you try to upgrade to new versions of packages but only want the source checked into your version control and not the .pyc files.\n\nSo there are a few issues with this:\n- If you're vendoring requests, it shouldn't be working just yet. (It will work in 2.0)\n- New versions of packages shouldn't be hard to pull from either GitHub or PyPI.\n- If your version control system does not know how to ignore files based on a pattern match (e.g., `.pyc`) then you're using an awful VCS.\n- Perhaps since there's exactly one different line in the `setup.py` file (and that rarely changes in the first place) you might be able to automate a process that looks like:\n - Download package from PyPI (or GitHub)\n - Replace existing setup.py with your setup.py\n - Run `python setup.py bdist_egg` (or whichever the command is, I sincerely forgot)\n - Use the `.egg` generated by that instead.\n- Finally, `zip_safe`'s status (PEP or not) is not a deciding factor for us as maintainers (or author in Kenneth's case). We don't follow PEP-008 or PEP-257 so it is reasonable to expect that we might not bother with PEP-302 either.\n\nI appreciate the fact that you put the effort forth and for all I know Kenneth may change his mind w/r/t using `zip_safe` but your argument about `pyc` files being checked into your VCS is fundamentally flawed in my opinion.\n",
"Honestly, I'm afraid to change this because 'cacert.pem' works perfectly for everyone now.\n",
"Closing for now, but please keep the comments coming. I just need to be convinced that this worth the risk :)\n",
"Well after reading more about the history of PEP 302, really it seems no one uses external package data as intended by the PEP, in which cause they basically say you shouldn't be part of a .egg in you have external data file dependancies. There appears to be a builtin module called pkgutil(created for PEP 302) which has a method get_data() which returns the binary data from the requested data file within the .egg. Obviously the way cacert.pem is used by being passed along to other methods and modules not as binary data but as a filepath, as with most other external data files like that.\n\nSo setuptools gets around this by using the resource_filename method which extracts the requested data file to a temp location, and returns the filepath to that temp location. Obviously this leaves around unwanted temp files(there is a cleanup method that can be hooked to atexit, but that seems dirty and error prone)\n\nIt would be super nice if we could get alot of python packages to work inside of .egg(much like .war in the java world) but at this point in time it seems to be way more hassle then its worth.\n\nI have looked at a number of other python modules to see what it would take to make then more PEP 302 compliant but it appears to be way more hassle then its worth. Access to data file dependencies just doesn't work inside of the zipped .egg format.\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1613
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1613/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1613/comments
|
https://api.github.com/repos/psf/requests/issues/1613/events
|
https://github.com/psf/requests/pull/1613
| 19,832,562 |
MDExOlB1bGxSZXF1ZXN0ODQ5MTgzMQ==
| 1,613 |
Update urllib3 to b4391c1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/69920?v=4",
"events_url": "https://api.github.com/users/JshWright/events{/privacy}",
"followers_url": "https://api.github.com/users/JshWright/followers",
"following_url": "https://api.github.com/users/JshWright/following{/other_user}",
"gists_url": "https://api.github.com/users/JshWright/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JshWright",
"id": 69920,
"login": "JshWright",
"node_id": "MDQ6VXNlcjY5OTIw",
"organizations_url": "https://api.github.com/users/JshWright/orgs",
"received_events_url": "https://api.github.com/users/JshWright/received_events",
"repos_url": "https://api.github.com/users/JshWright/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JshWright/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JshWright/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JshWright",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-20T18:18:14Z
|
2021-09-08T23:06:28Z
|
2013-09-20T20:05:17Z
|
NONE
|
resolved
|
urllib3 recently added support for https proxies (https://github.com/shazow/urllib3/pull/170). This has been a blocking issue for our use of the requests library.
All tests pass, and no backwards incompatible changes are noted in the urllib3 docs.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1613/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1613/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1613.diff",
"html_url": "https://github.com/psf/requests/pull/1613",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1613.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1613"
}
| true |
[
"Thanks for raising this!\n\nThese changes are actually already available in the `2.0` branch. We're just as excited to have them merged as you are! The 2.0 release is intended to be our next release, but if you can't wait until then you should feel free to use Requests as it stands in the `2.0` branch. We've got a set of fairly up-to-date release notes for `2.0` in #1541.\n"
] |
https://api.github.com/repos/psf/requests/issues/1612
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1612/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1612/comments
|
https://api.github.com/repos/psf/requests/issues/1612/events
|
https://github.com/psf/requests/pull/1612
| 19,823,340 |
MDExOlB1bGxSZXF1ZXN0ODQ4NjQ0Nw==
| 1,612 |
Multi-part file encoding requires types that allow buffer interface
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2776090?v=4",
"events_url": "https://api.github.com/users/sanmayaj/events{/privacy}",
"followers_url": "https://api.github.com/users/sanmayaj/followers",
"following_url": "https://api.github.com/users/sanmayaj/following{/other_user}",
"gists_url": "https://api.github.com/users/sanmayaj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanmayaj",
"id": 2776090,
"login": "sanmayaj",
"node_id": "MDQ6VXNlcjI3NzYwOTA=",
"organizations_url": "https://api.github.com/users/sanmayaj/orgs",
"received_events_url": "https://api.github.com/users/sanmayaj/received_events",
"repos_url": "https://api.github.com/users/sanmayaj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanmayaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanmayaj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanmayaj",
"user_view_type": "public"
}
|
[
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
},
{
"color": "eb6420",
"default": false,
"description": null,
"id": 44501256,
"name": "Breaking API Change",
"node_id": "MDU6TGFiZWw0NDUwMTI1Ng==",
"url": "https://api.github.com/repos/psf/requests/labels/Breaking%20API%20Change"
},
{
"color": "e11d21",
"default": false,
"description": null,
"id": 44501305,
"name": "Not Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTMwNQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 3 |
2013-09-20T15:19:51Z
|
2021-09-08T23:06:19Z
|
2013-09-24T17:44:28Z
|
NONE
|
resolved
|
"TypeError: 'long' does not have the buffer interface" fixed for multi-part file encoding.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1612/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1612/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1612.diff",
"html_url": "https://github.com/psf/requests/pull/1612",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1612.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1612"
}
| true |
[
"Your tests are failing on Python 3. Could you fix them please?\n",
"@sanmayaj do I understand correctly that you expect requests to string-ify everything you send in? Why can you not do this yourself on your inputs?\n\n> Be liberal in what you receive and conservative in what you send.\n\nYou should be assuming that everything is a string (or byte) that leaves your computer via requests. This includes headers too. Personally I'm -1 on this. Up to @kennethreitz and @Lukasa though\n",
"Sorry, I should have spotted this, but we fixed it already! Merged into the 2.0 branch as PR #1537. @sanmayaj Can you try the 2.0 branch and confirm that it works fine for your use case?\n"
] |
https://api.github.com/repos/psf/requests/issues/1611
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1611/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1611/comments
|
https://api.github.com/repos/psf/requests/issues/1611/events
|
https://github.com/psf/requests/issues/1611
| 19,760,284 |
MDU6SXNzdWUxOTc2MDI4NA==
| 1,611 |
Specifying the source_address parameter for HTTP and HTTPS connections.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/668627?v=4",
"events_url": "https://api.github.com/users/standeck/events{/privacy}",
"followers_url": "https://api.github.com/users/standeck/followers",
"following_url": "https://api.github.com/users/standeck/following{/other_user}",
"gists_url": "https://api.github.com/users/standeck/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/standeck",
"id": 668627,
"login": "standeck",
"node_id": "MDQ6VXNlcjY2ODYyNw==",
"organizations_url": "https://api.github.com/users/standeck/orgs",
"received_events_url": "https://api.github.com/users/standeck/received_events",
"repos_url": "https://api.github.com/users/standeck/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/standeck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/standeck/subscriptions",
"type": "User",
"url": "https://api.github.com/users/standeck",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 5 |
2013-09-19T15:05:08Z
|
2021-09-09T01:22:25Z
|
2013-09-20T21:00:19Z
|
NONE
|
resolved
|
Add the possibility to specify the source_address parameter.
HTTPConnection and HTTPSConnection have supported this parameter since Python 2.7 and Python 3.2.
This is necessary when one server has multiple IPs.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/668627?v=4",
"events_url": "https://api.github.com/users/standeck/events{/privacy}",
"followers_url": "https://api.github.com/users/standeck/followers",
"following_url": "https://api.github.com/users/standeck/following{/other_user}",
"gists_url": "https://api.github.com/users/standeck/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/standeck",
"id": 668627,
"login": "standeck",
"node_id": "MDQ6VXNlcjY2ODYyNw==",
"organizations_url": "https://api.github.com/users/standeck/orgs",
"received_events_url": "https://api.github.com/users/standeck/received_events",
"repos_url": "https://api.github.com/users/standeck/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/standeck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/standeck/subscriptions",
"type": "User",
"url": "https://api.github.com/users/standeck",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1611/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1611/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nThere's no particular interest in providing this parameter on the functional API at this time. By and large we expect that this behaviour will be controlled via [Transport Adapters](http://docs.python-requests.org/en/latest/user/advanced/#transport-adapters). For an example of how Transport Adapters are implemented, you can see [this example](https://lukasa.co.uk/2012/12/Writing_A_Transport_Adapter/).\n",
"Thanks for answer.\n\nAdd this functional is not difficult, small code writing and not break current API. Instead, writing own Transport Adapter is huge effort for minimal changes in current mechanism. I think this is same case when it should be realize inside the request.\nAfter all, it's only require add one parameter which pass through inside to [HTTPConnection](https://github.com/kennethreitz/requests/blob/master/requests/packages/urllib3/connectionpool.py#L206) and [HTTPSConnection](https://github.com/kennethreitz/requests/blob/master/requests/packages/urllib3/connectionpool.py#L557) and this is just enough.\n",
"Oh, I guess I understood all complexity. Yes, this is not that trivial than I suggested before.\n",
"The question isn't one of triviality. Instead, it's about API cleanliness and separation of concerns. Our primary functional API confines itself to talking about HTTP, and HTTP alone. The notion of 'source address' is a connection-level issue, and so really belongs at the transport-adapter layer.\n",
"Thanks again, it issue has become clear for me now.\n"
] |
https://api.github.com/repos/psf/requests/issues/1610
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1610/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1610/comments
|
https://api.github.com/repos/psf/requests/issues/1610/events
|
https://github.com/psf/requests/issues/1610
| 19,755,904 |
MDU6SXNzdWUxOTc1NTkwNA==
| 1,610 |
Add copyright header to each source files to pass Debian's licensecheck.pl script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/75011?v=4",
"events_url": "https://api.github.com/users/maruel/events{/privacy}",
"followers_url": "https://api.github.com/users/maruel/followers",
"following_url": "https://api.github.com/users/maruel/following{/other_user}",
"gists_url": "https://api.github.com/users/maruel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maruel",
"id": 75011,
"login": "maruel",
"node_id": "MDQ6VXNlcjc1MDEx",
"organizations_url": "https://api.github.com/users/maruel/orgs",
"received_events_url": "https://api.github.com/users/maruel/received_events",
"repos_url": "https://api.github.com/users/maruel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maruel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maruel",
"user_view_type": "public"
}
|
[
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
}
] |
closed
| true |
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
] | null | 3 |
2013-09-19T13:55:23Z
|
2021-09-09T01:22:23Z
|
2013-09-24T17:51:02Z
|
NONE
|
resolved
|
It is a side effect of using requests in the Chromium project.
We have over 100 third party libraries in the codebase. While we audit all of their licenses, it's also helpful for Open Source Distributions like Debian and others to at least semi-automatically see what licenses apply to our 100+ libraries. For a team of volunteers this is a pretty big task, and having license headers in each file is helpful. Note that it doesn't need to be a full text of the license. For example in Chromium project we use this:
```
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
```
Still, it's preferred to use a full header if possible to help avoid any ambiguity. When people copy your library around, and bundle it as part of other third party project (the nesting can be 1 or even 2 levels deep), finding the right scope of a LICENSE file can be tricky, especially if it's missing (fortunately not the case here, but in general nothing is guaranteed). And then is it LICENSE, LICENSE.txt, COPYING, COPYING.txt and so on - it's trivial for people, but becomes increasingly non-trivial to automate.
Also see the following for recommendations (not all directly related to BSD, but still):
http://producingoss.com/en/license-quickstart.html
"The standard way to do this is to put the full license text in a file called COPYING (or LICENSE) included with the source code, and then put a short notice in a comment at the top of each source file, naming the copyright date, holder, and license, and saying where to find the full text of the license."
http://www.gnu.org/licenses/gpl-howto.html
"This statement should go near the beginning of every source file, close to the copyright notices."
http://www.mozilla.org/MPL/2.0/FAQ.html
"To apply the Mozilla Public License to software that you have written, add the header from Exhibit A of the license to each source code file in your project."
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1610/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1610/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this!\n\nYeah, this is something I'm aware of. I recall briefly discussing with Kenneth when he switched licenses to Apache, which includes a similar recommendation. The way I remember it, I'm pretty strongly opposed to putting the full license in each file, and the only person more opposed to it than me is Kenneth. =D\n\nWith that said, I'm definitely more open to using a shorter license-type string as you've suggested.\n\nThis is fundamentally Kenneth's call though. I consider this low priority for his attention, so I'm marking it as Needs BDFL Attention and leaving it open. =)\n",
"Frankly I've always found these annoying. They only really make sense when you have one source file and there's no real need to have a LICENSE file.\n",
"@maruel if you send a pull request, i'll merge it :)\n"
] |
https://api.github.com/repos/psf/requests/issues/1609
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1609/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1609/comments
|
https://api.github.com/repos/psf/requests/issues/1609/events
|
https://github.com/psf/requests/issues/1609
| 19,724,453 |
MDU6SXNzdWUxOTcyNDQ1Mw==
| 1,609 |
status_code wrong
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33916?v=4",
"events_url": "https://api.github.com/users/chadgh/events{/privacy}",
"followers_url": "https://api.github.com/users/chadgh/followers",
"following_url": "https://api.github.com/users/chadgh/following{/other_user}",
"gists_url": "https://api.github.com/users/chadgh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chadgh",
"id": 33916,
"login": "chadgh",
"node_id": "MDQ6VXNlcjMzOTE2",
"organizations_url": "https://api.github.com/users/chadgh/orgs",
"received_events_url": "https://api.github.com/users/chadgh/received_events",
"repos_url": "https://api.github.com/users/chadgh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chadgh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chadgh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chadgh",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 7 |
2013-09-19T02:57:59Z
|
2021-09-09T01:22:26Z
|
2013-09-19T14:01:23Z
|
NONE
|
resolved
|
```
In [1]: import requests
In [2]: requests.__version__
Out[2]: '1.2.3'
In [3]: requests.get('http://mock.issues.com/404').status_code
Out[3]: 200
In [4]: requests.get('http://mock.issues.com/500').status_code
Out[4]: 200
```
Checked with other tools, mock.issues.com seems to be sending the correct HTTP status codes.
Am I doing something wrong, or is this broken?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33916?v=4",
"events_url": "https://api.github.com/users/chadgh/events{/privacy}",
"followers_url": "https://api.github.com/users/chadgh/followers",
"following_url": "https://api.github.com/users/chadgh/following{/other_user}",
"gists_url": "https://api.github.com/users/chadgh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chadgh",
"id": 33916,
"login": "chadgh",
"node_id": "MDQ6VXNlcjMzOTE2",
"organizations_url": "https://api.github.com/users/chadgh/orgs",
"received_events_url": "https://api.github.com/users/chadgh/received_events",
"repos_url": "https://api.github.com/users/chadgh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chadgh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chadgh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chadgh",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1609/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1609/timeline
| null |
completed
| null | null | false |
[
"@chadgh `mock.issues.com` is down, whereas `issues.com` isn't. Can you compare the results with `httpie` and `curl`. `http -h http://mock.issues.com/500`, `curl -i http://mock.issues.com/500`\n",
"Is `mock.issues.com` down, or nonexistent? I can't resolve the domain at all.\n",
"In fact @chadgh, if your ISP is anything like mine they'll find they can't resolve the DNS and then secretly point you to their own servers for advertising purposes. That would cause the behaviour you're seeing.\n",
"Also, `issues.com` appears to be being domain squatted. I'm reasonably convinced `mock.issues.com` does not exist as a real domain.\n",
"Fellows,\n\nI'm sorry to have wasted your time. The url I was trying to use is mock.isssues.com looks like I messed the extra 's'. For future reference, it is a great site to use to test out making web service calls. I guess just don't forget the extra 's'.\n\nSorry again.\n",
"BTW, requests works great with the actual url. http://mock.isssues.com/500 gives a 500 error as expected.\n\nthanks \n",
"Not a problem, I'm glad you got it sorted out!\n"
] |
https://api.github.com/repos/psf/requests/issues/1608
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1608/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1608/comments
|
https://api.github.com/repos/psf/requests/issues/1608/events
|
https://github.com/psf/requests/issues/1608
| 19,613,705 |
MDU6SXNzdWUxOTYxMzcwNQ==
| 1,608 |
"Content-Type header: application/x-www-form-urlencoded" won't be overridden
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4493404?v=4",
"events_url": "https://api.github.com/users/sebastien-bratieres/events{/privacy}",
"followers_url": "https://api.github.com/users/sebastien-bratieres/followers",
"following_url": "https://api.github.com/users/sebastien-bratieres/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastien-bratieres/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sebastien-bratieres",
"id": 4493404,
"login": "sebastien-bratieres",
"node_id": "MDQ6VXNlcjQ0OTM0MDQ=",
"organizations_url": "https://api.github.com/users/sebastien-bratieres/orgs",
"received_events_url": "https://api.github.com/users/sebastien-bratieres/received_events",
"repos_url": "https://api.github.com/users/sebastien-bratieres/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sebastien-bratieres/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastien-bratieres/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sebastien-bratieres",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 10 |
2013-09-17T13:22:27Z
|
2021-09-09T01:22:26Z
|
2013-09-18T13:16:50Z
|
NONE
|
resolved
|
Simple test under Python 3.3.2, using the current 2.0 branch of requests (same behaviour with requests-1.2.3 btw); inspecting request headers with requestb.in.
The Content-Type header contains two entries. I expected my stipulated header to override (replace) application/x-www-form-urlencoded.
```
import requests

data = open('C:/my_audio_files/sample_50.wav', 'rb').read()
response = requests.post(url="http://requestb.in/183gv9g1",
                         data=data,
                         headers={'Content-Type': 'application/octet-stream'})
```

The headers captured by requestb.in:

```
FORM/POST PARAMETERS
None

HEADERS
Content-Length: 38206
User-Agent: python-requests/1.2.3 CPython/3.3.2 Windows/8
Connection: close
Host: requestb.in
Content-Type: application/octet-stream, application/x-www-form-urlencoded
Accept: */*
Accept-Encoding: identity, gzip, deflate, compress

RAW BODY
RIFF6WAVEfmt etc...
```
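One way to check behaviour like this without a capture service is to prepare the request locally (nothing is sent, so the URL is only a placeholder) and inspect the headers requests would put on the wire. This is a sketch, not part of the original report:

```python
import requests

# Build the request but do not send it, then inspect the prepared headers.
req = requests.Request(
    'POST',
    'http://requestb.in/183gv9g1',            # placeholder; never contacted
    data=b'RIFF...WAVEfmt ',                  # stand-in for the .wav file's bytes
    headers={'Content-Type': 'application/octet-stream'},
)
prepared = req.prepare()
print(prepared.headers['Content-Type'])       # fixed versions show only 'application/octet-stream'
```

On a version with the bug fixed, the explicit header is the only Content-Type value present.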
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4493404?v=4",
"events_url": "https://api.github.com/users/sebastien-bratieres/events{/privacy}",
"followers_url": "https://api.github.com/users/sebastien-bratieres/followers",
"following_url": "https://api.github.com/users/sebastien-bratieres/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastien-bratieres/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sebastien-bratieres",
"id": 4493404,
"login": "sebastien-bratieres",
"node_id": "MDQ6VXNlcjQ0OTM0MDQ=",
"organizations_url": "https://api.github.com/users/sebastien-bratieres/orgs",
"received_events_url": "https://api.github.com/users/sebastien-bratieres/received_events",
"repos_url": "https://api.github.com/users/sebastien-bratieres/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sebastien-bratieres/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastien-bratieres/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sebastien-bratieres",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1608/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1608/timeline
| null |
completed
| null | null | false |
[
"This is a known issue that was [just opened and closed](https://github.com/kennethreitz/requests/issues/1607). Please search in both closed and open issues next time before submitting a bug report.\n",
"This is not the same issue. @sebastien-bratieres claims that he observed this using the 2.0 branch of Requests. If that's the case, we still have a problem. =)\n",
"However, I don't observe this problem on the 2.0 branch:\n\n``` python\n>>> import requests\n>>> import json\n>>> r = requests.post('http://httpbin.org/post', data=json.dumps({'hi': 'there'}), headers={'Content-Type': 'application/json'})\n>>> r.status_code\n200\n>>> print(r.text)\n{\n \"url\": \"http://httpbin.org/post\",\n \"form\": {},\n \"data\": \"{\\\"hi\\\": \\\"there\\\"}\",\n \"files\": {},\n \"origin\": \"192.91.191.29\",\n \"json\": {\n \"hi\": \"there\"\n },\n \"headers\": {\n \"User-Agent\": \"python-requests/1.2.3 CPython/3.3.0 Windows/7\",\n \"Accept-Encoding\": \"gzip, deflate, compress\",\n \"Accept\": \"*/*\",\n \"Content-Length\": \"15\",\n \"Connection\": \"close\",\n \"Content-Type\": \"application/json\",\n \"Host\": \"httpbin.org\"\n },\n \"args\": {}\n}\n```\n\n@sebastien-bratieres Can you attempt the exact same test I just did?\n\nNB: Wireshark also shows correct behaviour.\n",
"Actually, this is particularly bizarre, as `application/x-www-form-urlencoded` shouldn't be applied to this _at all_. The data is a bytestring, so we should not apply a default content-type header.\n",
"Here are tests and their results on my system:\n\n------------ your original test\nimport requests\nimport json\nr = requests.post('http://httpbin.org/post', data=json.dumps({'hi': 'there'}), headers={'Content-Type': 'application/json'})\nprint(r.status_code)\nprint(r.text)\n\n200\n{\n \"headers\": {\n \"Accept\": \"_/_\",\n \"User-Agent\": \"python-requests/1.2.3 CPython/3.3.2 Windows/8\",\n \"Accept-Encoding\": \"identity, gzip, deflate, compress\",\n \"Connection\": \"close\",\n \"Content-Type\": \"application/json\",\n \"Content-Length\": \"15\",\n \"Host\": \"httpbin.org\"\n },\n \"args\": {},\n \"files\": {},\n \"data\": \"{\\\"hi\\\": \\\"there\\\"}\",\n \"url\": \"http://httpbin.org/post\",\n \"form\": {},\n \"json\": {\n \"hi\": \"there\"\n },\n \"origin\": \"62.68.194.170\"\n}\n------------ slight variant, works fine\nimport requests\ndata=json.dumps({'hi': 'there'})\nresponse = requests.post(url = \"http://httpbin.org/post\",data=data,headers={'Content-Type': 'application/octet-stream'})\nprint(response.status_code)\nprint(response.text)\n\n200\n{\n \"data\": \"{\\\"hi\\\": \\\"there\\\"}\",\n \"json\": {\n \"hi\": \"there\"\n },\n \"url\": \"http://httpbin.org/post\",\n \"files\": {},\n \"args\": {},\n \"origin\": \"62.68.194.170\",\n \"form\": {},\n \"headers\": {\n \"Accept\": \"_/_\",\n \"Accept-Encoding\": \"identity, gzip, deflate, compress\",\n \"Content-Type\": \"application/octet-stream\",\n \"User-Agent\": \"python-requests/1.2.3 CPython/3.3.2 Windows/8\",\n \"Host\": \"httpbin.org\",\n \"Connection\": \"close\",\n \"Content-Length\": \"15\"\n }\n}\n\n------------ use a binary data file instead of json content\nimport requests\ndata = open('C:/text.txt', 'rb').read()\n#data=json.dumps({'hi': 'there'})\nresponse = requests.post(url = \"http://httpbin.org/post\",data=data,headers={'Content-Type': 'application/octet-stream'})\nprint(response.status_code)\nprint(response.text)\n\n200\n{\n \"headers\": {\n \"Accept\": \"_/_\",\n \"User-Agent\": 
\"python-requests/1.2.3 CPython/3.3.2 Windows/8\",\n \"Accept-Encoding\": \"identity, gzip, deflate, compress\",\n \"Connection\": \"close\",\n \"Content-Type\": \"application/x-www-form-urlencoded, application/octet-stream\",\n \"Content-Length\": \"9\",\n \"Host\": \"httpbin.org\"\n },\n \"args\": {},\n \"files\": {},\n \"data\": \"hi there!\",\n \"url\": \"http://httpbin.org/post\",\n \"form\": {},\n \"json\": null,\n \"origin\": \"62.68.194.170\"\n}\n",
"@sebastien-bratieres Sorry, did you run that using the 2.0 branch? Because when I run the equivalent test using the 2.0 branch, I continue not to get duplicated headers.\n",
"To the best of my knowledge, yes... but doubting is in order as I'm just\nstarting to use git.\nTo make sure, I have removed my previous version of requests, obtained an\nerror on import, downloaded the github zip of branch 2.0 (\nhttps://github.com/kennethreitz/requests/archive/2.0.zip ), ran setup.py\nfrom there, and ran my tests again, with the same result. The install\ndirectory is\nc:\\winpython-64bit-3.3.2.3\\python-3.3.2.amd64\\lib\\site-packages\\requests-1.2.3-py3.3.egg\n\nI also ran this:\n\nimport requests\nprint(sys.version)\nprint(requests.**file**)\n\n3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:06:53) [MSC v.1600 64 bit\n(AMD64)]\nC:\\WinPython-64bit-3.3.2.3\\python-3.3.2.amd64\\lib\\site-packages\\requests-1.2.3-py3.3.egg\\requests__init__.py\n\nAnything to do with the urllib version used in Python 3.3.2 ?\n\nI'm not a confirmed Python user, so if there's any extra test I could run,\nplease let me know.\n\nHTH\nSebastien\n\n2013/9/18 Cory Benfield [email protected]\n\n> @sebastien-bratieres https://github.com/sebastien-bratieres Sorry, did\n> you run that using the 2.0 branch? Because when I run the equivalent test\n> using the 2.0 branch, I continue not to get duplicated headers.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1608#issuecomment-24657611\n> .\n",
"Yeah, that zip file is wildly out of date. It simply doesn't appear to have anything we've merged into that 2.0 branch in it. Very odd.\n\nTry grabbing the git branch directly. If you have pip installed, you can use: `pip install https://github.com/facebook/python-sdk/zipball/master`. If not, clone the git repository and then check out the 2.0 branch before running `setup.py`. Let me know if you need help with that. =)\n",
"Yes, doing the proper git clone, obtaining branch 2.0, now rids me of the duplicate headers.\n\nApologies for the confusion and thanks for the help (now I can forget urllib and start using requests again !).\nSebastien\n",
"Not a problem at all, glad to be of help! Enjoy using Requests! :+1: \n"
] |
https://api.github.com/repos/psf/requests/issues/1607
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1607/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1607/comments
|
https://api.github.com/repos/psf/requests/issues/1607/events
|
https://github.com/psf/requests/issues/1607
| 19,604,816 |
MDU6SXNzdWUxOTYwNDgxNg==
| 1,607 |
CaseInsensitiveDict problem with python3 result in duplicated Content-Type/Length headers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/118673?v=4",
"events_url": "https://api.github.com/users/gawel/events{/privacy}",
"followers_url": "https://api.github.com/users/gawel/followers",
"following_url": "https://api.github.com/users/gawel/following{/other_user}",
"gists_url": "https://api.github.com/users/gawel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gawel",
"id": 118673,
"login": "gawel",
"node_id": "MDQ6VXNlcjExODY3Mw==",
"organizations_url": "https://api.github.com/users/gawel/orgs",
"received_events_url": "https://api.github.com/users/gawel/received_events",
"repos_url": "https://api.github.com/users/gawel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gawel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gawel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gawel",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-17T09:50:38Z
|
2021-09-09T01:22:26Z
|
2013-09-17T10:02:57Z
|
NONE
|
resolved
|
Headers are stored in a CaseInsensitiveDict, but this class accepts both bytes and str as keys in Python 3. This results in duplicated headers when you want to provide Content-Type/Content-Length yourself.
When using Python 3, requests converts header keys to bytes, then sets some headers with str keys here https://github.com/kennethreitz/requests/blob/master/requests/models.py#L416 and here https://github.com/kennethreitz/requests/blob/master/requests/models.py#L422
As a result, you can't provide those headers yourself. Requests should always check whether the header exists first (with both bytes and str keys),
or at least replace the existing key (bytes, not str).
Example:
```
>>> from requests.structures import CaseInsensitiveDict
>>> d = CaseInsensitiveDict()
>>> d[b'Content-Type'] = 'text/html'
>>> 'Content-Type' in d
False
>>> d['Content-Type'] = 'application/json'
>>> d
CaseInsensitiveDict({b'Content-Type': 'text/html', 'Content-Type': 'application/json'})
>>>
```
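A minimal sketch of the kind of key normalization that avoids the collision (this is an illustration, not the actual requests implementation, which was fixed differently in #1338):

```python
class NormalizedCaseInsensitiveDict(dict):
    """Sketch: decode bytes keys and lowercase everything, so
    b'Content-Type' and 'content-type' map to the same entry."""

    @staticmethod
    def _norm(key):
        if isinstance(key, bytes):
            key = key.decode('ascii')
        return key.lower()

    def __setitem__(self, key, value):
        super().__setitem__(self._norm(key), value)

    def __getitem__(self, key):
        return super().__getitem__(self._norm(key))

    def __contains__(self, key):
        return super().__contains__(self._norm(key))

d = NormalizedCaseInsensitiveDict()
d[b'Content-Type'] = 'text/html'
print('Content-Type' in d)        # True: bytes and str keys now collide
d['content-type'] = 'application/json'
print(d['Content-Type'])          # 'application/json': the entry was replaced
```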
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1607/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1607/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nWe've known about this bug for some time, and had a fix ready for a long time (see #1338). This fix is a breaking API change, so we wanted to hold off on making it. However, it has been merged into the upcoming 2.0 release. 2.0 is intended to be the next release, so this should be fixed in the wild fairly soon. =)\n"
] |
https://api.github.com/repos/psf/requests/issues/1606
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1606/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1606/comments
|
https://api.github.com/repos/psf/requests/issues/1606/events
|
https://github.com/psf/requests/pull/1606
| 19,597,727 |
MDExOlB1bGxSZXF1ZXN0ODM3NTc0Mw==
| 1,606 |
expose urllib3's assert_hostname argument (v1 for 2.0)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2726112?v=4",
"events_url": "https://api.github.com/users/borfig/events{/privacy}",
"followers_url": "https://api.github.com/users/borfig/followers",
"following_url": "https://api.github.com/users/borfig/following{/other_user}",
"gists_url": "https://api.github.com/users/borfig/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/borfig",
"id": 2726112,
"login": "borfig",
"node_id": "MDQ6VXNlcjI3MjYxMTI=",
"organizations_url": "https://api.github.com/users/borfig/orgs",
"received_events_url": "https://api.github.com/users/borfig/received_events",
"repos_url": "https://api.github.com/users/borfig/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/borfig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borfig/subscriptions",
"type": "User",
"url": "https://api.github.com/users/borfig",
"user_view_type": "public"
}
|
[
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
}
] |
closed
| false | null |
[] | null | 7 |
2013-09-17T06:42:27Z
|
2014-07-05T07:26:25Z
|
2013-09-24T17:44:02Z
|
NONE
| null |
This allows the user to control SSL CommonName verification.
For example,
``` python
requests.get('https://github.com/', assert_hostname='foo')
```
This will fail with a `requests.exceptions.SSLError`.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1606/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1606/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1606.diff",
"html_url": "https://github.com/psf/requests/pull/1606",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1606.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1606"
}
| true |
[
"Hi @borfig! Thanks for doing this work!\n\nI have no idea if this'll get accepted or not. We're very reluctant to add new things to the Requests functional API at this stage, so this boils down to whether or not @kennethreitz believes this is a feature worth exposing at that level. I simply don't know what he'll decide. =)\n\nRegardless of what Kenneth decides, the patch itself appears to be in good shape, so if he wants the feature this patch is a good solid implementation of it.\n",
"Hi, you are welcome :)\n\nI have submitted the relevant urllib3 patch (shazow/urllib3@dfd9e0dc7ecbe4bb7abe723b5fd0e01688096d02) 3 months ago, but I still prefer to use requests.\n\nThe use-case of this feature is for testing purposes:\nthis allows us (@ravello) to test exact replicas of our production HTTPS service, without creating and signing certificates for each replica.\n",
"If this gets accepted, we could also add support for `assert_fingerprint`\n",
"And @t-8ch's suggestion is exactly why I'm kind of -1 on this. I would love extra conveniences for accessing some urllib3's extra features but I don't want us caught in parameter hell.\n\nWould it make sense to consolidate a bunch of these options into a dictionary? \n",
"I'm equally reluctant to having a dictionary of 'super magic urllib3 parameters'. If we're going to support something let's support it. Otherwise, it's still a Transport Adapter problem.\n",
"Once upon a time a `Verification`-object for urllib3 has been proposed, encapsulating all those options.\nUnfortunately I don't have time for the next two weeks, else I'd love to implement this.\nIf this doesn't have to be sooner and nobody beats me to it I'll take a shot then.\n",
"this should only exist in the transport adapter, if at all\n"
] |
https://api.github.com/repos/psf/requests/issues/1605
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1605/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1605/comments
|
https://api.github.com/repos/psf/requests/issues/1605/events
|
https://github.com/psf/requests/pull/1605
| 19,593,933 |
MDExOlB1bGxSZXF1ZXN0ODM3Mzg0OA==
| 1,605 |
expose urllib3's assert_hostname argument
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2726112?v=4",
"events_url": "https://api.github.com/users/borfig/events{/privacy}",
"followers_url": "https://api.github.com/users/borfig/followers",
"following_url": "https://api.github.com/users/borfig/following{/other_user}",
"gists_url": "https://api.github.com/users/borfig/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/borfig",
"id": 2726112,
"login": "borfig",
"node_id": "MDQ6VXNlcjI3MjYxMTI=",
"organizations_url": "https://api.github.com/users/borfig/orgs",
"received_events_url": "https://api.github.com/users/borfig/received_events",
"repos_url": "https://api.github.com/users/borfig/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/borfig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borfig/subscriptions",
"type": "User",
"url": "https://api.github.com/users/borfig",
"user_view_type": "public"
}
|
[] |
closed
| false | null |
[] | null | 4 |
2013-09-17T03:52:47Z
|
2014-07-20T01:32:50Z
|
2013-09-17T04:05:26Z
|
NONE
| null |
urllib3 has the assert_hostname argument to control verification of the Common Name of the server's SSL certificate
(shazow/urllib3@2493c9b3c0ce90b82209b93cad84296244e2763a), and this was also merged to requests (@2ed976ea7147a9d0c18998e02b16d691b6798a3e).
Therefore I would like to let the user use it.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2726112?v=4",
"events_url": "https://api.github.com/users/borfig/events{/privacy}",
"followers_url": "https://api.github.com/users/borfig/followers",
"following_url": "https://api.github.com/users/borfig/following{/other_user}",
"gists_url": "https://api.github.com/users/borfig/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/borfig",
"id": 2726112,
"login": "borfig",
"node_id": "MDQ6VXNlcjI3MjYxMTI=",
"organizations_url": "https://api.github.com/users/borfig/orgs",
"received_events_url": "https://api.github.com/users/borfig/received_events",
"repos_url": "https://api.github.com/users/borfig/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/borfig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borfig/subscriptions",
"type": "User",
"url": "https://api.github.com/users/borfig",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1605/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1605/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1605.diff",
"html_url": "https://github.com/psf/requests/pull/1605",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1605.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1605"
}
| true |
[
"1. We have been asking that all Pull Requests be pointed at the requests 2.0 branch\n2. This is a major API change and as such should only be considered for 2.0. I'm not adverse to exposing this API, but I'm also not convinced this is necessary. I don't think we expose everything `urllib3` can do and so I don't see us as being obligated to expose this.\n3. Regardless, @borfig if I were you I would close this, and open a new PR that instead chooses kennethreitz:2.0 as the base instead of kennethreitz:master.\n",
"You'll also want to rebase your commit onto the 2.0 branch.\n",
"ok. no problem.\n(when will 2.0 be released?)\n",
"By the way, in your README.rst you still ask to PR to master...\n"
] |
https://api.github.com/repos/psf/requests/issues/1604
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1604/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1604/comments
|
https://api.github.com/repos/psf/requests/issues/1604/events
|
https://github.com/psf/requests/issues/1604
| 19,523,766 |
MDU6SXNzdWUxOTUyMzc2Ng==
| 1,604 |
Response.text returns improperly decoded text (requests 1.2.3, python 2.7)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/129239?v=4",
"events_url": "https://api.github.com/users/lavr/events{/privacy}",
"followers_url": "https://api.github.com/users/lavr/followers",
"following_url": "https://api.github.com/users/lavr/following{/other_user}",
"gists_url": "https://api.github.com/users/lavr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lavr",
"id": 129239,
"login": "lavr",
"node_id": "MDQ6VXNlcjEyOTIzOQ==",
"organizations_url": "https://api.github.com/users/lavr/orgs",
"received_events_url": "https://api.github.com/users/lavr/received_events",
"repos_url": "https://api.github.com/users/lavr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lavr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lavr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lavr",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 24 |
2013-09-15T18:15:15Z
|
2018-12-26T15:46:15Z
|
2013-09-15T18:21:21Z
|
NONE
|
resolved
|
If an HTTP server returns Content-type: text/\* without an encoding, Response.text always decodes it as 'ISO-8859-1' text.
It may be valid per RFC 2616 section 3.7.1, but this is wrong in real life in 2013.
I made an example page with Chinese text:
http://lavr.github.io/python-emails/tests/requests/some-utf8-text.html
All browsers render this page properly.
But requests.get returns invalid text.
And here is a simple test with that URL:
https://gist.github.com/lavr/6572927
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1604/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1604/timeline
| null |
completed
| null | null | false |
[
"Thanks for this @lavr!\n\nThis is a deliberate design decision for Requests. We're following the spec unless we find ourselves in a position where the specification diverges so wildly from real world behaviour that it becomes a problem (e.g. GET after 302 response to a POST).\n\nIf the upstream server knows what the correct encoding is, it should signal it. Otherwise, we're going to follow what the spec says. =)\n\nIf you think the spec default is a bad one, I highly encourage you to get involved with the RFC process for HTTP/2.0 in order to get this default changed. =)\n",
"What @Lukasa said + the fact that if the encoding retrieved from the headers is non-existent we rely on [charade](https://github.com/sigmavirus24/charade) to guess at the encoding. With so few characters, charade will not return anything definitive because it uses statistical data to **guess** at what the right encoding is.\n\nFrankly, the year makes no difference and does not change specification either.\n\nIf you know what encoding you're expecting you can also do the decoding yourself like so:\n\n``` python\ntext = str(r.content, '<ENCODING>', errors='replace')\n```\n\nThere is nothing wrong with requests as far as I'm concerned and this is not a bug in charade either. Since @Lukasa seems to agree with me, I'm closing this.\n",
"@lavr (/cc @sigmavirus24), even easier than that, you can simply provide the encoding yourself.\n\n``` pycon\n>>> r = requests.get('http://irresponsible-server/')\n>>> r.encoding = 'utf-8'\n```\n\nThen, proceed normally.\n",
"@kennethreitz that's disappointing. Why are we making that easy for people? =P\n",
"Absolutely :)\n",
"Mostly for Japanese websites. They all lie about their encoding.\n",
"@sigmavirus24\nplease note, that utils.get_encoding_from_headers always returns 'ISO-8859-1', and charade has no chance to be called. \nso bug is: we expect that charade is used to guess encoding, but it is not. \n",
"A patch above fixes a bug, but still follows RFC.\nPlease, consider to review it.\n",
"@lavr Sorry, we didn't make this very clear. We do _not_ expect charade to be called in this case. The RFC is very clear: if you don't specify a charset, and the MIME type is `text/*`, the encoding must be assumed to be ISO-8859-1. That means \"don't guess\". =)\n",
"@lavr: just set `r.encoding` to `None`, and it'll work as you expect (I think). \n",
"Or do `r.encoding = r.apparent_encoding`. \n",
"Even better.\n",
"On `r.encoding = None` and `r.encoding = r.apparent_encoding` we lost server charset information. \nTotally ignoring server header is not good solution, I think.\n\nRight solution is something like this:\n\n``` python\nr = requests.get(...)\nparams = cgi.parse_header(r.headers.get('content-type'))[0]\nserver_encoding = ('charset' in params) and params['charset'].strip(\"'\\\"\") or None\nr.encoding = server_encoding or r.apparent_encoding\ntext = r.text\n```\n\nLooks weird :(\n",
"Or do this:\n\n``` python\nr = requests.get(...)\n\nif r.encoding is None or r.encoding == 'ISO-8859-1':\n r.encoding = r.apparent_encoding\n```\n",
"I don't think so :)\n\nCondition `r.encoding is None` has no sense, because r.encoding can never be None for content-type=text/*.\n\n`r.encoding == 'ISO-8859-1'`... what does it mean ? Server sent charset='ISO-8859-1' or server sent no charset? If first, I shouldn't guess charset. \n",
"@lavr I was covering the non-text bases. You can rule out the `charset` possibility by using this condition instead:\n\n``` python\nr.encoding == 'ISO-8859-1' and not 'ISO-8859-1' in r.headers.get('Content-Type', '')\n```\n",
"@Lukasa \nWell, I can use this hack.\nAnd everybody in Eastern Europe and Asia can use it.\n\nBut what if we fix it in requests ? ;)\nWhat if requests can honestly set `enconding=None` on response without charset ?\n",
"As we've discussed many times, Requests is following the HTTP specification to the letter. The current behaviour is not wrong. =)\n",
"The fact that it is not helpful for your use case is a whole other story. =)\n",
"Alright, that's enough discussion on this. Thanks for the feedback.\n",
"Updated HTTP 1.1 obsoletes ISO-8859-1 default charset: http://tools.ietf.org/html/rfc7231#appendix-B\n",
"We're already tracking this in #2086. =)\n",
"To whom it may concern, here is a compatibility patch\r\n\r\ncreate file `requests_patch.py` with following code and import it, then the problem should be solved.\r\n\r\n```\r\n#!/usr/bin/env python\r\n# -*- coding: utf-8 -*-\r\nimport requests\r\nimport chardet\r\n\r\ndef monkey_patch():\r\n prop = requests.models.Response.content\r\n def content(self):\r\n _content = prop.fget(self)\r\n if self.encoding == 'ISO-8859-1':\r\n encodings = requests.utils.get_encodings_from_content(_content)\r\n if encodings:\r\n self.encoding = encodings[0]\r\n else:\r\n self.encoding = chardet.detect(_content)['encoding']\r\n\r\n if self.encoding:\r\n _content = _content.decode(self.encoding, 'replace').encode('utf8', 'replace')\r\n self._content = _content\r\n self.encoding = 'utf8'\r\n\r\n return _content\r\n requests.models.Response.content = property(content)\r\n\r\nmonkey_patch()\r\n\r\n\r\n```",
"> @lavr (/cc @sigmavirus24), even easier than that, you can simply provide the encoding yourself.\r\n> \r\n> ```\r\n> >>> r = requests.get('http://irresponsible-server/')\r\n> >>> r.encoding = 'utf-8'\r\n> ```\r\n> \r\n> Then, proceed normally.\r\n\r\nThanks for this! Any idea on how to do it in one line?\r\n"
] |
https://api.github.com/repos/psf/requests/issues/1603
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1603/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1603/comments
|
https://api.github.com/repos/psf/requests/issues/1603/events
|
https://github.com/psf/requests/issues/1603
| 19,516,949 |
MDU6SXNzdWUxOTUxNjk0OQ==
| 1,603 |
urllib3 was updated to 1.7, and supports connection pooling.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/416141?v=4",
"events_url": "https://api.github.com/users/toontong/events{/privacy}",
"followers_url": "https://api.github.com/users/toontong/followers",
"following_url": "https://api.github.com/users/toontong/following{/other_user}",
"gists_url": "https://api.github.com/users/toontong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/toontong",
"id": 416141,
"login": "toontong",
"node_id": "MDQ6VXNlcjQxNjE0MQ==",
"organizations_url": "https://api.github.com/users/toontong/orgs",
"received_events_url": "https://api.github.com/users/toontong/received_events",
"repos_url": "https://api.github.com/users/toontong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/toontong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/toontong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/toontong",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-09-15T09:20:05Z
|
2021-09-09T01:22:26Z
|
2013-09-15T09:21:21Z
|
NONE
|
resolved
|
urllib3 was updated to 1.7, and supports connection pooling.
You should update the inline urllib3 package in requests (its version was dev) or drop the inlining:
just import it from the Python site-packages, not from requests.packages.
Did you change the inline urllib3 package?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1603/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1603/timeline
| null |
completed
| null | null | false |
[
"Intentional design decision. Thanks, though.\n",
"could you explain why this design was used? I want to fork and change it, but worry about bugs.\n",
"Dependency management is still far from perfect in Python. If we vendor `urllib3` we can control and understand exactly which version is used and we can more easily debug user issues. Relying on pip would result in quite a few headaches for us. Also you should take note that `urllib3` itself vendors all of its dependencies.\n"
] |
https://api.github.com/repos/psf/requests/issues/1602
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1602/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1602/comments
|
https://api.github.com/repos/psf/requests/issues/1602/events
|
https://github.com/psf/requests/pull/1602
| 19,485,361 |
MDExOlB1bGxSZXF1ZXN0ODMyOTU2Ng==
| 1,602 |
Handle multiple qop types
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
},
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
}
] |
closed
| true | null |
[] | null | 3 |
2013-09-14T03:33:08Z
|
2021-09-08T23:06:16Z
|
2013-09-24T17:46:16Z
|
CONTRIBUTOR
|
resolved
|
In Digest Access Authentication there are two possible values (four if you count the not-present and both cases) for authentication. We were narrowly handling one of the four cases. Now we handle two.
Fixes #1601.
As for logic: I reordered the branches because if `qop is None` then we can not call `split` on it. So if we handle that case first, we shouldn't hit any exceptions. :)
I think I'll also be sending a PR to handle the `auth-int` `qop` value for DigestAuth. Is there a reason we never did it?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1602/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1602/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1602.diff",
"html_url": "https://github.com/psf/requests/pull/1602",
"merged_at": "2013-09-24T17:46:16Z",
"patch_url": "https://github.com/psf/requests/pull/1602.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1602"
}
| true |
[
"As a side note: the Travis builds will continue to fail on this branch until someone (:eyes: @kennethreitz) deploys the latest version of httpbin. :wink:\n",
"Presumably because it's not commonly used: note that #1601 is the first issue we've had that mentions it.\n\nThis PR is awesome, as always I'm :+1:. =D\n",
"@Lukasa yeah that makes perfect sense actually (about `auth-int`). It [`auth-int`] looks (at a glance) like it is a variation on the `auth` algorithm. I would be perfectly willing to punt on this but at some point it looks like it was planned to be added: https://github.com/sigmavirus24/requests/commit/22e31b4b737c2a3b61b3ab4fccd534b2eee65a87#L0R126\n\nRegardless #1601 will be fixed by this PR and since we haven't seen many complaints I would guess that `auth-int` isn't as widely used.\n"
] |
https://api.github.com/repos/psf/requests/issues/1601
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1601/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1601/comments
|
https://api.github.com/repos/psf/requests/issues/1601/events
|
https://github.com/psf/requests/issues/1601
| 19,484,747 |
MDU6SXNzdWUxOTQ4NDc0Nw==
| 1,601 |
qop problem on HTTPDigestAuth
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1127463?v=4",
"events_url": "https://api.github.com/users/tarzanjw/events{/privacy}",
"followers_url": "https://api.github.com/users/tarzanjw/followers",
"following_url": "https://api.github.com/users/tarzanjw/following{/other_user}",
"gists_url": "https://api.github.com/users/tarzanjw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tarzanjw",
"id": 1127463,
"login": "tarzanjw",
"node_id": "MDQ6VXNlcjExMjc0NjM=",
"organizations_url": "https://api.github.com/users/tarzanjw/orgs",
"received_events_url": "https://api.github.com/users/tarzanjw/received_events",
"repos_url": "https://api.github.com/users/tarzanjw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tarzanjw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tarzanjw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tarzanjw",
"user_view_type": "public"
}
|
[] |
closed
| true |
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
] | null | 7 |
2013-09-14T02:31:04Z
|
2021-09-09T01:22:23Z
|
2013-09-24T17:46:16Z
|
NONE
|
resolved
|
When the server responds 401 with the qop in the WWW-Authenticate header set to "auth,auth-int", the Requests package sends "None" as the Authorization header. That's wrong.
```
www-authenticate: ...qop=auth,auth-int...
```
The Requests package handles this correctly for:
```
www-authenticate: ...qop=auth...
```
from debuging, file <code>requests/auth.py</code>, line 109:
``` python
if qop == 'auth':
if nonce == self.last_nonce:
self.nonce_count += 1
else:
self.nonce_count = 1
ncvalue = '%08x' % self.nonce_count
s = str(self.nonce_count).encode('utf-8')
s += nonce.encode('utf-8')
s += time.ctime().encode('utf-8')
s += os.urandom(8)
cnonce = (hashlib.sha1(s).hexdigest()[:16])
noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, hash_utf8(A2))
respdig = KD(hash_utf8(A1), noncebit)
elif qop is None:
respdig = KD(hash_utf8(A1), "%s:%s" % (nonce, hash_utf8(A2)))
else:
# XXX handle auth-int.
return None
```
I saw that when you parse the headers for the qop argument, you did not split it into separate options.
Did I misunderstand, or is it something else?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1601/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1601/timeline
| null |
completed
| null | null | false |
[
"Nice catch! Frankly I'm not certain how we should be handling the case with `qop=auth,auth-int` but I know where to look. :)\n\nThanks for reporting this! :+1: If it turns out not to be a bug on our side, I'll let you know.\n",
"It really put me in some trouble when I connected with others, but fortunately I can force the server side to respond with qop=auth instead of qop=auth,auth-int. However, after that I did a check in the documentation, and qop=auth,auth-int is legal.\n\nThank you for the very quick response.\n",
"Right, i was fairly certain it would be permitted. That is not what I'm checking. I want to see what the RFC says we should do and it doesn't seem to specify. It seems that this only means that we can respond with either the `auth` algorithm or the `auth-int` algorithm (which we don't currently handle). The fix for this is actually fairly trivial. I'll send a pull request for it in a couple minutes.\n",
"Yeah, when I wrote my RESTful client in PHP (we use both PHP & Python for our services) I ignored auth-int too. But I think you should give a warning in the log when the server side specifies 'auth-int' only (as you do not handle it now); it will be easier for developers to debug.\n",
"I may take a crack at implementing `auth-init` but not tonight. Tonight I catch up on sleep.\n\nThanks again for reporting this! :cake:\n",
"Thank you very much.\n\nI currently install requests through pip, could you push your new update to PyPI?\n",
"This will be in 2.0. That doesn't have a set release date though it should be out \"soon\".\n"
] |
https://api.github.com/repos/psf/requests/issues/1600
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1600/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1600/comments
|
https://api.github.com/repos/psf/requests/issues/1600/events
|
https://github.com/psf/requests/issues/1600
| 19,466,468 |
MDU6SXNzdWUxOTQ2NjQ2OA==
| 1,600 |
Request hook
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/120501?v=4",
"events_url": "https://api.github.com/users/charlax/events{/privacy}",
"followers_url": "https://api.github.com/users/charlax/followers",
"following_url": "https://api.github.com/users/charlax/following{/other_user}",
"gists_url": "https://api.github.com/users/charlax/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/charlax",
"id": 120501,
"login": "charlax",
"node_id": "MDQ6VXNlcjEyMDUwMQ==",
"organizations_url": "https://api.github.com/users/charlax/orgs",
"received_events_url": "https://api.github.com/users/charlax/received_events",
"repos_url": "https://api.github.com/users/charlax/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/charlax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charlax/subscriptions",
"type": "User",
"url": "https://api.github.com/users/charlax",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 4 |
2013-09-13T17:47:44Z
|
2021-09-09T01:22:27Z
|
2013-09-13T22:28:02Z
|
NONE
|
resolved
|
Hey,
How do you feel about adding a `request` hook?
My use case would be to print the requested URL with its parameters and headers. It's not possible with the `response` hook.
Let me know what you think. If that sounds like a good idea, I'll implement it and send you a PR.
Best,
Chx
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1600/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1600/timeline
| null |
completed
| null | null | false |
[
"We had that and removed it intentionally. If you want to print what is being sent you can print them from the `request` attribute on the `Response` object, e.g.,\n\n``` python\nr = requests.get('https://httpbin.org/get')\nprint r.request.url\nprint r.request.headers\n```\n",
"Nice! Should this be added to the doc?\n",
"Pretty sure it is in the docs that the request object has those properties and that the Response has the request property. Whether it is in (or belongs in) the quick start docs is probably what you mean. I'm pretty sure this is in the advanced section but can't easily context switch on my phone. I can check tomorrow or Sunday (when I get a chance).\n",
"Found it: http://docs.python-requests.org/en/latest/user/advanced/?highlight=request%20object#request-and-response-objects. \n"
] |
https://api.github.com/repos/psf/requests/issues/1599
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1599/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1599/comments
|
https://api.github.com/repos/psf/requests/issues/1599/events
|
https://github.com/psf/requests/issues/1599
| 19,452,708 |
MDU6SXNzdWUxOTQ1MjcwOA==
| 1,599 |
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'error' in adapters.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/496754?v=4",
"events_url": "https://api.github.com/users/pigmej/events{/privacy}",
"followers_url": "https://api.github.com/users/pigmej/followers",
"following_url": "https://api.github.com/users/pigmej/following{/other_user}",
"gists_url": "https://api.github.com/users/pigmej/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pigmej",
"id": 496754,
"login": "pigmej",
"node_id": "MDQ6VXNlcjQ5Njc1NA==",
"organizations_url": "https://api.github.com/users/pigmej/orgs",
"received_events_url": "https://api.github.com/users/pigmej/received_events",
"repos_url": "https://api.github.com/users/pigmej/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pigmej/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pigmej/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pigmej",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 12 |
2013-09-13T14:34:18Z
|
2021-09-08T23:00:54Z
|
2014-02-03T10:56:57Z
|
NONE
|
resolved
|
```
File "/something/lib/python2.7/site-packages/requests/api.py", line 88, in post
File "/something/lib/python2.7/site-packages/requests/api.py", line 44, in request
File "/something/lib/python2.7/site-packages/requests/sessions.py", line 335, in request
File "/something/lib/python2.7/site-packages/requests/sessions.py", line 438, in send
File "/something/lib/python2.7/site-packages/requests/adapters.py", line 323, in send
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'error'
```
Seems to happen very rare.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1599/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1599/timeline
| null |
completed
| null | null | false |
[
"That seems like a weird exception. That line is `except socket.error`. Out of interest, do you ever touch `requests.adapters.socket`?\n",
"I did not.\n\nIf the line says socket.error, then probably somewhere socket name is overriden...\n",
"Yeah, that's my concern. It's worth checking in your application.\n",
"The thing is, I have not a single `socket` word directly in that cli. And it happens time to time.\n",
"So it's always going to be transient, because you'll only hit it when you actually throw an exception. If you want to see what's happening, try replacing the series of `except` blocks around that point with the following:\n\n``` python\nexcept Exception, e:\n print \"Current value of socket: %s\" % socket\n raise\n```\n\nThat should raise the original exception (so you can see what you're actually hitting) and also provide the value of `socket` at that time.\n",
"@Lukasa you don't have to explain to me such obvious things, really ;-)\n\nThe problem is that it happens very rare, when it happened it was Connection Reset probably. Also it happened from daemonized thread (daemon=True). The real problem is that it happened once a milion reqs or so. I wonder if it's not a GC \"problem\" in python or something.\n",
"Heh, sorry. ;) I've found it better to just be excessively helpful all the time, even when it means saying something fairly obvious. =D\n\nCould be a GC problem but it's hard to imagine how that's invalidating the socket module we're holding on to. That statement is just totally benign. I'm not entirely sure what your next good step would be.\n",
"np ;-)\n\n> I'm not entirely sure what your next good step would be.\n\nSo do I, that's why I created the issue right now.\n",
"Garbage collection does not collect modules or module bindings to names to the best of my knowledge. I think reading [this section of the `threading` docs](http://docs.python.org/2/library/threading.html#importing-in-threaded-code) may be worth reading. I have to wonder when socket is being imported (before or after the thread is created and main execution has finished). I would normally expect a `NameError` but if importing causes `socket` to be bound to `None` then that may explain your issue. In other words, I'm tempted to believe this is not a bug in requests but rather a timing issue in your code.\n\nEdit requests/adapters to place `print('This is the value of socket: %s' % socket)` immediately after `import socket`. This will probably provide a clue.\n",
"Closing due to inactivity.\n",
"This issue started happening to me when shutting down a Docker container running Tornado. \n\n```\nGracefully stopping... (press Ctrl+C again to force)\nStopping rethinktags_rethinkdb_1...\nException in thread Thread-1 (most likely raised during interpreter shutdown):\nTraceback (most recent call last):\n File \"/code/build/docker-compose/out00-PYZ.pyz/threading\", line 552, in __bootstrap_inner\n File \"/code/build/docker-compose/out00-PYZ.pyz/threading\", line 505, in run\n File \"/code/build/docker-compose/out00-PYZ.pyz/compose.cli.multiplexer\", line 41, in _enqueue_output\n File \"/code/build/docker-compose/out00-PYZ.pyz/compose.cli.log_printer\", line 62, in _make_log_generator\n File \"/code/build/docker-compose/out00-PYZ.pyz/compose.container\", line 140, in wait\n File \"/code/build/docker-compose/out00-PYZ.pyz/docker.client\", line 918, in wait\n File \"/code/build/docker-compose/out00-PYZ.pyz/docker.client\", line 79, in _post\n File \"/code/build/docker-compose/out00-PYZ.pyz/requests.sessions\", line 425, in post\n File \"/code/build/docker-compose/out00-PYZ.pyz/requests.sessions\", line 383, in request\n File \"/code/build/docker-compose/out00-PYZ.pyz/requests.sessions\", line 486, in send\n File \"/code/build/docker-compose/out00-PYZ.pyz/requests.adapters\", line 374, in send\n<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'error'\n```\n",
"@dalanmiller Can you follow @sigmavirus24's advice from earlier in the thread, please?\n"
] |
https://api.github.com/repos/psf/requests/issues/1598
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1598/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1598/comments
|
https://api.github.com/repos/psf/requests/issues/1598/events
|
https://github.com/psf/requests/pull/1598
| 19,446,742 |
MDExOlB1bGxSZXF1ZXN0ODMxMDU4Nw==
| 1,598 |
[2.0] Python default arguments are hard
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 2 |
2013-09-13T13:13:55Z
|
2021-09-08T23:06:14Z
|
2013-09-13T13:17:41Z
|
MEMBER
|
resolved
|
Turns out we were doing this wrong. Kindly pointed out by @redspider. Bizarrely, we're half doing this right, but just didn't fix the whole thing up. This should. =D
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1598/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1598/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1598.diff",
"html_url": "https://github.com/psf/requests/pull/1598",
"merged_at": "2013-09-13T13:17:41Z",
"patch_url": "https://github.com/psf/requests/pull/1598.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1598"
}
| true |
[
"ouch\n",
":cake:\n"
] |
https://api.github.com/repos/psf/requests/issues/1597
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1597/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1597/comments
|
https://api.github.com/repos/psf/requests/issues/1597/events
|
https://github.com/psf/requests/issues/1597
| 19,432,496 |
MDU6SXNzdWUxOTQzMjQ5Ng==
| 1,597 |
Use of mutable defaults in Request class
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/63004?v=4",
"events_url": "https://api.github.com/users/redspider/events{/privacy}",
"followers_url": "https://api.github.com/users/redspider/followers",
"following_url": "https://api.github.com/users/redspider/following{/other_user}",
"gists_url": "https://api.github.com/users/redspider/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/redspider",
"id": 63004,
"login": "redspider",
"node_id": "MDQ6VXNlcjYzMDA0",
"organizations_url": "https://api.github.com/users/redspider/orgs",
"received_events_url": "https://api.github.com/users/redspider/received_events",
"repos_url": "https://api.github.com/users/redspider/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/redspider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/redspider/subscriptions",
"type": "User",
"url": "https://api.github.com/users/redspider",
"user_view_type": "public"
}
|
[] |
closed
| true |
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
] | null | 2 |
2013-09-13T06:30:20Z
|
2021-09-09T01:22:27Z
|
2013-09-13T13:20:22Z
|
NONE
|
resolved
|
Incorrect use of mutable default dictionary in the Request class can lead to incorrect behaviour in advanced usage.
The mutable defaults are specified at:
https://github.com/kennethreitz/requests/blob/master/requests/models.py#L189
and should be None, not dict(). In order to maintain interface consistency it may be necessary to change the following lines to ensure data is set to a dict() if data is None, otherwise it will change to an empty list in the default case.
A demonstration of the flaw is:
```
>>> import requests
>>> s = requests.Session()
>>> r = requests.Request('GET','http://google.com/')
>>> r.data
{}
>>> r.data['foo'] = 'cat'
>>> r2 = requests.Request('GET','http://google.com/')
>>> r2.data
{'foo': 'cat'}
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1597/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1597/timeline
| null |
completed
| null | null | false |
[
"Heh, I briefly got totally misled by the appearance of `Session()` in your example. =D\n\nYeah, that absolutely looks wrong. We notably do this for everything else except those two parameters. I'll fix this up today. Thanks for raising this issue! :cake:\n",
"Resolved by #1598. Thanks for pointing this out! =D :cookie:\n"
] |
https://api.github.com/repos/psf/requests/issues/1596
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1596/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1596/comments
|
https://api.github.com/repos/psf/requests/issues/1596/events
|
https://github.com/psf/requests/issues/1596
| 19,424,266 |
MDU6SXNzdWUxOTQyNDI2Ng==
| 1,596 |
Automatically detect authentication
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/836426?v=4",
"events_url": "https://api.github.com/users/dpursehouse/events{/privacy}",
"followers_url": "https://api.github.com/users/dpursehouse/followers",
"following_url": "https://api.github.com/users/dpursehouse/following{/other_user}",
"gists_url": "https://api.github.com/users/dpursehouse/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dpursehouse",
"id": 836426,
"login": "dpursehouse",
"node_id": "MDQ6VXNlcjgzNjQyNg==",
"organizations_url": "https://api.github.com/users/dpursehouse/orgs",
"received_events_url": "https://api.github.com/users/dpursehouse/received_events",
"repos_url": "https://api.github.com/users/dpursehouse/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dpursehouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dpursehouse/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dpursehouse",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-09-13T00:52:10Z
|
2021-09-09T01:22:27Z
|
2013-09-13T01:24:01Z
|
CONTRIBUTOR
|
resolved
|
Would it be feasible for requests to check the `WWW-Authenticate` response header and send the appropriate authentication?
I'm trying to build a REST API client for a web service, but found that some of the servers where the service runs require basic auth, while others require digest. I want to, if possible, make it so the user doesn't need to care which one it is.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/836426?v=4",
"events_url": "https://api.github.com/users/dpursehouse/events{/privacy}",
"followers_url": "https://api.github.com/users/dpursehouse/followers",
"following_url": "https://api.github.com/users/dpursehouse/following{/other_user}",
"gists_url": "https://api.github.com/users/dpursehouse/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dpursehouse",
"id": 836426,
"login": "dpursehouse",
"node_id": "MDQ6VXNlcjgzNjQyNg==",
"organizations_url": "https://api.github.com/users/dpursehouse/orgs",
"received_events_url": "https://api.github.com/users/dpursehouse/received_events",
"repos_url": "https://api.github.com/users/dpursehouse/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dpursehouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dpursehouse/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dpursehouse",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1596/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1596/timeline
| null |
completed
| null | null | false |
[
"We do not perform authentication for anyone automatically and there's a difference in how servers and APIs respond. The GitHub API does not normally return the header so it would be useless there. Several other prominent APIs do the same and for good reason. Beyond that you can do this for your user without having requests do it. In fact you have better knowledge of their needs than we do and we already meet our users needs. Is it something useful that might be appreciated by users? Yes. Are those users your typical requests users? Not entirely. Maybe some of them are but certainly not enough for me to like or approve of this idea. I'm :-1: on this personally. \n",
"OK, fair enough. I'll close this and handle it myself.\n"
] |
https://api.github.com/repos/psf/requests/issues/1595
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1595/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1595/comments
|
https://api.github.com/repos/psf/requests/issues/1595/events
|
https://github.com/psf/requests/issues/1595
| 19,414,930 |
MDU6SXNzdWUxOTQxNDkzMA==
| 1,595 |
ujson option
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2837532?v=4",
"events_url": "https://api.github.com/users/woozyking/events{/privacy}",
"followers_url": "https://api.github.com/users/woozyking/followers",
"following_url": "https://api.github.com/users/woozyking/following{/other_user}",
"gists_url": "https://api.github.com/users/woozyking/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/woozyking",
"id": 2837532,
"login": "woozyking",
"node_id": "MDQ6VXNlcjI4Mzc1MzI=",
"organizations_url": "https://api.github.com/users/woozyking/orgs",
"received_events_url": "https://api.github.com/users/woozyking/received_events",
"repos_url": "https://api.github.com/users/woozyking/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/woozyking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woozyking/subscriptions",
"type": "User",
"url": "https://api.github.com/users/woozyking",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 8 |
2013-09-12T21:02:59Z
|
2021-09-05T00:06:48Z
|
2013-09-12T21:26:05Z
|
NONE
|
resolved
|
Hi,
Is it possible to use ujson as an optional JSON encoder/decoder for requests? Or, if ujson satisfies all the use cases of requests, replace simplejson with ujson?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1595/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1595/timeline
| null |
completed
| null | null | false |
[
"You absolutely can. The easiest way to do it is to monkeypatch:\n\n``` python\nimport requests\nimport ujson\n\nrequests.models.json = ujson\n```\n\nThat _should_ cause all further calls to `Response.json()` to use ujson. As that's the only place in Requests that uses simplejson, that should cover everything.\n",
"So by looking at `Response.json()`, I reckon it's safe to use `ujson` as default for `requests`?\n",
"Certainly it will be safe to override that behaviour. For obvious reasons we won't change this in the core library, but I highly encourage you to use `ujson` if it suits your purpose better. =)\n",
"Fair :)\n\nThank you @Lukasa !\n",
"My pleasure, I hope you enjoy using Requests! =D Let us know if you have any problems. :star:\n",
"@Lukasa - would you consider using ujson by default if it is installed?\n\nSomething like:\n\n``` python:\n# in compat.py\ntry:\n import ujson as json\nexcept ImportError:\n try:\n import simplejson as json\n except ImportError:\n import json\n```\n",
"Nope. =)\n\nI see no reason for Requests to favour ujson over any other third-party JSON decoder. We do nothing very complicated with JSON decoding, so replacing the decoder we use either via monkeypatching or via doing the decoding yourself is totally safe. With that in mind, there's no good reason to move away from the standard library in Requests proper.\n",
"For those finding this issue circa-2019, please not that I believe the current way of doing this would instead be:\r\n\r\n```python\r\nimport requests\r\nimport ujson\r\n\r\nrequests.models.complexjson = ujson\r\n```\r\n\r\nI've seen plenty of examples of code on Github that monkey-patches `requests.models.json`, which does not appear to accomplish the stated goal, since that attribute of `models` does not exist and is not used in the modern `requests` code base."
] |
https://api.github.com/repos/psf/requests/issues/1594
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1594/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1594/comments
|
https://api.github.com/repos/psf/requests/issues/1594/events
|
https://github.com/psf/requests/issues/1594
| 19,366,692 |
MDU6SXNzdWUxOTM2NjY5Mg==
| 1,594 |
Plumb through new urllib3 Timeout API
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
},
{
"color": "eb6420",
"default": false,
"description": null,
"id": 44501256,
"name": "Breaking API Change",
"node_id": "MDU6TGFiZWw0NDUwMTI1Ng==",
"url": "https://api.github.com/repos/psf/requests/labels/Breaking%20API%20Change"
}
] |
closed
| true |
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
] | null | 4 |
2013-09-12T07:54:00Z
|
2021-09-04T00:06:22Z
|
2013-09-24T18:12:02Z
|
MEMBER
|
resolved
|
From shazow/urllib3#231.
We may not necessarily want to expose the new API in its direct urllib3 form. In particular, I'm strongly in favour of keeping the current timeout API as the default usage.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1594/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1594/timeline
| null |
completed
| null | null | false |
[
"I did some investigation on this today.\n\nIn principle we've got total freedom to expose any or all of the urllib3 `Timeout` class-based API. In practice, [this section of code](https://github.com/kennethreitz/requests/blob/master/requests/adapters.py#L297-L321) is a bit tricky for two reasons. First, we use `timeout` here as an argument to the queue, which _must_ be a float. Second, we don't in any sense respect timeouts for any other part of this flow.\n\nIf we leave timeouts exactly as they are now, there's no problem here and everything is fine. I don't think that's a good way to handle it though. I'll try and think about how to extend the API to behave better on chunked requests.\n",
"Looking at urllib3, it only exposes `read_timeout` and `connect_timeout`: nothing for writing data. We can limit ourselves to that as well, which makes this easy, or we can add a timeout for chunked uploads. No strong preference here.\n",
"The BDFL handled this, so I'm closing.\n",
    "I have met the same problem, and I am using version 2.10.0, but requests hangs for a long time.\r\n\r\n\r\nThis is my request code, and it may be related to urllib3\r\n"
] |
https://api.github.com/repos/psf/requests/issues/1593
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1593/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1593/comments
|
https://api.github.com/repos/psf/requests/issues/1593/events
|
https://github.com/psf/requests/issues/1593
| 19,365,476 |
MDU6SXNzdWUxOTM2NTQ3Ng==
| 1,593 |
Max retries exceeded exception on stale pooled connections
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-09-12T07:17:14Z
|
2021-09-09T01:22:28Z
|
2013-09-12T08:33:52Z
|
NONE
|
resolved
|
The application code uses the same requests.Session object to download a bunch of files. In rare cases, downloading fails with the following trace (here, there is one successful download and one failure for the same Session object):
<pre>
2013-09-08 22:57:38,553 INFO root: Download 'http://node-testing.sugarlabs.org//packages/OLPC/11.3.1/ghostscript-8.71-16.fc14.i686.rpm' package
2013-09-08 22:57:38,754 DEBUG requests.packages.urllib3.connectionpool: "GET //packages/OLPC/11.3.1/ghostscript-8.71-16.fc14.i686.rpm HTTP/1.1" 200 None
2013-09-08 22:58:23,276 INFO root: Download 'http://node-testing.sugarlabs.org//packages/OLPC/11.3.1/poppler-data-0.4.4-1.fc14.noarch.rpm' package
2013-09-08 22:58:23,442 ERROR root: Failed to install
Traceback (most recent call last):
File "/usr/share/PackageKit/helpers/presolve/presolveBackend.py", line 65, in _install_packages
presolve.install(backend.split_package_id(i)[0])
File "/usr/share/PackageKit/helpers/presolve/presolve.py", line 133, in install
_install_bundles(_load_bundles(package))
File "/usr/share/PackageKit/helpers/presolve/presolve.py", line 291, in _load_bundles
_fetcher.download(url, bundle)
File "/usr/lib/python2.7/site-packages/sugar_network/toolkit/http.py", line 161, in download
reply = self.request('GET', path, allow_redirects=True)
File "/usr/lib/python2.7/site-packages/sugar_network/toolkit/http.py", line 206, in request
headers=headers, params=params, **kwargs)
File "/usr/lib/python2.7/site-packages/sugar_network/toolkit/../lib/requests/requests/sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/site-packages/sugar_network/toolkit/../lib/requests/requests/sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/sugar_network/toolkit/../lib/requests/requests/adapters.py", line 327, in send
raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host='node-testing.sugarlabs.org', port=80): Max retries exceeded with url: //packages/OLPC/11.3.1/poppler-data-0.4.4-1.fc14.noarch.rpm (Caused by <class 'httplib.BadStatusLine'>: '')
</pre>
The server side was not heavily loaded, and running wget for the same URLs could not reproduce the issue. But in the logs, I found that the requests library generates a bunch of entries like
<pre>
Resetting dropped connection..
</pre>
So, the guess was that requests tries to reuse a pooled HTTP connection that has already been closed on the server side. After applying the fix
<pre>
diff --git a/requests/packages/urllib3/connectionpool.py b/requests/packages/urllib3/connectionpool.py
index 4e851f0..c7dfa05 100644
--- a/requests/packages/urllib3/connectionpool.py
+++ b/requests/packages/urllib3/connectionpool.py
@@ -331,7 +331,7 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
def urlopen(self, method, url, body=None, headers=None, retries=3,
redirect=True, assert_same_host=True, timeout=_Default,
- pool_timeout=None, release_conn=None, **response_kw):
+ pool_timeout=None, release_conn=None, force_new=False, **response_kw):
"""
Get a connection from the pool and perform an HTTP request. This is the
lowest level call for making a request, so you'll need to specify all
@@ -420,7 +420,7 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
try:
# Request a connection from the queue
- conn = self._get_conn(timeout=pool_timeout)
+ conn = self._new_conn() if force_new else self._get_conn(timeout=pool_timeout)
# Make the request on the httplib connection object
httplib_response = self._make_request(conn, method, url,
@@ -471,7 +471,10 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
err = e
if retries == 0:
- raise MaxRetryError(self, url, e)
+ if force_new:
+ raise MaxRetryError(self, url, e)
+ force_new = True
+ retries = 1
finally:
if release_conn:
@@ -487,7 +490,7 @@ class HTTPConnectionPool(ConnectionPool, RequestMethods):
return self.urlopen(method, url, body, headers, retries - 1,
redirect, assert_same_host,
timeout=timeout, pool_timeout=pool_timeout,
- release_conn=release_conn, **response_kw)
+ release_conn=release_conn, force_new=force_new, **response_kw)
# Handle redirect?
redirect_location = redirect and response.get_redirect_location()
</pre>
the issue cannot be reproduced any more.
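
Editorial aside: the patch above predates urllib3's configurable `Retry` support. Assuming a modern requests (>= 2.x, with urllib3 installed as a real dependency), a comparable effect — retrying a request that dies on a stale keep-alive connection — can be sketched without patching `connectionpool.py`:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Sketch: allow up to 3 retries so that a request failing on a stale
# pooled connection is transparently retried on a fresh one.
retry = Retry(total=3, connect=3, read=3)
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount("http://", adapter)
session.mount("https://", adapter)
```

Every request made through `session` now carries this retry policy; no network access is needed to configure it.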
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1593/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1593/timeline
| null |
completed
| null | null | false |
[
"Hi @alsroot, thanks for raising this issue!\n\nThat looks like a nasty problem, and thanks so much for investigating it. However, your changes are in urllib3. This is actually a [separate module](https://github.com/shazow/urllib3) that we vendor a copy of into Requests. Any bugs in urllib3 should be fixed over there, and we'll grab the fixes when we prepare a new release. This means you should open this issue on that repository. =) Andrey will be delighted to have it.\n\nThanks so much for investigating this, and I'm looking forward to seeing a fix! :cake:\n",
"Created an upstream ticket, https://github.com/shazow/urllib3/issues/245. So, I guess this one can be closed as a dup.\n",
"Thanks so much @alsroot!\n"
] |
https://api.github.com/repos/psf/requests/issues/1592
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1592/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1592/comments
|
https://api.github.com/repos/psf/requests/issues/1592/events
|
https://github.com/psf/requests/pull/1592
| 19,364,273 |
MDExOlB1bGxSZXF1ZXN0ODI2ODYxNQ==
| 1,592 |
remove extra mention of dangerous `r.raw`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/128982?v=4",
"events_url": "https://api.github.com/users/homm/events{/privacy}",
"followers_url": "https://api.github.com/users/homm/followers",
"following_url": "https://api.github.com/users/homm/following{/other_user}",
"gists_url": "https://api.github.com/users/homm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/homm",
"id": 128982,
"login": "homm",
"node_id": "MDQ6VXNlcjEyODk4Mg==",
"organizations_url": "https://api.github.com/users/homm/orgs",
"received_events_url": "https://api.github.com/users/homm/received_events",
"repos_url": "https://api.github.com/users/homm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/homm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/homm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/homm",
"user_view_type": "public"
}
|
[
{
"color": "e11d21",
"default": false,
"description": null,
"id": 44501305,
"name": "Not Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTMwNQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 8 |
2013-09-12T06:35:39Z
|
2021-09-08T21:01:11Z
|
2013-09-12T09:29:16Z
|
CONTRIBUTOR
|
resolved
|
The difference between `r.raw` and `r.iter_content` is not clearly described in all the places where `r.raw` is mentioned. This is very dangerous, because the difference is not only in the calling format. I propose mentioning `r.raw` only when we are talking about sockets or encoded data.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1592/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1592/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1592.diff",
"html_url": "https://github.com/psf/requests/pull/1592",
"merged_at": "2013-09-12T09:29:16Z",
"patch_url": "https://github.com/psf/requests/pull/1592.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1592"
}
| true |
[
  "@homm Thanks for this! Unfortunately, I don't agree with most of these documentation changes. My notes, in the order the change appears in the diff:\n1. `stream` (or more accurately `prefetch`) was always required for `Response.iter_content`, but not always required for `Response.raw`. The portion of the documentation you're changing is about the changes introduced in the move to 1.x, and so correctly reflects the change in behaviour. This should not be changed.\n2. I'm happy with this one. =)\n3. Whitespace change, always fine.\n4. As (3).\n5. We should not be hiding `Response.raw` from people. This portion of the documentation is clear about what `Response.raw` is, so I see no reason to believe that it's dangerous. =)\n\nIf you strongly disagree with any of those points, please let me know. Otherwise, if you remove changes 1 and 5 I'll happily merge this. =)\n",
  "I understand about the first; I'll bring it back. But I strongly disagree about the fifth. Now it looks like `r.raw.read()` is a common way to get content. As you can guess, I had the bug there. In my case I read compressed data from the raw response and saved it to disk.\n\nWhat do you think about this version?\n",
"I think the new version is much better. Change it to read \"Alternatively, you can read the undecoded body...\" and I'll be delighted. =D\n",
"done\n",
"Beautiful, thanks so much @homm! Would you like to add yourself to the AUTHORS file as well?\n",
"Awesome! Thanks so much for your work, keep it coming! :cake: :cookie: :cocktail:\n",
  "@Lukasa I accidentally used unicode — instead of - in the authors list. Sorry. Fix it please.\n",
"Heh, so you did. Fixed. =)\n"
] |
https://api.github.com/repos/psf/requests/issues/1591
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1591/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1591/comments
|
https://api.github.com/repos/psf/requests/issues/1591/events
|
https://github.com/psf/requests/pull/1591
| 19,324,206 |
MDExOlB1bGxSZXF1ZXN0ODI0NjE5MQ==
| 1,591 |
minor typo: Fix requests spelling
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/500628?v=4",
"events_url": "https://api.github.com/users/sayanchowdhury/events{/privacy}",
"followers_url": "https://api.github.com/users/sayanchowdhury/followers",
"following_url": "https://api.github.com/users/sayanchowdhury/following{/other_user}",
"gists_url": "https://api.github.com/users/sayanchowdhury/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayanchowdhury",
"id": 500628,
"login": "sayanchowdhury",
"node_id": "MDQ6VXNlcjUwMDYyOA==",
"organizations_url": "https://api.github.com/users/sayanchowdhury/orgs",
"received_events_url": "https://api.github.com/users/sayanchowdhury/received_events",
"repos_url": "https://api.github.com/users/sayanchowdhury/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayanchowdhury/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayanchowdhury/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayanchowdhury",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-11T14:53:26Z
|
2021-09-08T21:01:10Z
|
2013-09-11T15:16:29Z
|
CONTRIBUTOR
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1591/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1591/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1591.diff",
"html_url": "https://github.com/psf/requests/pull/1591",
"merged_at": "2013-09-11T15:16:29Z",
"patch_url": "https://github.com/psf/requests/pull/1591.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1591"
}
| true |
[
"This is awesome, thank you so much! :cake:\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1590
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1590/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1590/comments
|
https://api.github.com/repos/psf/requests/issues/1590/events
|
https://github.com/psf/requests/issues/1590
| 19,318,248 |
MDU6SXNzdWUxOTMxODI0OA==
| 1,590 |
Set https proxies and redirect with a wrong location.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5434831?v=4",
"events_url": "https://api.github.com/users/julianzhoucn/events{/privacy}",
"followers_url": "https://api.github.com/users/julianzhoucn/followers",
"following_url": "https://api.github.com/users/julianzhoucn/following{/other_user}",
"gists_url": "https://api.github.com/users/julianzhoucn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julianzhoucn",
"id": 5434831,
"login": "julianzhoucn",
"node_id": "MDQ6VXNlcjU0MzQ4MzE=",
"organizations_url": "https://api.github.com/users/julianzhoucn/orgs",
"received_events_url": "https://api.github.com/users/julianzhoucn/received_events",
"repos_url": "https://api.github.com/users/julianzhoucn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julianzhoucn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julianzhoucn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julianzhoucn",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 6 |
2013-09-11T13:13:06Z
|
2021-09-09T01:22:28Z
|
2013-09-12T17:47:52Z
|
NONE
|
resolved
|
Some of my code is as follows:
```
import requests
# my auth daemon
proxies={'https':'http://10.204.133.76:8080'}
response=requests.get('https://www.redhat.com',proxies=proxies,allow_redirects=False,verify=False)
```
And response.headers is:
```
CaseInsensitiveDict({'content-length': '0', 'set-cookie': '_zk_sc_tc1=1; HttpOnly, _zk_sc_tc2=1; expires=Wed, 18-Sep-2013 11:58:37 GMT; Path=/; Domain=.redhat.com; HttpOnly', 'connection': 'Keep-Alive', 'location': 'https://auto-scanner-133-76.tmicss.com:443/auth?forward=http%3A%2F%2Fwww.redhat.comhttps%3A%2F%2Fwww.redhat.com%2F', 'date': 'Wed, 11 Sep 2013 11:58:37 GMT', 'p3p': 'CP="NOI ADM DEV PSAi COM NAV OUR OTR STP IND DEM"', 'content-type': 'text/html; charset=UTF-8'})
```
And the 'location' in the headers is not the same as the browser behavior. The right 'location' value is 'https://auto-scanner-133-76.tmicss.com:443/auth?forward=https%3A%2F%2Fwww.redhat.com%2F'
Is this an issue of requests?
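
Editorial aside: as the thread below notes, CONNECT-based HTTPS proxy support landed in requests 2.0. With a modern requests (>= 2.10, which exposes the `select_proxy` helper — an assumption, not part of the original report), the proxy-selection half of this setup can be checked without touching the network; the proxy address is the one from the report:

```python
import requests
from requests.utils import select_proxy

# Which proxy would requests pick for an https:// URL?
proxies = {"https": "http://10.204.133.76:8080"}
chosen = select_proxy("https://www.redhat.com/", proxies)
print(chosen)
```

For an `http://` URL the same `proxies` dict yields no proxy, since only the `https` key is set.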
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1590/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1590/timeline
| null |
completed
| null | null | false |
[
"Hmm. It looks like your proxy is attaching that header. Out of interest, can you print the Host headers from both outgoing requests? (In requests, you do that by printing `r.request.headers['Host']`).\n",
  "Hi Lukasa. Thanks for your comments. When I get the headers of the request, I see the following:\nIn[5]:response.request.headers\nOut[5]: CaseInsensitiveDict({'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '_/_', 'User-Agent': 'python-requests/1.2.2 CPython/2.7.2 Linux/2.6.32-220.el6.x86_64'})\nThere is no Host header in the request.\nAny comments would be helpful. Thanks a lot.\n",
"Uh, yes, sorry, I forgot that we set the Host header lower down. You'd need Wireshark/tcpdump to get the correct headers. Are you familiar with tcpdump or Wireshark?\n",
  "I tried it with the help of Wireshark, and found the following:\nimport requests\nurl = \"https://www.redhat.com\"\nproxies = {\"https\": \"http://10.204.133.76:80\"}\nheaders = {'User-Agent': 'Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0'}\nt = requests.get(url, allow_redirects=False, verify=False, proxies=proxies, headers=headers)\nprint t.headers\n\nFrom it, it seems that requests uses 'GET' rather than 'CONNECT' to deal with the https proxy.\n\nAnd when I use the urllib3 module,\nimport urllib3\nproxy = urllib3.ProxyManager('https://10.204.133.76:80')\nr = proxy.request('GET', 'https://www.redhat.com/')\nprint r.getheaders()\n\n\nIt seems right.\nI don't know whether requests supports an https proxy now or not.\n",
"New urllib3 supports https proxy over http/connect. This is available in release 1.7. It would be good if requests' copy of urllib3 was updated too.\n",
"Requests does not support HTTPS proxies in the current 1.2.3 release. The upcoming 2.0 release of Requests will contain the improved proxy support added to urllib3, including the CONNECT verb. If you would like to test, you can check the 2.0 branch of this repository, which already contains that fix (and many others).\n\nIt looks like the 2.0 release will fix your problem!\n"
] |
https://api.github.com/repos/psf/requests/issues/1589
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1589/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1589/comments
|
https://api.github.com/repos/psf/requests/issues/1589/events
|
https://github.com/psf/requests/pull/1589
| 19,280,556 |
MDExOlB1bGxSZXF1ZXN0ODIyMjU0MA==
| 1,589 |
Make sure content is checked when setting encoding
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/300306?v=4",
"events_url": "https://api.github.com/users/jdennis/events{/privacy}",
"followers_url": "https://api.github.com/users/jdennis/followers",
"following_url": "https://api.github.com/users/jdennis/following{/other_user}",
"gists_url": "https://api.github.com/users/jdennis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jdennis",
"id": 300306,
"login": "jdennis",
"node_id": "MDQ6VXNlcjMwMDMwNg==",
"organizations_url": "https://api.github.com/users/jdennis/orgs",
"received_events_url": "https://api.github.com/users/jdennis/received_events",
"repos_url": "https://api.github.com/users/jdennis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jdennis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdennis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jdennis",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 15 |
2013-09-10T19:46:09Z
|
2021-09-08T23:11:07Z
|
2013-09-10T20:52:30Z
|
NONE
|
resolved
|
HTML pages which declared their charset encoding only in their content
had the wrong encoding applied because the content was never checked.
Currently only the request headers are checked for the charset
encoding, if absent the apparent_encoding heuristic is applied. But
the W3.org doc says one should check the header first for a charset
declaration, then if that's absent check the meta tags in the content
for a charset encoding declaration. It also says if no charset
encoding declaration is found one should assume UTF-8, not ISO-8859-1
(a bad recommendation from the early days of the web).
This patch does the following:
- Removes the default ISO-8859-1 from get_encoding_from_headers(),
it's wrong for 2 reasons, 1) get_encoding_from_headers() should
return None if no encoding was found in the header, otherwise how
can you tell it was absent in the headers? 2) It's the wrong default
for the contemporary web. Also because get_encoding_from_headers()
always returned an encoding any subsequent logic failed to execute.
- The Response text property now does a 4-stage check for encoding in
this priority order:
1) encoding declared in the headers
2) encoding declared in the content (selects the highest priority
encoding if more than one encoding is declared in the content)
   3) apply the apparent_encoding heuristic (includes BOM)
4) if none of the above default to UTF-8
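
The 4-stage check described above can be sketched in plain Python. This is an illustration of the proposed logic, not the actual patch; the regexes are simplified stand-ins for requests' header parsing and meta-tag scanning, and the heuristic stage is stubbed out:

```python
import re

def resolve_encoding(content_type, body):
    """Sketch of the proposed 4-stage encoding resolution.

    content_type: the Content-Type header value (or None)
    body: the raw response body as bytes
    """
    # 1) charset declared in the headers
    m = re.search(r"charset=([\w-]+)", content_type or "")
    if m:
        return m.group(1)
    # 2) charset declared in the content (e.g. an HTML <meta> tag);
    #    a real implementation would rank multiple declarations
    m = re.search(rb"charset=[\"']?([\w-]+)", body[:2048])
    if m:
        return m.group(1).decode("ascii")
    # 3) the apparent_encoding heuristic (charade/chardet, BOM
    #    sniffing) would run here; omitted in this sketch
    # 4) default to UTF-8
    return "utf-8"

print(resolve_encoding("text/html", b'<meta charset="utf-8">'))
```

A header-declared charset wins over a body-declared one, matching the priority order in the list above.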
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1589/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1589/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1589.diff",
"html_url": "https://github.com/psf/requests/pull/1589",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1589.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1589"
}
| true |
[
"somehow I didn't get the issue and pull request linked together, the issue is 1588\n",
"Thanks for this @jdennis! It's great work.\n\nUnfortunately, this pull request won't be accepted in its current form. This is for a few reasons:\n1. Requests used to check the content header of HTML pages for charsets in older versions. We explicitly removed this functionality because Requests is a HTTP library, not a HTML library. We should not assume that returned data will be HTML (often it won't be). This fact means that the notion of checking the content will not be accepted into Requests in any form, sadly.\n2. The ISO-8859-1 default applies not only to HTML but any form of text encoding delivered by HTTP where no other encoding is specified. This behaviour is given in [RFC 2616](http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7.1) (aka the HTTP/1.1 spec), and we describe our compliance with it [here](http://docs.python-requests.org/en/latest/user/advanced/#compliance).\n We implemented this because it was causing genuine confusion with older websites and web servers. Choosing UTF-8 as a default is just as arbitrary a choice as ISO-8859-1, with the added disadvantage of being directly against the relevant specification. Until RFC 2616 is superseded (it will be soon), changing this behaviour would be unwise.\n (It's worth noting that `get_encoding_from_headers()` does not always return an encoding. It _will_ always return an encoding with correctly `Content-Type`d HTML, but there are plenty of other things accessed via HTTP.)\n\nWith that said, this is an excellent Pull Request and thank you so much for opening it. =) Unfortunately, it's just not in line with how we want the library to function. I'm sorry we can't accept it, and please do keep contributing! :cake:\n",
  "Fair enough. How about this: if the content-type header is text/html, then examine the content?\n\nThe problem I'm trying to solve is that when I'm using requests and use the text attribute I'm getting garbage. Here's the scenario:\n\nHeader has content-type=text/html, but there is no charset specifier. In the content is this:\n\n```\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\" />\n```\n\nNote the utf-8 specifier.\n\nBut get_encoding_from_headers(), which is invoked in the text property, decides the encoding is ISO-8859-1 even though there is nothing to indicate this is the encoding. It then passes this bogus encoding to unicode and returns that as the text, which produces corrupted text.\n\nIf requests is an HTTP-only library then why is there a text property that operates on the raw content? That's inconsistent with that goal.\n\nIt seems to me one of a few things should be going on here. If a user wants raw HTTP they should be using the content property, not the text property. If one uses the text property then the content should be checked to see if it's html (actually it should check for other text types as well). What it should NOT do is apply the wrong encoding. Make sense?\n",
"opps ... I omitted the meta tag which is in the content, sorry. it is ...\n\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\" />\n",
"Silly me, I guess this text box doesn't escape HTML, maybe this will show up\n\nmeta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\"\n",
"@jdennis I edited your first comment to make the meta tag appear. =)\n\nSo I'm totally sympathetic to your position here. I'll respond to a few of your points:\n1. If the `Content-Type` header is `text/html`, I think we'd be significantly more justified in searching for the `meta` tag. If you open a PR that does that it has a much better chance of being accepted! I'd love to see it. =)\n2. It's important to be clear on the role of the `response.text` property. `response.text` is not limited to working only with HTML. Instead, its purpose is for anything that has a character encoding. This can include HTML, but needn't: for instance, it applies to JSON and XML as well. For that reason, `response.text` has no right to interrogate the body. All it does is assume the encoding that the HTTP states, unless the HTTP does not state one.\n\nIt's worth noting that if you are worried about having problems with HTML, and you know that's what you're fetching, you can use this flow:\n\n``` python\nimport requests\n\nr = requests.get('https://lukasa.co.uk/')\nif r.encoding == 'ISO-8859-1':\n r.encoding = requests.utils.get_encodings_from_content(r.content)\n```\n\nAt that point, `r.text` will work correctly.\n\nWith that said, I'm prepared to believe that we can make some useful extensions to the encodings flow. For instance, JSON should _always_ be UTF-8, so we could special-case this logic to enforce that. Similarly, for specific MIME types (I'm thinking `text/html`, `application/xhtml+xml`, `application/xml` and `application/json` at the moment) we could provide some special case logic. In the HTML, XHTML and XML cases, that could use `get_encodings_from_content`. We will still not set UTF-8 as the fallback default, as that continues to be unwise. =)\n\nDoes this sound like an acceptable compromise?\n",
"You make good points. Here are my thoughts.\n\n1) I agree the text property should be aware of the various content-types which have text content. As you point out different content-types have different rules for determining encoding, xml being a good example. I think the text property should make an attempt to interpret encoding specifiers bases on the header content-type. FWIW I do recall looking at how encoding was handled with JSON and my recollection is there was something funky about it, but I forget the details.\n\n2) I continue to believe get_encoding_from_header() should return None if no charset is specified. It's not correct to return information that's not present. The caller of get_encoding_from_header() can make it's own determination as to what to do if the header did not supply an encoding. JSON is a great example of this issue, with JSON if there is no encoding specified you're supposed to assume UTF-8 (you argue for a different default for HTML, ISO-8859-1), how can one function know the default to apply in a given context? (Unless you pass a default parameter, but it's still better to know if the value was specified or not instead of getting back a value and not knowing if it was specified or came from a default).\n\n3) Your example of a workflow with the existing code is not correct. You have to know if the header declared an encoding and you have to know if the content declared an encoding, if they both declared it the header value takes precedence. Also get_encodings_from_content() returns a list which may be empty. \n\nA lot of these issues are covered in these two documents.\n\nhttp://www.w3.org/International/questions/qa-html-encoding-declarations/\nhttp://www.w3.org/International/tutorials/tutorial-char-enc/\n\nI'm not sure that get_encodings_from_content is correct anyway. 
It's not taking into account the content-type and the idea of N number of possible encodings for the entire content (and trying to iterate over them until one succeeds) is dubious.\n\nIf you would like I'll code up a possible solution that addresses the concerns above but I don't want to invest the time unless you would be serious about incorporating it. Fair enough? Comments?\n",
"I actually need to backtrack on the JSON encoding: we have logic for it in `Response.json()`. On balance, I think that belongs there unless we add more specific behaviour to `Response.text`, in which case it should be moved. The relevant funky details are that you can use any of the UTF- encodings, I think. Don't hold me to that though.\n1. You and I disagree on the definition of 'information that is present'. =) I find RFC 2616's statement to be clear and unambiguous: if you serve, for example, `text/plain` with no charset specified, the headers actually do say to use ISO-8859-1. This is because RFC 2616 defines what they say, and that definition clearly states that `text/X` means ISO-8859-1 unless explicitly overridden in the `Content-Type` header.\n \n We should note here that I'm not arguing that HTML has a default of ISO-8859-1, I'm saying that _any_ MIME type beginning `text/` should have that default. HTML is a notable example but it's far from the only one.\n \n As for what the default should be in a given context, that context is defined by the `Content-Type` header, which is what we're examining anyway. No big deal. =)\n2. Re: My sample workflow above. I believe it to be correct(-ish) within the parameters of the problem. The only time that it isn't correct is if the headers actually _do_ specify ISO-8859-1. I didn't handle that case because I wrote the code in 10 seconds. 
=)\n\nMy proposal is to add the following logic (in Python, but not directly related to any part of the Requests code):\n\n``` python\nencoding = encoding_from_headers()\nif encoding:\n return encoding\n\nif ('text/html' in content_type) or ('application/xhtml+xml' in content_type) or ('application/xml' in content_type):\n encoding = encoding_from_content()\nelif ('application/json' in content_type):\n encoding = 'utf-8'\n\nif encoding:\n return encoding\n\nreturn encoding_from_charade()\n```\n\nDoes this seem like a sensible set of logic to you?\n\nFinal note: I can't guarantee that a pull request that I'm happy with will get incorporated. Requests is \"one man one vote\": Kenneth is the man, he has the vote. I'm already tempted to say that the entire discussion above is an overreach, and that Kenneth will believe that Requests simply should stop caring about `Content-Type` beyond whether it has a `charset` value (that is, removing the ISO-8859-1 default and not replacing it with anything else).\n\nIn fact, let's pose him that exact question (there's no way he has time to read the entire discussion above). I'll also get Ian's opinion:\n\n**BDFL Question:**\n@kennethreitz: Currently Requests will use `ISO-8859-1` as the default encoding for anything with `Content-Type` set to `text/<something>` and without a `charset` declaration (as specified by RFC 2616). This can cause problems with HTML that uses `<meta>` tags to declare a non-ISO-8859-1 encoding. We can do one of three things:\n1. Be smarter. If the `Content-Type` is `text/html` or one of a similar set of families, we can search for a relevant `<meta>` tag.\n2. Stay the same.\n3. Remove the ISO-8859-1 default and stop pretending we know about `Content-Type`s at all.\n\nPreferences? @sigmavirus24, I'd like your opinion too. =)\n",
"All intentional design decisions. #2\n",
"Re point 1 in the previous comment. You're failing to respect the W3 rules I pointed to. If the header does not specify the encoding but the content does then one uses the content encoding, however if the header specifies encoding it takes precedence over any encoding specified in the content, if neither the header nor the content specifies encoding then use the default. If get_encoding_from_header() always returns a value irregardless of whether it's present or not then how does one implement the precedence logic required by W3 rules? How do you know you're supposed to use the content encoding as opposed to the heading encoding? Does that make sense now?\n\nAlso with regards to your proposal that all users of the Requests library should implement their own logic to handle correct encoding as opposed to having the logic in the Requests library does not seem to be very friendly and the source of a lot of bugs. You're asking people to understand what has proven to be obscure logic that is often implemented incorrectly (in fact most programmers usually punt on handling encoding because don't understand it). If the boiler plate code you're asking others to paste into their code has a defect or limitation they won't benefit from any upgrade to the Requests library. It seems to me this logic really belongs somewhere in the library so \"things work as I expect without any extra effort\". \n",
"@jdennis I am most definitely ignoring the W3C rules. =) That's because, as mentioned earlier, Requests is _not_ a HTML library, it's a HTTP library. The W3C does not make the HTTP specs, the IETF do, and they have been very clear about what the correct behaviour is here.\n\nThe same rationale applies regarding having this logic in the core library. The idea that things should 'work as I expect' (aka the principle of least surprise) is great, but only the things the library actually _claims_ to do should work as you expect. Given that Requests is explicitly a HTTP library, not a HTML library, you should only assume that the HTTP behaviour of the library works as you expect it to. Requests' documentation is clear on what will happen regarding encodings. From the [section on response content](http://docs.python-requests.org/en/latest/user/quickstart/#response-content):\n\n> When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property...\n\nWe don't claim to parse HTML to find the right encoding, and we don't even claim we'll get it right. We say we'll \"make educated guesses based on the HTTP headers\", and that's all we do. =)\n\nFinally, we're not asking all Requests users to implement this logic. We're asking the ones who need it to implement it. By and large Requests does not help users with getting their data into a form that is useful to them. The only exception I can think of is `Response.json()`, and we only add that because it's almost no work for us and overwhelmingly the most common use-case for Requests. =)\n",
"You're letting your focus on HTTP only cloud your thinking. You can't implement any content specific rules if the HTTP \"content container\" lies to you. If the HTTP container does not correctly report the containers metadata (e.g. header attributes) you can't use the HTTP container attributes to implement content specific logic. get_encoding_from_header() should never supply a value that was not in the header. It's the caller of this routine which needs to interpret the absence of the encoding attribute and if it so chooses supply a default or take some other action. In the current implementation supplying the default is occurring in the wrong location.\n\nBefore we go any further we have to at a minimum agree on this change, otherwise everything that follows collapses.\n",
"Does the container lie? If no encoding is specified in `Content-Type` then `Response.encoding` is `None`. You can test for that condition before reading the content, and then choose to do however you please.\n\nAdditionally, the headers are available, unchanged, for the user to work with as they choose. I simply don't believe Requests is impeding this behaviour at all. =)\n",
"Maybe I have a misunderstanding of how the code works but in adapters.py:build_response() it does this:\n\nresponse.encoding = get_encoding_from_headers(response.headers)\n\nalmost immediately after creating the Response object.\n\nSince get_encoding_from_headers() never returns None then Response.encoding will never be None.\n\nOr do I have a misunderstanding of the logic flow?\n",
"No, you're right, I temporarily lost all grasp on sanity. (Though it's worth noting that `get_encoding_from_headers` _can_ return `None`, just not in the correctly Content-Type'd HTML case).\n\nIt seems to me the correct logic for users who need to do this, while following the spirit of the W3C guidelines, is to always check the HTML body for a `meta` tag. If one is present, even if that disagrees with the headers, that's what I'd follow (it's more likely to be right). Only if that's not present would I concern myself with what's in the HTTP headers. (At least, that's what I'd do if I were writing a HTML library.)\n"
] |
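[Editor's note] The encoding-resolution flow proposed in the discussion above (explicit header charset first, then a `<meta>`/XML declaration for markup types, then per-type defaults) can be written out as a small standalone function. This is an illustrative sketch of the logic debated in the thread, not code from Requests itself; `detect_encoding` is a hypothetical helper and the regular expressions are deliberately simplified.

```python
import re

def detect_encoding(content_type, body):
    """Resolve a text encoding using the precedence from the thread:
    header charset, then an in-content declaration for markup types,
    then a sensible per-Content-Type default."""
    content_type = (content_type or '').lower()

    # 1. An explicit charset parameter in the Content-Type header wins.
    match = re.search(r'charset=["\']?([\w.-]+)', content_type)
    if match:
        return match.group(1)

    # 2. For markup types, look for a <meta charset=...> tag or an
    #    XML encoding declaration near the start of the body (bytes).
    if any(t in content_type for t in
           ('text/html', 'application/xhtml+xml', 'application/xml')):
        head = body[:1024].decode('ascii', errors='replace')
        match = re.search(
            r'<meta[^>]+charset=["\']?([\w.-]+)|'
            r'<\?xml[^>]+encoding=["\']([\w.-]+)', head, re.I)
        if match:
            return match.group(1) or match.group(2)

    # 3. JSON bodies are always a UTF encoding; assume UTF-8.
    if 'application/json' in content_type:
        return 'utf-8'

    # 4. RFC 2616 default for text/* when nothing else applies.
    if content_type.startswith('text/'):
        return 'iso-8859-1'
    return None
```

A charade/chardet fallback (step 4 of the pseudocode in the thread) could replace the final `return None` in a real implementation.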
https://api.github.com/repos/psf/requests/issues/1588
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1588/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1588/comments
|
https://api.github.com/repos/psf/requests/issues/1588/events
|
https://github.com/psf/requests/issues/1588
| 19,280,294 |
MDU6SXNzdWUxOTI4MDI5NA==
| 1,588 |
Wrong encoding used if charset only specified in content
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/300306?v=4",
"events_url": "https://api.github.com/users/jdennis/events{/privacy}",
"followers_url": "https://api.github.com/users/jdennis/followers",
"following_url": "https://api.github.com/users/jdennis/following{/other_user}",
"gists_url": "https://api.github.com/users/jdennis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jdennis",
"id": 300306,
"login": "jdennis",
"node_id": "MDQ6VXNlcjMwMDMwNg==",
"organizations_url": "https://api.github.com/users/jdennis/orgs",
"received_events_url": "https://api.github.com/users/jdennis/received_events",
"repos_url": "https://api.github.com/users/jdennis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jdennis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdennis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jdennis",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-10T19:40:58Z
|
2021-09-08T23:10:44Z
|
2013-09-10T20:53:33Z
|
NONE
|
resolved
|
HTML pages which declared their charset encoding only in their content
had the wrong encoding applied because the content was never checked.
Currently only the request headers are checked for the charset
encoding, if absent the apparent_encoding heuristic is applied. But
the W3.org doc says one should check the header first for a charset
declaration, then if that's absent check the meta tags in the content
for a charset encoding declaration. It also says if no charset
encoding declaration is found one should assume UTF-8, not ISO-8859-1
(a bad recommendation from the early days of the web).
I have a patch (pull request), more details in the commit comment.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1588/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1588/timeline
| null |
completed
| null | null | false |
[
"All relevant discussion is in #1589, so I'm closing this to centralise the discussion there. =)\n"
] |
https://api.github.com/repos/psf/requests/issues/1587
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1587/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1587/comments
|
https://api.github.com/repos/psf/requests/issues/1587/events
|
https://github.com/psf/requests/pull/1587
| 19,255,461 |
MDExOlB1bGxSZXF1ZXN0ODIwODY1MA==
| 1,587 |
Fixed persistence spelling
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/311929?v=4",
"events_url": "https://api.github.com/users/kracekumar/events{/privacy}",
"followers_url": "https://api.github.com/users/kracekumar/followers",
"following_url": "https://api.github.com/users/kracekumar/following{/other_user}",
"gists_url": "https://api.github.com/users/kracekumar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kracekumar",
"id": 311929,
"login": "kracekumar",
"node_id": "MDQ6VXNlcjMxMTkyOQ==",
"organizations_url": "https://api.github.com/users/kracekumar/orgs",
"received_events_url": "https://api.github.com/users/kracekumar/received_events",
"repos_url": "https://api.github.com/users/kracekumar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kracekumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kracekumar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kracekumar",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-10T12:22:37Z
|
2021-09-08T21:01:10Z
|
2013-09-10T13:43:04Z
|
CONTRIBUTOR
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1587/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1587/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1587.diff",
"html_url": "https://github.com/psf/requests/pull/1587",
"merged_at": "2013-09-10T13:43:04Z",
"patch_url": "https://github.com/psf/requests/pull/1587.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1587"
}
| true |
[
"Thanks for this! :cake:\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1586
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1586/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1586/comments
|
https://api.github.com/repos/psf/requests/issues/1586/events
|
https://github.com/psf/requests/issues/1586
| 19,200,817 |
MDU6SXNzdWUxOTIwMDgxNw==
| 1,586 |
deactivate automatic gzip decompression ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/898264?v=4",
"events_url": "https://api.github.com/users/uweschmitt/events{/privacy}",
"followers_url": "https://api.github.com/users/uweschmitt/followers",
"following_url": "https://api.github.com/users/uweschmitt/following{/other_user}",
"gists_url": "https://api.github.com/users/uweschmitt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/uweschmitt",
"id": 898264,
"login": "uweschmitt",
"node_id": "MDQ6VXNlcjg5ODI2NA==",
"organizations_url": "https://api.github.com/users/uweschmitt/orgs",
"received_events_url": "https://api.github.com/users/uweschmitt/received_events",
"repos_url": "https://api.github.com/users/uweschmitt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/uweschmitt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uweschmitt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/uweschmitt",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 7 |
2013-09-09T14:47:07Z
|
2021-08-27T00:08:24Z
|
2013-09-11T07:34:56Z
|
NONE
|
resolved
|
I had some trouble to download a tar.gz file from a file server with a http interface, because I was not aware of the automatic decompression feature. Would it be possible to add an extra parameter which switches this feature off ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/898264?v=4",
"events_url": "https://api.github.com/users/uweschmitt/events{/privacy}",
"followers_url": "https://api.github.com/users/uweschmitt/followers",
"following_url": "https://api.github.com/users/uweschmitt/following{/other_user}",
"gists_url": "https://api.github.com/users/uweschmitt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/uweschmitt",
"id": 898264,
"login": "uweschmitt",
"node_id": "MDQ6VXNlcjg5ODI2NA==",
"organizations_url": "https://api.github.com/users/uweschmitt/orgs",
"received_events_url": "https://api.github.com/users/uweschmitt/received_events",
"repos_url": "https://api.github.com/users/uweschmitt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/uweschmitt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uweschmitt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/uweschmitt",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1586/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1586/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nQuickly, before I dive into this: the correct solution when Requests is getting in your way like this is to use `Response.raw`. =) \n\nBefore we get into discussing this feature request further, I'm interested to know how this problem bit you. We should only decompress the file if the file is served with a `Content-Encoding` header set to `gzip`. My best guess is that a `.tar.gz` file should be served with no `Content-Encoding` header and `Content-Type: x-gzip`. Do we know if this is a common behaviour with gzipped raw files, or if this was an unusual behaviour from your webserver? /cc @sigmavirus24 \n",
"I wrote the webserver myself using bottle, and bottle sets the `Content-Encoding` header to `gzip` automatically, so it was my fault to blame `requests` and I should further investigate what bottle is doing.\n",
"Well it may not be persuasive enough but _python.org_ is doing this: try downloading [http://www.python.org/ftp/python/3.4.0/Python-3.4.0b2.tgz](http://www.python.org/ftp/python/3.4.0/Python-3.4.0b2.tgz) with Requests, and actually you get `Python-3.4.0b2.tar` with ~4x size of its `content-length` :(\n",
"Once again, when downloading files that are `gzipped` and you want to keep the compression, you should read directly from `r.raw()`.\n\nI'd argue that strictly python.org should be serving that file with no Content-Encoding and a Content-Type of application/x-gzip, but that's just me.\n",
"@yydonny why wouldn't you use streaming on a file like that anyway? Or are you just trying to argue a point? Passing `stream=True` will not decompress the file for you unless you use `iter_content` (or `iter_lines`). @Lukasa is 100% correct that you should be using the response's `raw` method. It is documented and it was the prior conclusion of this issue. For 98% of our users, the decompression is exactly what they want. For the 2% use case we have built in considerations but those users need to be considerate enough to read the documentation first.\n\nArguing that we should abandon what we're doing because there are servers that do the wrong thing on the web is like saying certain countries should start devaluing its currency because other nations do the same thing. That approach to economic \"development\" is as harmful to the citizens as deactivating automatic gzip decompression would be to the 98% of our users.\n",
"> @yydonny why wouldn't you use streaming on a file like that anyway? Or are you just trying to argue a point? Passing `stream=True` will not decompress the file for you unless you use `iter_content` (or `iter_lines`). @Lukasa is 100% correct that you should be using the response's `raw` method. It is documented and it was the prior conclusion of this issue. For 98% of our users, the decompression is exactly what they want. For the 2% use case we have built in considerations but those users need to be considerate enough to read the documentation first.\r\n> \r\n> Arguing that we should abandon what we're doing because there are servers that do the wrong thing on the web is like saying certain countries should start devaluing its currency because other nations do the same thing. That approach to economic \"development\" is as harmful to the citizens as deactivating automatic gzip decompression would be to the 98% of our users.\r\n\r\nNot to abandon automatic decompressing, but to introduce the option to disable it (by default the decompression should be enabled, of course). I have gzipped `StreamingHttpResponse` with `response.iter_content()` over it, trying to figure out how to not have it decompressed by `requests`.",
"This issue is over 7 years old and has a solution. Reviving it is not productive. Further, the library is in a rather strict feature-freeze so adding a new API where there's already a solution is not going to happen"
] |
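[Editor's note] As several comments in the thread above point out, the supported way to keep a gzip-compressed body compressed is to read from `Response.raw` with `stream=True`, rather than `r.content` or `iter_content()`, which decode transparently. A minimal sketch using the modern Requests API (the function name and URL are illustrative, not from the thread):

```python
import shutil

import requests  # third-party: the library discussed in this thread

def download_compressed(url, dest_path):
    """Write the response body to disk exactly as sent on the wire.

    Reading from ``r.raw`` bypasses Requests' transparent content
    decoding, so a body served with ``Content-Encoding: gzip`` stays
    gzipped instead of being inflated as it would be via ``r.content``
    or ``iter_content()``.
    """
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(dest_path, "wb") as fh:
            shutil.copyfileobj(r.raw, fh)
```

Note that using `requests.get(...)` as a context manager postdates the 1.x releases that were current when this issue was filed; at the time, the equivalent was reading `r.raw` directly after passing `stream=True`.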
https://api.github.com/repos/psf/requests/issues/1585
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1585/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1585/comments
|
https://api.github.com/repos/psf/requests/issues/1585/events
|
https://github.com/psf/requests/pull/1585
| 19,192,427 |
MDExOlB1bGxSZXF1ZXN0ODE3NjAwMw==
| 1,585 |
Use values() when the keys are not being used
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/156810?v=4",
"events_url": "https://api.github.com/users/rvoicilas/events{/privacy}",
"followers_url": "https://api.github.com/users/rvoicilas/followers",
"following_url": "https://api.github.com/users/rvoicilas/following{/other_user}",
"gists_url": "https://api.github.com/users/rvoicilas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rvoicilas",
"id": 156810,
"login": "rvoicilas",
"node_id": "MDQ6VXNlcjE1NjgxMA==",
"organizations_url": "https://api.github.com/users/rvoicilas/orgs",
"received_events_url": "https://api.github.com/users/rvoicilas/received_events",
"repos_url": "https://api.github.com/users/rvoicilas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rvoicilas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rvoicilas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rvoicilas",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 2 |
2013-09-09T12:29:57Z
|
2021-09-08T23:05:14Z
|
2013-09-24T17:45:46Z
|
CONTRIBUTOR
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1585/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1585/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1585.diff",
"html_url": "https://github.com/psf/requests/pull/1585",
"merged_at": "2013-09-24T17:45:46Z",
"patch_url": "https://github.com/psf/requests/pull/1585.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1585"
}
| true |
[
"This looks good to me! +1 :cake:\n",
":+1: :cake:\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1584
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1584/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1584/comments
|
https://api.github.com/repos/psf/requests/issues/1584/events
|
https://github.com/psf/requests/issues/1584
| 19,179,705 |
MDU6SXNzdWUxOTE3OTcwNQ==
| 1,584 |
POST a Multipart-Encoded File with streaming
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30210?v=4",
"events_url": "https://api.github.com/users/robi-wan/events{/privacy}",
"followers_url": "https://api.github.com/users/robi-wan/followers",
"following_url": "https://api.github.com/users/robi-wan/following{/other_user}",
"gists_url": "https://api.github.com/users/robi-wan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/robi-wan",
"id": 30210,
"login": "robi-wan",
"node_id": "MDQ6VXNlcjMwMjEw",
"organizations_url": "https://api.github.com/users/robi-wan/orgs",
"received_events_url": "https://api.github.com/users/robi-wan/received_events",
"repos_url": "https://api.github.com/users/robi-wan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/robi-wan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robi-wan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/robi-wan",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 12 |
2013-09-09T07:11:46Z
|
2021-09-07T00:06:25Z
|
2014-02-03T10:50:13Z
|
NONE
|
resolved
|
We have a web application which accepts (large) files together with some meta data.
We build a form for uploading these kind of files - and then we added a REST-Interface to this form.
I built an upload task in our fabfile which essentially does this:
``` python
with open(filename, 'rb') as f:
response = requests.post(url, data=values, files={'file': f})
```
It seems that for multipart-encoded POST requests streaming does not work, because I get this error (since our files exceeded 128 MB in size):
``` python
Upload artifact 'artifact.tar.gz' (group, SNAPSHOT) running on http://<ip> on behalf of user 'fabric'.
2aeaa77d0ac917a28e15ec73fe92e060 *artifact.tar.gz
Traceback (most recent call last):
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\fabric\main.py", line 743, in main
*args, **kwargs
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\fabric\tasks.py", line 405, in execute
results['<local-only>'] = task.run(*args, **new_kwargs)
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\fabric\tasks.py", line 171, in run
return self.wrapped(*args, **kwargs)
File "C:\development\work\avatar_herbsting_2013\fabfile.py", line 480, in upload
response = upload_artifact(**data)
File "C:\development\work\avatar_herbsting_2013\scripts\fabfile\deploy.py", line 106, in upload_artifact
response = requests.post(url, data=values, files={'file': f})
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\requests\api.py", line 88, in post
return request('post', url, data=data, **kwargs)
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\requests\api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\requests\sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\requests\sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "C:\Documents and Settings\Administrator\.virtualenvs\avatar\lib\site-packages\requests\adapters.py", line 327, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='<ip>', port=80): Max retries exceeded with url: /deliver/upload/ (Caused by <class 'socket.error'>: [Errno 10055] An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full)
```
Happens on Windows XP SP3 32bit with 2 GB RAM and Windows Server 2003 R2 SP2 with 3 GB RAM.
Python 2.7.5 32bit.
requests 1.2.3
Full code (filename contains the path to the large file I want to upload):
``` python
def upload_artifact(address, filename, version, group, username):
"""Upload an artifact.
"""
path = 'deliver/upload/'
url = urlparse.urljoin(address, path)
# get id for group
url_group_id = urlparse.urljoin(address, 'deliver/groupidbyname/?groupname={}'.format(group))
response = requests.get(url_group_id)
group_id = response.text
# upload file
values = {'md5hash': md5sum(filename),
'group': group_id,
'version': version,
'username': username,
}
with open(filename, 'rb') as f:
response = requests.post(url, data=values, files={'file': f})
return response
```
Is there a way to enable streaming of the large file for this case?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1584/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1584/timeline
| null |
completed
| null | null | false |
[
"Hi @robi-wan, thanks for raising this issue!\n\nFirstly, I'll address the question you actually asked. Currently Requests does not support streaming multipart-encoded files. If you want to do this you'll need to provide a file-like-object wrapper for your file that does the multipart encoding itself, and then pass that to the `data` object as described [here](http://docs.python-requests.org/en/latest/user/advanced/#streaming-uploads).\n\nSecondly, I'll address your actual problem. Your specific error is the Winsock error WSAENOBUFS. It should not be easily possible to hit this error in Requests because we use blocking sockets, which ought to block until there is sufficient buffer space available. You don't appear to be running out of memory in your process, so I don't think the file size itself has anything to do with this problem.\n\nI'm going to take an educated guess and say that you're running out of ephemeral ports. By default, Windows only exposes 5000 ephemeral ports: sufficiently many long-running uploads could exhaust the supply and cause this error. Does that sound possible in your case? If so, take a look [here](http://support.microsoft.com/kb/196271).\n",
"Hi @Lukasa and @robi-wan I'm delighted to see this question and already find it answered by you. I happen to just have hit the same issue with missing streaming functionality for multipart file uploads.\n\nI suggest to you to look at the _poster_ module - I have used this instead for realizing the upload in a script that otherwise uses requests. I have implemented the whole upload request with this other module and urllib2, however it might as well be possible to use it to prepare the file-like `data` argument for requests.\n\nThe support for this kind of streaming uploads was a prime reason for going with requests, therefore I was disappointed when encountering the `NotImplementedError` that is thrown by `PreparedRequest.prepare_body` when the `files` as well as the `data` argument is provided. This could be made clearer in the documentation.\n",
"I agree that we could better document this behaviour. =)\n\nI'm also wondering whether it's worth having a semi-official Requests-y way of better handling complex file behaviours.\n",
"Hi @Lukasa thanks for your quick response. I read the Microsoft Knowledge Base Article and tried the suggested solution without success.\n\n@avallen In the meantime I found _poster_ and used it for solving this problem:\n\n``` python\ndef upload_artifact(address, filename, version, group, username):\n \"\"\"Upload an artifact.\n \"\"\"\n path = 'deliver/upload/'\n url = urlparse.urljoin(address, path)\n\n # get id for group\n url_group_id = urlparse.urljoin(address, 'deliver/groupidbyname/?groupname={}'.format(group))\n response = requests.get(url_group_id)\n group_id = response.text\n\n # upload file\n values = {'md5hash': md5sum(filename),\n 'group': group_id,\n 'version': version,\n 'username': username,\n 'file': open(filename, 'rb')\n }\n\n # Register the streaming http handlers with urllib2\n poster.streaminghttp.register_openers()\n\n # Start the multipart/form-data encoding of the file filename.\n # headers contains the necessary Content-Type and Content-Length\n # datagen is a generator object that yields the encoded parameters\n datagen, headers = poster.encode.multipart_encode(values)\n # Create the Request object\n request = urllib2.Request(url, datagen, headers)\n resp = None\n try:\n # Actually do the request, and get the response\n resp = urllib2.urlopen(request)\n except urllib2.HTTPError as error:\n print(error)\n print(error.fp.read())\n\n return resp\n```\n\nThis works... but I would like to use _requests_ for task like this.\n",
"It's difficult to answer that question because we don't know what you're trying to do. What are you hoping to achieve?\n",
"@BernardoLima how is your question relevant to this issue? Your question should be asked on [StackOverflow](http://stackoverflow.com/questions/tagged/python-requests). I have a strong suspicion as to why your Post is not working. I'll answer you either on StackOverfow or by email (if you choose to email me privately). \n",
"@sigmavirus24 sorry, you're right, I will email you, if you don't mind.\nThank you very much.\n",
"I don't mind. That's why I suggested it. :wink: \n",
"I think the semi-official way to do this is to use [sigmavirus24/requests-toolbelt](https://github.com/sigmavirus24/requests-toolbelt), so I'm going to close this now. =)\n",
"\"semi-official\" is very accurate.\n",
"Maybe it's faster then Django, but docs ...\r\n```\r\nfrom vibora import Vibora, JsonResponse\r\nTraceback (most recent call last):\r\n. . .\r\nImportError: cannot import name 'JsonResponse'\r\n```\r\nAnd Websockets same! Sad",
"@Cyxapic wrong repo =)"
] |
https://api.github.com/repos/psf/requests/issues/1583
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1583/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1583/comments
|
https://api.github.com/repos/psf/requests/issues/1583/events
|
https://github.com/psf/requests/issues/1583
| 19,179,016 |
MDU6SXNzdWUxOTE3OTAxNg==
| 1,583 |
Add decode_content as a parameter to requests()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2334429?v=4",
"events_url": "https://api.github.com/users/klahnakoski/events{/privacy}",
"followers_url": "https://api.github.com/users/klahnakoski/followers",
"following_url": "https://api.github.com/users/klahnakoski/following{/other_user}",
"gists_url": "https://api.github.com/users/klahnakoski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klahnakoski",
"id": 2334429,
"login": "klahnakoski",
"node_id": "MDQ6VXNlcjIzMzQ0Mjk=",
"organizations_url": "https://api.github.com/users/klahnakoski/orgs",
"received_events_url": "https://api.github.com/users/klahnakoski/received_events",
"repos_url": "https://api.github.com/users/klahnakoski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klahnakoski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klahnakoski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klahnakoski",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 4 |
2013-09-09T06:44:58Z
|
2021-09-09T00:28:14Z
|
2013-09-09T12:30:51Z
|
NONE
|
resolved
|
I would like to use Requests with Flask to make a proxy. I must deal with some large zipped responses (and I do not care about the content).
Please add decode_content as a parameter to requests() so I can return the bytes directly back to the original caller. Specifically, I want to use this with stream=True and pass the iter_content() generator back to Flask for delivery to the original caller.
Thanks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2334429?v=4",
"events_url": "https://api.github.com/users/klahnakoski/events{/privacy}",
"followers_url": "https://api.github.com/users/klahnakoski/followers",
"following_url": "https://api.github.com/users/klahnakoski/following{/other_user}",
"gists_url": "https://api.github.com/users/klahnakoski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klahnakoski",
"id": 2334429,
"login": "klahnakoski",
"node_id": "MDQ6VXNlcjIzMzQ0Mjk=",
"organizations_url": "https://api.github.com/users/klahnakoski/orgs",
"received_events_url": "https://api.github.com/users/klahnakoski/received_events",
"repos_url": "https://api.github.com/users/klahnakoski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klahnakoski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klahnakoski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klahnakoski",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1583/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1583/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue!\n\nThis is exactly what we expose the [raw response](http://docs.python-requests.org/en/latest/user/quickstart/#raw-response-content) for. In vanilla Requests this response object is a urllib3 Response object, which contains a [`stream()` method](http://urllib3.readthedocs.org/en/latest/helpers.html#urllib3.response.HTTPResponse.stream).\n\nThe expected Requests way of handling this would be to make the request as usual with `stream=True`, and then pass `Response.raw.stream(decode_content=False)` to your Flask application. =)\n",
"@Lukasa is this documented anywhere? \n",
"@sigmavirus24 Not as a single flow, no. It's not clear to me that it should be part of the formal documentation though: perhaps another blog post?\n",
"Documenting under Advanced Usage - Streaming Requests (http://docs.python-requests.org/en/latest/user/advanced/#streaming-requests) as just another example will hint to the reader they can use Response.raw.stream.\n\nThank you\n"
] |
https://api.github.com/repos/psf/requests/issues/1582
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1582/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1582/comments
|
https://api.github.com/repos/psf/requests/issues/1582/events
|
https://github.com/psf/requests/issues/1582
| 19,150,544 |
MDU6SXNzdWUxOTE1MDU0NA==
| 1,582 |
Allow custom authentication (in particular NTLM) to proxies
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/105314?v=4",
"events_url": "https://api.github.com/users/hickford/events{/privacy}",
"followers_url": "https://api.github.com/users/hickford/followers",
"following_url": "https://api.github.com/users/hickford/following{/other_user}",
"gists_url": "https://api.github.com/users/hickford/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hickford",
"id": 105314,
"login": "hickford",
"node_id": "MDQ6VXNlcjEwNTMxNA==",
"organizations_url": "https://api.github.com/users/hickford/orgs",
"received_events_url": "https://api.github.com/users/hickford/received_events",
"repos_url": "https://api.github.com/users/hickford/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hickford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hickford/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hickford",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 12 |
2013-09-07T19:38:19Z
|
2021-09-08T19:00:28Z
|
2013-09-21T15:22:58Z
|
NONE
|
resolved
|
Requests can do many kinds of authentication—basic, digest, OAuth1. Even better, [the great API](http://docs.python-requests.org/en/latest/user/authentication/) allows users to specify a custom authentication class
```
import requests
from requests_ntlm import HttpNtlmAuth
requests.get("http://ntlm_protected_site.com",auth=HttpNtlmAuth('domain\\username','password'))
```
However for _proxies_, the only authentication presently supported is Basic. The API doesn't allow you to specify other or custom classes. http://docs.python-requests.org/en/latest/user/advanced/#proxies
> To use HTTP Basic Auth with your proxy, use the http://user:password@host/ syntax:
Please could you expand the API to allow custom authentication for proxies too?
This would be mega useful. NTLM proxies are common in Windows corporate networks. Python development can't get off the ground in my office because the package manager (and everything else) falls over at the proxy. Examples
- https://github.com/pypa/pip/issues/1182
- https://github.com/jkbr/httpie/issues/159
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1582/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1582/timeline
| null |
completed
| null | null | false |
[
"It's not obvious what the syntax should look like. This is the current API (string based)\n\n```\nproxies = {\n \"http\": \"http://10.10.1.10:3128\",\n \"https\": \"http://user:pass10.10.1.10:1080\", # basic authentication\n}\nrequests.get(\"http://example.org\", proxies=proxies)\n```\n\nWill want to preserve that, but also allow user to specify an HtmlAuth object if they want to. \n\nHow about, define a new Proxy class, which is permissible as a value in the `proxies` dictionary\n\n```\nProxy(url=\"http://10.10.1.10:3128\", auth=HttpNtlmAuth('domain\\\\username','password'))\n```\n",
"Does proxying happen in `requests` or does this need a fix upstream in https://github.com/shazow/urllib3 first?\n\nEdit: Urllib3 only does Basic auth. Asked there if there's any plan to support custom auth. I don't know if it would be possible since AuthBase is in Requests. https://github.com/shazow/urllib3/issues/242\n",
"This is a good idea @matt-hickford. I see three ways of doing this:\n1. Use a `Proxy` class in the `Proxies` dictionary. Definitely do-able, but adds quite a bit of complication into the API. Possible. +0\n2. Use proxy-type Auth handlers. Not good. -1\n3. Use Transport Adapters.\n\nI'm generally in favour of TAs. In particular, Transport Adapters are really the only thing that knows anything about HTTPS over proxy (the CONNECT verb), which needs to be differently handled from other kinds of messages. To that end, it seems more natural to provide proxy authentication solutions at the Transport Adapter level.\n\nPotentially TAs could take pluggable auth modules, just like individual requests? /cc @sigmavirus24 \n",
"I would think Transport Adapters would take Proxy Adapters which handle using proxies in much the same way Transport Adapters handle the protocols (even though by default we only handle two). That said, Proxy Adapters could contain all the logic for pluggable auth \"modules\". This would then factor out the Proxy logic from the Transport Adapters and make things far more testable as well and allow people to configure crazy proxies however they like.\n",
"I don't think we want to be adding a further Transport Adapter layer. Best to just say 'a Transport Adapter knows what to do'.\n",
"Until urllib3 has this as an option there's not much we can do, so I'll close this until that time.\n",
"Cool I've offered a bounty on the upstream issue https://github.com/shazow/urllib3/issues/242\n\nYou could label this 'blocked' or something\n",
"Hello gents. I apologize to wake the sleeping giant. However Ive been trying to track down a viable workaround to request https: with NTLM proxy authentication and have come up short. @Lukasa et al I appreciate all the work to date. As of what is currently available it doesn't look like this is supported in urllib3, requests-ntlm or request. What does one do? Any suggestions?\n",
"I believe requests-ntlm does have support for this in the code: have you tried it?\n",
"Thanks for the reply. I have tried requests-ntlm if i have my creds correct this is what I get.\n\n```\nimport request\nfrom requests_ntlm import HttpNtlmAuth\nsession = requests.Sessions()\nsession.auth = HttpNtlmAuth(\"domain\\\\myuser\",\"mypassword\")\nsession.get(\"https://httpbin.org/get\")\n```\n\nConnectionError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /get (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',)))\n",
"@Lukasa. From what I can tell requests-ntlm will let you do NTLM auth with a proxy when using HTTP, but not with HTTPS. Am I missing something? Thanks again for your time.\n",
"Ah, yes, @ryandebruyn that's correct. Unfortunately, httplib makes it very difficult to authenticate to proxies when setting up a TLS tunnel because for any non-200 response to the CONNECT it will throw an exception and lose the response data. We can in principle work around that but it's _extremely_ difficult to do and potentially breaks quite a few behaviours.\n\nSadly, this is just something that is very, very difficult to do.\n"
] |
https://api.github.com/repos/psf/requests/issues/1581
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1581/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1581/comments
|
https://api.github.com/repos/psf/requests/issues/1581/events
|
https://github.com/psf/requests/issues/1581
| 19,142,108 |
MDU6SXNzdWUxOTE0MjEwOA==
| 1,581 |
Python 3 issue with urllib3 (local) import
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4925587?v=4",
"events_url": "https://api.github.com/users/barseghyanartur/events{/privacy}",
"followers_url": "https://api.github.com/users/barseghyanartur/followers",
"following_url": "https://api.github.com/users/barseghyanartur/following{/other_user}",
"gists_url": "https://api.github.com/users/barseghyanartur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/barseghyanartur",
"id": 4925587,
"login": "barseghyanartur",
"node_id": "MDQ6VXNlcjQ5MjU1ODc=",
"organizations_url": "https://api.github.com/users/barseghyanartur/orgs",
"received_events_url": "https://api.github.com/users/barseghyanartur/received_events",
"repos_url": "https://api.github.com/users/barseghyanartur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/barseghyanartur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/barseghyanartur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/barseghyanartur",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 25 |
2013-09-07T10:13:46Z
|
2018-02-26T19:43:50Z
|
2013-09-08T18:43:36Z
|
NONE
|
resolved
|
The problem with broken import occurs when Python 3.3.0 (virtualenv, Ubuntu 12.04 LTS).
In Python 2.7.4 (virtualenv, same machine) everything runs smoothly.
The traceback:
```
File "/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests-1.2.3-py3.3.egg/requests/packages/__init__.py", line 3, in <module>
from . import urllib3
ImportError: cannot import name urllib3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4925587?v=4",
"events_url": "https://api.github.com/users/barseghyanartur/events{/privacy}",
"followers_url": "https://api.github.com/users/barseghyanartur/followers",
"following_url": "https://api.github.com/users/barseghyanartur/following{/other_user}",
"gists_url": "https://api.github.com/users/barseghyanartur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/barseghyanartur",
"id": 4925587,
"login": "barseghyanartur",
"node_id": "MDQ6VXNlcjQ5MjU1ODc=",
"organizations_url": "https://api.github.com/users/barseghyanartur/orgs",
"received_events_url": "https://api.github.com/users/barseghyanartur/received_events",
"repos_url": "https://api.github.com/users/barseghyanartur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/barseghyanartur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/barseghyanartur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/barseghyanartur",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1581/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1581/timeline
| null |
completed
| null | null | false |
[
"Weird, this looks like your install is broken. Try reinstalling Requests in the same virtualenv?\n",
"It seems that the following causes the problem.\n\n# Directory structure\n\n├── src\n│ └── myapp\n│ │ ├── test_requests.py\n│ │ └── utils.py\n└── test_request.py\n\n# `test_requests.py` contents\n\nBoth `src.myapp.test_requests` and `test_requests` modules have the following content:\n\nimport requests\n\n# OK tests\n\npython test_requests.py\n\n# Fail tests\n\npython src/myapp/test_requests.py\n\n# Solution\n\nUse absolute imports?\n\nOtherwise, I have to rename my `utils.py` package which now resides in `src/myapp/` to something else. Then it would work.\n",
"Uh, isn't your problem about importing `urllib3`, not `utils`?\n",
"No, actually, I'm trying to import the requests, but it seems to be impossible if the package, in which the import actually takes place, already has a module named `utils`.\n\nI though it's quite clear from the directory structure I have given as an example.\n\nI think it's barely requests problem.\n",
"Oh, I agree that it certainly could be a problem. =) It's just not the problem identified by your traceback, which complains about failing to import urllib3. =)\n",
"Ah, that. Yep, could be more detailed indeed. The more detailed traceback follows:\n\n```\nTraceback (most recent call last):\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/starbase-0.2-py3.3.egg/starbase/client/http/__init__.py\", line 9, in <module>\n import requests\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/__init__.py\", line 58, in <module>\n from . import utils\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/utils.py\", line 23, in <module>\n from .compat import parse_http_list as _parse_list_header\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/compat.py\", line 7, in <module>\n from .packages import charade as chardet\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/packages/__init__.py\", line 3, in <module>\n from . import urllib3\nImportError: cannot import name urllib3\n```\n\nThus, the urllib3 comes as last, but the problem is causes by relative import of `utils` module.\n",
"See, I think `utils` is definitely not your problem. The import of `utils` correctly finds the Requests version. Can you print the files in `home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/packages`?\n",
"```\n├── charade\n│ ├── big5freq.py\n│ ├── big5prober.py\n│ ├── chardistribution.py\n│ ├── charsetgroupprober.py\n│ ├── charsetprober.py\n│ ├── codingstatemachine.py\n│ ├── compat.py\n│ ├── constants.py\n│ ├── cp949prober.py\n│ ├── escprober.py\n│ ├── escsm.py\n│ ├── eucjpprober.py\n│ ├── euckrfreq.py\n│ ├── euckrprober.py\n│ ├── euctwfreq.py\n│ ├── euctwprober.py\n│ ├── gb2312freq.py\n│ ├── gb2312prober.py\n│ ├── hebrewprober.py\n│ ├── __init__.py\n│ ├── jisfreq.py\n│ ├── jpcntx.py\n│ ├── langbulgarianmodel.py\n│ ├── langcyrillicmodel.py\n│ ├── langgreekmodel.py\n│ ├── langhebrewmodel.py\n│ ├── langhungarianmodel.py\n│ ├── langthaimodel.py\n│ ├── latin1prober.py\n│ ├── mbcharsetprober.py\n│ ├── mbcsgroupprober.py\n│ ├── mbcssm.py\n│ ├── __pycache__\n│ │ ├── big5freq.cpython-33.pyc\n│ │ ├── big5prober.cpython-33.pyc\n│ │ ├── chardistribution.cpython-33.pyc\n│ │ ├── charsetgroupprober.cpython-33.pyc\n│ │ ├── charsetprober.cpython-33.pyc\n│ │ ├── codingstatemachine.cpython-33.pyc\n│ │ ├── compat.cpython-33.pyc\n│ │ ├── constants.cpython-33.pyc\n│ │ ├── cp949prober.cpython-33.pyc\n│ │ ├── escprober.cpython-33.pyc\n│ │ ├── escsm.cpython-33.pyc\n│ │ ├── eucjpprober.cpython-33.pyc\n│ │ ├── euckrfreq.cpython-33.pyc\n│ │ ├── euckrprober.cpython-33.pyc\n│ │ ├── euctwfreq.cpython-33.pyc\n│ │ ├── euctwprober.cpython-33.pyc\n│ │ ├── gb2312freq.cpython-33.pyc\n│ │ ├── gb2312prober.cpython-33.pyc\n│ │ ├── hebrewprober.cpython-33.pyc\n│ │ ├── __init__.cpython-33.pyc\n│ │ ├── jisfreq.cpython-33.pyc\n│ │ ├── jpcntx.cpython-33.pyc\n│ │ ├── langbulgarianmodel.cpython-33.pyc\n│ │ ├── langcyrillicmodel.cpython-33.pyc\n│ │ ├── langgreekmodel.cpython-33.pyc\n│ │ ├── langhebrewmodel.cpython-33.pyc\n│ │ ├── langhungarianmodel.cpython-33.pyc\n│ │ ├── langthaimodel.cpython-33.pyc\n│ │ ├── latin1prober.cpython-33.pyc\n│ │ ├── mbcharsetprober.cpython-33.pyc\n│ │ ├── mbcsgroupprober.cpython-33.pyc\n│ │ ├── mbcssm.cpython-33.pyc\n│ │ ├── sbcharsetprober.cpython-33.pyc\n│ │ ├── sbcsgroupprober.cpython-33.pyc\n│ │ ├── sjisprober.cpython-33.pyc\n│ │ ├── universaldetector.cpython-33.pyc\n│ │ └── utf8prober.cpython-33.pyc\n│ ├── sbcharsetprober.py\n│ ├── sbcsgroupprober.py\n│ ├── sjisprober.py\n│ ├── universaldetector.py\n│ └── utf8prober.py\n├── __init__.py\n├── __pycache__\n│ └── __init__.cpython-33.pyc\n└── urllib3\n ├── _collections.py\n ├── connectionpool.py\n ├── contrib\n │ ├── __init__.py\n │ ├── ntlmpool.py\n │ ├── __pycache__\n │ │ ├── __init__.cpython-33.pyc\n │ │ └── pyopenssl.cpython-33.pyc\n │ └── pyopenssl.py\n ├── exceptions.py\n ├── filepost.py\n ├── __init__.py\n ├── packages\n │ ├── __init__.py\n │ ├── ordered_dict.py\n │ ├── __pycache__\n │ │ ├── __init__.cpython-33.pyc\n │ │ ├── ordered_dict.cpython-33.pyc\n │ │ └── six.cpython-33.pyc\n │ ├── six.py\n │ └── ssl_match_hostname\n │ ├── __init__.py\n │ └── __pycache__\n │ └── __init__.cpython-33.pyc\n ├── poolmanager.py\n ├── __pycache__\n │ ├── _collections.cpython-33.pyc\n │ ├── connectionpool.cpython-33.pyc\n │ ├── exceptions.cpython-33.pyc\n │ ├── filepost.cpython-33.pyc\n │ ├── __init__.cpython-33.pyc\n │ ├── poolmanager.cpython-33.pyc\n │ ├── request.cpython-33.pyc\n │ ├── response.cpython-33.pyc\n │ └── util.cpython-33.pyc\n ├── request.py\n ├── response.py\n └── util.py\n\n11 directories, 107 files\n```\n",
"That's perplexing. Can you try uninstalling and reinstalling Requests?\n",
"Reinstalling requests doesn't solve the problem.\n",
"From where and how are you installing it?\n",
"Btw, when installing the requests the following problem occurs.\n\nRunning setup.py install for requests\n\n```\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/packages/urllib3/contrib/ntlmpool.py\", line 38\n \"\"\"\n ^\nSyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 130-132: truncated \\uXXXX escape\n```\n\nLike any other package, I install the requests using pip install in virtualenv.\n",
"Hmm. I'm a little confused, because I can cleanly import Requests 1.2.3 in 3.3.2. I wonder if the interpreter changed its behaviour.\n",
"It only affects Python 3.3.0. Earlier versions, including 2.6.8, 2.7.3 and 3.2.3 are not affected.\n",
"Does it affect later versions of 3.3?\n",
"Just installed the most recent version 3.3.2.\n\nThe full traceback:\n\n```\nTraceback (most recent call last):\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/starbase-0.2-py3.3.egg/starbase/client/http/__init__.py\", line 10, in <module>\n import requests\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/__init__.py\", line 58, in <module>\n from . import utils\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/utils.py\", line 23, in <module>\n from .compat import parse_http_list as _parse_list_header\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/compat.py\", line 7, in <module>\n from .packages import charade as chardet\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/packages/__init__.py\", line 3, in <module>\n from . import urllib3\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/packages/urllib3/__init__.py\", line 16, in <module>\n from .connectionpool import (\n File \"/home/me/.virtualenvs/starbase3/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py\", line 18, in <module>\n from httplib import HTTPConnection, HTTPException\nImportError: No module named 'httplib'\n```\n",
"Btw, 3.2.3 - same problem.\n",
"I think this error is caused by the directory [`http`](https://github.com/barseghyanartur/starbase/tree/master/src/starbase/client/http) you have in your project.\nThere is builtin package `http` which urllib3 needs.\nBut instead it gets yours which doesn't work.\n",
"t-8ch: Damn, you're right! `httplib` has been renamed to `http` in Python 3 and thus - the problem. Thanks, man.\n",
"@t-8ch is so :metal:\n",
"Damn straight he is.\n",
"Thanks guys. This issue already has come up on a few occasions.\nI'm in favour of a section `Troubleshooting` in the manual.\nIt should at least contain:\n- Import errors due to own `http` module\n- Weird server behaviour due to SSL version\n- SSL proxy errors due to `https://` in proxy URL.\n\nI'm sure you know some more :smile:\n",
"I just downgraded to an old version of urllib3 (from 1.22) and it fixed the problem. The issue came up for me on a fresh install of elasticsearch python client. Fix:\r\n\r\n pip install urllib3==1.15",
"@radzhome You should really try to find the root cause of the issue.\r\nDowngrading and being stuck on this old version may expose you to security issues and will make it harder to fix errors you encounter.",
"@radzhome usefull tips, with this I can install awx"
] |
https://api.github.com/repos/psf/requests/issues/1580
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1580/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1580/comments
|
https://api.github.com/repos/psf/requests/issues/1580/events
|
https://github.com/psf/requests/issues/1580
| 19,139,552 |
MDU6SXNzdWUxOTEzOTU1Mg==
| 1,580 |
websocket support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/634283?v=4",
"events_url": "https://api.github.com/users/vivekhub/events{/privacy}",
"followers_url": "https://api.github.com/users/vivekhub/followers",
"following_url": "https://api.github.com/users/vivekhub/following{/other_user}",
"gists_url": "https://api.github.com/users/vivekhub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vivekhub",
"id": 634283,
"login": "vivekhub",
"node_id": "MDQ6VXNlcjYzNDI4Mw==",
"organizations_url": "https://api.github.com/users/vivekhub/orgs",
"received_events_url": "https://api.github.com/users/vivekhub/received_events",
"repos_url": "https://api.github.com/users/vivekhub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vivekhub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vivekhub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vivekhub",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 10 |
2013-09-07T07:21:47Z
|
2021-09-09T01:22:28Z
|
2013-09-07T14:05:24Z
|
NONE
|
resolved
|
Hello:
i was trying to figure out if there is support for websockets in requests. Have been trying to find out but the documentation does not talk about it. Can someone help ?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1580/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1580/timeline
| null |
completed
| null | null | false |
[
"Hi Vivek \n\nI believe this has been asked several times on StackOverflow. The answer is that we do not support websockets. We explicitly support HTTP/1.1 which is RFC 2675 (or something around that). It's unlikely we will support websockets in the near future too.\n",
"Websockets are not related to HTTP in any way.\n",
"So you folks dont mind if I fork and not contribute it back.\n",
"Forking won't help you in the slightest but you should read the license to answer your question. \n",
"@vivekhub In case you are interested you can start a project like `requests-websocket` similar to [requests-oauth](https://github.com/requests/requests-oauthlib). `Requests` doesn't contain `socks` support in code base, idea of this is to have library core as small as possible but extensible. \n",
"Good point let me first get my project done and then will figure out how to\nopen-source it\nOn Sep 7, 2013 11:07 PM, \"kracekumar\" [email protected] wrote:\n\n> @vivekhub https://github.com/vivekhub In case you are interested you\n> can start a project like requests-websocket similar to requests-oauthhttps://github.com/requests/requests-oauthlib.\n> Requests doesn't contain socks support in code base, idea of this is to\n> have library core as small as possible but extensible.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1580#issuecomment-24005894\n> .\n",
"@vivekhub how familiar are you with ws4py? If you look at that library you'll recognize that requests and its API are not very suitable for websockets.\n",
"Also, this issue is closed, so everyone please stop commenting here.\n",
"@sigmavirus24 thanks for the pointer to ws4py looks interesting. With my limited understanding of both the codebases it sounds like while I can start with request to upgrade a connection to a Websocket I would have to do heart surgery to get hold of the actual TCP connection which would not be the right thing to do is your concern am I right?\n",
"@vivekhub requests uses urllib3. urllib3 uses httplib for a large portion of its work and I wouldn't know where to begin getting a TCP connection out of there. I think your plan to use requests is just untenable. If you want to attempt to build an API for websockets then you're best bet is to use ws4py but I'm going to have to warn you that the requests API really seems far less than suitable for websockets.\n"
] |
https://api.github.com/repos/psf/requests/issues/1579
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1579/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1579/comments
|
https://api.github.com/repos/psf/requests/issues/1579/events
|
https://github.com/psf/requests/pull/1579
| 19,137,005 |
MDExOlB1bGxSZXF1ZXN0ODE1NDk1Ng==
| 1,579 |
Clarify timeout behavior in quickstart doc
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7129?v=4",
"events_url": "https://api.github.com/users/yang/events{/privacy}",
"followers_url": "https://api.github.com/users/yang/followers",
"following_url": "https://api.github.com/users/yang/following{/other_user}",
"gists_url": "https://api.github.com/users/yang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yang",
"id": 7129,
"login": "yang",
"node_id": "MDQ6VXNlcjcxMjk=",
"organizations_url": "https://api.github.com/users/yang/orgs",
"received_events_url": "https://api.github.com/users/yang/received_events",
"repos_url": "https://api.github.com/users/yang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yang",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-09-07T02:28:00Z
|
2021-09-08T21:01:10Z
|
2013-09-08T15:05:00Z
|
CONTRIBUTOR
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1579/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1579/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1579.diff",
"html_url": "https://github.com/psf/requests/pull/1579",
"merged_at": "2013-09-08T15:05:00Z",
"patch_url": "https://github.com/psf/requests/pull/1579.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1579"
}
| true |
[
":+1:\n\nPlease feel free to send more PRs like this. :cake:\n",
"This is excellent @yang, thank you so much! :cake:\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1578
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1578/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1578/comments
|
https://api.github.com/repos/psf/requests/issues/1578/events
|
https://github.com/psf/requests/pull/1578
| 18,980,371 |
MDExOlB1bGxSZXF1ZXN0ODA3NzcwMA==
| 1,578 |
Adding myself to the list of contributors.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10311?v=4",
"events_url": "https://api.github.com/users/jparise/events{/privacy}",
"followers_url": "https://api.github.com/users/jparise/followers",
"following_url": "https://api.github.com/users/jparise/following{/other_user}",
"gists_url": "https://api.github.com/users/jparise/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jparise",
"id": 10311,
"login": "jparise",
"node_id": "MDQ6VXNlcjEwMzEx",
"organizations_url": "https://api.github.com/users/jparise/orgs",
"received_events_url": "https://api.github.com/users/jparise/received_events",
"repos_url": "https://api.github.com/users/jparise/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jparise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jparise/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jparise",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-04T16:01:07Z
|
2021-09-08T21:01:09Z
|
2013-09-05T06:44:28Z
|
CONTRIBUTOR
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1578/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1578/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1578.diff",
"html_url": "https://github.com/psf/requests/pull/1578",
"merged_at": "2013-09-05T06:44:28Z",
"patch_url": "https://github.com/psf/requests/pull/1578.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1578"
}
| true |
[
"Well deserved @jparise, thanks so much for your work! :grin:\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1577
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1577/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1577/comments
|
https://api.github.com/repos/psf/requests/issues/1577/events
|
https://github.com/psf/requests/issues/1577
| 18,973,622 |
MDU6SXNzdWUxODk3MzYyMg==
| 1,577 |
Requests not timing out
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1452980?v=4",
"events_url": "https://api.github.com/users/laruellef/events{/privacy}",
"followers_url": "https://api.github.com/users/laruellef/followers",
"following_url": "https://api.github.com/users/laruellef/following{/other_user}",
"gists_url": "https://api.github.com/users/laruellef/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/laruellef",
"id": 1452980,
"login": "laruellef",
"node_id": "MDQ6VXNlcjE0NTI5ODA=",
"organizations_url": "https://api.github.com/users/laruellef/orgs",
"received_events_url": "https://api.github.com/users/laruellef/received_events",
"repos_url": "https://api.github.com/users/laruellef/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/laruellef/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laruellef/subscriptions",
"type": "User",
"url": "https://api.github.com/users/laruellef",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 14 |
2013-09-04T14:11:39Z
|
2018-11-24T13:51:16Z
|
2013-09-05T08:16:44Z
|
NONE
|
resolved
|
Got a strange case of failing timeout on a particular url, when trying in browser, seems like this page never completes loading...
The expected behavior would be for requests to timeout, however it is not happening:
this times out fine:
requests.get("http://www.iva.net/blog/", timeout=0.1)
but this doesn't:
requests.get("http://www.iva.net/blog/", timeout=1)
I'm on python 2.7.5 and requests 1.2.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1577/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1577/timeline
| null |
completed
| null | null | false |
[
"This is expected behaviour\nhttp://docs.python-requests.org/en/latest/user/quickstart/#timeouts\n\n> Note: timeout only affects the connection process itself, not the downloading of the response body.\n\nThe connection is established successfully, but the page just keeps sending the response body indefinitely. \n",
"Thanks for raising this issue @laruellef!\n\nThe `timeout` parameter does not work the way most people seem to expect it to. Whether that is actually a bug is up for discussion. =)\n\nThe actual behaviour is that `timeout` is set as the socket timeout. This timeout applies to each individual blocking socket operation, _not_ to the connection as a whole. This means you only trigger the timeout if connecting takes longer than a second, or if any of the page responses take more than a second to download.\n",
"Cool, tks for the prompt response,\nif this is expected behavior,\nhow can I catch and recover from this behavior in code?\ncoz at the moment, my script is indefinitely stuck :-(\n",
"You have a few options:\n1. Run a timer on another thread which will kill the operation if it takes too long.\n2. [Stream](http://docs.python-requests.org/en/latest/user/advanced/#streaming-requests) the download. This allows you to read the response one chunk at a time, and you can combine this with a manual timeout to abandon the download midway through.\n\nI much more strongly favour (2) than (1). =) Let me know if you need help with 2 and I can show you some sample code.\n",
"Yes, sample code would be wonderful,\nI however am using requests.get(url, timeout=90, verify=False, stream=False)\nI've found that setting stream to False solves dangling connection problems, esp. on servers where many requests are being processed in parallel.\nSo, I'd much rather keep stream=False if possible\n",
"If you set `stream=False`, Requests will endeavour to download the entire Request body before it returns, which will fail. =) What specific dangling connection problems do you have? (Additionally, `stream=False` is the default behaviour, so you shouldn't need to explicitly set it).\n\nAs for sample code, try:\n\n``` python\nimport time\nimport requests\n\nurl = \"http://www.iva.net/blog/\"\ntimeout = 90\nbody = []\n\nstart = time.time()\nr = requests.get(url, verify=False, stream=True)\n\nfor chunk in r.iter_content(1024): # Adjust this value to provide more or less granularity.\n body.append(chunk)\n\n if time.time() > (start + timeout):\n break # You can set an error value here as well if you want.\n\ncontent = b''.join(body)\n```\n\nOn my machine this downloads roughly 60MB of data before breaking out. Don't make the mistake I did and print the whole thing out. It takes a while. :grin:\n",
"Tks.\nHa! didn't know stream=False was the default, I was having strange errors that I couldn't explain, and I was suspecting that lots of connections were being left open, when I set stream=False, the problems went away,\nI understand this isn't very helpful from your standpoint, but it's the honest truth... ;-)\n\nHow about creating another timeout param to address this case,\nso peeps don't need the sample code you just provided... ;-)\n",
"@laruellef Creating another timeout parameter is something I'm considering. It suffers from creating two parameters that sound similar but do different things. If we redefined the current timeout parameter that'll break a _lot_ of working code, which would be bad. But most importantly, we cannot easily redefine the timeout parameter without setting `stream=True` as the default (I think).\n",
"So, @laruellef, it looks like @kevinburke is doing some work on the `urllib3` side to add better timeout control. When that gets sorted we'll probably try to plumb it through to Requests. I think waiting for that issue (shazow/urllib3#231) to be resolved is the correct next step here.\n\nThanks for raising this, and keep track of the `urllib3` issue!\n",
"Using streaming mode with iter_content doesn't seem to solve the problem though. At least not my problem. I got a socket that is stuck in ESTABLISHED state, but it is reading no more data. The socket has blocked a celery worker for over 72 hours now (only got around to checking the worker this morning.)\n\nAs far as I can tell the workaround suggested above would not help in this case, since no data is available for reading iter_content would block indefinitely.\n\nEdit: socket.setdefaulttimeout fixes the problem, which is weird, I was under the impression that the timeout parameter would do the same.\n",
"@blubber - sorry for a blast from the past. I'm experiencing stuck sockets in ESTABLISHED although the other side closed the socket, and timeout (new implementation apparently) is in place.\nYou mentioned `socket.setdefaulttimeout`. Where exactly did you change it?\n",
"Anywhere is fine, it's a global setting. \n",
"what is the final solution?\r\n\r\nreplace requests with urllib3 if you want to set timeout?",
"@sunshusunshf no that's not the final solution. If you want help with your code, though, go to [StackOverflow](https://stackoverflow.com)"
] |
https://api.github.com/repos/psf/requests/issues/1576
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1576/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1576/comments
|
https://api.github.com/repos/psf/requests/issues/1576/events
|
https://github.com/psf/requests/issues/1576
| 18,953,379 |
MDU6SXNzdWUxODk1MzM3OQ==
| 1,576 |
Better Error Message When Server Uses an Older Version of SSL?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/279060?v=4",
"events_url": "https://api.github.com/users/karanlyons/events{/privacy}",
"followers_url": "https://api.github.com/users/karanlyons/followers",
"following_url": "https://api.github.com/users/karanlyons/following{/other_user}",
"gists_url": "https://api.github.com/users/karanlyons/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/karanlyons",
"id": 279060,
"login": "karanlyons",
"node_id": "MDQ6VXNlcjI3OTA2MA==",
"organizations_url": "https://api.github.com/users/karanlyons/orgs",
"received_events_url": "https://api.github.com/users/karanlyons/received_events",
"repos_url": "https://api.github.com/users/karanlyons/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/karanlyons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karanlyons/subscriptions",
"type": "User",
"url": "https://api.github.com/users/karanlyons",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-04T05:00:12Z
|
2021-09-09T01:22:28Z
|
2013-09-04T05:18:45Z
|
NONE
|
resolved
|
My bad, guys, this was a dupe of #1567.
Is there a way we can make a suggestion about what the issue might be in the event of a reset SSL connection? Something that clues the dev into trying an older SSL version?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1576/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1576/timeline
| null |
completed
| null | null | false |
[
"Thanks for this @karanlyons! As far as I can see, this is difficult to do in this particular case. The server's response to the TLS handshake was to close the connection, which is exactly the error we report. Unfortunately, it's not clear whether that TCP RST segment has anything to do with the TLS negotiation. For instance, if the server doesn't support TLS at all you're likely to get similar behaviour.\n\nDespite my pessimism regarding this, we do currently have a feature request open about handing this behaviour, in #1570. I'm going to close this issue in favour of centralising discussion there. =)\n\nThanks again for raising this issue!\n"
] |
https://api.github.com/repos/psf/requests/issues/1575
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1575/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1575/comments
|
https://api.github.com/repos/psf/requests/issues/1575/events
|
https://github.com/psf/requests/pull/1575
| 18,944,968 |
MDExOlB1bGxSZXF1ZXN0ODA1ODczNw==
| 1,575 |
Improved content encoding detection.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10311?v=4",
"events_url": "https://api.github.com/users/jparise/events{/privacy}",
"followers_url": "https://api.github.com/users/jparise/followers",
"following_url": "https://api.github.com/users/jparise/following{/other_user}",
"gists_url": "https://api.github.com/users/jparise/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jparise",
"id": 10311,
"login": "jparise",
"node_id": "MDQ6VXNlcjEwMzEx",
"organizations_url": "https://api.github.com/users/jparise/orgs",
"received_events_url": "https://api.github.com/users/jparise/received_events",
"repos_url": "https://api.github.com/users/jparise/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jparise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jparise/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jparise",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-03T23:21:51Z
|
2021-09-08T23:08:12Z
|
2013-09-04T02:42:43Z
|
CONTRIBUTOR
|
resolved
|
`get_encodings_from_content()` can now detect HTML in-document content
encoding declarations in the following formats:
- HTML5
- HTML4
- XHTML 1.x served with text/html MIME type
- XHTML 1.x served as XML
Ref: http://www.w3.org/International/questions/qa-html-encoding-declarations
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1575/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1575/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1575.diff",
"html_url": "https://github.com/psf/requests/pull/1575",
"merged_at": "2013-09-04T02:42:43Z",
"patch_url": "https://github.com/psf/requests/pull/1575.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1575"
}
| true |
[
"Thanks so much!\n"
] |
https://api.github.com/repos/psf/requests/issues/1574
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1574/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1574/comments
|
https://api.github.com/repos/psf/requests/issues/1574/events
|
https://github.com/psf/requests/issues/1574
| 18,912,211 |
MDU6SXNzdWUxODkxMjIxMQ==
| 1,574 |
Authentication with large POSTs not working (with aggressive servers)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/669080?v=4",
"events_url": "https://api.github.com/users/ntx-/events{/privacy}",
"followers_url": "https://api.github.com/users/ntx-/followers",
"following_url": "https://api.github.com/users/ntx-/following{/other_user}",
"gists_url": "https://api.github.com/users/ntx-/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ntx-",
"id": 669080,
"login": "ntx-",
"node_id": "MDQ6VXNlcjY2OTA4MA==",
"organizations_url": "https://api.github.com/users/ntx-/orgs",
"received_events_url": "https://api.github.com/users/ntx-/received_events",
"repos_url": "https://api.github.com/users/ntx-/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ntx-/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntx-/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ntx-",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-03T14:44:02Z
|
2021-09-09T01:22:22Z
|
2013-09-24T17:51:19Z
|
NONE
|
resolved
|
When sending a large (multipart) POST request to a server requiring digest access authentication the "401 Unauthorized" response might be received and the connection terminated before the HTTPConnection.request() in HTTPConnectionPool._make_request() has returned causing the underlying socket routines to fail with EPIPE.
The resulting exception is not caught and the subsequent handling of the 401 response is not triggered causing the whole POST to fail with the following exception:
```
Traceback (most recent call last):
<my code>
File "<requests>/api.py", line 88, in post
return request('post', url, data=data, **kwargs)
File "<requests>/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "<requests>/sessions.py", line 344, in request
resp = self.send(prep, **send_kwargs)
File "<requests>/sessions.py", line 447, in send
r = adapter.send(request, **kwargs)
File "<requests>/adapters.py", line 318, in send
raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host=<host>, port=80): Max retries exceeded with url: <url> (Caused by <class 'socket.error'>: [Errno 32] Broken pipe)
```
The same issue is also present when implementing essentially the same thing with urllib2.
I'm not sure if this is standard behavior for servers requiring digest authentication (have only tested the affected server so far) or if this is due to a special safe-guard against DoS attacks.
For testing I used Requests 1.2.0 and Python 2.7.4 with Ubuntu 13.04 inside VirtualBox, and I was unable to reproduce the issue on the host Windows 8.
PS. I temporarily worked around the issue by suppressing the specific exception (socket.error with EPIPE) inside Requests and letting _make_request() attempt to read the response. This worked as expected and allowed the 401 handling to kick in. But I'm sure there are some cases where this leads to undesired behavior.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1574/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1574/timeline
| null |
completed
| null | null | false |
[
"@ntx Thanks for raising this issue! I agree, this is a fairly graceless way to fail. I think the correct place for this fix is in [urllib3](https://github.com/shazow/urllib3), which implements our connection-layer logic. A simple exception handler should suffice over there. Can I recommend you open an issue on that repository?\n"
] |
https://api.github.com/repos/psf/requests/issues/1573
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1573/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1573/comments
|
https://api.github.com/repos/psf/requests/issues/1573/events
|
https://github.com/psf/requests/issues/1573
| 18,907,279 |
MDU6SXNzdWUxODkwNzI3OQ==
| 1,573 |
Specify password for SSL client side certificate
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/109361?v=4",
"events_url": "https://api.github.com/users/botondus/events{/privacy}",
"followers_url": "https://api.github.com/users/botondus/followers",
"following_url": "https://api.github.com/users/botondus/following{/other_user}",
"gists_url": "https://api.github.com/users/botondus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/botondus",
"id": 109361,
"login": "botondus",
"node_id": "MDQ6VXNlcjEwOTM2MQ==",
"organizations_url": "https://api.github.com/users/botondus/orgs",
"received_events_url": "https://api.github.com/users/botondus/received_events",
"repos_url": "https://api.github.com/users/botondus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/botondus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/botondus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/botondus",
"user_view_type": "public"
}
|
[
{
"color": "e102d8",
"default": false,
"description": null,
"id": 117745,
"name": "Planned",
"node_id": "MDU6TGFiZWwxMTc3NDU=",
"url": "https://api.github.com/repos/psf/requests/labels/Planned"
},
{
"color": "fad8c7",
"default": false,
"description": null,
"id": 136616769,
"name": "Documentation",
"node_id": "MDU6TGFiZWwxMzY2MTY3Njk=",
"url": "https://api.github.com/repos/psf/requests/labels/Documentation"
}
] |
closed
| true | null |
[] | null | 122 |
2013-09-03T13:14:44Z
|
2021-11-28T05:40:38Z
|
2021-11-28T05:40:24Z
|
NONE
|
resolved
|
As far as I know, it's currently not possible to specify the password for the client side certificate you're using for authentication.
This is a bit of a problem because you typically always want to password protect your .pem file which contains the private key. `openssl` won't even let you create one without a password.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18519037?v=4",
"events_url": "https://api.github.com/users/sethmlarson/events{/privacy}",
"followers_url": "https://api.github.com/users/sethmlarson/followers",
"following_url": "https://api.github.com/users/sethmlarson/following{/other_user}",
"gists_url": "https://api.github.com/users/sethmlarson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sethmlarson",
"id": 18519037,
"login": "sethmlarson",
"node_id": "MDQ6VXNlcjE4NTE5MDM3",
"organizations_url": "https://api.github.com/users/sethmlarson/orgs",
"received_events_url": "https://api.github.com/users/sethmlarson/received_events",
"repos_url": "https://api.github.com/users/sethmlarson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sethmlarson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sethmlarson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sethmlarson",
"user_view_type": "public"
}
|
{
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/psf/requests/issues/1573/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1573/timeline
| null |
completed
| null | null | false |
[
"Something like:\n\n`requests.get('https://kennethreitz.com', cert='server.pem', cert_pw='my_password')`\n",
"Pretty sure you're supposed to use the `cert` param for that: `cert=('server.pem', 'my_password')`\n",
"@sigmavirus24 \nThe tuple is for `(certificate, key)`. Currently there is no support for encrypted keyfiles.\nThe [stdlib](file:///usr/share/doc/python/html/library/ssl.html#ssl.SSLContext.load_cert_chain) only got support for those in version 3.3.\n",
"Heh, @t-8ch, you accidentally linked to a file on your local FS. ;) [Correct link](http://docs.python.org/3.3/library/ssl.html#ssl.SSLContext.load_cert_chain).\n",
"Quite right @t-8ch. This is why I should never answer issues from the bus. :/\n",
"So the current consensus is we don't support this. How much work is it likely to be to add support in non-3.3 versions of Python?\n",
"How hard would it be to throw an error on this condition? I just ran into this silly problem and it took two hours to figure out, it would be nice if it would throw an error, it currently just sits there looping. Thanks for the awesome library!\n",
"Wait, it sits where looping? Where in execution do we fail? Can you print the traceback from where we loop?\n",
"It seems to hang right here:\n\nr = requests.get(url,\n auth=headeroauth,\n cert=self.cert_tuple,\n headers=headers,\n timeout=10,\n verify=True)\n\nI tried turning the timeout out up or down to no avail, but I imagine it knows well before the timeout it can't use the cert. Thanks!\n",
"Ah, sorry, I wasn't clear. I meant to let it hang and then kill it with Ctrl + C so that python throws a KeyboardInterrupt exception, then to see where we are in the traceback. I want to know where in Requests the execution halts.\n",
"What's happening (or at least what I've seen in many cases) is that OpenSSL, upon being given a password-protected certificate, will prompt the user for a password. It shows up in no logs (because the prompt is directly printed), and it doesn't time out because it's waiting for a user to press enter.\n\nNeedless to say, it's cubmersome, dangerous behavior when the code is running on a server (because it'll hang your worker with no option for recovery other than killing the process).\n\nIs there a way to make requests raise an exception in that case instead of prompting for a password, or is that completely out of your control and in OpenSSL's hands?\n",
"@maxnoel I'm pretty sure this is in OpenSSL's hands but if you can answer @Lukasa's question (the last comment on this issue) it would be very helpful in giving a definite answer regarding if there was anything we can do to help.\n",
"You can confirm OpenSSL is blocking on stdin for the passphrase from the interactive python prompt:\n\n```\n>>> r = requests.get(\"https://foo.example.com/api/user/bill\", cert=(\"client.crt\", \"client.key\"))\nEnter PEM pass phrase:\n>>>\n```\n\nIf you're running from a backgrounded process, I assume OpenSSL will block waiting on that input.\n",
"That's correct. Is there anything requests can do to prevent that from happening? Raising an exception when no password is given would be far more useful than prompting for stuff on stdin (especially in a non-interactive program).\n",
"I'm afraid that I don't know of any way. @reaperhulk?\n",
"There are ways to stop OpenSSL from doing this, but I'm not sure if they're exposed by pyOpenSSL. Where does requests call pyopenssl to load the client cert? I can dig a bit.\n",
"@reaperhulk It's done from in urllib3, [here](https://github.com/shazow/urllib3/blob/master/urllib3/contrib/pyopenssl.py#L252-L260).\n",
"We also do something very similar for the stdlib, which will be a whole separate problem.\n",
"So we can do this with PyOpenSSL using a patch like [this](https://github.com/reaperhulk/urllib3/commit/c6d1e7f5aa76c0522d9d84e9b9ad757b7f4d8a22). In the stdlib version, we need to use `load_cert_chain` with a password.\n",
"Has this problem been solved? I'm currently running into this while trying to connect to an Apache server.\n",
"It has not.\n",
"What about PKCS#12 formatted (and encrypted) containers which could contain a client cert/key? Would this fall under the same feature request?\n",
"@mikelupo Yup.\n",
"@telam @mikelupo \nI have the same problem and Googled a lot, finally, I solved it by using pycurl.\nIn my situation, I use openssl to convert my .pfx file to .pem file which contains both cert & key(encrypted with pass phrase), then invoke the following code.\n\n``` python\nimport pycurl\nimport StringIO\n\nb = StringIO.StringIO()\nc = pycurl.Curl()\nurl = \"https://example.com\"\nc.setopt(pycurl.URL, url)\nc.setopt(pycurl.WRITEFUNCTION, b.write)\nc.setopt(pycurl.CAINFO, \"/path/cacert.pem\")\nc.setopt(pycurl.SSLKEY, \"/path/key_file.pem\")\nc.setopt(pycurl.SSLCERT, \"/path/cert_file.pem\")\nc.setopt(pycurl.SSLKEYPASSWD, \"your pass phrase\")\nc.perform()\nc.close()\nresponse_body = b.getvalue()\n```\n\nBTW, for security, it's better to not do hardcode for `pass phrase`\n",
"Of course. That said, the problem isn't really that a pass phrase is required -- it's that OpenSSL makes your program hang while waiting for someone to type a passphrase in stdin, even in the case of a non-interactive, GUI or remote program.\n\nWhen a passphrase is required and none is provided, an exception should be raised instead.\n",
"if you use a default passphrase of '' for the key, openssl won't hang.\nit'll return a bad password text. you can immediately alter your py flow\nto then notify the user without that apparant stall\n",
"any plan to add this feature \n",
"We want to add it, but we have no schedule to add it at this time.\n",
"@botondus I think I found a simpler way to achieve this with request library. I am documenting this for other people who are facing the issue.\n\nI assume that you have a .p12 certificate and a passphrase for the key.\n\n### Generate certificate and private key.\n\n``` sh\n// Generate the certificate file.\nopenssl pkcs12 -in /path/to/p12cert -nokeys -out certificate.pem\n// Generate private key with passpharse, First enter the password provided with the key and then an arbitrary PEM password //(say: 1234) \nopenssl pkcs12 -in /path/to/p12cert -nocerts -out privkey.pem\n```\n\nWell, we are not done yet and we need to generate the key that doesn't require the PEM password every time it needs to talk to the server.\n\n### Generate key without passphrase.\n\n``` sh\n// Running this command will prompt for the pem password(1234), on providing which we will obtain the plainkey.pem\nopenssl rsa -in privkey.pem -out plainkey.pem\n```\n\nNow, you will have `certificate.pem` and `plainkey.pem`, both of the files required to talk to the API using requests.\n\nHere is an example request using these cert and keys.\n\n``` python\nimport requests\nurl = 'https://exampleurl.com'\nheaders = {\n 'header1': '1214141414',\n 'header2': 'adad-1223-122'\n }\nresponse = requests.get(url, headers=headers, cert=('~/certificate.pem', '~/plainkey.pem'), verify=True)\nprint response.json()\n```\n\nHope this helps:\n\ncc @kennethreitz @Lukasa @sigmavirus24 \n",
"I have heard through the grapevine that Amazon does exactly this, internally.\n"
] |
https://api.github.com/repos/psf/requests/issues/1572
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1572/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1572/comments
|
https://api.github.com/repos/psf/requests/issues/1572/events
|
https://github.com/psf/requests/issues/1572
| 18,898,078 |
MDU6SXNzdWUxODg5ODA3OA==
| 1,572 |
urllib3 exceptions passing through requests API
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/410452?v=4",
"events_url": "https://api.github.com/users/sylvinus/events{/privacy}",
"followers_url": "https://api.github.com/users/sylvinus/followers",
"following_url": "https://api.github.com/users/sylvinus/following{/other_user}",
"gists_url": "https://api.github.com/users/sylvinus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sylvinus",
"id": 410452,
"login": "sylvinus",
"node_id": "MDQ6VXNlcjQxMDQ1Mg==",
"organizations_url": "https://api.github.com/users/sylvinus/orgs",
"received_events_url": "https://api.github.com/users/sylvinus/received_events",
"repos_url": "https://api.github.com/users/sylvinus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sylvinus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sylvinus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sylvinus",
"user_view_type": "public"
}
|
[
{
"color": "e10c02",
"default": false,
"description": null,
"id": 117744,
"name": "Bug",
"node_id": "MDU6TGFiZWwxMTc3NDQ=",
"url": "https://api.github.com/repos/psf/requests/labels/Bug"
},
{
"color": "e102d8",
"default": false,
"description": null,
"id": 117745,
"name": "Planned",
"node_id": "MDU6TGFiZWwxMTc3NDU=",
"url": "https://api.github.com/repos/psf/requests/labels/Planned"
},
{
"color": "0b02e1",
"default": false,
"description": null,
"id": 191274,
"name": "Contributor Friendly",
"node_id": "MDU6TGFiZWwxOTEyNzQ=",
"url": "https://api.github.com/repos/psf/requests/labels/Contributor%20Friendly"
}
] |
closed
| true |
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
] |
{
"closed_at": null,
"closed_issues": 29,
"created_at": "2013-11-17T11:29:34Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
},
"description": null,
"due_on": null,
"html_url": "https://github.com/psf/requests/milestone/20",
"id": 487518,
"labels_url": "https://api.github.com/repos/psf/requests/milestones/20/labels",
"node_id": "MDk6TWlsZXN0b25lNDg3NTE4",
"number": 20,
"open_issues": 12,
"state": "open",
"title": "3.0.0",
"updated_at": "2024-05-19T18:43:00Z",
"url": "https://api.github.com/repos/psf/requests/milestones/20"
}
| 13 |
2013-09-03T09:36:19Z
|
2021-09-08T22:00:45Z
|
2015-10-05T14:09:46Z
|
NONE
|
resolved
|
I don't know if it's a design goal of requests to hide urllib3's exceptions and wrap them around requests.exceptions types.
(If it's not, IMHO it should be, but that's another discussion)
If it is, I have at least two of them passing through that I have to catch in addition to requests' exceptions. They are requests.packages.urllib3.exceptions.DecodeError and requests.packages.urllib3.exceptions.TimeoutError (this one I get when a proxy times out)
Thanks!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1572/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1572/timeline
| null |
completed
| null | null | false |
[
"I definitely agree with you and would agree that these should be wrapped.\n\nCould you give us stack-traces so we can find where they're bleeding through?\n",
"Sorry I don't have stack traces readily available :/\n",
"No worries. I have ideas as to where the DecodeError might be coming from but I'm not certain where the TimeoutError could be coming from.\n\nIf you run into them again, please save us the stack traces. =) Thanks for reporting them. (We'll never know what we're missing until someone tells us.)\n",
"`TimeoutError` is almost certainly being raised from either [`HTTPConnectionPool.urlopen()`](https://github.com/kennethreitz/requests/blob/master/requests/adapters.py#L282-L293) or from [`HTTPConnection.putrequest()`](https://github.com/kennethreitz/requests/blob/master/requests/adapters.py#L301). Adding a new clause to [here](https://github.com/kennethreitz/requests/blob/master/requests/adapters.py#L323-L335) should cover us.\n",
"Actually, that can't be right, we should be catching and rethrowing as a Requests `Timeout` exception in that block. Hmm, I'll do another spin through the code to see if I can see the problem.\n",
"Yeah, a quick search of the `urllib3` code reveals that the only place that `TimeoutError`s are thrown is from `HTTPConnectionPool.urlopen()`. These should not be leaking. We really need a stack trace to track this down.\n",
"I've added a few logs to get the traces if they happen again. What may have confused me for the TimeoutError is that requests' Timeout actually wraps the urllib3's TimeoutError and we were logging the content of the error as well. \n\nSo DecodeError was definitely being thrown but probably not TimeoutError, sorry for the confusion. I'll report here it I ever see it happening now that we're watching for it.\n\nThanks for the help!\n",
"I also got urllib3 exceptions passing through when use Session in several threads, trace:\n\n```\n......\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 347, in get\n return self.request('GET', url, **kwargs)\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 335, in request\n resp = self.send(prep, **send_kwargs)\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 438, in send\n r = adapter.send(request, **kwargs)\n File \"C:\\Python27\\lib\\site-packages\\requests\\adapters.py\", line 292, in send\n timeout=timeout\n File \"C:\\Python27\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 423, in url\nopen\n conn = self._get_conn(timeout=pool_timeout)\n File \"C:\\Python27\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 224, in _ge\nt_conn\n raise ClosedPoolError(self, \"Pool is closed.\")\nClosedPoolError: HTTPConnectionPool(host='......', port=80): Pool is closed.\n```\n",
"Ah, we should rewrap that `ClosedPoolError` too.\n",
"But it's still the summer... How can any pool be closed? :smirk_cat: \n\nBut yes :+1:\n",
"I've added a fix for the `ClosedPoolError` to #1475. Which apparently broke in the last month for no adequately understandable reason.\n",
"If it's still needed, here is the traceback of DecodeError I got using proxy on requests 2.0.0:\n\n```\nTraceback (most recent call last):\n File \"/home/krat/Projects/Grubhub/source/Pit/pit/web.py\", line 52, in request\n response = session.request(method, url, **kw)\n File \"/home/krat/.virtualenvs/grubhub/local/lib/python2.7/site-packages/requests/sessions.py\", line 357, in request\n resp = self.send(prep, **send_kwargs)\n File \"/home/krat/.virtualenvs/grubhub/local/lib/python2.7/site-packages/requests/sessions.py\", line 460, in send\n r = adapter.send(request, **kwargs)\n File \"/home/krat/.virtualenvs/grubhub/local/lib/python2.7/site-packages/requests/adapters.py\", line 367, in send\n r.content\n File \"/home/krat/.virtualenvs/grubhub/local/lib/python2.7/site-packages/requests/models.py\", line 633, in content\n self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()\n File \"/home/krat/.virtualenvs/grubhub/local/lib/python2.7/site-packages/requests/models.py\", line 572, in generate\n decode_content=True):\n File \"/home/krat/.virtualenvs/grubhub/local/lib/python2.7/site-packages/requests/packages/urllib3/response.py\", line 225, in stream\n data = self.read(amt=amt, decode_content=decode_content)\n File \"/home/krat/.virtualenvs/grubhub/local/lib/python2.7/site-packages/requests/packages/urllib3/response.py\", line 193, in read\n e)\nDecodeError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing: incorrect header check',))\n```\n",
"Slightly different to the above, but urllib3's LocationParseError leaks through which could probably do with being wrapped in InvalidURL.\n\n```\nTraceback (most recent call last):\n File \"/home/oliver/wc/trunk/mtmCore/python/asagent/samplers/net/web.py\", line 255, in process_url\n resp = self.request(self.params.httpverb, url, data=data)\n File \"/home/oliver/wc/trunk/mtmCore/python/asagent/samplers/net/web.py\", line 320, in request\n verb, url, data=data))\n File \"abilisoft/requests/opt/abilisoft.com/thirdparty/requests/lib/python2.7/site-packages/requests/sessions.py\", line 286, in prepare_request\n File \"abilisoft/requests/opt/abilisoft.com/thirdparty/requests/lib/python2.7/site-packages/requests/models.py\", line 286, in prepare\n File \"abilisoft/requests/opt/abilisoft.com/thirdparty/requests/lib/python2.7/site-packages/requests/models.py\", line 333, in prepare_url\n File \"abilisoft/requests/opt/abilisoft.com/thirdparty/requests/lib/python2.7/site-packages/requests/packages/urllib3/util.py\", line 397, in parse_url\nLocationParseError: Failed to parse: Failed to parse: fe80::5054:ff:fe5a:fc0\n```\n"
] |
https://api.github.com/repos/psf/requests/issues/1571
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1571/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1571/comments
|
https://api.github.com/repos/psf/requests/issues/1571/events
|
https://github.com/psf/requests/issues/1571
| 18,890,272 |
MDU6SXNzdWUxODg5MDI3Mg==
| 1,571 |
Support for HTTPS "CONNECT" method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3726484?v=4",
"events_url": "https://api.github.com/users/czardoz/events{/privacy}",
"followers_url": "https://api.github.com/users/czardoz/followers",
"following_url": "https://api.github.com/users/czardoz/following{/other_user}",
"gists_url": "https://api.github.com/users/czardoz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/czardoz",
"id": 3726484,
"login": "czardoz",
"node_id": "MDQ6VXNlcjM3MjY0ODQ=",
"organizations_url": "https://api.github.com/users/czardoz/orgs",
"received_events_url": "https://api.github.com/users/czardoz/received_events",
"repos_url": "https://api.github.com/users/czardoz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/czardoz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czardoz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/czardoz",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-09-03T05:17:10Z
|
2021-09-09T00:28:31Z
|
2013-09-03T05:23:22Z
|
NONE
|
resolved
|
This is a Wireshark capture of Firefox using an HTTPS proxy:
```
CONNECT ssl.gstatic.com:443 HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0
Proxy-Connection: keep-alive
Connection: keep-alive
Host: ssl.gstatic.com
HTTP/1.1 200 Connection Established
Content-Length: 0
<cert exchange, and other stuff>
```
Same URL, using requests:
```
GET https://sni.velox.ch/ HTTP/1.1
Host: sni.velox.ch
Accept-Encoding: gzip, deflate, compress
Accept: */*
User-Agent: Mozilla/5.0
```
Here the proxy closed the connection, because the `https://` scheme isn't recognized by the proxy. The expected behavior is that a "CONNECT" request is sent to the proxy.
Python code used:
```
import requests
proxies = {'https': 'http://127.0.0.1:8000/'}
user_agent = {'User-Agent': 'Mozilla/5.0'}
resp = requests.get("https://sni.velox.ch/", proxies=proxies, headers=user_agent)
print resp.status_code
print resp.content
```
Traceback:
```
Traceback (most recent call last):
File "test_client.py", line 25, in <module>
resp = requests.get("https://sni.velox.ch/", proxies=proxies, headers=user_agent)
File "/home/aniket/venv/py2sni/local/lib/python2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/home/aniket/venv/py2sni/local/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/home/aniket/venv/py2sni/local/lib/python2.7/site-packages/requests/sessions.py", line 360, in request
resp = self.send(prep, **send_kwargs)
File "/home/aniket/venv/py2sni/local/lib/python2.7/site-packages/requests/sessions.py", line 463, in send
r = adapter.send(request, **kwargs)
File "/home/aniket/venv/py2sni/local/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: https://sni.velox.ch/ (Caused by <class 'httplib.BadStatusLine'>: '')
```
Requests version: 1.2.3
Python version: 2.7.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1571/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1571/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue, @czardoz!\n\nThis is a longstanding issue that we've known about for a very long time. A quick GitHub search for \"proxy https\" reveals 118 issues that have been raised at one point or another about this problem. Hell, I even [wrote a blog post about it](https://lukasa.co.uk/2013/07/Python_Requests_And_Proxies/).\n\nYou'll be pleased to know that thanks to the sterling work of a few people over at urllib3, we merged a fix for this (as #1515) into our branch for the next release. This should be fixed as part of Requests 2.0.\n\nThanks again!\n",
"Ah, good stuff, thanks :)\n"
] |
https://api.github.com/repos/psf/requests/issues/1570
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1570/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1570/comments
|
https://api.github.com/repos/psf/requests/issues/1570/events
|
https://github.com/psf/requests/issues/1570
| 18,875,346 |
MDU6SXNzdWUxODg3NTM0Ng==
| 1,570 |
Handle multiple TLS versions out of the box
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
[
{
"color": "02e10c",
"default": false,
"description": null,
"id": 76800,
"name": "Feature Request",
"node_id": "MDU6TGFiZWw3NjgwMA==",
"url": "https://api.github.com/repos/psf/requests/labels/Feature%20Request"
},
{
"color": "fbca04",
"default": false,
"description": null,
"id": 44501249,
"name": "Needs BDFL Input",
"node_id": "MDU6TGFiZWw0NDUwMTI0OQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input"
}
] |
closed
| true | null |
[] | null | 8 |
2013-09-02T18:03:45Z
|
2021-09-08T23:06:11Z
|
2015-01-18T20:29:44Z
|
CONTRIBUTOR
|
resolved
|
There are occasional cases where we could tell a server that we also support a lesser version of TLS when connecting to them and negotiate to a lower version if necessary.
This might reduce the noise we get on the issue tracker about making requests that result in SSLErrors. A great deal of the discussion around this has already taken place over on https://github.com/kennethreitz/requests/issues/1567
I'm still trying to discern how much work would be necessary to support this but I cannot imagine it being too much.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1570/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1570/timeline
| null |
completed
| null | null | false |
[
"My understanding is that this is not how TLS works. From [Technet](http://technet.microsoft.com/en-us/library/cc785811.aspx):\n\n> **Client Hello**. The client initiates a session by sending a Client Hello message to the server. The Client Hello message contains:\n> \n> > **Version Number**. The client sends the version number corresponding to the highest version it supports. Version 2 is used for SSL 2.0, version 3 for SSL 3.0, and version 3.1 for TLS. Although the IETF RFC for TLS is TLS version 1.0, the protocol uses 3.1 in the version field to indicate that it is a higher level (newer and with more functionality) than SSL 3.0.\n\nSo we can't do this as part of a handshake. Instead, we'd have to make the initial request, fail (in some nonspecific way) and then reattempt the handshake using a lower version. That seems like a slightly crazy thing to do, especially as the number one reason for failing a handshake has got to be attempting a TLS connection to a server that doesn't support it at all.\n\nUnless I'm wrong about any of that, I'm -1 on this. =)\n",
"This is totally unrelated, but we also need a label that says \"Needs Minion Input\"\n",
"@sigmavirus24 That was not a `SSLError` but a `socket.error` which doesn't really indicate a SSL problem.\n",
"@t-8ch maybe I wasn't clear but I never said that issue was caused by this, just that discussion about negotiating a different version could be found there. =)\n",
"Low hanging fruit, but could we just add a \"Maybe this server needs a lesser version of TLS? {link_to_docs}\" when an HTTPS connection is reset by peer? It doesn't solve the actual problem, but at least points the dev in potentially the right direction.\n\n(FWIW, I just happened to figure out what was going on after a bunch of debugging because I searched the issues for \"connection reset by peer\". Not _before_ I had opened a new issue, though.)\n",
"I wonder if this should be fixed in urllib3, and then caught and rethrown in Requests. My specific concern is that urllib3 has been wrapping these in `urllib3.exceptions.MaxRetriesError`s, which makes finding the original exception just that bit more of a pain. Presumably the `urllib3.VerifiedHTTPSConnection` object can catch this failure though.\n\nThat seems like the way to go to me.\n",
"In fact, I think that's the _right_ thing to do. I don't think we should renegotiate TLS for people without telling them: that feels like a step too far. Better to raise a `requests.SSLHandshakeError` and let the user handle this themselves.\n",
"@karanlyons could you submit a PR for that documentation? You have greater context on the issue than I do.\n"
] |
https://api.github.com/repos/psf/requests/issues/1569
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1569/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1569/comments
|
https://api.github.com/repos/psf/requests/issues/1569/events
|
https://github.com/psf/requests/pull/1569
| 18,868,502 |
MDExOlB1bGxSZXF1ZXN0ODAyMzA2OA==
| 1,569 |
remove ipv6 brackets from hostname for httplib
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/420144?v=4",
"events_url": "https://api.github.com/users/alekibango/events{/privacy}",
"followers_url": "https://api.github.com/users/alekibango/followers",
"following_url": "https://api.github.com/users/alekibango/following{/other_user}",
"gists_url": "https://api.github.com/users/alekibango/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alekibango",
"id": 420144,
"login": "alekibango",
"node_id": "MDQ6VXNlcjQyMDE0NA==",
"organizations_url": "https://api.github.com/users/alekibango/orgs",
"received_events_url": "https://api.github.com/users/alekibango/received_events",
"repos_url": "https://api.github.com/users/alekibango/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alekibango/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alekibango/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alekibango",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-02T14:48:38Z
|
2021-09-08T21:01:09Z
|
2013-09-02T15:02:28Z
|
NONE
|
resolved
|
Fixes #1568; makes IPv6 actually work, like this:
a = requests.get('http://[2001:718:1:4::2]/')
It might be a good idea to use a urllib3 version newer than
https://github.com/shazow/urllib3/commit/208ba74c796116bafc95d46dfcf4325449643ab6
as it fixes the problem too.
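The core of the fix can be sketched with plain string handling; a minimal illustration of the idea (`normalize_host` is an illustrative name, not the actual urllib3 patch):

```python
# Sketch: httplib expects a bare host, so the square brackets that a URL
# parser leaves around an IPv6 literal must be stripped before connecting.
def normalize_host(host):
    """Strip the square brackets from an IPv6 host literal, if present."""
    return host.strip('[]')

print(normalize_host('[2001:718:1:4::2]'))  # -> 2001:718:1:4::2
print(normalize_host('example.com'))        # -> example.com
```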
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1569/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1569/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1569.diff",
"html_url": "https://github.com/psf/requests/pull/1569",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1569.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1569"
}
| true |
[
"Thanks for this! We'll take the updated version of urllib3 though. =) Look out for the next release of Requests, which should have that updated version.\n"
] |
https://api.github.com/repos/psf/requests/issues/1568
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1568/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1568/comments
|
https://api.github.com/repos/psf/requests/issues/1568/events
|
https://github.com/psf/requests/issues/1568
| 18,868,224 |
MDU6SXNzdWUxODg2ODIyNA==
| 1,568 |
using numeric ipv6 address needs updated urllib3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/420144?v=4",
"events_url": "https://api.github.com/users/alekibango/events{/privacy}",
"followers_url": "https://api.github.com/users/alekibango/followers",
"following_url": "https://api.github.com/users/alekibango/following{/other_user}",
"gists_url": "https://api.github.com/users/alekibango/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alekibango",
"id": 420144,
"login": "alekibango",
"node_id": "MDQ6VXNlcjQyMDE0NA==",
"organizations_url": "https://api.github.com/users/alekibango/orgs",
"received_events_url": "https://api.github.com/users/alekibango/received_events",
"repos_url": "https://api.github.com/users/alekibango/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alekibango/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alekibango/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alekibango",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-02T14:42:17Z
|
2021-09-09T01:22:29Z
|
2013-09-02T15:06:27Z
|
NONE
|
resolved
|
Using a numeric IPv6 address fails, as the host contains [] in its name, which makes lower-level calls through httplib raise an exception (max retries exceeded and socket.gaierror).
We need to fix the local urllib3 copy to use self.host.strip('[]') in `ConnectionPool.__init__` in connectionpool.py, or rather use a urllib3 version newer than this:
https://github.com/shazow/urllib3/commit/208ba74c796116bafc95d46dfcf4325449643ab6
I will provide the small patch soon, but updating urllib3 might be a better idea.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1568/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1568/timeline
| null |
completed
| null | null | false |
[
"Thanks for this @alekibango, and thanks for providing a PR!\n\nIt's worth noting that this was an issue we already knew about: see #1426. We work very closely with @shazow on his urllib3 project because we build Requests on top of it, and we regularly take newer versions of urllib3 with releases of Requests. I'm really glad you opened this issue, but in future it is worth quickly searching the GitHub issue tracker to see if someone else raised it before you. It would have saved you a little time! :smile:\n\nThanks again for this!\n"
] |
https://api.github.com/repos/psf/requests/issues/1567
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1567/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1567/comments
|
https://api.github.com/repos/psf/requests/issues/1567/events
|
https://github.com/psf/requests/issues/1567
| 18,843,099 |
MDU6SXNzdWUxODg0MzA5OQ==
| 1,567 |
GET request to "https://sso.queensu.ca" Connection reset by peer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1538657?v=4",
"events_url": "https://api.github.com/users/sidequestboy/events{/privacy}",
"followers_url": "https://api.github.com/users/sidequestboy/followers",
"following_url": "https://api.github.com/users/sidequestboy/following{/other_user}",
"gists_url": "https://api.github.com/users/sidequestboy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sidequestboy",
"id": 1538657,
"login": "sidequestboy",
"node_id": "MDQ6VXNlcjE1Mzg2NTc=",
"organizations_url": "https://api.github.com/users/sidequestboy/orgs",
"received_events_url": "https://api.github.com/users/sidequestboy/received_events",
"repos_url": "https://api.github.com/users/sidequestboy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sidequestboy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sidequestboy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sidequestboy",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 12 |
2013-09-01T21:39:39Z
|
2021-09-09T00:28:25Z
|
2013-09-02T17:32:09Z
|
NONE
|
resolved
|
with python 2.7.5
``` python
import requests
requests.__version__
'1.2.3'
requests.get("https://sso.queensu.ca")
```
fails with:
``` python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='sso.queensu.ca', port=443): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 104] Connection reset by peer)
```
The get request is successful (200) with Firefox, so I tried passing in kwarg `headers` with the headers of my browser to the same error (I thought it might have been a User-Agent issue). I can't find out how to get any more information than that Traceback.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1567/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1567/timeline
| null |
completed
| null | null | false |
[
"So I think this is an SSL problem, because I have no problems running this query on my Windows box. What OS are you using?\n\nAlso, let's ping @t-8ch for his expertise here. =)\n",
"@jameh can you inspect the headers that Firefox receives when you visit that site? I have an idea that this might be unrelated to SSL but I cannot be sure. Thanks in advance.\n",
"I can reproduce this on Arch x64 with both python2 and 3.\nIf I force the SSL version to be `TLSv1` like described on [@Lukasa s blog](http://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/) it works for me.\n",
"Problem is that the server simply doesn't respond at all to a TLSv1.2 handshake.\n",
"@t-8ch so is it then safe to say this is not an issue with requests?\n\nShould we turn this into a feature request for falling back on older versions of TLS? I would really rather we didn't given how broken some of them are. If we automatically fall back on them we'll avoid bug reports like this but we will be giving our users a somewhat false sense of security, no? I for one would rather deal with these bug reports than compromise our users' security.\n\nOn a related note, could you provide your method of detecting what the server will accept/respond to? I would rather not keep bugging you @t-8ch. I don't want to keep nagging you to figure out our TLS/SSL issues.\n",
"Yes, I don't think this is an issue with requests.\n\nFalling back to older versions of TLS shouldn't be much of an issue securitywise (if we stay in TLS land).\nOn the other hand broken servers respond in all imaginable ways to handshakes.\nIf they behave correctly they could simply negotiate a lower version of TLS or\nabort the handshake correctly which will result in an SSLError with a more or\nless understandable error message.\nIf they are broken it's down to pure guessing what an error means.\n(Servers reset the connection, send no certificate, time out and whatnot)\nI really doubt we want to do this :smile:\n\n@sigmavirus24\nThe best way is probably the utility `gnutls-cli` which is part of `gnutls`.\nOne can use it like this:\n\n```\ngnutls-cli --priority=NORMAL:-VERS-TLS-ALL:+VERS-TLS1.2 sso.queensu.ca\n```\n\nThis means: Use your normal settings, but only use TLSv1.2 as TLS version, then\nconnect to the given host (port 443 is the default).\n(Full docs are here: http://www.gnutls.org/manual/html_node/Priority-Strings.html, `openssl s_client` has similary functionality)\n",
"@t-8ch how complex is the re-negotiation process? I think the only good idea is to raise an exception when the server is doing something unexpected and re-negotiate only when it explicitly asks us to. If you don't have time to describe the process of negotiation, feel free to link the RFC.\n\nIn the meantime, I'm going to open an issue/feature request for this and close this issue. Thanks for opening @jameh\n",
"Thanks a lot @t-8ch and everyone. Yes I was on Arch Linux x64. Subclassing the HTTPAdapter class as shown on [your blog](http://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/) and indicating to use `ssl.PROTOCOL_TLSv1` got me my 200 response. :)\n",
"@sigmavirus24 I got it a bit wrong. There isn't an explicit negotiation of the TLS version. Instead the client sends a handshake with the highest version it supports the server then can respond with the version it wishes to use.\n\n[RFC 5246 Appendix E](https://tools.ietf.org/html/rfc5246#appendix-E):\n\n```\nA TLS 1.2 client who wishes to negotiate with such older servers will\nsend a normal TLS 1.2 ClientHello, containing { 3, 3 } (TLS 1.2) in\nClientHello.client_version. If the server does not support this\nversion, it will respond with a ServerHello containing an older\nversion number. If the client agrees to use this version, the\nnegotiation will proceed as appropriate for the negotiated protocol.\n```\n\nSo a server timing out is simply broken.\n",
"I think we aren't beholden to do anything smart when servers do something so totally wrong. =D\n",
"@Lukasa but we could support an older version of TLS.\n",
"We do. =D\n\nAs @t-8ch points out, there is an explicit negotiation for TLS version, and we support them all. The problem here was that the server utterly failed to perform that negotiation.\n"
] |
https://api.github.com/repos/psf/requests/issues/1566
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1566/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1566/comments
|
https://api.github.com/repos/psf/requests/issues/1566/events
|
https://github.com/psf/requests/pull/1566
| 18,840,438 |
MDExOlB1bGxSZXF1ZXN0ODAwOTc0NA==
| 1,566 |
Add link to Contributor Friendly tag in README
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-09-01T18:23:06Z
|
2021-09-08T23:07:12Z
|
2013-09-01T18:25:04Z
|
CONTRIBUTOR
|
resolved
|
Also add note that I'm willing to help anyone who is looking to submit a pull request
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1566/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1566/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1566.diff",
"html_url": "https://github.com/psf/requests/pull/1566",
"merged_at": "2013-09-01T18:25:04Z",
"patch_url": "https://github.com/psf/requests/pull/1566.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1566"
}
| true |
[
":chart_with_upwards_trend: \n"
] |
https://api.github.com/repos/psf/requests/issues/1565
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1565/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1565/comments
|
https://api.github.com/repos/psf/requests/issues/1565/events
|
https://github.com/psf/requests/pull/1565
| 18,839,445 |
MDExOlB1bGxSZXF1ZXN0ODAwOTM4NA==
| 1,565 |
Don't absolute import things we've vendored.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 5 |
2013-09-01T17:12:19Z
|
2021-09-08T23:07:35Z
|
2013-09-24T17:39:09Z
|
MEMBER
|
resolved
|
Pointed out by @dstufft in #1560.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1565/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1565/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1565.diff",
"html_url": "https://github.com/psf/requests/pull/1565",
"merged_at": "2013-09-24T17:39:09Z",
"patch_url": "https://github.com/psf/requests/pull/1565.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1565"
}
| true |
[
":+1:\n",
"Unrelated (noob) question to the pull request: Does this really make a difference? Is is better to use relative imports?\n",
"It makes it possible to use SNI when requests is vendored.\n",
"@aesptux That's an excellent question. =)\n\nAn absolute import here will search `sys.path` for a module called `requests`. That works brilliant in 99.9% of cases, which is where people have done `pip install requests`. It'll find the correct module (namely the one we're running from), and then correct find its submodules and grab the right data. All great. The time it doesn't work is when you either don't have `requests` in any of `sys.path`, or you have something else called `requests` ahead of the actual module.\n\nThe specific bug report that triggered this fix is the first one. @dstufft was building a python project that included Requests as part of its source code under a different name, to avoid clashing with a user install of Requests. He was therefore getting requests using something like `import vend_requests` or `import vend_requests as requests`. That works great for almost everything, because they were all using relative imports, so they didn't care about whether the project was actually called `requests`. This line, however, does care.\n\nTo double up the pain, the line is deliberately inside a `try...except` block that catches `ImportError`. That makes this bug particularly tough to find, as we swallow the exception that would have told you about it.\n",
"@dstufft @Lukasa Thanks for the clarification! :+1: \n"
] |
https://api.github.com/repos/psf/requests/issues/1564
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1564/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1564/comments
|
https://api.github.com/repos/psf/requests/issues/1564/events
|
https://github.com/psf/requests/issues/1564
| 18,816,629 |
MDU6SXNzdWUxODgxNjYyOQ==
| 1,564 |
Support of "Transfer-Encoding: gzip"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1447160?v=4",
"events_url": "https://api.github.com/users/jcea/events{/privacy}",
"followers_url": "https://api.github.com/users/jcea/followers",
"following_url": "https://api.github.com/users/jcea/following{/other_user}",
"gists_url": "https://api.github.com/users/jcea/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcea",
"id": 1447160,
"login": "jcea",
"node_id": "MDQ6VXNlcjE0NDcxNjA=",
"organizations_url": "https://api.github.com/users/jcea/orgs",
"received_events_url": "https://api.github.com/users/jcea/received_events",
"repos_url": "https://api.github.com/users/jcea/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcea/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcea",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-08-31T01:34:03Z
|
2021-09-09T01:22:30Z
|
2013-08-31T05:59:56Z
|
NONE
|
resolved
|
Currently, if Requests gets a "Transfer-Encoding: gzip" response, it provides the gzipped content instead of the uncompressed data (as it does for "Content-Encoding: gzip").
I would suggest that Requests should decompress those replies, and that it should send a "TE: gzip" header in requests by default, too.
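For reference, decompressing a gzip-framed body is straightforward with the standard library; a hedged sketch of what a client supporting this transfer coding would have to do after de-chunking (this is not something Requests does for Transfer-Encoding today):

```python
import gzip
import io
import zlib

# Example payload wrapped in gzip framing, as a server using
# "Transfer-Encoding: gzip" would send it (after chunked framing is removed).
payload = b"hello, transfer-encoding"
compressed = gzip.compress(payload)

# zlib with wbits=16+MAX_WBITS understands the gzip container format.
assert zlib.decompress(compressed, 16 + zlib.MAX_WBITS) == payload

# Equivalently, the gzip module can read it from a file-like object.
assert gzip.GzipFile(fileobj=io.BytesIO(compressed)).read() == payload
```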
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1564/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1564/timeline
| null |
completed
| null | null | false |
[
"Some details: https://issues.apache.org/bugzilla/show_bug.cgi?id=52860#c5\n",
"Part of the rational for my argument against this is in the issue you provided. To quote \"Tim B\":\n\n> Few mainstream browsers support gzip transfer-encoding.\n\nrequests never attempts to mimic anything that isn't in a browser and isn't very common. On the other hand, we also strive to mimic curl(/libcurl) and the fact that curl supports it (according to Tim B in the same comment) would be a point in your favor except that Tim then goes on to say:\n\n> …however, there's no webserver that will serve this.\n\nSo I think it is safe to say that we can satisfy the largest number of our uses without it.\n\nTo not give you all of the details, however, would be dishonest of me. The Content-Encoding decompression occurs in `urrlib3` if I remember correctly. The correct place to request this would then be `urrlib3`. \n\nThat said, I'm leaving this open so @Lukasa and @kennethreitz can feel free to correct me if I have mistated anything above.\n",
"Thanks for raising this @jcea!\n\nThis is a 'nice to have' feature: if we can get it we'll be happy, but I don't view its absence as a bug or horribly missing feature. Given that this feature belongs in [`urllib3`](https://github.com/shazow/urllib3), the connection-handling library we use, I suggest you propose it over there. If it gets implemented (or indeed if you implement it), Requests should automatically use it as well. =)\n\nAgain, thanks for suggesting this!\n"
] |
https://api.github.com/repos/psf/requests/issues/1563
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1563/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1563/comments
|
https://api.github.com/repos/psf/requests/issues/1563/events
|
https://github.com/psf/requests/issues/1563
| 18,816,531 |
MDU6SXNzdWUxODgxNjUzMQ==
| 1,563 |
"requests.Session" should allow "timeout" attribute
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1447160?v=4",
"events_url": "https://api.github.com/users/jcea/events{/privacy}",
"followers_url": "https://api.github.com/users/jcea/followers",
"following_url": "https://api.github.com/users/jcea/following{/other_user}",
"gists_url": "https://api.github.com/users/jcea/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jcea",
"id": 1447160,
"login": "jcea",
"node_id": "MDQ6VXNlcjE0NDcxNjA=",
"organizations_url": "https://api.github.com/users/jcea/orgs",
"received_events_url": "https://api.github.com/users/jcea/received_events",
"repos_url": "https://api.github.com/users/jcea/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jcea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcea/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jcea",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-08-31T01:25:22Z
|
2021-09-09T00:01:07Z
|
2013-08-31T01:51:39Z
|
NONE
|
resolved
|
Currently, you can set defaults like headers, authentication, etc. in a "requests.Session" instance, to be used for requests made under that session. I would like to set a default timeout for the session too, to be used as the default timeout for requests made under that session.
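Since the timeout is a per-request value rather than a Session attribute, the usual workaround is to supply a default at call time. A minimal sketch of that pattern (the names `with_default_timeout` and `fake_request` are illustrative, not part of the Requests API):

```python
import functools

def with_default_timeout(request_func, timeout):
    """Wrap a request function so every call gets a default timeout
    unless the caller overrides it explicitly."""
    @functools.wraps(request_func)
    def wrapper(method, url, **kwargs):
        kwargs.setdefault("timeout", timeout)
        return request_func(method, url, **kwargs)
    return wrapper

def fake_request(method, url, **kwargs):
    # Stand-in for session.request; just echoes the timeout it received.
    return kwargs.get("timeout")

get = with_default_timeout(fake_request, timeout=5.0)
print(get("GET", "http://example.com"))               # -> 5.0
print(get("GET", "http://example.com", timeout=1.0))  # -> 1.0
```

In real code the same wrapper could be applied to a `Session.request` bound method instead of the stand-in function.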
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/psf/requests/issues/1563/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1563/timeline
| null |
completed
| null | null | false |
[
"A timeout attribute on a Session is not a logical thing. Consider that the Session can have cookies which are represented (when sent and received from the server) as headers as well as authentication which can happen on a Session. A timeout on a Session, however, would mean that after the value assigned to that attribute the Session would stop functioning. You instead want a timeout on a Request and there is no good way to specify that. This feature has been requested frequently before (see: #1130 for example) and we are not going to add it. Attributes should be logically associative.\n",
"Indeed, we used to have this functionality, but I removed it intentially.\n\nThanks for the suggestion, though! It does make a lot of sense.\n"
] |
https://api.github.com/repos/psf/requests/issues/1562
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1562/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1562/comments
|
https://api.github.com/repos/psf/requests/issues/1562/events
|
https://github.com/psf/requests/pull/1562
| 18,807,666 |
MDExOlB1bGxSZXF1ZXN0Nzk5NTU0Nw==
| 1,562 |
Include docs in source package
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/355646?v=4",
"events_url": "https://api.github.com/users/llonchj/events{/privacy}",
"followers_url": "https://api.github.com/users/llonchj/followers",
"following_url": "https://api.github.com/users/llonchj/following{/other_user}",
"gists_url": "https://api.github.com/users/llonchj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/llonchj",
"id": 355646,
"login": "llonchj",
"node_id": "MDQ6VXNlcjM1NTY0Ng==",
"organizations_url": "https://api.github.com/users/llonchj/orgs",
"received_events_url": "https://api.github.com/users/llonchj/received_events",
"repos_url": "https://api.github.com/users/llonchj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/llonchj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llonchj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/llonchj",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 4 |
2013-08-30T20:17:16Z
|
2021-09-08T21:01:08Z
|
2013-08-31T01:30:19Z
|
NONE
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1562/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1562/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1562.diff",
"html_url": "https://github.com/psf/requests/pull/1562",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1562.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1562"
}
| true |
[
"HI @llonchj, thanks for opening this PR! What's the rationale for doing this? I'm not 100% sold on this being a good idea.\n",
"I agree with you, that has several points of view. There's no PEP about. To\nme makes sense than a source distribution includes documentation.\n\nKindly close the PR if not 100% sure.\n\n2013/8/31 Cory Benfield [email protected]\n\n> HI @llonchj https://github.com/llonchj, thanks for opening this PR!\n> What's the rationale for doing this? I'm not 100% sold on this being a good\n> idea.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/pull/1562#issuecomment-23590816\n> .\n",
"@Lionchj the issue is that by default a python package does not provide any way to access the documentation post installation. Distributions like Debian and Arch package our documentation in a way that it becomes accessible but by default we have no way of doing the same in Python. If the situation were different I would be 100% for this.\n\nThank you for considering this though. It was extremely thoughtful and maybe if you feel adventurous you could send a message to the Distutils mailing list about this. In the meantime I'm going to close this and you can feel free to re-open it when the tools support it in a meaningful way.\n\nAlso, please don't be discouraged to send other pull requests, we're more than happy to have them.\n",
"Im -1 just to keep the file size low\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1561
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1561/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1561/comments
|
https://api.github.com/repos/psf/requests/issues/1561/events
|
https://github.com/psf/requests/issues/1561
| 18,791,485 |
MDU6SXNzdWUxODc5MTQ4NQ==
| 1,561 |
Header keys lose their case
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12653?v=4",
"events_url": "https://api.github.com/users/magicjin/events{/privacy}",
"followers_url": "https://api.github.com/users/magicjin/followers",
"following_url": "https://api.github.com/users/magicjin/following{/other_user}",
"gists_url": "https://api.github.com/users/magicjin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/magicjin",
"id": 12653,
"login": "magicjin",
"node_id": "MDQ6VXNlcjEyNjUz",
"organizations_url": "https://api.github.com/users/magicjin/orgs",
"received_events_url": "https://api.github.com/users/magicjin/received_events",
"repos_url": "https://api.github.com/users/magicjin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/magicjin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/magicjin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/magicjin",
"user_view_type": "public"
}
|
[
{
"color": "e10c02",
"default": false,
"description": null,
"id": 117744,
"name": "Bug",
"node_id": "MDU6TGFiZWwxMTc3NDQ=",
"url": "https://api.github.com/repos/psf/requests/labels/Bug"
},
{
"color": "e102d8",
"default": false,
"description": null,
"id": 117745,
"name": "Planned",
"node_id": "MDU6TGFiZWwxMTc3NDU=",
"url": "https://api.github.com/repos/psf/requests/labels/Planned"
}
] |
closed
| true | null |
[] |
{
"closed_at": null,
"closed_issues": 29,
"created_at": "2013-11-17T11:29:34Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
},
"description": null,
"due_on": null,
"html_url": "https://github.com/psf/requests/milestone/20",
"id": 487518,
"labels_url": "https://api.github.com/repos/psf/requests/milestones/20/labels",
"node_id": "MDk6TWlsZXN0b25lNDg3NTE4",
"number": 20,
"open_issues": 12,
"state": "open",
"title": "3.0.0",
"updated_at": "2024-05-19T18:43:00Z",
"url": "https://api.github.com/repos/psf/requests/milestones/20"
}
| 11 |
2013-08-30T14:43:45Z
|
2021-09-08T13:05:32Z
|
2017-01-09T16:17:38Z
|
NONE
|
resolved
|
it seems that there is a bug in response.headers.
for example, code below:
``` python
import requests
r = requests.get('http://docs.python-requests.org/en/latest/')
print r.headers._store
```
it will display as:
```
{'connection': ('connection', 'keep-alive'),
'content-encoding': ('content-encoding', 'gzip'),
'content-type': ('content-type', 'text/html'),
'date': ('date', 'Fri, 30 Aug 2013 14:29:28 GMT'),
'last-modified': ('last-modified', 'Tue, 27 Aug 2013 15:01:13 GMT'),
'server': ('server', 'nginx/1.1.19'),
'transfer-encoding': ('transfer-encoding', 'chunked'),
'x-cname': ('x-cname', 'docs.python-requests.org'),
'x-deity': ('x-deity', 'Chimera'),
'x-loaded-deity': ('x-loaded-deity', 'Asgard'),
'x-served': ('x-served', 'Nginx')}
```
but actually the headers should be:
```
{'connection': ('Connection', 'keep-alive'),
'content-encoding': ('Content-Encoding', 'gzip'),
'content-type': ('Content-Type', 'text/html'),
'date': ('Date', 'Fri, 30 Aug 2013 14:29:28 GMT'),
'last-modified': ('Last-Modified', 'Tue, 27 Aug 2013 15:01:13 GMT'),
'server': ('Server', 'nginx/1.1.19'),
'transfer-encoding': ('Transfer-Encoding', 'chunked'),
'x-cname': ('X-Cname', 'docs.python-requests.org'),
'x-deity': ('X-Deity', 'Chimera'),
'x-loaded-deity': ('X-Loaded-Deity', 'Asgard'),
'x-served': ('X-Served', 'Nginx')}
```
because the code comments said:
```
Use the lowercased key for lookups, but store the actual key alongside the value.
```
at file structures.py line 72.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/psf/requests/issues/1561/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1561/timeline
| null |
completed
| null | null | false |
[
"Hmm, this is a puzzling one. I see the same behaviour as you, plus this oddity:\n\n``` python\n>>> r.raw.headers\n{'x-served': 'Nginx',\n 'content-encoding': 'gzip',\n 'transfer-encoding': 'chunked',\n 'connection': 'keep-alive',\n 'server': 'nginx/1.1.19',\n 'last-modified': 'Tue, 27 Aug 2013 15:01:13 GMT',\n 'x-cname': 'docs.python-requests.org',\n 'x-loaded-deity': 'Asgard',\n 'date': 'Fri, 30 Aug 2013 14:59:27 GMT',\n 'x-deity': 'Asgard', 'content-type': 'text/html'}\n```\n\nI wonder if urllib3 is lowercasing these.\n",
"It does. See [L244 of urllib3/response.py](https://github.com/shazow/urllib3/blob/master/urllib3/response.py#L244). I'm not totally convinced this is a bug. For the purposes of the comment, the phrase \"the actual key\" should more correctly read \"the key that was added to the dictionary\". The CID is functioning as intended.\n\n@sigmavirus24, I don't feel like urllib3 is doing the wrong thing here. Do you agree?\n",
"It's been doing that for over a year so I would guess that this is perfectly fine otherwise we would have run into issues much sooner. In other words @Lukasa, I agree with you.\n",
"the reason why I report this bug is that I want to write a proxy based on web page, user directly input the URL in the site and web server fetch the page for user, this require the web server return the same headers back to user which get from remote server, of course, I can uppercase the header name by myself, just want to see if we can do this in requests library.\n",
"Well I can confirm one thing: This is not something requests will do for you. We specifically wrap the headers in a case insensitive dictionary. We have no need for the headers to come back cased as they are sent and most of our users don't need them that way either.\n",
"I consider this a bug. Traditionally, requests preserved the server's casing. I'm not sure what happened.\n",
"@kennethreitz Looks like urllib3 changed its behaviour. I'll see if we can fix it up upstream.\n",
"@Lukasa, if you look at the blame on that file though, the change was a year ago. Something quite possibly may have changed but it isn't on that line of that file. It is probably somewhere else and in a change far more recent.\n",
"I'm not convinced we haven't been doing this wrong for a while. The change is definitely in urllib3: take a look at the headers on the urllib3 response. =)\n",
"This bug will probably stay open a little while until we can sort something out in shazow/urllib3#236, so I renamed it to a clear statement of the symptoms. That way, hopefully, we won't get too many reopens of this issue. =D\n",
"It looks like this has been fixed in urllib3 (shazow/urllib3#236) and I can't reproduce it in Requests 2.12.4, so we can probably close this out unless we want to add a test.\r\n\r\n```python\r\n>>> import requests\r\n\r\n>>> r = requests.get('https://httpbin.org/response-headers?CaseDKeY=aValue')\r\n>>> r.headers\r\n{'Content-Length': '93', 'CaseDKeY': 'aValue',... 'Content-Type': 'application/json'}\r\n\r\n>>> r = requests.get('https://httpbin.org/get', headers={'CaseDKeY': 'aValue'})\r\n>>> r.request.headers\r\n{'Connection': 'keep-alive', 'CaseDKeY': 'aValue',... 'User-Agent': 'python-requests/2.12.4'}\r\n```"
] |
https://api.github.com/repos/psf/requests/issues/1560
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1560/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1560/comments
|
https://api.github.com/repos/psf/requests/issues/1560/events
|
https://github.com/psf/requests/issues/1560
| 18,784,039 |
MDU6SXNzdWUxODc4NDAzOQ==
| 1,560 |
SNI support doesn't support vendoring
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/145979?v=4",
"events_url": "https://api.github.com/users/dstufft/events{/privacy}",
"followers_url": "https://api.github.com/users/dstufft/followers",
"following_url": "https://api.github.com/users/dstufft/following{/other_user}",
"gists_url": "https://api.github.com/users/dstufft/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dstufft",
"id": 145979,
"login": "dstufft",
"node_id": "MDQ6VXNlcjE0NTk3OQ==",
"organizations_url": "https://api.github.com/users/dstufft/orgs",
"received_events_url": "https://api.github.com/users/dstufft/received_events",
"repos_url": "https://api.github.com/users/dstufft/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dstufft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dstufft/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dstufft",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 6 |
2013-08-30T11:59:52Z
|
2021-09-09T01:22:21Z
|
2013-09-25T09:24:45Z
|
CONTRIBUTOR
|
resolved
|
The import in requests/__init__.py does `from requests.packages.urllib3.contrib import pyopenssl` which means it doesn't work when requests is vendored.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1560/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1560/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this issue @dstufft! It's not at all clear to me why that import wouldn't work when requests is vendored, though. Can you elaborate on what the problem is?\n",
"I know what the problem is, no worries.\n",
"Just needs to be an absolute relative import.\n",
"When requests is vendored it's top level name isn't `requests`. For instance I'm vendoring it in pip and it's importable as `pip.vendor.requests`.\n",
"Yea what @kennethreitz said.\n",
"Oh, missed that entirely. =D\n"
] |
https://api.github.com/repos/psf/requests/issues/1559
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1559/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1559/comments
|
https://api.github.com/repos/psf/requests/issues/1559/events
|
https://github.com/psf/requests/issues/1559
| 18,769,938 |
MDU6SXNzdWUxODc2OTkzOA==
| 1,559 |
HTTP transport is case sensitive, it won't handle slightly malformed http URLs.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/181693?v=4",
"events_url": "https://api.github.com/users/offbyone/events{/privacy}",
"followers_url": "https://api.github.com/users/offbyone/followers",
"following_url": "https://api.github.com/users/offbyone/following{/other_user}",
"gists_url": "https://api.github.com/users/offbyone/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/offbyone",
"id": 181693,
"login": "offbyone",
"node_id": "MDQ6VXNlcjE4MTY5Mw==",
"organizations_url": "https://api.github.com/users/offbyone/orgs",
"received_events_url": "https://api.github.com/users/offbyone/received_events",
"repos_url": "https://api.github.com/users/offbyone/repos",
"site_admin": true,
"starred_url": "https://api.github.com/users/offbyone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/offbyone/subscriptions",
"type": "User",
"url": "https://api.github.com/users/offbyone",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-08-30T03:55:55Z
|
2021-09-09T01:22:30Z
|
2013-08-30T04:09:13Z
|
NONE
|
resolved
|
When downloading a URL whose scheme is not lowercased, the connection adapter can't be found — for example, _Http_ URLs:
```
Traceback (most recent call last):
File "/Users/offby1/projects/rscp/rscp/downloader.py", line 175, in download_images
self.download_image_url(image_url, target_file)
File "/Users/offby1/projects/rscp/rscp/utils.py", line 45, in f_retry
return f(*args, **kwargs)
File "/Users/offby1/projects/rscp/rscp/downloader.py", line 200, in download_image_url
return self._do_download_image_url(url, target)
File "/Users/offby1/projects/rscp/rscp/downloader.py", line 208, in _do_download_image_url
response = self.session.get(url)
File "/Users/offby1/virtualenv/rscp/lib/python2.7/site-packages/requests/sessions.py", line 347, in get
return self.request('GET', url, **kwargs)
File "/Users/offby1/virtualenv/rscp/lib/python2.7/site-packages/requests/sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "/Users/offby1/virtualenv/rscp/lib/python2.7/site-packages/requests/sessions.py", line 433, in send
adapter = self.get_adapter(url=request.url)
File "/Users/offby1/virtualenv/rscp/lib/python2.7/site-packages/requests/sessions.py", line 474, in get_adapter
raise InvalidSchema("No connection adapters were found for '%s'" % url)
InvalidSchema: No connection adapters were found for 'Http://a-host.com/some-file'
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/181693?v=4",
"events_url": "https://api.github.com/users/offbyone/events{/privacy}",
"followers_url": "https://api.github.com/users/offbyone/followers",
"following_url": "https://api.github.com/users/offbyone/following{/other_user}",
"gists_url": "https://api.github.com/users/offbyone/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/offbyone",
"id": 181693,
"login": "offbyone",
"node_id": "MDQ6VXNlcjE4MTY5Mw==",
"organizations_url": "https://api.github.com/users/offbyone/orgs",
"received_events_url": "https://api.github.com/users/offbyone/received_events",
"repos_url": "https://api.github.com/users/offbyone/repos",
"site_admin": true,
"starred_url": "https://api.github.com/users/offbyone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/offbyone/subscriptions",
"type": "User",
"url": "https://api.github.com/users/offbyone",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1559/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1559/timeline
| null |
completed
| null | null | false |
[
"Oh, FFS, it's fixed in devo. Never mind, obviously I've got a crufty version lying around.\n"
] |
https://api.github.com/repos/psf/requests/issues/1558
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1558/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1558/comments
|
https://api.github.com/repos/psf/requests/issues/1558/events
|
https://github.com/psf/requests/issues/1558
| 18,638,777 |
MDU6SXNzdWUxODYzODc3Nw==
| 1,558 |
Make instances of PreparedRequest class pickable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/92239?v=4",
"events_url": "https://api.github.com/users/piotr-dobrogost/events{/privacy}",
"followers_url": "https://api.github.com/users/piotr-dobrogost/followers",
"following_url": "https://api.github.com/users/piotr-dobrogost/following{/other_user}",
"gists_url": "https://api.github.com/users/piotr-dobrogost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piotr-dobrogost",
"id": 92239,
"login": "piotr-dobrogost",
"node_id": "MDQ6VXNlcjkyMjM5",
"organizations_url": "https://api.github.com/users/piotr-dobrogost/orgs",
"received_events_url": "https://api.github.com/users/piotr-dobrogost/received_events",
"repos_url": "https://api.github.com/users/piotr-dobrogost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piotr-dobrogost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piotr-dobrogost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piotr-dobrogost",
"user_view_type": "public"
}
|
[
{
"color": "02e10c",
"default": false,
"description": null,
"id": 76800,
"name": "Feature Request",
"node_id": "MDU6TGFiZWw3NjgwMA==",
"url": "https://api.github.com/repos/psf/requests/labels/Feature%20Request"
},
{
"color": "e102d8",
"default": false,
"description": null,
"id": 117745,
"name": "Planned",
"node_id": "MDU6TGFiZWwxMTc3NDU=",
"url": "https://api.github.com/repos/psf/requests/labels/Planned"
},
{
"color": "0b02e1",
"default": false,
"description": null,
"id": 191274,
"name": "Contributor Friendly",
"node_id": "MDU6TGFiZWwxOTEyNzQ=",
"url": "https://api.github.com/repos/psf/requests/labels/Contributor%20Friendly"
}
] |
closed
| true | null |
[] | null | 18 |
2013-08-27T22:38:40Z
|
2021-09-08T14:00:39Z
|
2016-11-11T08:11:33Z
|
NONE
|
resolved
|
See [How can I save a python HTTP requests from python-requets so I can execute it later?](http://stackoverflow.com/q/18462749/95735) question on Stack Overflow.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1558/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1558/timeline
| null |
completed
| null | null | false |
[
"Is that the only remaining class that isn't Pickleable? If so, I'll work on it for 2.0.\n",
"Looks like it might be. I'd love this in 2.0. =)\n",
":+1: If we make a patch for the current version, would you accept it? -- I want to improve the python-jira library without having to use a non-released version of requests.\n",
"@ssbarnea To the best of my knowledge we have no plans to release any further 1.2.X point releases. The next planned release is 2.0.0. I'm potentially open to you making a patch against master, but I'd want to check with @kennethreitz first.\n",
"Go for it\n\n```\n--\n```\n\nKenneth Reitz\n\nOn Wed, Aug 28, 2013 at 7:34 AM, Cory Benfield [email protected]\nwrote:\n\n> ## @ssbarnea To the best of my knowledge we have no plans to release any further 1.2.X point releases. The next planned release is 2.0.0. I'm potentially open to you making a patch against master, but I'd want to check with @kennethreitz first.\n> \n> Reply to this email directly or view it on GitHub:\n> https://github.com/kennethreitz/requests/issues/1558#issuecomment-23407592\n",
"Sweet, feel free to patch against master then. =)\n",
"Ok, looking at this again we don't currently make any of the trio of `Request`, `PreparedRequest` and `Response` pickleable. `Response` is potentially difficult to pickle (file descriptors, underlying urllib3 response object isn't pickleable), but the other two are the things worth making pickleable anyway. We should shoot for both of them I think.\n",
"I'm not (quite frankly) convinced that a `Request` object should be entirely pickleable. It is an ephemeral object that we use to construct the `PreparedRequest`. `PreparedRequest` objects do not store a reference to it, so I'm 90% sure it does not need to be pickleable. We should probably make it pickleable anyway though it is a seriously minor introduction of code and it will cover us for the future.\n\nI'm wondering if @shazow would be opposed to making the `urllib3` response object pickleable. If he were to approve, then I think it would make sense to also make the Response object pickleable. \n",
"The biggest issue with pickling `PreparedRequest`s is pickling hooks. We could be super-clever and try to pickle the function objects but I'm not convinced that's a great plan. Alternative suggestions?\n",
"Hrm, yea not sure if it makes a lot of sense. Especially since the Response object has a lot of state within it, state which is network-dependent too (is the socket still open? has the body been read? etc).\n\nWhat would it take to make urllib3's Response object pickleable?\n",
"Maybe I should explain better what I want to achieve and hopefully we will\nfind the best wait to implement it.\n\nI want to save issue.update requests so I can call them later. Why? Two\nreasons: speed and reliability, Jira may be down for a while and I cannot\nimplement retry in all the scripts but I could save failed requests for\nlater execution via a cron job. If this behaviour could be enabled without\nchanging existing scripts, the better.\n\nOn Sunday, 1 September 2013, Andrey Petrov wrote:\n\n> Hrm, yea not sure if it makes a lot of sense. Especially since the\n> Response object has a lot of state within it, state which is\n> network-dependent too (is the socket still open? has the body been read?\n> etc).\n> \n> What would it take to make urllib3's Response object pickleable?\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1558#issuecomment-23631177\n> .\n\n## \n\n## \n\nCheers,\nSorin\n",
"I think it's reasonable to have `PreparedRequest` be pickleable without the rest. Sounds like a sensible edge case to me.\n",
"@Lukasa Using cloudpickle from piclouds cloud library(https://pypi.python.org/pypi/cloud) seems like a good option to me. It supports a lot more types than pythons pickle module, including functions (http://docs.picloud.com/client_pitfall.html#nonsupported-objects lists the exceptions). It would require adding a dependency on cloud, but we could just copy the source, the same way as urllib3 and chardet since the serialization code is not likely to change.\n\nIf there are no objections, I can implement this and send in a pull request.\n",
"We only vendor absolutely necessary dependencies and we try to keep that \nnumber as low as possible. As I see it, this is not a high priority for us, \nnor is vendoring a dependency for cloudpickle quite the approach I would like \nto take at the moment.\n",
"So 2.7.0 is imminent. Looking at this there are only one corner case that I think are currently problematic.\n\nOn a prepared request, the only thing I think can hurt us is the case when the `body` attribute is a file object. Is there a specific hook that we're concerned about not being pickleable?\n",
"i'd like to work on this ticket, if no one else plans to. haven't contributed to requests before. \n",
"I thought we had fixed this, but I guess not.\n",
"With Keyan's work merged in with #3489, I think we have this tested as much as we can. Due to the inability to pickle file-objects and generators, I don't believe PreparedRequest will ever be 100% pickleable but we have the common cases tested now.\n"
] |
https://api.github.com/repos/psf/requests/issues/1557
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1557/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1557/comments
|
https://api.github.com/repos/psf/requests/issues/1557/events
|
https://github.com/psf/requests/issues/1557
| 18,629,938 |
MDU6SXNzdWUxODYyOTkzOA==
| 1,557 |
[FROM SESSION-FUTURES] Multiple Redirects
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1316773?v=4",
"events_url": "https://api.github.com/users/akavi1015/events{/privacy}",
"followers_url": "https://api.github.com/users/akavi1015/followers",
"following_url": "https://api.github.com/users/akavi1015/following{/other_user}",
"gists_url": "https://api.github.com/users/akavi1015/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/akavi1015",
"id": 1316773,
"login": "akavi1015",
"node_id": "MDQ6VXNlcjEzMTY3NzM=",
"organizations_url": "https://api.github.com/users/akavi1015/orgs",
"received_events_url": "https://api.github.com/users/akavi1015/received_events",
"repos_url": "https://api.github.com/users/akavi1015/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/akavi1015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akavi1015/subscriptions",
"type": "User",
"url": "https://api.github.com/users/akavi1015",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 13 |
2013-08-27T19:51:38Z
|
2021-09-09T01:22:31Z
|
2013-08-29T13:11:22Z
|
NONE
|
resolved
|
https://github.com/ross/requests-futures/issues/6
The above link has the issue in it.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1557/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1557/timeline
| null |
completed
| null | null | false |
[
"I see no issue in requests proper described there. It looks like you're using requests-futures incorrectly unless there are details you didn't update after trying to get the response from the futures object.\n",
"I do not think I am doing anything incorrectly with requests-futures. I really just can't conceptualize redirecting from page2 to page3 to page5 considering page3 is a temp page. The way to do this (according to ross) is through requests.\n",
"The expected behaviour is as follows:\n\nYou send a POST request to `/index.htm?pageNumber=2`. The server responds with a message that looks like this:\n\n```\nHTTP/1.1 302 Found\nLocation: /index.htm?pageNumber=3\n```\n\nWhen Requests sees the 302 it'll look for the `Location` header, and then kick off a new request (and will turn the verb into GET) to the `/index.htm?pageNumber=3` URL. The server will respond again:\n\n```\nHTTP/1.1 302 Found\nLocation: /index.htm?pageNumber=5\n```\n\nRequests will again follow the redirect, and will finally get a positive response:\n\n```\nHTTP/1.1 200 OK\n...snip...\n```\n\nTo confirm that this happened, please show us what `r.history` and `r.url` contain when you've finished the request.\n",
"`r.history` and `r.url` output (of which the URL is a combination of the pageNumber=2 URL and my payload):\n\n[]\nhttps://purchases.moneymappress.com/load1withcrosssell/ECHNN400/index.htm?pageNumber=2&formComponentsMap%5B%27order%27%5D.addressLine1=test&formComponentsMap%5B%27order%27%5D.shipToCity=&formComponentsMap%5B%27order%27%5D.addressLine2=test&_formComponentsMap%5B%27order%27%5D.hasSameShipToAddress=&formComponentsMap%5B%27order%27%5D.city=test&formComponentsMap%5B%27order%27%5D.faxNumber=&formComponentsMap%5B%27order%27%5D.shipToAddressLine1=&formComponentsMap%5B%27order%27%5D.shipToPostalCode=&formComponentsMap%5B%27order%27%5D.confirmShipToEmailAddress=&formComponentsMap%5B%27order%27%5D.shipToStateId=&formComponentsMap%5B%27order%27%5D.whichStateInput=dropDown&formComponentsMap%5B%27paymentOption%27%5D.creditCardSecurityCode=123&formComponentsMap%5B%27paymentOption%27%5D.effortPaymentOptionId=&formComponentsMap%5B%27order%27%5D.stateId=33&formComponentsMap%5B%27order%27%5D.shipToEmailAddress=&formComponentsMap%5B%27order%27%5D.emailAddress=akwebqa%2Bfuturessession0%40gmail.com&formComponentsMap%5B%27order%27%5D.password=123456&formComponentsMap%5B%27paymentOption%27%5D.creditCardTypeId=3&formComponentsMap%5B%27order%27%5D.shipToFirstName=&formComponentsMap%5B%27order%27%5D.wasPasswordFieldShown=true&formComponentsMap%5B%27order%27%5D.shipToCountryId=1&formComponentsMap%5B%27order%27%5D.shipToAddressLine2=&_formComponentsMap%5B%27order%27%5D.hasAgreedToBeContactedByAffiliates=&formComponentsMap%5B%27order%27%5D.phoneNumber=&formComponentsMap%5B%27paymentOption%27%5D.maskedCreditCardNumber=4444333322221111&formComponentsMap%5B%27offer%27%5D.choiceId=%27AGORA%27&formComponentsMap%5B%27order%27%5D.confirmEmailAddress=akwebqa%2Bfuturessession0%40gmail.com&formComponentsMap%5B%27order%27%5D.postalCode=21202&formComponentsMap%5B%27order%27%5D.whichShipToStateInput=dropDown&formComponentsMap%5B%27order%27%5D.shipToLastName=&formComponentsMap%5B%27offer%27%5D.pr
icingOptionId=%7B%0D%0A++++++++++++__onCountryChange__%3A+function%28countryId%2C+countryName&formComponentsMap%5B%27order%27%5D.lastName=test&formComponentsMap%5B%27paymentOption%27%5D.creditCardExpirationYear=2014&formComponentsMap%5B%27paymentOption%27%5D.creditCardExpirationMonth=3&formComponentsMap%5B%27order%27%5D.hasSameShipToAddress=true&formComponentsMap%5B%27order%27%5D.firstName=test&formComponentsMap%5B%27order%27%5D.countryId=1\n",
"Right, so you're not getting a 302 back. What's `r.status_code`?\n",
"200\n",
"Right, this is not Requests' fault. =)\n\nThe server is not providing any suggestion that you should be redirected. This will be because something about the request you're sending using Requests is different to the one you sent via Chrome. You need to compare the two. Are you familiar with Wireshark/tcpdump?\n",
"One more option is set up proxy and monitor the request and response.\nOn Aug 28, 2013 8:52 PM, \"Cory Benfield\" [email protected] wrote:\n\n> Right, this is not Requests' fault. =)\n> \n> The server is not providing any suggestion that you should be redirected.\n> This will be because something about the request you're sending using\n> Requests is different to the one you sent via Chrome. You need to compare\n> the two. Are you familiar with Wireshark/tcpdump?\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1557#issuecomment-23423137\n> .\n",
"Would those options still work if the website I am trying to access is on a SSL protocol (I am not familiar with Wireshark/tcpdump)?\n",
"Ah, then it's harder. At this point a man-in-the-middle proxy is likely to be your best bet.\n",
"So I changed my post to post to pageNumber=5 instead of 2 and I am now getting a `r.history` of 302 but a `r.url` of https://purchases.moneymappress.com/load1withcrosssell/ECHNN400/index.htm?pageNumber=2\n",
"Yup, so that indicates that you're being redirected to pageNumber=2. You should sit back and attempt to work out how the URI scheme for this website works.\n",
"With the most recent discussion, I am certain this is not a bug in requests or how we handle redirects. That said, if you need further advice please post your questions to StackOverflow\n"
] |
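The redirect chain Lukasa describes in the thread above (302 → 302 → 200, recorded in `r.history`, with `r.url` reflecting the final location) can be reproduced end-to-end with a throwaway local server. This is an illustrative sketch, not part of the issue; it assumes `requests` is installed, and the handler/port details are my own.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests  # assumes requests is installed


class Redirector(BaseHTTPRequestHandler):
    """302-redirects /2 -> /1 -> /0, then serves a 200 at /0."""

    def do_GET(self):
        hops = int(self.path.strip("/") or 0)
        if hops > 0:
            self.send_response(302)
            self.send_header("Location", "/%d" % (hops - 1))
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Length", "2")
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=server.serve_forever, daemon=True).start()

base = "http://127.0.0.1:%d" % server.server_port
r = requests.get(base + "/2")

# Each intermediate 302 is kept in r.history; r.url is the final URL.
assert [resp.status_code for resp in r.history] == [302, 302]
assert r.status_code == 200 and r.url.endswith("/0")

server.shutdown()
```

If `r.history` is empty and `r.status_code` is 200, as in the report above, the server simply never sent a redirect for that request.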
https://api.github.com/repos/psf/requests/issues/1556
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1556/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1556/comments
|
https://api.github.com/repos/psf/requests/issues/1556/events
|
https://github.com/psf/requests/pull/1556
| 18,611,901 |
MDExOlB1bGxSZXF1ZXN0Nzg4OTY5Nw==
| 1,556 |
Added link to StackOverflow questions related to python-requests. This s...
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/102495?v=4",
"events_url": "https://api.github.com/users/ssbarnea/events{/privacy}",
"followers_url": "https://api.github.com/users/ssbarnea/followers",
"following_url": "https://api.github.com/users/ssbarnea/following{/other_user}",
"gists_url": "https://api.github.com/users/ssbarnea/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ssbarnea",
"id": 102495,
"login": "ssbarnea",
"node_id": "MDQ6VXNlcjEwMjQ5NQ==",
"organizations_url": "https://api.github.com/users/ssbarnea/orgs",
"received_events_url": "https://api.github.com/users/ssbarnea/received_events",
"repos_url": "https://api.github.com/users/ssbarnea/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ssbarnea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssbarnea/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ssbarnea",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-08-27T14:43:51Z
|
2021-09-08T21:01:08Z
|
2013-08-27T15:03:33Z
|
CONTRIBUTOR
|
resolved
|
...hould be the default support way for the library, as clearly most questions are supposed to be programming related.
Signed-off-by: Sorin Sbarnea [email protected]
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1556/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1556/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1556.diff",
"html_url": "https://github.com/psf/requests/pull/1556",
"merged_at": "2013-08-27T15:03:33Z",
"patch_url": "https://github.com/psf/requests/pull/1556.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1556"
}
| true |
[
"Looks good to me!\n",
":cake:\n"
] |
https://api.github.com/repos/psf/requests/issues/1555
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1555/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1555/comments
|
https://api.github.com/repos/psf/requests/issues/1555/events
|
https://github.com/psf/requests/issues/1555
| 18,598,630 |
MDU6SXNzdWUxODU5ODYzMA==
| 1,555 |
Provide a quick link to http://stackoverflow.com/questions/tagged/python-requests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/102495?v=4",
"events_url": "https://api.github.com/users/ssbarnea/events{/privacy}",
"followers_url": "https://api.github.com/users/ssbarnea/followers",
"following_url": "https://api.github.com/users/ssbarnea/following{/other_user}",
"gists_url": "https://api.github.com/users/ssbarnea/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ssbarnea",
"id": 102495,
"login": "ssbarnea",
"node_id": "MDQ6VXNlcjEwMjQ5NQ==",
"organizations_url": "https://api.github.com/users/ssbarnea/orgs",
"received_events_url": "https://api.github.com/users/ssbarnea/received_events",
"repos_url": "https://api.github.com/users/ssbarnea/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ssbarnea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssbarnea/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ssbarnea",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 4 |
2013-08-27T10:19:34Z
|
2021-09-09T01:22:32Z
|
2013-08-27T15:04:06Z
|
CONTRIBUTOR
|
resolved
|
It would be extremely useful to provide a quick link from the requests homepage to http://stackoverflow.com/questions/tagged/python-requests
homepage: http://docs.python-requests.org/en/latest/ - see Useful Links
Currently the homepage contains 3 quick links but none of them is linking to a place where people can ask questions and get answers.
Using the bug tracker for this would be clearly against its purpose :)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1555/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1555/timeline
| null |
completed
| null | null | false |
[
"Feel free to send a pull request because you're 100% correct. :)\n",
":+1:\n",
"https://github.com/kennethreitz/requests/pull/1556\n",
"Job's a good'un! Thanks so much! :star:\n"
] |
https://api.github.com/repos/psf/requests/issues/1554
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1554/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1554/comments
|
https://api.github.com/repos/psf/requests/issues/1554/events
|
https://github.com/psf/requests/issues/1554
| 18,569,398 |
MDU6SXNzdWUxODU2OTM5OA==
| 1,554 |
reading multipart/x-mixed-replace stream
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/933231?v=4",
"events_url": "https://api.github.com/users/wiesson/events{/privacy}",
"followers_url": "https://api.github.com/users/wiesson/followers",
"following_url": "https://api.github.com/users/wiesson/following{/other_user}",
"gists_url": "https://api.github.com/users/wiesson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wiesson",
"id": 933231,
"login": "wiesson",
"node_id": "MDQ6VXNlcjkzMzIzMQ==",
"organizations_url": "https://api.github.com/users/wiesson/orgs",
"received_events_url": "https://api.github.com/users/wiesson/received_events",
"repos_url": "https://api.github.com/users/wiesson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wiesson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wiesson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wiesson",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 17 |
2013-08-26T19:48:19Z
|
2018-10-18T14:33:23Z
|
2013-08-26T22:42:06Z
|
NONE
|
resolved
|
Hi, I'm trying to fetch multipart/x-mixed-replace data with the [Streaming Requests](http://docs.python-requests.org/en/latest/user/advanced/#streaming-requests) example. As I read [here](http://en.wikipedia.org/wiki/Push_technology#HTTP_server_push):
> The content type multipart/x-mixed-replace was developed as part of a technology to emulate server push and streaming over HTTP.
So how to fetch this correctly?
``` python
import requests
settings = { 'interval': '1000', 'count':'20' }
url = 'http://agent.mtconnect.org/sample'
r = requests.get(url, params=settings, stream=True)
for line in r.iter_lines():
if line:
print line
```
the content:
``` xml
--0910dc57f12aa1452fb67c74b7b2ddce
Content-type: text/xml
Content-length: 6299
... long xml ...
--e8f953249be4753b6f93d61bde0c970a
Content-type: text/xml
Content-length: 531
<?xml version="1.0" encoding="UTF-8"?>
<MTConnectStreams xmlns:m="urn:mtconnect.org:MTConnectStreams:1.2" xmlns="urn:mtconnect.org:MTConnectStreams:1.2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:mtconnect.org:MTConnectStreams:1.2 /schemas/MTConnectStreams_1.2.xsd">
<Header creationTime="2013-08-26T19:29:34Z" sender="wiesson.local" instanceId="1377530136" version="1.2.0.21" bufferSize="131072" nextSequence="18234" firstSequence="1" lastSequence="18233"/>
<Streams/>
</MTConnectStreams>
```
So, I'm not able to separate header and content easily. And suggestions? (Normally I don't use github to post my _problem_ but here I can't help myself ;))
A simple curl in the terminal:
``` bash
curl http://agent.mtconnect.org/sample\?interval\=0\&count\=50
```
grabs the data.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/933231?v=4",
"events_url": "https://api.github.com/users/wiesson/events{/privacy}",
"followers_url": "https://api.github.com/users/wiesson/followers",
"following_url": "https://api.github.com/users/wiesson/following{/other_user}",
"gists_url": "https://api.github.com/users/wiesson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wiesson",
"id": 933231,
"login": "wiesson",
"node_id": "MDQ6VXNlcjkzMzIzMQ==",
"organizations_url": "https://api.github.com/users/wiesson/orgs",
"received_events_url": "https://api.github.com/users/wiesson/received_events",
"repos_url": "https://api.github.com/users/wiesson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wiesson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wiesson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wiesson",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1554/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1554/timeline
| null |
completed
| null | null | false |
[
"The response is broken into parts, delimited by strings of the form `--<stuff>`. So, read data until you find one of those, then join all the previous lines up with `\\r\\n` characters. You now have a single part, which contains the headers and then the body. Split on the string `\\r\\n\\r\\n`. The first part is the headers: all the rest is the body.\n\n``` python\npart = []\n\nfor line in r.iter_lines():\n if line.startswith('--'): # start of new chunk\n if not part:\n continue # Handle empty parts\n\n strpart = '\\r\\n'.join(part)\n headers, body = strpart.split('\\r\\n\\r\\n', 1)\n\n do_stuff(headers, body)\n\n # Clean up for the next chunk.\n part = []\n elif line: # Portion of current chunk.\n part.append(line)\n```\n\nObviously, clean up the code and break some of it into functions. =)\n",
"Wow, thanks @Lukasa ... I tried your code but it seems that I don't understand it ;)\n\n``` python\nstrpart = '\\r\\n'.join(part)\n```\n\nwill join all line-breaks? The desired XML payload itself also contains \"\\r\\n\"\n\n``` xml\n['Content-type: text/xml\\r\\nContent-length: 4117\\r\\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\r\\n\n<MTConnectStreams xmlns:m=\"urn:mtconnect.org:MTConnectStreams:1.2\" xmlns=\"urn:mtconnect.org:MTConnectStreams:1.2\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:mtconnect.org:MTConnectStreams:1.2 http://www.mtconnect.org/schemas/MTConnectStreams_1.2.xsd\">\\r\\n \n\n... more xml with \\r\\n ...\n```\n\nso I helped myself with: \n\n``` python\nstrpart = '\\r\\n'.join(part[2:])\nheaders = ''.join(part[:2])\nbody = ''.join(strpart.split('\\r\\n\\r\\n',1))\n```\n\nworks fine for me :) \n",
"Okay, sorry for reopening - the problem: It feels like it is not reading to the end? I've added a screenshot to demonstrate what I mean ;) \n\nLeft side is requests with iter_lines(), the boundaries are not visible due to if line.startswith('--'),\nRight side is a simple curl in the terminal.\n\n\nSo maybe I have to use iter_content and read the body \"manually\" ?\n",
"Nope. :smile:\n\n`Response.iter_lines()` takes an optional `chunk_size` parameter, as you can [see here](http://docs.python-requests.org/en/latest/api/#requests.Response.iter_lines). The `iter_lines()` call will block until that much data has been read from the wire, at which point it will return the next set of data. If a response length is not a multiple of that value, you will not get it all until more data arrives.\n\nYou can get around this, at the cost of some inefficiency, by setting `chunk_size=1`.\n\nNB: The same limitation applies to `iter_content()`.\n",
"Thanks a lot :) I read the docs but I haven't noticed that `chunk_size` parameter :/ I will play around and test some settings. \n",
"Sorry for coming back to this problem - is it possible to change the content-length / chunk_size while reading a stream? As stated in the following snippet, I thought about reading the first element with chunk_size 1, then figuring out the content-length for each interval (as stated in the figure https://f.cloud.github.com/assets/933231/1034192/eed76a4c-0f11-11e3-8834-072193784915.png) . Chunk-size 1 is indeed very slow as you mentioned before.\n\n``` python\nurl = 'agent.mtconnect.org/sample?interval=0'\nif __name__ == '__main__':\n chunk_size = 1\n chunk_flag = True\n util = XMLBuilder()\n\n requests_stream = requests.get(url, stream=True, timeout=4)\n for line in requests_stream.iter_lines(chunk_size, decode_unicode=None):\n print line\n\n if line.startswith('Content-length') and chunk_flag:\n strpart = line.split(': ')\n chunk_size = int(strpart[1]) # + 83\n chunk_flag = False\n print \"-- Content-length: %d --\" % chunk_size\n #: something like:\n #: requests_stream.iter_lines(chunk_size)\n\n elif line.endswith('</MTConnectStreams>'):\n chunk_flag = True\n\n```\n",
"Absolutely you can.\n\nThe `iter_content()` function returns a generator. If you stop iterating over it before you exhaust the generator, then call the `iter_content()` function again, you get a new generator with the new size that continues from the same point.\n",
"@Lukasa beat me to this, but I was going to include the caveat that questions should be posted to [StackOverflow](http://stackoverflow.com/questions/tagged/python-requests) in the future.\n",
"@sigmavirus24 sorry for that, I will post my problems on stackoverflow in the future. :) @Lukasa, thanks for the quick answer!\n",
"No worries @wiesson . I had the answer ready to go too. I don't mind helping, which is why I keep SO open in a tab 24/7. :)\n",
"@Lukasa, here is my approach: Any comments or suggestions for improvements? No clue how to catch a disconnect :)\n\n``` python\nimport requests\nfrom lxml import etree\n\nif __name__ == '__main__':\n content_length = 512\n flag = True\n requests_stream = requests.get('http://agent.mtconnect.org/sample?interval=0', stream=True, timeout=2)\n while flag:\n for line in requests_stream.iter_lines(3, decode_unicode=None):\n if line.startswith('Content-length'):\n content_length = int(''.join(x for x in line if x.isdigit()))\n break\n\n for xml in requests_stream.iter_content(content_length):\n root = etree.fromstring(xml.lstrip('\\r\\n'))\n ns = dict(m = 'urn:mtconnect.org:MTConnectStreams:1.2')\n header = root.find('.//m:Header', namespaces=ns)\n nextSeq = header.attrib['nextSequence']\n elements = root.findall('.//m:Events/*', namespaces=ns)\n for e in elements:\n print \"%s is %s %s\" % (e.tag.lstrip('{urn:mtconnect.org:MTConnectStreams:1.2}'), e.text, e.attrib)\n break\n```\n",
"@wiesson the first thing I see is that you never set `flag = False`, so your loop will go forever. If that's what you intend you could simplify it to:\n\n``` python\nwhile True:\n # ...\n```\n\nAs for waiting for a disconnect, you can probably simulate that by looking at the tests in [shazow/urllib3](/shazow/urllib3). I believe there are a few dummy servers which simulate dropped connections so you could see how your code responds.\n",
"Ok, thanks for the comment, a simple break will end the loop, too. But how do I catch an exception in the `requests_stream.iter_lines` or `requests_stream.iter_content` loop? \n",
"Are you explicitly worried about the generator throwing an exception?\n",
"Hmm, I'm just looking for an exception that catches an event indicating there has been no new \"message\" or event for x seconds. \n",
"You can set a timeout, which should throw an exception if you have to wait that long for data.\n",
"Hi @Lukasa !\r\nI'm trying to download multipart/x-mixed-replace.\r\nWill you help me? ;)\r\n\r\n\r\nthe content:\r\n`\r\n--myboundary\r\nContent-Type: text/plain\r\nContent-Length: 2497\r\n\r\n****text****\r\n\r\n--myboundary\r\nContent-Type: image/jpeg\r\nContent-Length: 591666\r\n\r\n****image data******\r\n`\r\n"
] |
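The boundary-splitting approach discussed in the thread above (collect lines until a `--boundary` marker, then split the part into headers and body on the blank line) can be sketched as a standalone helper. The function name and the byte-oriented handling are my own assumptions; `Response.iter_lines()` with `stream=True` yields lines of this shape.

```python
def parse_mixed_replace(lines):
    """Split a multipart/x-mixed-replace stream into (headers, body) parts.

    `lines` is any iterable of byte lines with line endings stripped,
    e.g. Response.iter_lines(). Boundary lines are assumed to start with b'--'.
    """
    part = []
    for line in lines:
        if line.startswith(b"--"):  # boundary: flush whatever we collected
            if part:
                blob = b"\r\n".join(part)
                headers, _, body = blob.partition(b"\r\n\r\n")
                yield headers, body
                part = []
        else:
            part.append(line)  # keep blank lines: they separate headers/body
    if part:  # trailing part with no closing boundary
        blob = b"\r\n".join(part)
        headers, _, body = blob.partition(b"\r\n\r\n")
        yield headers, body


# Tiny demo on a hand-built stream:
demo = [b"--xyz", b"Content-type: text/plain", b"Content-length: 2", b"", b"hi"]
for hdrs, body in parse_mixed_replace(demo):
    print(hdrs.decode(), "->", body.decode())
```

Keeping empty lines in `part` is what makes the `\r\n\r\n` header/body split work; dropping them (as in the first snippet in the thread) forces the index-based workaround shown later.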
https://api.github.com/repos/psf/requests/issues/1553
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1553/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1553/comments
|
https://api.github.com/repos/psf/requests/issues/1553/events
|
https://github.com/psf/requests/issues/1553
| 18,517,161 |
MDU6SXNzdWUxODUxNzE2MQ==
| 1,553 |
Autodetect proxy on Windows
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2280140?v=4",
"events_url": "https://api.github.com/users/oliverjanik/events{/privacy}",
"followers_url": "https://api.github.com/users/oliverjanik/followers",
"following_url": "https://api.github.com/users/oliverjanik/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverjanik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/oliverjanik",
"id": 2280140,
"login": "oliverjanik",
"node_id": "MDQ6VXNlcjIyODAxNDA=",
"organizations_url": "https://api.github.com/users/oliverjanik/orgs",
"received_events_url": "https://api.github.com/users/oliverjanik/received_events",
"repos_url": "https://api.github.com/users/oliverjanik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/oliverjanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverjanik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/oliverjanik",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 4 |
2013-08-25T06:16:06Z
|
2021-09-09T01:22:29Z
|
2013-08-25T08:54:08Z
|
NONE
|
resolved
|
It would be nice if requests could support automatic proxy detection on Windows.
Both urllib2 and urllib accomplish this by reading windows registry via urllib.getproxies() on Windows and using _scproxy on Mac.
I opened an issue in [urllib3](https://github.com/shazow/urllib3/issues/232) but @shazow doesn't think it belongs there.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2280140?v=4",
"events_url": "https://api.github.com/users/oliverjanik/events{/privacy}",
"followers_url": "https://api.github.com/users/oliverjanik/followers",
"following_url": "https://api.github.com/users/oliverjanik/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverjanik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/oliverjanik",
"id": 2280140,
"login": "oliverjanik",
"node_id": "MDQ6VXNlcjIyODAxNDA=",
"organizations_url": "https://api.github.com/users/oliverjanik/orgs",
"received_events_url": "https://api.github.com/users/oliverjanik/received_events",
"repos_url": "https://api.github.com/users/oliverjanik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/oliverjanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverjanik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/oliverjanik",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1553/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1553/timeline
| null |
completed
| null | null | false |
[
"Looks like `urllib.getproxies()` is already being used (and executed).\nhttps://github.com/kennethreitz/requests/blob/90837359632b43aba7900b65b42744715f0b739a/requests/utils.py#L416\n",
"Here's my dumb test:\n1. Launch Fiddler on windows\n\n``` python\n>>> import urllib\n>>> urllib.getproxies()\n{'http': 'http://127.0.0.1:8888', 'https': 'https://127.0.0.1:8888'}\n```\n\nFiddler detected.\n\n2.\n\n``` python\n>>> urllib.urlopen('http://google.com/').read()\n```\n\nFiddler shows request.\n\n3.\n\n``` python\n>>> import requests\n>>> requests.get('http://bing.com')\n```\n\nFiddler shows nothing.\n",
"Ok, I'm on stable 1.2.3 and this was fixed in https://github.com/kennethreitz/requests/commit/93be6916f98f191eee9ac053994f61e99869735b#requests/utils.py\n\nand is in the 2.0 branch, so I guess I'll have to wait for the release. Maybe you should consider 1.2.4 with hotfixes.\n",
"Hope this gets released! Specifying environment variables is tedious.\n"
] |
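For context on the thread above: the proxy auto-detection in question ultimately relies on `urllib`'s `getproxies()`, which reads environment variables and, on Windows and macOS, the registry or SystemConfiguration. You can inspect what your environment exposes directly; the import fallback below covers Python 2 and 3, and the actual output depends entirely on your OS settings.

```python
try:
    from urllib.request import getproxies  # Python 3
except ImportError:
    from urllib import getproxies  # Python 2

# Returns a dict like {'http': 'http://127.0.0.1:8888', ...};
# empty when no system proxy is configured.
proxies = getproxies()
for scheme, proxy_url in sorted(proxies.items()):
    print("%s -> %s" % (scheme, proxy_url))
```

This mirrors the "dumb test" in the comments: if Fiddler (or any local proxy) is registered with the OS, it shows up in this dict.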
https://api.github.com/repos/psf/requests/issues/1552
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1552/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1552/comments
|
https://api.github.com/repos/psf/requests/issues/1552/events
|
https://github.com/psf/requests/pull/1552
| 18,465,936 |
MDExOlB1bGxSZXF1ZXN0NzgxNzgwNA==
| 1,552 |
Allow spaces in the no_proxy environ variable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
},
{
"color": "eb6420",
"default": false,
"description": null,
"id": 44501256,
"name": "Breaking API Change",
"node_id": "MDU6TGFiZWw0NDUwMTI1Ng==",
"url": "https://api.github.com/repos/psf/requests/labels/Breaking%20API%20Change"
}
] |
closed
| true | null |
[] | null | 1 |
2013-08-23T11:38:57Z
|
2021-09-08T23:06:14Z
|
2013-08-24T00:23:58Z
|
MEMBER
|
resolved
|
As mentioned in #1551, we could do with being a little more lenient here. A space isn't a valid URL character anyway, so just yank them all out and everything should be fine.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1552/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1552/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1552.diff",
"html_url": "https://github.com/psf/requests/pull/1552",
"merged_at": "2013-08-24T00:23:57Z",
"patch_url": "https://github.com/psf/requests/pull/1552.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1552"
}
| true |
[
":+1:\n"
] |
https://api.github.com/repos/psf/requests/issues/1551
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1551/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1551/comments
|
https://api.github.com/repos/psf/requests/issues/1551/events
|
https://github.com/psf/requests/issues/1551
| 18,462,607 |
MDU6SXNzdWUxODQ2MjYwNw==
| 1,551 |
Problems with spaces in no_proxy environ variable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2345069?v=4",
"events_url": "https://api.github.com/users/javitgarcia/events{/privacy}",
"followers_url": "https://api.github.com/users/javitgarcia/followers",
"following_url": "https://api.github.com/users/javitgarcia/following{/other_user}",
"gists_url": "https://api.github.com/users/javitgarcia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/javitgarcia",
"id": 2345069,
"login": "javitgarcia",
"node_id": "MDQ6VXNlcjIzNDUwNjk=",
"organizations_url": "https://api.github.com/users/javitgarcia/orgs",
"received_events_url": "https://api.github.com/users/javitgarcia/received_events",
"repos_url": "https://api.github.com/users/javitgarcia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/javitgarcia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/javitgarcia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/javitgarcia",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-08-23T10:02:31Z
|
2021-09-09T02:11:35Z
|
2013-08-23T11:39:30Z
|
NONE
|
resolved
|
In utils.py, in the get_environ_proxies function:
``` python
...
if no_proxy:
# We need to check whether we match here. We need to see if we match
# the end of the netloc, both with and without the port.
no_proxy = no_proxy.split(',')
netloc = urlparse(url).netloc
for host in no_proxy:
if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
# The URL does match something in no_proxy, so we don't want
# to apply the proxies on this URL.
return {}
...
```
The line:
``` python
no_proxy = no_proxy.split(',')
```
must be replaced with:
``` python
no_proxy = no_proxy.replace(' ', '').split(',')
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1551/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1551/timeline
| null |
completed
| null | null | false |
[
"Strictly speaking `no_proxy` shouldn't have spaces in it (it's defined as _comma separated_), but I think we can be more lenient here. =)\n",
"This should be resolved by #1552. Thanks for reporting it!\n"
] |
https://api.github.com/repos/psf/requests/issues/1550
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1550/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1550/comments
|
https://api.github.com/repos/psf/requests/issues/1550/events
|
https://github.com/psf/requests/issues/1550
| 18,452,861 |
MDU6SXNzdWUxODQ1Mjg2MQ==
| 1,550 |
No "Host" header entry in response.request.headers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1407472?v=4",
"events_url": "https://api.github.com/users/Cosmius/events{/privacy}",
"followers_url": "https://api.github.com/users/Cosmius/followers",
"following_url": "https://api.github.com/users/Cosmius/following{/other_user}",
"gists_url": "https://api.github.com/users/Cosmius/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Cosmius",
"id": 1407472,
"login": "Cosmius",
"node_id": "MDQ6VXNlcjE0MDc0NzI=",
"organizations_url": "https://api.github.com/users/Cosmius/orgs",
"received_events_url": "https://api.github.com/users/Cosmius/received_events",
"repos_url": "https://api.github.com/users/Cosmius/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Cosmius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cosmius/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Cosmius",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 8 |
2013-08-23T03:55:12Z
|
2021-09-09T01:22:31Z
|
2013-08-27T07:57:37Z
|
NONE
|
resolved
|
Many web servers use the Host header, probably the most important HTTP header, to serve different sites from the same address.
There is no Host entry in response.request.headers unless it is set explicitly when issuing the request.
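As an illustrative sketch only (not requests' actual code path): the value httplib fills in for a missing Host header is essentially the netloc component of the request URL, which can be derived like this:

``` python
# Illustrative sketch: deriving the Host header value that httplib would
# send automatically, i.e. the netloc component of the request URL.
from urllib.parse import urlsplit  # urlparse.urlsplit on Python 2

def implied_host_header(url):
    """Return the host[:port] that would implicitly become the Host header."""
    return urlsplit(url).netloc

print(implied_host_header('http://example.com:8080/path'))  # example.com:8080
```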
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1550/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1550/timeline
| null |
completed
| null | null | false |
[
"@Cosmius That's a good point! I think we should fix this up for 2.0.\n",
"The \"Accept-Encoding\" and \"Host\" fields get set automatically in httplib's HTTPConnection.putrequest() if they are not set explicitly.\n",
"That's true, it does. Still, I think it'd be better to set it higher up if possible, to make the `PreparedRequest` look as much like what went out on the wire as possible. @kennethreitz may disagree though.\n",
"Actually, on second thoughts I'm really less sure about this. I'll think about it.\n",
"If we had the request object that urllib3 uses then we could perhaps pull some info out of that and put it on a yet different request object, something like `SentRequest`. `PreparedRequest` would display what work we did for this and `SentRequest` could more accurately represent what was sent over the wire. That said, I think it's quite a bit of unnecessary duplication/complication.\n\nAlso, keep in mind that when you create a request with urllib/urllib2 you never set this yourself. For even greater context, `net/http` in Ruby never does or requires this either. There is probably debugging value in this, but does the value outweigh the complexity it would cause.\n",
"Yeah, I think we're ok here. Thanks for the suggestion though!\n",
"Sorry for my absence.\n\nI found this header interesting when I tried to reconstruct the \"raw\" request byte-by-byte. Things would be better if I can use req, resp = urllib2.urlopen(...)\n\nI tried to reconstruct the \"raw\" request because some non-standard HTTP servers behave differently when requested with GET / HTTP/1.1 and GET http://example.com/ HTTP/1.1\n\nThis is really useful when working with the bad-designed servers.\n\nI think you may want to reconsider it. :)\n",
"I agree that this would be useful. =)\n\nMy concern is that right now there is a working, heavily used function for inserting the `Host` header inside `httplib`. This function gets so much use that it's likely to be almost entirely bug free. To provide this header at the level of Requests we'd have to reimplement that logic. Once we do, we have a new set of code that we have to maintain, update if bugs are found, and ensure doesn't rot.\n\nIt's only worth doing that if we think there's a bug lower down that won't get fixed. As far as I can see, there is not. We have to draw a line somewhere about what we put on and what we don't, and this appears to be our line. For instance, we don't provide any way for you to see the full status line either.\n\nWhen I hit these kind of problems I tend to use [tcpdump](http://www.tcpdump.org/) or [Runscope](https://www.runscope.com/), depending on the scope of the problem.\n"
] |
https://api.github.com/repos/psf/requests/issues/1549
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1549/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1549/comments
|
https://api.github.com/repos/psf/requests/issues/1549/events
|
https://github.com/psf/requests/issues/1549
| 18,402,573 |
MDU6SXNzdWUxODQwMjU3Mw==
| 1,549 |
'PoolManager' object has no attribute 'proxy_from_url'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5285404?v=4",
"events_url": "https://api.github.com/users/diplav/events{/privacy}",
"followers_url": "https://api.github.com/users/diplav/followers",
"following_url": "https://api.github.com/users/diplav/following{/other_user}",
"gists_url": "https://api.github.com/users/diplav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/diplav",
"id": 5285404,
"login": "diplav",
"node_id": "MDQ6VXNlcjUyODU0MDQ=",
"organizations_url": "https://api.github.com/users/diplav/orgs",
"received_events_url": "https://api.github.com/users/diplav/received_events",
"repos_url": "https://api.github.com/users/diplav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/diplav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diplav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/diplav",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-08-22T09:29:40Z
|
2021-09-09T01:22:32Z
|
2013-08-27T08:14:46Z
|
NONE
|
resolved
|
There is something wrong with the requests/adapter.py file.
While using glance, it shows the error:
'PoolManager' object has no attribute 'proxy_from_url'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1549/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1549/timeline
| null |
completed
| null | null | false |
[
"I disagree. =) It looks like a glance problem to me. The current version of Requests does not use `proxy_from_url`. We do use `proxy_from_url` in the unreleased 2.0 branch, but it's not a class method it's a standalone function.\n"
] |
https://api.github.com/repos/psf/requests/issues/1548
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1548/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1548/comments
|
https://api.github.com/repos/psf/requests/issues/1548/events
|
https://github.com/psf/requests/issues/1548
| 18,401,610 |
MDU6SXNzdWUxODQwMTYxMA==
| 1,548 |
Release 1.2.4 to fix urllib3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/117916?v=4",
"events_url": "https://api.github.com/users/yetty/events{/privacy}",
"followers_url": "https://api.github.com/users/yetty/followers",
"following_url": "https://api.github.com/users/yetty/following{/other_user}",
"gists_url": "https://api.github.com/users/yetty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yetty",
"id": 117916,
"login": "yetty",
"node_id": "MDQ6VXNlcjExNzkxNg==",
"organizations_url": "https://api.github.com/users/yetty/orgs",
"received_events_url": "https://api.github.com/users/yetty/received_events",
"repos_url": "https://api.github.com/users/yetty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yetty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yetty",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-08-22T09:07:48Z
|
2021-09-09T02:11:36Z
|
2013-08-22T09:33:54Z
|
NONE
|
resolved
|
Please release a new version to fix the urllib3 p3k bug: https://github.com/shazow/urllib3/issues/177
Installation through pip is now broken, and I can't use the dev version from GitHub.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/117916?v=4",
"events_url": "https://api.github.com/users/yetty/events{/privacy}",
"followers_url": "https://api.github.com/users/yetty/followers",
"following_url": "https://api.github.com/users/yetty/following{/other_user}",
"gists_url": "https://api.github.com/users/yetty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yetty",
"id": 117916,
"login": "yetty",
"node_id": "MDQ6VXNlcjExNzkxNg==",
"organizations_url": "https://api.github.com/users/yetty/orgs",
"received_events_url": "https://api.github.com/users/yetty/received_events",
"repos_url": "https://api.github.com/users/yetty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yetty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yetty",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1548/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1548/timeline
| null |
completed
| null | null | false |
[
"My understanding of this bug is that it doesn't impede installation at all. pip moans, but then everything goes on and is fine. The only problem is with actually using the ntlmpool functionality, which I'm not sure we expose. Are you experiencing something different?\n",
"I was just scaried from installation error: https://gist.github.com/yetty/6305114\n\nIf it doesn't mind, than it is OK.\n",
"Yeah, it's definitely scary, but it's not a problem for functionality. =) Our next planned release is 2.0, so we're trying not to rush anything out now. =) Still, thanks for raising this issue!\n"
] |
https://api.github.com/repos/psf/requests/issues/1547
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1547/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1547/comments
|
https://api.github.com/repos/psf/requests/issues/1547/events
|
https://github.com/psf/requests/issues/1547
| 18,383,240 |
MDU6SXNzdWUxODM4MzI0MA==
| 1,547 |
Installation fails in Cygwin 64 bit
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1364443?v=4",
"events_url": "https://api.github.com/users/gregersn/events{/privacy}",
"followers_url": "https://api.github.com/users/gregersn/followers",
"following_url": "https://api.github.com/users/gregersn/following{/other_user}",
"gists_url": "https://api.github.com/users/gregersn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gregersn",
"id": 1364443,
"login": "gregersn",
"node_id": "MDQ6VXNlcjEzNjQ0NDM=",
"organizations_url": "https://api.github.com/users/gregersn/orgs",
"received_events_url": "https://api.github.com/users/gregersn/received_events",
"repos_url": "https://api.github.com/users/gregersn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gregersn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gregersn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gregersn",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 54 |
2013-08-21T22:00:09Z
|
2021-09-09T00:00:57Z
|
2014-02-03T10:54:21Z
|
NONE
|
resolved
|
When I try `pip install requests` in Cygwin 64-bit, the installation fails.
Here's the end of the log file:
```
Downloading from URL https://pypi.python.org/packages/source/r/requests/requests-1.2.3.tar.gz#md5=adbd3f18445f7fe5e77f65c502e264fb (from https://pypi.python.org/simple/requests/)
Running setup.py egg_info for package requests
Cleaning up...
Removing temporary dir /cygdrive/d/home/greger/cygwin/projects/myproject/build...
No files/directories in /cygdrive/d/home/greger/cygwin/projects/myproject/build/requests/pip-egg-info (from PKG-INFO)
Exception information:
Traceback (most recent call last):
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/basecommand.py", line 134, in main
status = self.run(options, args)
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/commands/install.py", line 236, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 1139, in prepare_files
req_to_install.assert_source_matches_version()
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 394, in assert_source_matches_version
version = self.installed_version
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 390, in installed_version
return self.pkg_info()['version']
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 357, in pkg_info
data = self.egg_info_data('PKG-INFO')
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 293, in egg_info_data
filename = self.egg_info_path(filename)
File "/cygdrive/d/home/greger/cygwin/projects/myproject/lib/python2.7/site-packages/pip/req.py", line 330, in egg_info_path
raise InstallationError('No files/directories in %s (from %s)' % (base, filename))
InstallationError: No files/directories in /cygdrive/d/home/greger/cygwin/projects/myproject/build/requests/pip-egg-info (from PKG-INFO)
```
Don't know if that is a problem with the installation package or Cygwin itself, but I thought I'd try here first.
If I try to use easy_install, it just leaves all the files in the tmp dir. And if I then run `python setup.py install` and try to import requests from the Python shell, it just exits without any output.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1547/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1547/timeline
| null |
completed
| null | null | false |
[
"Hmm, I can't reproduce this. =( But I wonder if you're using a Windows virtualenv with a Windows interpreter in Cygwin. It's not clear (to me) how well that would work.\n",
"You are right, I did try through VirtualEnv and with VirtualenvWrapper. Tried again now from just regular cygwin prompt, and here's the log, again:\n\n```\n Downloading from URL https://pypi.python.org/packages/source/r/requests/requests-1.2.3.tar.gz#md5=adbd3f18445f7fe5e77f65c502e264fb (from https://pypi.python.org/simple/requests/)\n Running setup.py egg_info for package requests\n\nCleaning up...\n\n Removing temporary dir /tmp/pip_build_greger...\nNo files/directories in /tmp/pip_build_greger/requests/pip-egg-info (from PKG-INFO)\n\nException information:\nTraceback (most recent call last):\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/basecommand.py\", line 134, in main\n status = self.run(options, args)\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/commands/install.py\", line 236, in run\n requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py\", line 1139, in prepare_files\n req_to_install.assert_source_matches_version()\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py\", line 394, in assert_source_matches_version\n version = self.installed_version\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py\", line 390, in installed_version\n return self.pkg_info()['version']\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py\", line 357, in pkg_info\n data = self.egg_info_data('PKG-INFO')\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py\", line 293, in egg_info_data\n filename = self.egg_info_path(filename)\n File \"/usr/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg/pip/req.py\", line 330, in egg_info_path\n raise InstallationError('No files/directories in %s (from %s)' % (base, filename))\nInstallationError: No files/directories in /tmp/pip_build_greger/requests/pip-egg-info (from PKG-INFO)\n```\n",
"Tested again in a clean installed Cygwin64 bit, same error. \nIt does work in 32 bit version of Cygwin, though. \n",
"That's truly bizarre. It looks like pip is being installed as part of Cygwin, so you shouldn't have any 32/64 bit problems, but I kinda have to believe you are.\n",
"I did install pip with easy_install , so the problem might lay there. Still a bit weird, since pip has no problem installing lots of other packages. \n",
"I was under the impression pip was broken on Windows but I'm probably stuck in the past.\n",
"Pip works fine on Windows, and in my experience is fine in Cygwin too. Except for this.\n",
"same here the requests not install behind cygwin\n",
"I'm getting the same behavior as the OP: \"pip install requests\" works on 32 bit cygwin, but fails on 64 bit cygwin with messages about \"no files/directories\".\n\nPossibly unrelated, but installation-relevant & equally mysterious: 64 bit cygwin wget fails to verify certs for https URLs, while 32 bit cygwin wget succeeds. (Certs are identical on each install, and running the 32 bit wget inside a 64 bit installation [including /usr/ssl/certs directory] works, too.) Anyway, what I'm saying is that installing python stuff under 64 bit cygwin has been a pain in the butt, and I'm keen to hear solutions. \n",
"This sounds like an issue with pip itself.\n",
"I like the way you think @kennethreitz. :P\n\nIt sounds more to me, however, that 64 bit cygwin is broken (maybe).\n",
"Same thing here. Very interested if someone finds a solution.\n",
"Have any details you can add @tanzaho ?\n",
"I get the following when run on cygwin64 and trying to pip install the module : \nDownloading requests-1.2.3.tar.gz (348kB): 348kB downloaded\n Running setup.py egg_info for package requests\nCleaning up...\nNo files/directories in /home/tanzaho/.virtualenvs/django_wordiz/build/requests/pip-egg-info (from PKG-INFO)\n",
"I'm a bit new to running python setup scripts, especially under 64 bit cygwin in Windows 8. But firewall issues forced me to download the requests-1.2.3.tar.gz file by hand (I couldn't use pip or easy_install without timeouts). I gunzipped and untarred the file and then went in to the directory and ran:\n\n> python setup.py install\n\nby hand, and it doesn't output anything and returns to the prompt. Nothing is created in any of the cygwin /usr/lib/python*/site-packages directories. So I think that the process for running the above command is broken rather than it being a connectivity problem or a problem with pip itself. So, you probably don't need to reproduce any network problems to reproduce this problem. Just try to do the above and see if you get the same thing when installing from the command line in a cygwin terminal.\n\nWould like to see this fixed too, as pip and easy_install aren't options for me to install libraries.\n\nAlso, I get core dumps if I try to execute the command above with \"python3\" or \"python3.2\" instead of \"python\".\n",
"> Also, I get core dumps if I try to execute the command above\n\n@mikewn could you provide those? I'm wondering if this isn't an issue in the way Python is built for Windows now. I don't have a windows box to test with though. (Also I didn't know anyone was willing to subject themselves to the horror of Windows 8 ;))\n",
"@sigmavirus24 I have Windows 8. =) Strongly considering moving to Linux on that machine though. Anyway, off-topic.\n\nIt's worth noting that this is absolutely not reproducible on 32-bit Cygwin on Windows 7, which strongly points to a problem with Cygwin.\n",
"@sigmavirus24 - here's the dump. Basically running python3 or python3.2 both yield stack dumps for python 3.2. And no, this isn't my personal box that I'm running Windows 8 on. :)\n\nhttps://gist.github.com/mikewn/6688148\n",
"Does the script, perchance, call a subprocess that uses the UUID module in Python? It turns out that the UUID module in Cygwin64 Python is [currently segfaulting](http://permalink.gmane.org/gmane.comp.python.ipython.devel/11239), which might be causing mysterious failures.\n",
"The install script doesn't but urllib3 does use the uuid module for \n`multipart/form-data`. Since everything is \"compiled\" upon install if it tries \nto even import it then this could certainly be the issue. We could go back to \nthe environment flag to prevent compilation of bytecode but we had our heads \nhanded to us for that.\n",
"This is really an upstream problem for CPython and the Cygwin folks in my mind, so I'm not sure this is on you. I'll post a clean uuid patch against the current Python release on Cygwin on here for the self-starters. \n",
"Just a confirmation that I can install requests correctly with my UUID-patch Python. I just successfully completed a python setup.py install and the test suite passes. I'm putting together a patch for the Cygwin folks, I'll post back here when it's ready.\n",
"Thanks @ahmadia! I'll close this when you let us know the outcome. Otherwise I want it to be easy to find\n",
"Until the patch lands in Cygwin's official repository, you will need to hotfix your existing Python. You can do with the following simple [patch from Evgeny Sologubov](http://bugs.python.org/file31377/uuid.patch) which short-circuits the uuid module from attempting to load further libraries after the uuid routines have been located.\n\nApply the following patch to the uuid.py module in /usr/lib/python2.7\n\n```\ndiff -r 4a318a45c4c3 Lib/uuid.py\n--- a/Lib/uuid.py Mon Aug 19 13:07:18 2013 -0400\n+++ b/Lib/uuid.py Mon Aug 19 21:41:08 2013 +0400\n@@ -429,6 +429,8 @@\n _uuid_generate_random = lib.uuid_generate_random\n if hasattr(lib, 'uuid_generate_time'):\n _uuid_generate_time = lib.uuid_generate_time\n+ if _uuid_generate_random is not None:\n+ break # found everything we were looking for\n\n # The uuid_generate_* functions are broken on MacOS X 10.5, as noted\n # in issue #8621 the function generates the same sequence of values\n```\n",
"Since there's a solution here, I'm actually tempted to close this. How do you feel @Lukasa ?\n",
"Here's a two-liner to hotfix CPython from your Cygwin shell until the patch lands in Cygwin-apps (assuming you've got wget installed):\n\n```\nwget http://bugs.python.org/file31377/uuid.patch \npatch -d /usr/lib/python2.7 -p2 < uuid.patch \n```\n\n+1 on closing this issue.\n",
"@Lukasa ?\n",
"Oh man I meant to write on this and I didn't. I think we should keep it open until cygwin fixes itself. Based on the pain we had with header keys after that issue got closed, I suspect people don't look at closed issues.\n",
"@Lukasa sometimes people don't even look at open ones =P. Either way you're correct. I wonder if we might want a special tag for this kind of issue (where there's a problem we cannot resolve but has some kind of fix available.\n",
"I'm amazed with this process/progress! Awesome work!\n"
] |
https://api.github.com/repos/psf/requests/issues/1546
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1546/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1546/comments
|
https://api.github.com/repos/psf/requests/issues/1546/events
|
https://github.com/psf/requests/issues/1546
| 18339864 |
MDU6SXNzdWUxODMzOTg2NA==
| 1,546 |
use a default encoding in Response's text property
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/990113?v=4",
"events_url": "https://api.github.com/users/JerryKwan/events{/privacy}",
"followers_url": "https://api.github.com/users/JerryKwan/followers",
"following_url": "https://api.github.com/users/JerryKwan/following{/other_user}",
"gists_url": "https://api.github.com/users/JerryKwan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JerryKwan",
"id": 990113,
"login": "JerryKwan",
"node_id": "MDQ6VXNlcjk5MDExMw==",
"organizations_url": "https://api.github.com/users/JerryKwan/orgs",
"received_events_url": "https://api.github.com/users/JerryKwan/received_events",
"repos_url": "https://api.github.com/users/JerryKwan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JerryKwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JerryKwan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JerryKwan",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-08-21T07:35:04Z
|
2021-09-08T23:10:44Z
|
2013-08-21T07:52:59Z
|
NONE
|
resolved
|
why not use a default encoding in Response's text property?if the server does not set content-type explicitly, why not use a default encoding such as utf-8? the chardet.detect() function is time consuming. if some one just use response.text and does aware the inner mechanism, he may think the Requests library is too slow and change to other libraries.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1546/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1546/timeline
| null |
completed
| null | null | false |
[
"Thanks for asking this question @JerryKwan! The short and pithy answer is: because it's better to be slow and correct than fast and wrong. =)\n\nIf we were concerned about speed we'd simply not have the `Response.text` property at all, and only ever use `Response.content` (with a silly hack for `Response.json()`). This avoids performing any unicode decoding at all, which will save even more time.\n\nHaving a 'default' encoding is just wild optimism, because no such default exists on the web. Saying that we'll use UTF-8 whenever we don't know what the correct encoding is means that some users will find that Requests very quickly downloads gibberish. They will then conclude that Requests, while very fast, also doesn't work properly, and they'll go and use another library. =)\n\n**EDIT**: A user can also simulate this behaviour by searching for a `Content-Type` header with the encoding, and if it fails to find one set `Response.encoding = 'utf-8'`.\n",
"You would find that a default of 'utf-8' would make a shockingly high number of requests fail :)\n",
"Actually, I forgot to mention something. This is implemented so that you can provide your own default if you'd like.\n\n``` pycon\n>>> r = requests.get('http://httpbin.org/get')\n>>> r.encoding = 'utf-8'\n>>> r.text\n...\n```\n\nThis will fully skip encoding detection.\n"
] |
https://api.github.com/repos/psf/requests/issues/1545
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1545/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1545/comments
|
https://api.github.com/repos/psf/requests/issues/1545/events
|
https://github.com/psf/requests/issues/1545
| 18313909 |
MDU6SXNzdWUxODMxMzkwOQ==
| 1,545 |
Setup code coverage
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1408859?v=4",
"events_url": "https://api.github.com/users/Aaron1011/events{/privacy}",
"followers_url": "https://api.github.com/users/Aaron1011/followers",
"following_url": "https://api.github.com/users/Aaron1011/following{/other_user}",
"gists_url": "https://api.github.com/users/Aaron1011/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aaron1011",
"id": 1408859,
"login": "Aaron1011",
"node_id": "MDQ6VXNlcjE0MDg4NTk=",
"organizations_url": "https://api.github.com/users/Aaron1011/orgs",
"received_events_url": "https://api.github.com/users/Aaron1011/received_events",
"repos_url": "https://api.github.com/users/Aaron1011/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aaron1011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aaron1011/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aaron1011",
"user_view_type": "public"
}
|
[
{
"color": "02e10c",
"default": false,
"description": null,
"id": 76800,
"name": "Feature Request",
"node_id": "MDU6TGFiZWw3NjgwMA==",
"url": "https://api.github.com/repos/psf/requests/labels/Feature%20Request"
},
{
"color": "e102d8",
"default": false,
"description": null,
"id": 117745,
"name": "Planned",
"node_id": "MDU6TGFiZWwxMTc3NDU=",
"url": "https://api.github.com/repos/psf/requests/labels/Planned"
},
{
"color": "0b02e1",
"default": false,
"description": null,
"id": 191274,
"name": "Contributor Friendly",
"node_id": "MDU6TGFiZWwxOTEyNzQ=",
"url": "https://api.github.com/repos/psf/requests/labels/Contributor%20Friendly"
}
] |
closed
| true | null |
[] | null | 11 |
2013-08-20T18:51:30Z
|
2021-09-09T00:28:33Z
|
2013-12-05T23:07:55Z
|
NONE
|
resolved
|
I think that it would be useful to set up test coverage with [coverage.py](https://pypi.python.org/pypi/coverage), or a similar tool. I can submit a PR if this sounds like a good idea.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1545/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1545/timeline
| null |
completed
| null | null | false |
[
"Hi @Aaron1011! That's a perfectly good idea. However, 100% code coverage has never been an explicit goal of this project. In the absence of a particular code coverage target, I'm not convinced that adding coverage by default is a good plan. I'll defer to @kennethreitz's judgement in this case.\n\n(Side note: You can do your own code coverage testing very easily by installing the [pytest-cov](https://pypi.python.org/pypi/pytest-cov) plugin for py.test. Then testing requests is done by the command `py.test --cov-report term --cov=requests test_requests.py`. This creates a fair amount of noise because it tests our coverage of our vendored libraries (charade and urllib3), which is irrelevant here. Excluding those we're at roughly 75% test coverage, which is pretty solid. Sample output from master:\n\n```\n============================= test session starts ==============================\nplatform darwin -- Python 2.7.5 -- pytest-2.3.4\nplugins: cov\ncollected 73 items \n\ntest_requests.py .........................................................................\n--------------- coverage: platform darwin, python 2.7.5-final-0 ----------------\nName Stmts Miss Cover\n------------------------------------------------------------------------------------\nrequests/__init__ 26 5 81%\nrequests/adapters 143 41 71%\nrequests/api 22 4 82%\nrequests/auth 112 18 84%\nrequests/certs 6 1 83%\nrequests/compat 56 13 77%\nrequests/cookies 198 82 59%\nrequests/exceptions 15 0 100%\nrequests/hooks 18 1 94%\nrequests/models 359 63 82%\nrequests/packages/__init__ 2 0 100%\nrequests/packages/charade/__init__ 11 1 91%\nrequests/packages/charade/big5freq 3 0 100%\nrequests/packages/charade/big5prober 12 5 58%\nrequests/packages/charade/chardistribution 115 80 30%\nrequests/packages/charade/charsetgroupprober 67 58 13%\nrequests/packages/charade/charsetprober 23 11 52%\nrequests/packages/charade/codingstatemachine 23 15 35%\nrequests/packages/charade/compat 8 4 
50%\nrequests/packages/charade/constants 8 0 100%\nrequests/packages/charade/cp949prober 12 5 58%\nrequests/packages/charade/escprober 44 33 25%\nrequests/packages/charade/escsm 17 0 100%\nrequests/packages/charade/eucjpprober 48 35 27%\nrequests/packages/charade/euckrfreq 3 0 100%\nrequests/packages/charade/euckrprober 12 5 58%\nrequests/packages/charade/euctwfreq 3 0 100%\nrequests/packages/charade/euctwprober 12 5 58%\nrequests/packages/charade/gb2312freq 3 0 100%\nrequests/packages/charade/gb2312prober 12 5 58%\nrequests/packages/charade/hebrewprober 69 43 38%\nrequests/packages/charade/jisfreq 3 0 100%\nrequests/packages/charade/jpcntx 69 51 26%\nrequests/packages/charade/langbulgarianmodel 5 0 100%\nrequests/packages/charade/langcyrillicmodel 13 0 100%\nrequests/packages/charade/langgreekmodel 5 0 100%\nrequests/packages/charade/langhebrewmodel 3 0 100%\nrequests/packages/charade/langhungarianmodel 5 0 100%\nrequests/packages/charade/langthaimodel 3 0 100%\nrequests/packages/charade/latin1prober 47 26 45%\nrequests/packages/charade/mbcharsetprober 43 34 21%\nrequests/packages/charade/mbcsgroupprober 14 3 79%\nrequests/packages/charade/mbcssm 41 0 100%\nrequests/packages/charade/sbcharsetprober 70 53 24%\nrequests/packages/charade/sbcsgroupprober 19 8 58%\nrequests/packages/charade/sjisprober 48 35 27%\nrequests/packages/charade/universaldetector 106 51 52%\nrequests/packages/charade/utf8prober 39 28 28%\nrequests/packages/urllib3/__init__ 27 11 59%\nrequests/packages/urllib3/_collections 49 19 61%\nrequests/packages/urllib3/connectionpool 208 66 68%\nrequests/packages/urllib3/contrib/__init__ 0 0 100%\nrequests/packages/urllib3/contrib/ntlmpool 64 64 0%\nrequests/packages/urllib3/contrib/pyopenssl 85 84 1%\nrequests/packages/urllib3/exceptions 42 18 57%\nrequests/packages/urllib3/filepost 40 3 93%\nrequests/packages/urllib3/packages/__init__ 2 0 100%\nrequests/packages/urllib3/packages/ordered_dict 155 78 50%\nrequests/packages/urllib3/packages/six 217 113 
48%\nrequests/packages/urllib3/packages/ssl_match_hostname/__init__ 34 15 56%\nrequests/packages/urllib3/poolmanager 79 31 61%\nrequests/packages/urllib3/request 31 16 48%\nrequests/packages/urllib3/response 136 46 66%\nrequests/packages/urllib3/util 170 73 57%\nrequests/sessions 189 20 89%\nrequests/status_codes 8 0 100%\nrequests/structures 55 14 75%\nrequests/utils 244 109 55%\n------------------------------------------------------------------------------------\nTOTAL 3830 1602 58%\n\n========================== 73 passed in 23.84 seconds ==========================\n```\n",
"In general I'm +1 on coverage reports for all projects. This is \n@kennethreitz's final call though.\n",
"Also for reference, we don't use real unit tests but rather tests that are closer to integration tests and those are never a good idea to use in conjunction with a coverage tool since the results can be wildly misleading. Just because you're executing a specific line during a test doesn't mean it's covered by tests and doesn't mean it does what you're expecting to every time. A better goal than adding coverage (in my opinion) is to add two types of tests: unit and integration and run both. The unit tests never have to hit the internet while the integration tests should hit something whether it be a local server or the actual internet. Check out sigmavirus24/betamax for how I'm doing that same thing there.\n",
"@kennethreitz any opinions on this?\n",
"Pinging @kennethreitz \n",
"Let's make this `make coverage` so it's not part of the normal testing process.\n",
":+1:\n",
"So using pytest's coverage plugin it's collected each time but not output each time ;). No need to worry about it putting out the coverage data each time.\n",
"My main worry is just computational intensity. Coverage is greedy :)\n",
"Roger. Shouldn't be too difficult. \n",
"Fixed in 2.1.0.\n"
] |
https://api.github.com/repos/psf/requests/issues/1544
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1544/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1544/comments
|
https://api.github.com/repos/psf/requests/issues/1544/events
|
https://github.com/psf/requests/pull/1544
| 18293261 |
MDExOlB1bGxSZXF1ZXN0NzcyNDY1OA==
| 1,544 |
Fixed a bug that caused infinite redirections if the response contains a...
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/227923?v=4",
"events_url": "https://api.github.com/users/MarioVilas/events{/privacy}",
"followers_url": "https://api.github.com/users/MarioVilas/followers",
"following_url": "https://api.github.com/users/MarioVilas/following{/other_user}",
"gists_url": "https://api.github.com/users/MarioVilas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MarioVilas",
"id": 227923,
"login": "MarioVilas",
"node_id": "MDQ6VXNlcjIyNzkyMw==",
"organizations_url": "https://api.github.com/users/MarioVilas/orgs",
"received_events_url": "https://api.github.com/users/MarioVilas/received_events",
"repos_url": "https://api.github.com/users/MarioVilas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MarioVilas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarioVilas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MarioVilas",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 11 |
2013-08-20T12:36:15Z
|
2021-09-08T23:07:33Z
|
2013-08-20T15:07:04Z
|
NONE
|
resolved
|
... Location header pointing to itself (try it against http://google.com/robots.txt to test it)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/227923?v=4",
"events_url": "https://api.github.com/users/MarioVilas/events{/privacy}",
"followers_url": "https://api.github.com/users/MarioVilas/followers",
"following_url": "https://api.github.com/users/MarioVilas/following{/other_user}",
"gists_url": "https://api.github.com/users/MarioVilas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MarioVilas",
"id": 227923,
"login": "MarioVilas",
"node_id": "MDQ6VXNlcjIyNzkyMw==",
"organizations_url": "https://api.github.com/users/MarioVilas/orgs",
"received_events_url": "https://api.github.com/users/MarioVilas/received_events",
"repos_url": "https://api.github.com/users/MarioVilas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MarioVilas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarioVilas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MarioVilas",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1544/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1544/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1544.diff",
"html_url": "https://github.com/psf/requests/pull/1544",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1544.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1544"
}
| true |
[
"Thanks for this @MarioVilas! I agree that this would fix a bug if this happens, though I can't reproduce this with Google's robots.txt. I have to admit that this is an unlikely bug though: surely the associated page would just never work for any browser?\n",
"I've only seen it with Google's robots.txt, but only if I try to access it\nusing a wrong URL (notice the missing \"www.\" in the beginning). I don't\nknow if Google is sending the wrong headers, or if Requests (or urllib3) is\nkeeping the old Location header for some reason.\n\nTo be honest I didn't look much into the cause of the bug, I needed\nsomething fixed for tomorrow and this did the trick :) but I thought you\nshould know anyhow.\n\nOn Tue, Aug 20, 2013 at 2:45 PM, Cory Benfield [email protected]:\n\n> Thanks for this @MarioVilas https://github.com/MarioVilas! I agree that\n> this would fix a bug if this happens, though I can't reproduce this with\n> Google's robots.txt. I have to admit that this is an unlikely bug though:\n> surely the associated page would just never work for any browser?\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/pull/1544#issuecomment-22942116\n> .\n\n## \n\n“There's a reason we separate military and the police: one fights the enemy\nof the state, the other serves and protects the people. When the military\nbecomes both, then the enemies of the state tend to become the people.”\n",
"@MarioVilas See, that's my point, I can't reproduce it with `robots.txt` regardless of which URL I use. What version of Requests are you using? (`requests.__version__` will tell you)\n",
"Actually, doubling back to our code, Requests should throw a `TooManyRedirects` exception. I think that's the correct behaviour here (it's in line with what browsers do).\n",
"I've tried an old one and the most recent one (checked out of github), in both I see the bug.\n\nThis small script reproduces the issue on my machine:\n\nfrom requests import Session\nsession = Session()\nresp = session.request(\n method = \"GET\",\n url = \"http://google.com/robots.txt\",\n headers = {'host': 'google.com'},\n verify = False,\n stream = True,\n timeout = 10.0,\n allow_redirects = True,\n)\n",
"You're right. In fact I don't think Google is sending a malformed response\nat all, my patch just hides the real cause of the bug:\n\n$ curl -v http://google.com/robots.txt > /dev/null\n- About to connect() to google.com port 80 (#0)\n- Trying 173.194.41.238...\n- connected\n- Connected to google.com (173.194.41.238) port 80 (#0)\n > GET /robots.txt HTTP/1.1\n > User-Agent: curl/7.28.1\n > Host: google.com\n > Accept: _/_\n >\n < HTTP/1.1 301 Moved Permanently\n < Location: http://www.google.com/robots.txt\n < Content-Type: text/html; charset=UTF-8\n < X-Content-Type-Options: nosniff\n < Date: Sat, 17 Aug 2013 13:28:09 GMT\n < Expires: Mon, 16 Sep 2013 13:28:09 GMT\n < Cache-Control: public, max-age=2592000\n < Server: sffe\n < Content-Length: 229\n < X-XSS-Protection: 1; mode=block\n < Alternate-Protocol: 80:quic\n\n$ curl -v http://www.google.com/robots.txt > /dev/null\n- About to connect() to www.google.com port 80 (#0)\n- Trying 173.194.41.240...\n- connected\n- Connected to www.google.com (173.194.41.240) port 80 (#0)\n > GET /robots.txt HTTP/1.1\n > User-Agent: curl/7.28.1\n > Host: www.google.com\n > Accept: _/_\n >\n < HTTP/1.1 200 OK\n < Vary: Accept-Encoding\n < Content-Type: text/plain\n < Last-Modified: Fri, 09 Aug 2013 11:32:07 GMT\n < Date: Sat, 17 Aug 2013 13:28:23 GMT\n < Expires: Sat, 17 Aug 2013 13:28:23 GMT\n < Cache-Control: private, max-age=0\n < X-Content-Type-Options: nosniff\n < Server: sffe\n < X-XSS-Protection: 1; mode=block\n < Alternate-Protocol: 80:quic\n < Transfer-Encoding: chunked\n <\n",
"So are you actually getting a redirect loop here? If not, I think there's no problem.\n",
"Why are you setting the Host header yourself?\n",
"There is no redirect loop, Google works correctly. This only happens with Requests when the Host header is set and the redirection takes you to another host.\n\nThat's how I got to reproduce the problem. In my full program I've got a bunch more headers set. Now that I know without it the bug can't be triggered, I'll just filter the host header before calling Requests.\n",
"Closing the pull request because the patch is wrong, the problem is somewhere else.\n",
"FWIW, Requests should set the Host header correctly itself, you shouldn't need to do it. =)\n"
] |
https://api.github.com/repos/psf/requests/issues/1543
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1543/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1543/comments
|
https://api.github.com/repos/psf/requests/issues/1543/events
|
https://github.com/psf/requests/pull/1543
| 18264228 |
MDExOlB1bGxSZXF1ZXN0NzcwOTQ5OQ==
| 1,543 |
PEP8ified
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1298565?v=4",
"events_url": "https://api.github.com/users/jtwaleson/events{/privacy}",
"followers_url": "https://api.github.com/users/jtwaleson/followers",
"following_url": "https://api.github.com/users/jtwaleson/following{/other_user}",
"gists_url": "https://api.github.com/users/jtwaleson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jtwaleson",
"id": 1298565,
"login": "jtwaleson",
"node_id": "MDQ6VXNlcjEyOTg1NjU=",
"organizations_url": "https://api.github.com/users/jtwaleson/orgs",
"received_events_url": "https://api.github.com/users/jtwaleson/received_events",
"repos_url": "https://api.github.com/users/jtwaleson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jtwaleson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtwaleson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jtwaleson",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-08-19T21:15:46Z
|
2021-09-08T21:01:08Z
|
2013-08-19T21:59:45Z
|
NONE
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1543/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1543/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1543.diff",
"html_url": "https://github.com/psf/requests/pull/1543",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1543.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1543"
}
| true |
[
"the parts of pep8 you are using here are parts that i have intentionally chosen to ignore.\n\nThanks, though :)\n",
"Righto. Does that include the docstrings? I have to say that I think they are much more readable. Is there some external documentation system that needs to be taken into account?\n\nMind if I retry with some less controversial PEP8 choices?\n",
"Sphinx is the external documentation tool and your doc-strings shouldn't break anything but I'm not 100% sure.\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1542
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1542/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1542/comments
|
https://api.github.com/repos/psf/requests/issues/1542/events
|
https://github.com/psf/requests/issues/1542
| 18262475 |
MDU6SXNzdWUxODI2MjQ3NQ==
| 1,542 |
POST Request in Heroku - 403 Forbidden
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2239263?v=4",
"events_url": "https://api.github.com/users/kqdtran/events{/privacy}",
"followers_url": "https://api.github.com/users/kqdtran/followers",
"following_url": "https://api.github.com/users/kqdtran/following{/other_user}",
"gists_url": "https://api.github.com/users/kqdtran/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kqdtran",
"id": 2239263,
"login": "kqdtran",
"node_id": "MDQ6VXNlcjIyMzkyNjM=",
"organizations_url": "https://api.github.com/users/kqdtran/orgs",
"received_events_url": "https://api.github.com/users/kqdtran/received_events",
"repos_url": "https://api.github.com/users/kqdtran/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kqdtran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kqdtran/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kqdtran",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 29 |
2013-08-19T20:44:10Z
|
2017-09-24T16:39:23Z
|
2013-09-24T03:07:54Z
|
NONE
| null |
I'm new to using Requests, and I'm trying to scrape a schedule of classes. My POST request has resulted in "403 Forbidden" on Heroku, but works fine on my local machine. Here's a code snippet I'm having trouble with in my application, using Python 2.7.4, Bottle, BeautifulSoup4, and of course, Requests.
``` python
import requests
from bs4 import BeautifulSoup as Soup
url = "https://telebears.berkeley.edu/enrollment-osoc/osc"
code = "26187"
values = dict(_InField1 = "RESTRIC", _InField2 = code, _InField3 = "13D2")
html = requests.post(url, params=values)
soup = Soup(html.content, from_encoding="utf-8")
sp = soup.find_all("div", {"class" : "layout-div"})[2]
print sp.text
```
It gives me back the string "Computer Science 61A P 001 LEC:" in development. However, when I tried to run it on Heroku (using `heroku run bash` and then run `python`), I got back "403 Forbidden".
My attempts so far include manually setting User Agent headers, changing to `verify=False`, but neither worked. At first I thought it's the school settings, but then I was wondering why it works locally without any problem... Any explanation/suggestion would be really appreciated! Thanks in advance.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1542/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1542/timeline
| null |
completed
| null | null | false |
[
"Are you aware that `params` here refers to query string parameters? (I wouldn't ask if it wasn't something that seems to frequently confuse new users.) In other words, your request looks a post without a body to the following url:\n\n```\nhttps://telebears.berkeley.edu/enrollment-osoc/osc?_InField1=RESTRIC&_InField2=26187&_InField3=13D2\n```\n\nIf instead you wanted a post to `https://telebears.berkeley.edu/enrollment-osoc/osc` with the body `_InField1=RESTRIC&_InField2=code&_InField3=13D2` then you need to pass your `values` dictionary to the `data` parameter like so:\n\n```\nrequests.post(url, data=values)\n```\n",
"I was not aware of that at all - thanks for the suggestion! Unfortunately I still got the same error after changing `params` to `data`.\n",
"@kqdtran I opened that URL in the browser and received the same 403 error. Do you authenticate before getting to that page? If so you need to login with python/requests first.\n",
"I don't think you need to authenticate before getting there, but I believe the page can only be accessed via a post request, like in the following code:\n\n``` python\n<form action=\"https://telebears.berkeley.edu/enrollment-osoc/osc\" method=\"post\" target=\"_blank\"/>\n <input type=\"hidden\" name=\"_InField1\" value=\"RESTRIC\"/>\n <input type=\"hidden\" name=\"_InField2\" value=\"8461\"/>\n <input type=\"hidden\" name=\"_InField3\" value=\"13D2\"/>\n <input type=\"submit\" value=\"Enrollment Information\" />\n</form>\n```\n\nThanks!\n",
"If it helps, here are my attempts at reproducing the error: \n\n\n\n\n",
"So if I recall correctly there is a serious bug fix in python 2.7.5 with SSL. I could be wrong though. Try upgrading your other version of python to 2.7.5.\n",
"Ohhhhh... If that's true, it makes a lot of sense. Unfortunately, I can't upgrade the other version, since it belongs to heroku... I could try to switch to Python 3 though. Will keep you posted, and thanks for all the help!\n",
"@kennethreitz any idea when Heroku will upgrade to python 2.7.5? I thought it was already using that.\n",
"Sorry I'll take it back! By default, Heroku is still using Python 2.7.4. But if you provide a runtime.txt with the string \"python-2.7.5\" inside, it will be using 2.7.5. Unfortunately, I still have the same error under this version :(\n\n\n",
"@kqdtran I just looked at this again with python 2.7.3 (so still not exactly what you're using but close):\n\n``` pycon\n>>> import requests\n>>> r = requests.post('https://telebears.berkeley.edu/enrollment-osoc/osc', data={'_InField1': 'RESTRIC', '_InField2': 'code', '_InField3': '13d2'})\n>>> r\n<Response [200]>\n```\n\nAre you still having this issue? It works with curl too:\n\n``` bash\n~$ curl -i -X POST -d '_InField1=RESTRIC&_InField2=code&_InField3=13D2' https://telebears.berkeley.edu/enrollment-osoc/osc\nHTTP/1.1 200 OK\nDate: Tue, 24 Sep 2013 02:57:17 GMT\nContent-Type: text/html;charset=UTF-8\nContent-Length: 1801\nConnection: close\n```\n",
"Ah right, forgot this was a heroku only issue.\n",
"Humorously enough that curl command gets a 403 on Heroku's servers as well\n\n```\n~ $ curl -i -X POST -d '_InField1=RESTRIC&_InField2=code&_InField3=13D2' https://telebears.berkeley.edu/enrollment-osoc/osc\nHTTP/1.1 403 Forbidden\nDate: Tue, 24 Sep 2013 03:07:17 GMT\nContent-Length: 310\nConnection: close\nContent-Type: text/html; charset=iso-8859-1\n\n<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n<html><head>\n<title>403 Forbidden</title>\n</head><body>\n<h1>Forbidden</h1>\n<p>You don't have permission to access /enrollment-osoc/osc\non this server.</p>\n<hr>\n<address>Apache/2.2.3 (Red Hat) Server at telebears.berkeley.edu Port 443</address>\n</body></html>\n```\n\nSo this is a heroku issue and not a python-requests issue.\n",
"Thanks for helping me confirming the problem, @sigmavirus24!\n",
"@kqdtran did you ever figure out this issue? I'm facing a similar Heroku-specific problem with a GET request. \n",
"@cvolawless can you try making the same request as if it were a curl request? If you receive the same response as requests, you should contact Heroku support. If you get a different response feel free to open a new issue so @Lukasa and I can be of use. :)\n",
"@cvolawless also, if you can't discuss the issue publicly, @Lukasa and I both have our emails on our profile pages, so you can send us an email as an alternative for private (confidential) support.\n",
"@sigmavirus24 yup, curl fails as well. Have a ticket open to Heroku--will update if I find out anything helpful. \n",
"Let us know if we can be of any assistance with Heroku's support. :)\n",
"been playing around with the javascript Todoapp and using my own server url. I find that login/logout doesn't work on versions 1.1.12-min.js. I upgrated to 1.6.14 or even parse-latest.js. Both don't have any reference ParseCollections. It's been very hard figuring out how to fix it.\n\n2016-04-25T09:53:19.298790+00:00 heroku[router]: at=info method=POST path=\"/parse/1/users\" host=intense-hollows-60991.herokuapp.com request_id=9cb5eaf9-603e-4b98-9f78-35058ca0dfd6 fwd=\"5.148.6.116\" dyno=web.1 connect=0ms service=51ms status=403 bytes=466/not well-formed\n\n2016-04-25T09:54:06.692075+00:00 heroku[router]: at=info method=POST path=\"/parse/1/login\" host=intense-hollows-60991.herokuapp.com request_id=d934f856-85be-4315-baf4-7bb64ac501c8 fwd=\"5.148.6.116\" dyno=web.1 connect=1ms service=14ms status=403 bytes=466/not well-formed\n\nhttp://www.mobileparadigm.co.uk/urls/cft_access/services/cft/\n",
"@arkangelx I'm sorry, what specifically is your issue and what does it have to do with the requests library?\n",
"i have pointed my todoapp to my own Parse.serverUrl as instructed, however I can’t perform a login/sign up now. I can save objects easily to my account with mLabs. So in essence\n\n Parse.$ = jQuery;\n\n // Initialize Parse with your Parse application javascript keys\n Parse.initialize(\"5N5FqrkAmn1C8g6hVHvb\",\"aFZ7RUqpgrUVZrDKfM6f”); // this works using the standard Parse sdk WITHOUT hosting myself (login/signup save data)\nadding the line below to point to the heroic setup after importing all my data from my dashboard now doesn’t allow me to sign up\n\nLogs\n\n2016-04-25T09:54:06.692075+00:00 heroku[router]: at=info method=POST path=\"/parse/1/login\" host=intense-hollows-60991.herokuapp.com request_id=d934f856-85be-4315-baf4-7bb64ac501c8 fwd=\"5.148.6.116\" dyno=web.1 connect=1ms service=14ms status=403 bytes=466\n\nSo I tried to change the version of the parse sdk I’m using from 1.1.12-min.js to 1.6.14-min.js or parse-latest.js, this throws an error saying that Parse.Collections isn’t referenced.\n\n Parse.serverURL = 'https://intense-hollows-60991.herokuapp.com/parse/' <https://intense-hollows-60991.herokuapp.com/parse/'> (login/signup save data)\n\n> On 25 Apr 2016, at 11:27, Cory Benfield [email protected] wrote:\n> \n> @arkangelx https://github.com/arkangelx I'm sorry, what specifically is your issue and what does it have to do with the requests library?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly or view it on GitHub https://github.com/kennethreitz/requests/issues/1542#issuecomment-214251913\n",
"@arkangelx That really seems to have nothing to do with the Requests library. The sample code you're providing appears to be Javascript code, which of course is not applicable to this Python library. So once again: where does the Python Requests library come into your problem?\n",
"My apologies, I posted to the wrong session\n\n> On 25 Apr 2016, at 11:35, Cory Benfield [email protected] wrote:\n> \n> @arkangelx https://github.com/arkangelx That really seems to have nothing to do with the Requests library. The sample code you're providing appears to be Javascript code, which of course is not applicable to this Python library. So once again: where does the Python Requests library come into your problem?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly or view it on GitHub https://github.com/kennethreitz/requests/issues/1542#issuecomment-214257644\n",
"Just seeing this — my best guess is that the server in question is blocking requests from AWS (which Heroku is hosted on). Many edge CDNs (and I assume also standard security appliances) offer that as a feature, since it's ultra easy to spin up a botnet (or just socially unacceptable campaigns) there.\n\ne.g. there's nothing un-vanilla about Heroku's Python, so I highly doubt it's not something network related. \n",
"any way to get around this? An Uber API is giving my node app hosted on AWS EC2 403s...",
"There's a heroku addon called \"priximo\" which allows you to proxy your way out",
"There's another one too",
"do you know how to set it up properly? my app still gets a 403 when it sends a request to the Uber API",
"I'm shutting down this conversation so that well over 1000 folks don't get email notifications for an issue unrelated to Requests."
] |
https://api.github.com/repos/psf/requests/issues/1541
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1541/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1541/comments
|
https://api.github.com/repos/psf/requests/issues/1541/events
|
https://github.com/psf/requests/pull/1541
| 18,236,795 |
MDExOlB1bGxSZXF1ZXN0NzY5NDQzNA==
| 1,541 |
[2.0] Tracking PR for the 2.0 changelog
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"color": "e11d21",
"default": false,
"description": null,
"id": 44501305,
"name": "Not Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTMwNQ==",
"url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 2 |
2013-08-19T12:29:57Z
|
2021-09-09T00:01:18Z
|
2013-09-24T17:45:11Z
|
MEMBER
|
resolved
|
This contains all of the current fixes merged into the 2.0 branch in changelog form. I'll keep updating this as new changes get merged.
_Note_: This uses [this link](https://github.com/kennethreitz/requests/compare/v1.2.3...2.0) as the source of changes. I'll keep consulting it as we work on 2.0.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1541/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1541/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1541.diff",
"html_url": "https://github.com/psf/requests/pull/1541",
"merged_at": "2013-09-24T17:45:11Z",
"patch_url": "https://github.com/psf/requests/pull/1541.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1541"
}
| true |
[
":sparkles: :gem: :sparkles:\n",
":beer: :beers: :beers: :beer:\n"
] |
https://api.github.com/repos/psf/requests/issues/1540
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1540/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1540/comments
|
https://api.github.com/repos/psf/requests/issues/1540/events
|
https://github.com/psf/requests/pull/1540
| 18,212,624 |
MDExOlB1bGxSZXF1ZXN0NzY4NDE4MQ==
| 1,540 |
Event hooks update
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1144960?v=4",
"events_url": "https://api.github.com/users/libbkmz/events{/privacy}",
"followers_url": "https://api.github.com/users/libbkmz/followers",
"following_url": "https://api.github.com/users/libbkmz/following{/other_user}",
"gists_url": "https://api.github.com/users/libbkmz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/libbkmz",
"id": 1144960,
"login": "libbkmz",
"node_id": "MDQ6VXNlcjExNDQ5NjA=",
"organizations_url": "https://api.github.com/users/libbkmz/orgs",
"received_events_url": "https://api.github.com/users/libbkmz/received_events",
"repos_url": "https://api.github.com/users/libbkmz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/libbkmz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/libbkmz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/libbkmz",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-08-18T20:02:09Z
|
2021-09-08T21:01:07Z
|
2013-08-18T20:03:03Z
|
CONTRIBUTOR
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1540/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1540/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1540.diff",
"html_url": "https://github.com/psf/requests/pull/1540",
"merged_at": "2013-08-18T20:03:03Z",
"patch_url": "https://github.com/psf/requests/pull/1540.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1540"
}
| true |
[
"Awesome, thanks so much for this! :cake:\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1539
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1539/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1539/comments
|
https://api.github.com/repos/psf/requests/issues/1539/events
|
https://github.com/psf/requests/pull/1539
| 18,211,718 |
MDExOlB1bGxSZXF1ZXN0NzY4Mzg3NQ==
| 1,539 |
Add response callback arguments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1144960?v=4",
"events_url": "https://api.github.com/users/libbkmz/events{/privacy}",
"followers_url": "https://api.github.com/users/libbkmz/followers",
"following_url": "https://api.github.com/users/libbkmz/following{/other_user}",
"gists_url": "https://api.github.com/users/libbkmz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/libbkmz",
"id": 1144960,
"login": "libbkmz",
"node_id": "MDQ6VXNlcjExNDQ5NjA=",
"organizations_url": "https://api.github.com/users/libbkmz/orgs",
"received_events_url": "https://api.github.com/users/libbkmz/received_events",
"repos_url": "https://api.github.com/users/libbkmz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/libbkmz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/libbkmz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/libbkmz",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 2 |
2013-08-18T18:59:09Z
|
2021-09-08T21:01:07Z
|
2013-08-18T20:03:17Z
|
CONTRIBUTOR
|
resolved
|
Or u can use **kwargs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1539/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1539/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1539.diff",
"html_url": "https://github.com/psf/requests/pull/1539",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1539.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1539"
}
| true |
[
"Can you update this to use **kwargs? If you do I'll happily merge. =) Thanks!\n",
"Use (r, request, **kwargs) ?\n"
] |
https://api.github.com/repos/psf/requests/issues/1538
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1538/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1538/comments
|
https://api.github.com/repos/psf/requests/issues/1538/events
|
https://github.com/psf/requests/pull/1538
| 18,201,680 |
MDExOlB1bGxSZXF1ZXN0NzY4MDMxNQ==
| 1,538 |
Remove superfluous double dot
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1330164?v=4",
"events_url": "https://api.github.com/users/enkore/events{/privacy}",
"followers_url": "https://api.github.com/users/enkore/followers",
"following_url": "https://api.github.com/users/enkore/following{/other_user}",
"gists_url": "https://api.github.com/users/enkore/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/enkore",
"id": 1330164,
"login": "enkore",
"node_id": "MDQ6VXNlcjEzMzAxNjQ=",
"organizations_url": "https://api.github.com/users/enkore/orgs",
"received_events_url": "https://api.github.com/users/enkore/received_events",
"repos_url": "https://api.github.com/users/enkore/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/enkore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enkore/subscriptions",
"type": "User",
"url": "https://api.github.com/users/enkore",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 4 |
2013-08-18T01:06:39Z
|
2021-09-08T21:01:07Z
|
2013-08-18T06:26:59Z
|
CONTRIBUTOR
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1538/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1538/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1538.diff",
"html_url": "https://github.com/psf/requests/pull/1538",
"merged_at": "2013-08-18T06:26:59Z",
"patch_url": "https://github.com/psf/requests/pull/1538.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1538"
}
| true |
[
"It's not superfluous. :-1:\n",
"It is, look at the generated docs: http://docs.python-requests.org/en/latest/user/quickstart/#timeouts\n",
"That's just plain bizarre. Perhaps it's a bug in Sphinx but I wasn't expecting that. Regardless, in the meantime I'm changing my vote to :+1:. Sorry I didn't check prior to downvoting.\n",
"This is great, thanks so much! :cookie: :star:\n"
] |
|
https://api.github.com/repos/psf/requests/issues/1537
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1537/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1537/comments
|
https://api.github.com/repos/psf/requests/issues/1537/events
|
https://github.com/psf/requests/pull/1537
| 18,190,224 |
MDExOlB1bGxSZXF1ZXN0NzY3NjE5MA==
| 1,537 |
Allow non-string objects to be posted as data alongside files.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
}
] |
closed
| true | null |
[] | null | 6 |
2013-08-17T06:29:06Z
|
2021-09-08T23:06:20Z
|
2013-08-21T18:46:28Z
|
MEMBER
|
resolved
|
This should resolve #1462.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1537/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1537/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1537.diff",
"html_url": "https://github.com/psf/requests/pull/1537",
"merged_at": "2013-08-21T18:46:28Z",
"patch_url": "https://github.com/psf/requests/pull/1537.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1537"
}
| true |
[
"These test failures are the current cookie test failures in 2.0, nothing to do with this PR.\n",
"@Lukasa will this interfere with the undocumented streaming upload functionality?\n",
"Nope, we decide if we're doing streaming well before this point. This will only affect values in the data dict. =)\n",
"I really need to review (and document) that part of the code ;(\n",
"@kennethreitz I'm pretty sure it is documented. I'd tell you, but the docs are 503ing.\n",
"@kennethreitz Yup: https://github.com/kennethreitz/requests/blob/master/docs/user/advanced.rst#streaming-uploads\n"
] |
https://api.github.com/repos/psf/requests/issues/1536
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1536/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1536/comments
|
https://api.github.com/repos/psf/requests/issues/1536/events
|
https://github.com/psf/requests/pull/1536
| 18,190,158 |
MDExOlB1bGxSZXF1ZXN0NzY3NjE2Nw==
| 1,536 |
RequestsException subclasses IOError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
[
{
"color": "009800",
"default": false,
"description": null,
"id": 44501218,
"name": "Ready To Merge",
"node_id": "MDU6TGFiZWw0NDUwMTIxOA==",
"url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge"
},
{
"color": "eb6420",
"default": false,
"description": null,
"id": 44501256,
"name": "Breaking API Change",
"node_id": "MDU6TGFiZWw0NDUwMTI1Ng==",
"url": "https://api.github.com/repos/psf/requests/labels/Breaking%20API%20Change"
}
] |
closed
| true | null |
[] | null | 1 |
2013-08-17T06:18:11Z
|
2021-09-08T23:06:14Z
|
2013-08-21T18:25:20Z
|
MEMBER
|
resolved
|
This should resolve #1532. Let me know if you'd rather we multiply-inherit from `IOError` and `RuntimeError`.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1536/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1536/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1536.diff",
"html_url": "https://github.com/psf/requests/pull/1536",
"merged_at": "2013-08-21T18:25:20Z",
"patch_url": "https://github.com/psf/requests/pull/1536.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1536"
}
| true |
[
"These test failures are the current cookie test failures in 2.0, nothing to do with this PR.\n"
] |
https://api.github.com/repos/psf/requests/issues/1535
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1535/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1535/comments
|
https://api.github.com/repos/psf/requests/issues/1535/events
|
https://github.com/psf/requests/pull/1535
| 18,186,985 |
MDExOlB1bGxSZXF1ZXN0NzY3NTQ3Mg==
| 1,535 |
Skip cookie extraction if necessary
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4",
"events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}",
"followers_url": "https://api.github.com/users/sigmavirus24/followers",
"following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}",
"gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sigmavirus24",
"id": 240830,
"login": "sigmavirus24",
"node_id": "MDQ6VXNlcjI0MDgzMA==",
"organizations_url": "https://api.github.com/users/sigmavirus24/orgs",
"received_events_url": "https://api.github.com/users/sigmavirus24/received_events",
"repos_url": "https://api.github.com/users/sigmavirus24/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sigmavirus24",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-08-17T02:44:53Z
|
2021-09-08T21:01:06Z
|
2013-08-17T03:15:18Z
|
CONTRIBUTOR
|
resolved
|
If `_original_response` is never set/is `None`, then don't try to extract cookies
from the response.
Fixes #1534
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1535/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1535/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1535.diff",
"html_url": "https://github.com/psf/requests/pull/1535",
"merged_at": "2013-08-17T03:15:18Z",
"patch_url": "https://github.com/psf/requests/pull/1535.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1535"
}
| true |
[
":cake: :sparkles:\n"
] |
https://api.github.com/repos/psf/requests/issues/1534
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1534/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1534/comments
|
https://api.github.com/repos/psf/requests/issues/1534/events
|
https://github.com/psf/requests/issues/1534
| 18,182,972 |
MDU6SXNzdWUxODE4Mjk3Mg==
| 1,534 |
extract_cookies_to_jar requires a response with _original_response
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/145979?v=4",
"events_url": "https://api.github.com/users/dstufft/events{/privacy}",
"followers_url": "https://api.github.com/users/dstufft/followers",
"following_url": "https://api.github.com/users/dstufft/following{/other_user}",
"gists_url": "https://api.github.com/users/dstufft/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dstufft",
"id": 145979,
"login": "dstufft",
"node_id": "MDQ6VXNlcjE0NTk3OQ==",
"organizations_url": "https://api.github.com/users/dstufft/orgs",
"received_events_url": "https://api.github.com/users/dstufft/received_events",
"repos_url": "https://api.github.com/users/dstufft/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dstufft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dstufft/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dstufft",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 3 |
2013-08-16T22:55:07Z
|
2021-09-09T02:11:37Z
|
2013-08-17T03:15:18Z
|
CONTRIBUTOR
|
resolved
|
While attempting to make a `LocalFSAdapter()` which handles `file://` urls I discovered that the response objects need to have an `_original_response` attribute or it bombs out at https://github.com/kennethreitz/requests/blob/master/requests/cookies.py#L113.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4",
"events_url": "https://api.github.com/users/kennethreitz/events{/privacy}",
"followers_url": "https://api.github.com/users/kennethreitz/followers",
"following_url": "https://api.github.com/users/kennethreitz/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kennethreitz",
"id": 119893,
"login": "kennethreitz",
"node_id": "MDQ6VXNlcjExOTg5Mw==",
"organizations_url": "https://api.github.com/users/kennethreitz/orgs",
"received_events_url": "https://api.github.com/users/kennethreitz/received_events",
"repos_url": "https://api.github.com/users/kennethreitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kennethreitz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1534/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1534/timeline
| null |
completed
| null | null | false |
[
"This probably doesn't help but a few things:\n- Requests is ill-suited for the `file://` scheme.\n- The `_original_response` object can be faked in a couple ways but only one way if you're interested in python 2 and 3 support. If the latter is your purpose then checkout sigmavirus24/betamax. I'm using the latter method there. If you have questions, feel free to ask me outside of this message.\n\nI also wouldn't be opposed to documenting the method, but for the most part, I don't see this being a bug. Maybe @Lukasa or @kennethreitz \n",
"Heh, @kennethreitz is the one who told me to file it :) I actually have it mocked and it's pretty janky. Simple fix for upstream though and the `file://` url is actually working pretty well, requests seems to handle it fine. The only \"gotchas\" are this one which requires a pretty janky hack to get around, and an error because `file://` urls don't have a hostname but another small hack can fix that too.\n\nFWIW: https://github.com/dstufft/pip/commit/aba6f37d5619b26a4dbc4b2ceb01e9dd2899f5ad\n",
"The fix is simple without a doubt and since it seems @kennethreitz seems okay with it, I'll put together a PR with it.\n\nThat said, the best workaround is not that janky frankly but you're right in that it shouldn't be necessary. My project needs it though :/\n"
] |
https://api.github.com/repos/psf/requests/issues/1533
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1533/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1533/comments
|
https://api.github.com/repos/psf/requests/issues/1533/events
|
https://github.com/psf/requests/issues/1533
| 18,120,658 |
MDU6SXNzdWUxODEyMDY1OA==
| 1,533 |
Post not respecting explicit Content-Type in Python 3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/108889?v=4",
"events_url": "https://api.github.com/users/jamur2/events{/privacy}",
"followers_url": "https://api.github.com/users/jamur2/followers",
"following_url": "https://api.github.com/users/jamur2/following{/other_user}",
"gists_url": "https://api.github.com/users/jamur2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jamur2",
"id": 108889,
"login": "jamur2",
"node_id": "MDQ6VXNlcjEwODg4OQ==",
"organizations_url": "https://api.github.com/users/jamur2/orgs",
"received_events_url": "https://api.github.com/users/jamur2/received_events",
"repos_url": "https://api.github.com/users/jamur2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jamur2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamur2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jamur2",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 5 |
2013-08-15T18:51:26Z
|
2021-09-09T02:11:36Z
|
2013-08-15T19:10:15Z
|
NONE
|
resolved
|
In Python 3 (specifically, I'm on 3.3.2) if you set an explicit Content-Type header in post and have a byte string as your data, 2 Content-Type headers will get set. Example:
``` python
>>> _ = requests.post("http://httpbin.org/post", data=b"123", headers={"Content-Type": "application/octet-stream"})
>>> _.json()['headers']['Content-Type']
'application/octet-stream, application/x-www-form-urlencoded'
```
It looks like it's because the test to see if a `Content-Type` is already set is broken. It tests (`requests/models.py`):
``` python
# Add content-type if it wasn't explicitly provided.
if (content_type) and (not 'content-type' in self.headers):
self.headers['Content-Type'] = content_type
```
But keys in self.headers are bytestrings in Python 3, not strings:
``` python
(pdb) 'content-type' in self.headers
False
(pdb) b'content-type' in self.headers
True
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1533/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1533/timeline
| null |
completed
| null | null | false |
[
"Thanks for reporting this! We've actually already got a fix for this, from PR #1338, which is in the 2.0 branch. =) Should be in the 2.0 release. =)\n",
"Ah, cool. In my defense, GitHub issue search is down :)\n",
"You absolutely don't need a defense. =) I'm glad you opened the issue: it's certainly way better than staying silent. :cake:\n",
"I've encountered this issue, too, and it makes my financial library unusable under Python 3. Is the 2.0 release imminent? Is there a workaround in the meantime?\n",
"@jaraco If you desperately need a workaround you can change the header key on L398 of models.py to be `b'Content-Type'`. That should work. I'm not expecting a huge wait for the 2.0 release, but that's really up to Kenneth.\n"
] |
https://api.github.com/repos/psf/requests/issues/1532
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1532/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1532/comments
|
https://api.github.com/repos/psf/requests/issues/1532/events
|
https://github.com/psf/requests/issues/1532
| 18,081,270 |
MDU6SXNzdWUxODA4MTI3MA==
| 1,532 |
RequestException should subclass IOError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/154988?v=4",
"events_url": "https://api.github.com/users/beck/events{/privacy}",
"followers_url": "https://api.github.com/users/beck/followers",
"following_url": "https://api.github.com/users/beck/following{/other_user}",
"gists_url": "https://api.github.com/users/beck/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/beck",
"id": 154988,
"login": "beck",
"node_id": "MDQ6VXNlcjE1NDk4OA==",
"organizations_url": "https://api.github.com/users/beck/orgs",
"received_events_url": "https://api.github.com/users/beck/received_events",
"repos_url": "https://api.github.com/users/beck/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/beck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beck/subscriptions",
"type": "User",
"url": "https://api.github.com/users/beck",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 6 |
2013-08-14T22:34:45Z
|
2021-09-09T01:22:33Z
|
2013-08-27T07:58:09Z
|
NONE
|
resolved
|
`RuntimeError` [should only be raised if does not fall in any other categories](http://docs.python.org/2/library/exceptions.html#exceptions.RuntimeError).
Using [urllib](http://docs.python.org/2/library/urllib.html#url-opener-objects) as reference, I believe `IOError` is appropriate.
Related: https://github.com/kennethreitz/requests/issues/419
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1532/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1532/timeline
| null |
completed
| null | null | false |
[
"Yeah, I think you're right. @kennethreitz, are you open to me doing this for 2.0?\n",
"yes, especially if IOError subclasses RuntimeError\n",
"IOError is a subclass of EnvironmentError, sadly, not RuntimeError. It'll definitely be backwards incompatible, but I think that's probably acceptable.\n",
"could use multiple inheritence\n",
"I'm not sure that we'd be fixing the issue in that case. =D We can do it, but we're also at a stage where we can break stuff. I doubt anyone is doing `except RuntimeError` anyway, so we're probably pretty safe.\n",
"Implemented in #1536.\n"
] |
https://api.github.com/repos/psf/requests/issues/1531
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1531/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1531/comments
|
https://api.github.com/repos/psf/requests/issues/1531/events
|
https://github.com/psf/requests/issues/1531
| 18,077,104 |
MDU6SXNzdWUxODA3NzEwNA==
| 1,531 |
Request exception returns encoded (windows 1255) string. how to set locale
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1434992?v=4",
"events_url": "https://api.github.com/users/alonisser/events{/privacy}",
"followers_url": "https://api.github.com/users/alonisser/followers",
"following_url": "https://api.github.com/users/alonisser/following{/other_user}",
"gists_url": "https://api.github.com/users/alonisser/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alonisser",
"id": 1434992,
"login": "alonisser",
"node_id": "MDQ6VXNlcjE0MzQ5OTI=",
"organizations_url": "https://api.github.com/users/alonisser/orgs",
"received_events_url": "https://api.github.com/users/alonisser/received_events",
"repos_url": "https://api.github.com/users/alonisser/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alonisser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alonisser/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alonisser",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-08-14T21:09:33Z
|
2021-09-09T01:22:24Z
|
2013-09-21T15:25:59Z
|
NONE
|
resolved
|
I'm trying to figure out how to get a proper English/ASCII error message from a failed `requests.get`. For some reason (probably because I'm working with a Hebrew locale), I get a garbled Unicode error message. I can decode the message and display it correctly in Hebrew, but I would really prefer (for various reasons) that the exception error text be in English.
Checking the [Requests exceptions source code](https://github.com/kennethreitz/requests/blob/master/requests/exceptions.py) doesn't even lead me in the right direction, since I seem to be getting a different exception:
`(10060, some encoded error text,')`
and the class is: `<class 'socket.error'>`.
So I guess we are probably looking at a `socket.error` from [socket lib](http://docs.python.org/2/library/socket.html#socket.error). At least as far as I can tell from parsing the Exception args (I can't raise the exception directly, only while running the app, where I do a smelly `except Exception as e` and parse `e.args` and `e.message`).
So:
how do I set the locale so that requests (or the underlying socket lib if my guess is correct) would return an english/ascii error message?
Thanks for the help!
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4",
"events_url": "https://api.github.com/users/Lukasa/events{/privacy}",
"followers_url": "https://api.github.com/users/Lukasa/followers",
"following_url": "https://api.github.com/users/Lukasa/following{/other_user}",
"gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Lukasa",
"id": 1382556,
"login": "Lukasa",
"node_id": "MDQ6VXNlcjEzODI1NTY=",
"organizations_url": "https://api.github.com/users/Lukasa/orgs",
"received_events_url": "https://api.github.com/users/Lukasa/received_events",
"repos_url": "https://api.github.com/users/Lukasa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Lukasa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1531/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1531/timeline
| null |
completed
| null | null | false |
[
"Thanks for raising this!\n\nUnfortunately, this is not a property of Requests. I have no idea what's causing this, but you'll have to look into the socket library. If I recall correctly you also opened a Stack Overflow question on this topic, which is probably the right place to look.\n"
] |
https://api.github.com/repos/psf/requests/issues/1530
|
https://api.github.com/repos/psf/requests
|
https://api.github.com/repos/psf/requests/issues/1530/labels{/name}
|
https://api.github.com/repos/psf/requests/issues/1530/comments
|
https://api.github.com/repos/psf/requests/issues/1530/events
|
https://github.com/psf/requests/pull/1530
| 17,977,066 |
MDExOlB1bGxSZXF1ZXN0NzU1OTc5OA==
| 1,530 |
Added Logbook as a requirement.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/423715?v=4",
"events_url": "https://api.github.com/users/daniloassis/events{/privacy}",
"followers_url": "https://api.github.com/users/daniloassis/followers",
"following_url": "https://api.github.com/users/daniloassis/following{/other_user}",
"gists_url": "https://api.github.com/users/daniloassis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daniloassis",
"id": 423715,
"login": "daniloassis",
"node_id": "MDQ6VXNlcjQyMzcxNQ==",
"organizations_url": "https://api.github.com/users/daniloassis/orgs",
"received_events_url": "https://api.github.com/users/daniloassis/received_events",
"repos_url": "https://api.github.com/users/daniloassis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daniloassis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniloassis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daniloassis",
"user_view_type": "public"
}
|
[] |
closed
| true | null |
[] | null | 1 |
2013-08-13T02:51:55Z
|
2021-09-08T21:01:06Z
|
2013-08-13T02:52:11Z
|
NONE
|
resolved
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/423715?v=4",
"events_url": "https://api.github.com/users/daniloassis/events{/privacy}",
"followers_url": "https://api.github.com/users/daniloassis/followers",
"following_url": "https://api.github.com/users/daniloassis/following{/other_user}",
"gists_url": "https://api.github.com/users/daniloassis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/daniloassis",
"id": 423715,
"login": "daniloassis",
"node_id": "MDQ6VXNlcjQyMzcxNQ==",
"organizations_url": "https://api.github.com/users/daniloassis/orgs",
"received_events_url": "https://api.github.com/users/daniloassis/received_events",
"repos_url": "https://api.github.com/users/daniloassis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/daniloassis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daniloassis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/daniloassis",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/psf/requests/issues/1530/reactions"
}
|
https://api.github.com/repos/psf/requests/issues/1530/timeline
| null | null | false |
{
"diff_url": "https://github.com/psf/requests/pull/1530.diff",
"html_url": "https://github.com/psf/requests/pull/1530",
"merged_at": null,
"patch_url": "https://github.com/psf/requests/pull/1530.patch",
"url": "https://api.github.com/repos/psf/requests/pulls/1530"
}
| true |
[
"my bad, sorry.\n"
] |