url
stringlengths
50
53
repository_url
stringclasses
1 value
labels_url
stringlengths
64
67
comments_url
stringlengths
59
62
events_url
stringlengths
57
60
html_url
stringlengths
38
43
id
int64
597k
2.65B
node_id
stringlengths
18
32
number
int64
1
6.83k
title
stringlengths
1
296
user
dict
labels
listlengths
0
5
state
stringclasses
2 values
locked
bool
2 classes
assignee
dict
assignees
listlengths
0
4
milestone
dict
comments
int64
0
211
created_at
stringlengths
20
20
updated_at
stringlengths
20
20
closed_at
stringlengths
20
20
author_association
stringclasses
3 values
active_lock_reason
stringclasses
4 values
body
stringlengths
0
65.6k
closed_by
dict
reactions
dict
timeline_url
stringlengths
59
62
performed_via_github_app
null
state_reason
stringclasses
3 values
draft
bool
2 classes
pull_request
dict
is_pull_request
bool
2 classes
issue_comments
listlengths
0
30
https://api.github.com/repos/psf/requests/issues/1930
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1930/labels{/name}
https://api.github.com/repos/psf/requests/issues/1930/comments
https://api.github.com/repos/psf/requests/issues/1930/events
https://github.com/psf/requests/pull/1930
28,298,817
MDExOlB1bGxSZXF1ZXN0MTI5MzI2MTk=
1,930
Candidate "improve manual redirect-walking" core API change.
{ "avatar_url": "https://avatars.githubusercontent.com/u/325899?v=4", "events_url": "https://api.github.com/users/zackw/events{/privacy}", "followers_url": "https://api.github.com/users/zackw/followers", "following_url": "https://api.github.com/users/zackw/following{/other_user}", "gists_url": "https://api.github.com/users/zackw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zackw", "id": 325899, "login": "zackw", "node_id": "MDQ6VXNlcjMyNTg5OQ==", "organizations_url": "https://api.github.com/users/zackw/orgs", "received_events_url": "https://api.github.com/users/zackw/received_events", "repos_url": "https://api.github.com/users/zackw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zackw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zackw/subscriptions", "type": "User", "url": "https://api.github.com/users/zackw", "user_view_type": "public" }
[ { "color": "e11d21", "default": false, "description": null, "id": 44501305, "name": "Not Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTMwNQ==", "url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge" } ]
closed
true
null
[]
null
18
2014-02-26T00:34:55Z
2021-09-08T23:06:07Z
2014-03-12T20:36:16Z
CONTRIBUTOR
resolved
Sorry for dropping off the face of the earth for a while, folks. After reading the reaction to my previous pull request, I realized that the bugfixes I want to get in will be easier, cleaner, and make more sense in context if I make the big API change _first_. So this is the proposed big API change. It is not fully baked -- if nothing else, it needs tests -- but I'd like to get your opinion on the idea first. - Session.send now offers a new mode, `iter_redirects=True`, in which it returns an iterator over redirects _instead of_ the first response. In this mode, the first request does not actually fire until you call next() on the iterator the first time. Unlike the legacy Session.resolve_redirects, the first response is included in the generated sequence. - Session.resolve_redirects is preserved and works as it always has, but is now clearly documented as not the API you want. (The docstring for Session.send itself probably needs some improvement as well.) Its calling convention has been slightly polished: the request argument is now optional (defaults to `resp.redirect`), it will accept arbitrary kwargs to be passed to the adapter, and it defaults the same set of kwargs from session settings as `send` itself does. - The `allow_redirects=False` mode of `Session.send` also still works just as it always has. If both allow_redirects=False and iter_redirects=True are specified, allow_redirects=False wins. - SessionRedirectMixin has been replaced by a RedirectIterator class which is not a parent of Session. - Response.history is now always a list.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1930/reactions" }
https://api.github.com/repos/psf/requests/issues/1930/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1930.diff", "html_url": "https://github.com/psf/requests/pull/1930", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1930.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1930" }
true
[ "> Session.send now offers a new mode, `iter_redirects=True`, in which\n> it returns an iterator over redirects instead of the first response. [...] If both allow_redirects=False and iter_redirects=True are specified, allow_redirects=False wins.\n\nAah! Nope, that's not the way to do this. That last sentence should be a bit of a clue: your arguments to a function should never be fighting each other.\n\nMore importantly, function argument should not change the type of the returned value. This makes it substantially more difficult to reason about the function. If you really want to do this then you're going to have to have a new function call here, rather than a new parameter.\n\n@kennethreitz We're going to want your insight here. =)\n", "I agree with @Lukasa's comment. Further, `Session#send` should not return either an iterator or a response. It should only return one ever. The 98% use case desires a Response and that's all we should ever return. I will not budge on this.\n\nI haven't reviewed the code at all, but I will probably leave PR review when I do. As this is described right now, I'm :-1: on the entire change with the caveat that I haven't reviewed much beyond your description @zackw \n", "I think that insisting `Session.send` only ever return one type of object is excessively dogmatic, but I acknowledge that the toggle-switch arguments that conflict are annoying.\n\nPerhaps this is more palatable?\n- `Session.resolve_redirects` is deprecated.\n- `Session.send` remains as is, but the `allow_redirects=False` mode is deprecated.\n- New method `Session.send_no_redirect` does what `Session.send` with `allow_redirects=False` does now.\n- New method `Session.send_iter_redirect` returns the proposed redirect iterator.\n- New method `Session.prepare_request_for_redirect` takes a response and returns a new PreparedRequest to follow the redirect.\n\nI've started implementing this on a new branch at zackw/requests@6703df73e6 to see how it goes. 
I probably won't be done till tomorrow or Friday.\n\nBetter names for the new `send_*` variants solicited.\n", "I'm not really happy with three new methods, I'm afraid: it feels like excessive complexity.\n\nI don't understand why we're making this so complicated. Why can't we piggyback on what we already have? With `allow_redirects=False`, you'll potentially get back a redirect (which will satisfy your `Response.is_redirect` predicate). It should then be possible to simply feed it to a method on the session (e.g. `Session.follow_redirect()`) which takes all the same parameters as `Session.send`. What's your rationale for not going that way?\n", "> I don't understand why we're making this so complicated. Why can't we piggyback on what we\n> already have? With allow_redirects=False, you'll potentially get back a redirect (which will satisfy\n> your Response.is_redirect predicate). It should then be possible to simply feed it to a method on\n> the session (e.g. Session.follow_redirect()) which takes all the same parameters as Session.send.\n> What's your rationale for not going that way?\n\nFundamentally, the issue I'm trying to fix is that `Session.resolve_redirects` doesn't include the very first response in its generated sequence. That makes it difficult to use correctly even after all its superficial problems (not taking arbitrary kwargs, allowing you to shoot yourself in the foot by passing the wrong request object as the second argument) are corrected. You wind up having to write code like this:\n\n```\nfirst_resp = sess.send(..., allow_redirects=False)\nprocess_response(first_resp)\nfor resp in sess.resolve_redirects(first_resp, ...):\n process_response(resp)\n```\n\nand if what `process_response` does cannot be extracted to its own function, for whatever reason (e.g. 
nontrivial exception goo), you may find yourself duplicating tens of lines of code.\n\nWhat I _want_ as an API is\n\n```\nfor resp in sess.send(..., iter_redirects=True):\n process_response(resp)\n```\n\nbut since `iter_redirects=True` has been nixed, and `resolve_redirects` cannot itself be changed, we gotta make up a new name. And if we're making up a new name for a new redirect-related operating mode of `send`, it is more globally consistent to give `allow_redirects=False` mode its own method name as well.\n\nIf I understand correctly what you are suggesting, it amounts to\n\n```\nresp = sess.send(req, ..., allow_redirects=False)\nwhile resp.is_redirect:\n process_response(resp)\n resp = sess.send(sess.prepare_request_for_redirect(resp), ..., allow_redirects=False)\nprocess_response(resp)\n```\n\nwhich is more typing and doesn't get rid of the need to process the response in two places (textually).\n", "Well, firstly, it doesn't require us to process the response in two places:\n\n``` python\nresp = sess.send(...)\nwhile True:\n process_response(resp)\n if not resp.is_redirect:\n break\n```\n\nNevertheless, we still don't need three new methods, we only need one: `Session.send_iter` (or equivalent). This will involve rewriting `Session.send` to basically just exhaust `Session.send_iter`. We should not be changing what `Session.send` does by default (which is to follow all redirects), as this is overwhelmingly the most common use case. Your use case is a valuable one, and we want to support it, but we should not be making it the primary interface or causing pain to the users who are happily using the current interface.\n", "I did not propose to change `Session.send` in the default mode, only to excise the `allow_redirects=False` mode to its own method. Please see f4d7bc780d57de4e06651fa2cd4cb790c0ac4789.\n", "Right, but I see no reason to do that either. People who are currently handling their own redirects should not be forced to switch to the new scheme. 
Especially as if `Session.send()` and `Session.send_iter()` have the same underlying code, `Session.send()` with `allow_redirects=False` can be a simple special case where we only iterate over the iterator once. I don't see that the extra methods buy us anything we don't already have.\n", "I see where you're coming from? Only it turns out that the code is internally tidiest if `Session.send_no_redirect` (in my latest code I'm calling it `send_single`) exists as a separate method, at which point we are only arguing over whether it should be part of the exposed API. I happen to think that we should take the opportunity to deprecate `allow_redirects=False`, which is a confusing name (it sounds to me like it would cause an _error_ upon encountering a redirect).\n", "Ok, so this is a disagreement about API design. I strongly suggest we keep `allow_redirects`. It's been present in the API for a very long time, large quantities of both formal and informal documentation refer to it, and it would be jarring and painful to switch away from it. Additionally, it turns this from a minor release into a major one by being backward incompatible. =)\n", "Are you okay with keeping it but applying a DeprecationWarning? That being what I actually did. ;-)\n", "I am not. =) Requests does not have an official deprecation policy, but if we did I'd be hugely reluctant to throw deprecation warnings into minor releases. Additionally, `DeprecationWarning`s are silent by default in 2.7 onward.\n\nRegardless, even if they were noisy, I'd be against this change: it adds needless complexity and trashes the interface for no good reason. The parameter works fine as is, provides no engineering difficulty to keep, and allows a seamless API transition.\n", "I agree with all of @Lukasa's feedback. Frankly @zackw you seem to want to throw everything out and rewrite it from scratch. While that isn't always a bad thing, you seem to be taking advantage of our desire to help you out. 
I for one will not accept that. We have tried to help usher you through the process of making your work as close to perfect as possible and all I have seen in this thread is you fighting us. I understand why you're fighting but I see no genuine attempts at compromise. At best you're not making me any more sympathetic to your goal.\n", "Uh, you realize you just nitpicked a pull request that I already junked\nbased on Cory's feedback? I will make sure of it tomorrow, but I believe\nall of the things you didn't like are already better in the new (still WIP)\nredirection-generator-2 branch, or else they were preexisting conditions\nin code that I just moved. (The _cookies thing, for instance: I didn't do\nthat. That was there when I got here. It looks wrong to me too, but I don't\nunderstand what it's doing so I left it alone.)\n\nOn the larger issue: My idea of good style is, it has become clear,\nradically different than yours. This is your project and therefore I am\ndoing my damnedest to emulate your style, but the difference is great\nenough that I will inevitably get it wrong at least some of the time. I\nalso assure you that I am not just changing things for the sake of changing\nthings. I am trying to fix real bugs which I personally tripped over, and\nsome of them need API changes to fix. I appreciate your collective patience\nto date, and if there's anything I can do to make this easier for everyone\nI will consider it. For instance, it seemed pointless to officially\nfile all the bugs on the list in my head when I could just send the fixes,\nbut if it would make it easier for you to keep track of what the point is\nhere, I can do that. I'd also be happy to turn up on IRC for a planning\nsession if that would help.\n", "> Uh, you realize you just nitpicked a pull request that I already junked based on Cory's feedback?\n\nYou made no indication that you were junking it. You didn't close it and you didn't indicate you were working on yet another branch.
I won't even address your \"nitpicking\" claim. That isn't productive.\n\n> Uh, you realize you just nitpicked a pull request that I already junked based on Cory's feedback?\n\nSorry but you can probably understand why I would think this was something you had done.\n\n> My idea of good style is, it has become clear, radically different than yours.\n\nMost of it isn't even just style. Some of it is just recognizing an anti-pattern when you see it. There is absolutely no need for a class method on the iterator class. Beyond that, class methods should basically always (in Python) be public especially when used in a public manner. The naming convention in the Python community is not just our \"idea\" of good style, it's a rather global convention. A function (or attribute) prefixed with an `_` is meant to be private and an implementation detail. It doesn't matter that everything is public in Python, the intent on the behalf of the author is communicated by those `_`s and indicate that the function is an implementation detail subject to change at will. If you're consuming that function publicly then you're just inviting disaster on yourself and it isn't the author's fault. \n\n> I am trying to fix real bugs which I personally tripped over, and some of them need API changes to fix. [snip] For instance, it seemed pointless to officially file all the bugs on the list in my head when I could just send the fixes, but if it would make it easier for you to keep track of what the point is here, I can do that.\n\nIt's not pointless to us. The only \"bug\" that I've seen fixed is the inconsistency in the type of `Response`'s `history` attribute. I have no other understanding of bugs in this section and I'm not a psychic. Also none of your changes to date have been evident that they fix any specific bugs. In fact they have been all API changes. Perhaps your concept of \"bugs\" are difficulties with what (I admit freely) is an imperfect API for handling redirects yourself. 
It is acceptable to improve the API but when I look at this all I see is a giant reifying of things.\n\nIf the bugs make sense to you to be grouped together instead of sifting through and filing 100 issues on GitHub then group them. At least we can discuss the merit and core problem of each of them and discuss possible solutions before you spend any more time working on these pull requests that you essentially just keep closing and totally rewriting. This seems like a waste of your time as well as ours whenever we try to review them. I have more projects to tend to than just requests. I review your code not to nitpick but to help it have a chance of being merged. If you see it as nitpicking then I'm not sure I can help you.\n\nYou might also benefit from reading about [30% Feedback](http://blog.42floors.com/thirty-percent-feedback/).\n", "Also, for what it is worth, having a huge change like this in one commit is not helpful to either @Lukasa or me. This [Guide to Git Commits](https://wiki.openstack.org/wiki/GitCommitMessages) for OpenStack should explain why it's harder for us to review one huge commit as opposed to a series of smaller ones which tell a story.\n", "Alright enough fighting guys :)\n\n---\n\n@zachw, would you like to have a Skype or Google Hangout sometime this week or next? Perhaps I could really connect and get a good idea for what you're trying to accomplish, and see if I can come up with a supplemental API that would work well :)\n", "@kennethreitz I'm @zackw, not @zachw :-)\n\nCould probably sit down and talk sometime tomorrow. I'm in US/Eastern and can do either Skype or Google (but Skype is preferred). I will also find time to file bugs for all the things I was trying to fix, as requested earlier.\n" ]
https://api.github.com/repos/psf/requests/issues/1929
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1929/labels{/name}
https://api.github.com/repos/psf/requests/issues/1929/comments
https://api.github.com/repos/psf/requests/issues/1929/events
https://github.com/psf/requests/issues/1929
28,291,820
MDU6SXNzdWUyODI5MTgyMA==
1,929
Previous requests in redirect chain don't track their history
{ "avatar_url": "https://avatars.githubusercontent.com/u/772?v=4", "events_url": "https://api.github.com/users/alex/events{/privacy}", "followers_url": "https://api.github.com/users/alex/followers", "following_url": "https://api.github.com/users/alex/following{/other_user}", "gists_url": "https://api.github.com/users/alex/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex", "id": 772, "login": "alex", "node_id": "MDQ6VXNlcjc3Mg==", "organizations_url": "https://api.github.com/users/alex/orgs", "received_events_url": "https://api.github.com/users/alex/received_events", "repos_url": "https://api.github.com/users/alex/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex/subscriptions", "type": "User", "url": "https://api.github.com/users/alex", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-02-25T22:40:13Z
2021-09-08T23:07:55Z
2014-10-10T19:20:59Z
MEMBER
resolved
Example: ``` pycon >>> response = requests.get("http://djangoproject.com") >>> response.history (<Response [301]>, <Response [301]>) >>> response.history[0].history [] >>> response.history[1].history [] ``` I would have expected the first of the two items in `history` to have a `history` of its own.
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1929/reactions" }
https://api.github.com/repos/psf/requests/issues/1929/timeline
null
completed
null
null
false
[ "Thanks for this @alex! Assuming #1919 gets cleaned up this should be fixed by that pull request. =)\n", "I'm actually thinking it'll be better to do stuff in a different order, see pull request #1930, but yeah, this is definitely on my todo list.\n", "@alex fixed!\n" ]
https://api.github.com/repos/psf/requests/issues/1928
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1928/labels{/name}
https://api.github.com/repos/psf/requests/issues/1928/comments
https://api.github.com/repos/psf/requests/issues/1928/events
https://github.com/psf/requests/issues/1928
28,078,839
MDU6SXNzdWUyODA3ODgzOQ==
1,928
Limit overall execution time
{ "avatar_url": "https://avatars.githubusercontent.com/u/1811535?v=4", "events_url": "https://api.github.com/users/andrewtryder/events{/privacy}", "followers_url": "https://api.github.com/users/andrewtryder/followers", "following_url": "https://api.github.com/users/andrewtryder/following{/other_user}", "gists_url": "https://api.github.com/users/andrewtryder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andrewtryder", "id": 1811535, "login": "andrewtryder", "node_id": "MDQ6VXNlcjE4MTE1MzU=", "organizations_url": "https://api.github.com/users/andrewtryder/orgs", "received_events_url": "https://api.github.com/users/andrewtryder/received_events", "repos_url": "https://api.github.com/users/andrewtryder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andrewtryder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andrewtryder/subscriptions", "type": "User", "url": "https://api.github.com/users/andrewtryder", "user_view_type": "public" }
[ { "color": "f7c6c7", "default": false, "description": null, "id": 167537670, "name": "Propose Close", "node_id": "MDU6TGFiZWwxNjc1Mzc2NzA=", "url": "https://api.github.com/repos/psf/requests/labels/Propose%20Close" } ]
closed
true
null
[]
null
14
2014-02-21T22:39:54Z
2015-01-31T16:18:40Z
2015-01-19T09:21:42Z
NONE
null
This seems known but timeout in the .get() function only works if the server doesn't respond. Is there a way to have this also apply to the entire request? (grab a MP3 stream, file too large, site is responding but is too slow?) I'm utilizing requests in an application that grabs only 1MB of each page via .iter_content() for detecting MIME but have some issues where sites are so entirely slow that grabbing the 1MB (or whatever the page is until it completes) can easily take >60s.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1928/reactions" }
https://api.github.com/repos/psf/requests/issues/1928/timeline
null
completed
null
null
false
[ "Thanks for raising this issue!\n\nThe answer is 'yes', sort of. When not streaming the download we apply the supplied `timeout` value to both the connection attempt and the read. However, currently, when streaming the download we only apply the timeout to the connection attempt.\n\nThere's an ongoing discussion about how best to handle read timeouts, but right now we don't do anything useful for you there I'm afraid. Your only real workaround is to read smaller chunks. =(\n", "Gentlefolk,\n\nWhile I am a brand new user of Requests, I do run into the problem where a long running stream does not timeout, it just hangs. In my case, I'm connecting to the Twitter streaming endpoint using `session.iter_lines()`. The loop is trivial:\n\n```\ntry:\n\n stream = twitter_sample(args)\n\n for tweet_bytes in stream.iter_lines(chunk_size=2048): # Read a basic tweet at a time.\n q.put(tweet_bytes.decode()) # Decode the tweet and put it on the queue.\n\nexcept (RequestException, UnicodeError) as e:\n warning('Error: {}'.format(e))\n```\n\n`twitter_sample()` looks like:\n\n```\ndef twitter_sample(args: argparse.Namespace) -> requests.Response:\n\n twitter = OAuth1Session(args.consumer_key, args.consumer_secret,\n args.token, args.token_secret)\n endpoint = 'https://stream.twitter.com/1.1/statuses/sample.json'\n headers = {'Accept': 'application/json; charset=utf-8'}\n timeout = 60.0\n # timeout = Timeout(connect=15.0, read=60.0) # urllib3 Timeout()\n\n stream = twitter.get(endpoint, stream=True, headers=headers, timeout=timeout)\n\n return stream\n```\n\nAs this is a long running stream from Twitter that occasionally just stops, I need to be able to recognize when it does. I can't just \"read less\". Being inspired by the Urllib3 documentation that takes either a `float` or a `Timeout()` for the `timeout` property, I have tried passing in a `Timeout()` class but it is caught in validation. 
If you let it through or just sanity checked the parameters, I suspect I would get everything I need. (This is the one feature I am getting out of a custom Twitter library and not Requests. I would prefer to use the general purpose and more performant stack that is Requests.)\n\nThank you all for the hard work on this package.\n\nAnon,\nAndrew\n", "Hi there, and thanks for using requests!\n\nAs I mentioned above, there's an ongoing discussion on how best to handle the multiple timeouts. As a temporary workaround before we come up with something better, you can monkeypatch the `TimeoutSauce` internal class.\n\n**Please Note: The following involves monkeypatching a class that is an implementation detail of requests. Be aware that we may change this under your feet without mentioning it in the changelog.**\n\n``` python\nimport requests\nfrom requests.adapters import TimeoutSauce\n\nclass MyTimeout(TimeoutSauce):\n def __init__(self, *args, **kwargs):\n connect = kwargs.get('connect', 5)\n read = kwargs.get('read', connect)\n super(MyTimeout, self).__init__(connect=connect, read=read)\n\nrequests.adapters.TimeoutSauce = MyTimeout\n```\n\nThis code should cause us to set the read timeout as equal to the connect timeout, which is the timeout value you pass on your `Session.get()` call. (Note that I haven't actually tested this code, so it may need some quick debugging, I just wrote it straight into the GitHub window.)\n\nHopefully this will tide you over until we come to a decision about how to handle this.\n", "@Lukasa,\n\nThank you for the monkey patch. I have it running on a test stream right now. I understand using this patch is on my own recognizance. That you disavow all knowledge of its existence and will deny that you were ever involved. ;-)\n\nThere are three main reasons I've moved to using your general purpose stack. First, I'll be able to use this knowledge elsewhere. (The database I use, Couchbase/CouchDB, uses HTTP for control messages.) 
Second, Requests API is almost as efficient as the Twitter library I am used to. This is a great accomplishment. Third, Requests is more performant. Requests is able to turn gzip on during the stream. This reduces my bandwidth use by a factor of 4 and, apparently, Twitter gives me more tweets. Yay!\n\nKeep up the good work. I'll watch this repo. I'll try to be aware when you've come to a decision. That said, do not hesitate to ping me. As long running timeouts are hard to test, I'll be happy to start testing the fix when your team is ready.\n\nAnon,\nAndrew\n", "Hi guys,\n\nThanks for the responses. Lukasa, outside of the patch there, do you have a gist or sample of the code I could see working in real time?\n\nI briefly mentioned my usage. Here's something more complete: I use requests to basically grab the \"information\" about a URL pasted. What it does is go to the url, parse what it can based on content.type (what the HTTP server responds with) and also does some guess work with python-magic if there's questions about it. I can see how the overall issue doesn't have a clear solution just yet.
\n\nI was trying something like this:\n\n```\n#!/usr/bin/env python\nimport sys\nimport requests\nfrom requests.adapters import TimeoutSauce\n\nclass MyTimeout(TimeoutSauce):\n def __init__(self, *args, **kwargs):\n connect = kwargs.get('connect', 5)\n read = kwargs.get('read', connect)\n super(MyTimeout, self).__init__(connect=connect, read=read)\n\nrequests.adapters.TimeoutSauce = MyTimeout\n\ndef get(url, urlread=True):\n try:\n r = requests.get(url, timeout=5, stream=False, allow_redirects=True)\n return r\n except Exception, e:\n print \"Error: {0}\".format(e)\n return None\n\nurl = sys.argv[1]\nff = get(url)\n\nprint type(ff), ff\n```\n\nWhen you use it with a pure mp3 stream url like: http://mp3.rtvslo.si/ars\n\nIt won't timeout until you KeyboardInterrupt (Ctrl+C)\n\nThanks.\n", "@Lukasa,\n\nI've had the patch running for two days now and it appears to have properly detected hangups from Twitter. My app then safely restarts the connection.\n\nI'm looking forward to your full featured solution. Rest assured though that it is needed and will be used.\n\nAnon,\nAndrew\n", "@reticulatingspline Yes, this won't work for your use-case, I'm afraid. The read timeout function is at the scope of an individual socket `recv()` call, so that if the server stops sending data for more than the read timeout we'll abort.\n\nIf you really want to set a maximum length you'll need to use `stream=True` and `iter_content()` in small chunks. The read timeout will then apply to each `iter_content()` call. 
That, plus some judicious use of the `time` module, should get you the behaviour you want.\n", "@Lukasa \n\nI was testing this out with it and it seemed to work:\n\n```\nclass TimeoutException(Exception): pass\n\n@contextmanager\ndef time_limit(seconds):\n def signal_handler(signum, frame):\n raise TimeoutException, \"Timed out!\"\n signal.signal(signal.SIGALRM, signal_handler)\n signal.alarm(seconds)\n try:\n yield\n finally:\n signal.alarm(0)\n```\n\nI'd call get within the try/except after a with time_limit(10):\n\nThat seemed to work.\n\nSame concept?\n\nThanks.\n", "Yeah, same concept. =)\n", "Any idea on when we can expect a total connection timeout option? The workarounds don't seem as appealing as a simple \"total_timeout\" option would be.\n", "@mortoray There is no schedule on this feature. Kenneth has expressed ambivalence about the quality of the API, and we won't implement anything until we've got an API he's happy with.\n", "Given that there is no schedule on this feature, having an issue open to track it feels messy.\n", "+1\n", "Further discussion belongs over on https://github.com/sigmavirus24/requests-toolbelt/issues/51\n" ]
https://api.github.com/repos/psf/requests/issues/1927
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1927/labels{/name}
https://api.github.com/repos/psf/requests/issues/1927/comments
https://api.github.com/repos/psf/requests/issues/1927/events
https://github.com/psf/requests/issues/1927
28,077,065
MDU6SXNzdWUyODA3NzA2NQ==
1,927
cannot import pyopenssl
{ "avatar_url": "https://avatars.githubusercontent.com/u/960264?v=4", "events_url": "https://api.github.com/users/flibbertigibbet/events{/privacy}", "followers_url": "https://api.github.com/users/flibbertigibbet/followers", "following_url": "https://api.github.com/users/flibbertigibbet/following{/other_user}", "gists_url": "https://api.github.com/users/flibbertigibbet/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/flibbertigibbet", "id": 960264, "login": "flibbertigibbet", "node_id": "MDQ6VXNlcjk2MDI2NA==", "organizations_url": "https://api.github.com/users/flibbertigibbet/orgs", "received_events_url": "https://api.github.com/users/flibbertigibbet/received_events", "repos_url": "https://api.github.com/users/flibbertigibbet/repos", "site_admin": false, "starred_url": "https://api.github.com/users/flibbertigibbet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flibbertigibbet/subscriptions", "type": "User", "url": "https://api.github.com/users/flibbertigibbet", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-02-21T22:11:00Z
2021-09-09T00:10:04Z
2014-02-21T22:14:33Z
NONE
resolved
``` from requests.packages.urllib3.contrib import pyopenssl ``` results in the error: ``` ImportError: No module named ndg.httpsclient.ssl_peer_verification ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1927/reactions" }
https://api.github.com/repos/psf/requests/issues/1927/timeline
null
completed
null
null
false
[ "Thanks for raising this issue!\n\nYou are not expected to be importing anything from `urllib3`'s `pyopenssl` module yourself, so it's not hugely surprising that this doesn't work. Nevertheless, if you really want to, you can go ahead and do it by installing the necessary dependencies, as included in [this StackOverflow answer](https://stackoverflow.com/questions/18578439/using-requests-with-tls-doesnt-give-sni-support/18579484#18579484).\n", "Thanks!\n" ]
https://api.github.com/repos/psf/requests/issues/1926
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1926/labels{/name}
https://api.github.com/repos/psf/requests/issues/1926/comments
https://api.github.com/repos/psf/requests/issues/1926/events
https://github.com/psf/requests/issues/1926
27,885,873
MDU6SXNzdWUyNzg4NTg3Mw==
1,926
UnicodeEncodeError when auth parameters are outside of latin-1encoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/38861?v=4", "events_url": "https://api.github.com/users/oinopion/events{/privacy}", "followers_url": "https://api.github.com/users/oinopion/followers", "following_url": "https://api.github.com/users/oinopion/following{/other_user}", "gists_url": "https://api.github.com/users/oinopion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/oinopion", "id": 38861, "login": "oinopion", "node_id": "MDQ6VXNlcjM4ODYx", "organizations_url": "https://api.github.com/users/oinopion/orgs", "received_events_url": "https://api.github.com/users/oinopion/received_events", "repos_url": "https://api.github.com/users/oinopion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/oinopion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oinopion/subscriptions", "type": "User", "url": "https://api.github.com/users/oinopion", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2014-02-19T16:01:55Z
2021-09-09T00:01:06Z
2014-03-23T11:51:13Z
NONE
resolved
To reproduce: ``` >>> import requests >>> requests.get('http://example.com', auth=(u'żółty', u'jaźń')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/api.py", line 55, in get return request('get', url, **kwargs) File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/sessions.py", line 349, in request prep = self.prepare_request(req) File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/sessions.py", line 287, in prepare_request hooks=merge_hooks(request.hooks, self.hooks), File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/models.py", line 291, in prepare self.prepare_auth(auth, url) File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/models.py", line 470, in prepare_auth r = auth(self) File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/auth.py", line 48, in __call__ r.headers['Authorization'] = _basic_auth_str(self.username, self.password) File "/home/paczkowski/.virtualenvs/tmp-8a0b7916bbc9fce4/local/lib/python2.7/site-packages/requests/auth.py", line 31, in _basic_auth_str return 'Basic ' + b64encode(('%s:%s' % (username, password)).encode('latin1')).strip().decode('latin1') UnicodeEncodeError: 'latin-1' codec can't encode character u'\u017c' in position 0: ordinal not in range(256) ``` I am not sure what behaviour is correct here, but rising `UnicodeEncodeError` is probably not the best.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1926/reactions" }
https://api.github.com/repos/psf/requests/issues/1926/timeline
null
completed
null
null
false
[ "I don't have the time to find the spec but I think Headers are supposed to be encoded as latin strings (and that's how basic authentication and digest authentication is specified for the server). If Python cannot coerce your unicode credentials to Latin we should raise an exception. UnicodeEncodeError is a good one in my opinion. I don't like the idea of adding yet another exception.\n\nI agree that this doesn't give the user a great deal of information, but at the same time, this is a very accurate message.\n\n@Lukasa should we be checking credentials ahead of time? The problem with that is the fact that we glean authentication credentials from the Session too. There's no way to check this except in the preparation of the request. If we inherit from the UnicodeDecodeError, we could raise a new exception with the original message attached. It might be more informative to the user, `AuthenticationEncodeError`? Or perhaps a more generic `HeaderEncodeError`?\n", "Both Apache and nginx allow UTF-8 in Basic auth. cURL supports it, too.\n", "Ugh, this is why I hate HTTP. RFC 2616 has the following things to say on this topic:\n\nFirstly, [the definitions of headers](http://pretty-rfc.herokuapp.com/RFC2616#message.headers):\n\n> ```\n> message-header = field-name \":\" [ field-value ]\n> field-name = token\n> field-value = *( field-content | LWS )\n> field-content = <the OCTETs making up the field-value\n> and consisting of either *TEXT or combinations\n> of token, separators, and quoted-string>\n> ```\n\nUTF-8 is necessarily outside the range of tokens and separators, so we need to consider the TEXT BNF rule. Once again, [RFC 2616 to the rescue](http://pretty-rfc.herokuapp.com/RFC2616#basic.rules):\n\n> The TEXT rule is only used for descriptive field contents and values that are not intended to be interpreted by the message parser. 
Words of *TEXT MAY contain characters from character sets other than ISO-8859-1 Information technology - 8-bit single byte coded graphic - character sets only when encoded according to the rules of RFC 2047 RFC 2047[sic].\n> \n> ```\n> TEXT = <any OCTET except CTLs,\n> but including LWS>\n> ```\n\nNote please that ISO-8859-1 is a synonym of latin-1.\n\nAt this stage things get ambiguous. Strictly speaking, UTF-8 can be represented in the `TEXT` field as a string of opaque octets (as none of them will be mistaken for ASCII control characters). However, plain UTF-8 does _not_ meet the RFC 2047 encoding requirements.\n\nRequests is between a rock and a hard place. RFC 2616 makes clear that any compliant implementation will be able to handle ISO-8859-1, and makes no guarantees about supplying non-RFC 2047-encoded UTF-8 header values. In such a world, Requests is always going to choose the most-likely-to-succeed case, fitting in with our goal of satisfying the 90% use-case. In your situation @oinopion, I think the best thing to do is to build the Basic Auth header yourself: it's not very hard. =) You can therefore take control of the encoding yourself and choose the approach you know your target server supports.\n", "Related: [RFC 5987 – Character Set and Language Encoding for Hypertext Transfer Protocol (HTTP) Header Field Parameters](https://tools.ietf.org/html/rfc5987)\n" ]
https://api.github.com/repos/psf/requests/issues/1925
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1925/labels{/name}
https://api.github.com/repos/psf/requests/issues/1925/comments
https://api.github.com/repos/psf/requests/issues/1925/events
https://github.com/psf/requests/issues/1925
27,826,854
MDU6SXNzdWUyNzgyNjg1NA==
1,925
Use ujson if available
{ "avatar_url": "https://avatars.githubusercontent.com/u/866147?v=4", "events_url": "https://api.github.com/users/ntucker/events{/privacy}", "followers_url": "https://api.github.com/users/ntucker/followers", "following_url": "https://api.github.com/users/ntucker/following{/other_user}", "gists_url": "https://api.github.com/users/ntucker/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ntucker", "id": 866147, "login": "ntucker", "node_id": "MDQ6VXNlcjg2NjE0Nw==", "organizations_url": "https://api.github.com/users/ntucker/orgs", "received_events_url": "https://api.github.com/users/ntucker/received_events", "repos_url": "https://api.github.com/users/ntucker/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ntucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ntucker/subscriptions", "type": "User", "url": "https://api.github.com/users/ntucker", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-02-18T20:53:33Z
2021-09-08T23:10:45Z
2014-02-18T21:12:42Z
NONE
resolved
Benchmarks on the internet seem to suggest that ujson is the fastest json parser (e.g., http://www.justinfx.com/2012/07/25/python-2-7-3-serializer-speed-comparisons/). Maybe try importing ujson first, then simplejson, then json?
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1925/reactions" }
https://api.github.com/repos/psf/requests/issues/1925/timeline
null
completed
null
null
false
[ "Thanks for reporting!\nHowever this seems to be a duplicate of #1595, where the same suggestion has been dismissed before.\n", "@t-8ch Correct. =)\n\n@ntucker Thanks for the suggestion! Unfortunately, this is not a direction we're prepared to go. As I said [in the linked issue](https://github.com/kennethreitz/requests/issues/1595#issuecomment-30993198):\n\n> I see no reason for Requests to favour ujson over any other third-party JSON decoder. We do nothing very complicated with JSON decoding, so replacing the decoder we use either via monkeypatching or via doing the decoding yourself is totally safe. With that in mind, there's no good reason to move away from the standard library in Requests proper.\n" ]
https://api.github.com/repos/psf/requests/issues/1924
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1924/labels{/name}
https://api.github.com/repos/psf/requests/issues/1924/comments
https://api.github.com/repos/psf/requests/issues/1924/events
https://github.com/psf/requests/pull/1924
27,796,661
MDExOlB1bGxSZXF1ZXN0MTI2NTI1OTQ=
1,924
Default proxy scheme to HTTP
{ "avatar_url": "https://avatars.githubusercontent.com/u/238652?v=4", "events_url": "https://api.github.com/users/schlamar/events{/privacy}", "followers_url": "https://api.github.com/users/schlamar/followers", "following_url": "https://api.github.com/users/schlamar/following{/other_user}", "gists_url": "https://api.github.com/users/schlamar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/schlamar", "id": 238652, "login": "schlamar", "node_id": "MDQ6VXNlcjIzODY1Mg==", "organizations_url": "https://api.github.com/users/schlamar/orgs", "received_events_url": "https://api.github.com/users/schlamar/received_events", "repos_url": "https://api.github.com/users/schlamar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/schlamar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schlamar/subscriptions", "type": "User", "url": "https://api.github.com/users/schlamar", "user_view_type": "public" }
[]
closed
true
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" } ]
null
10
2014-02-18T14:27:19Z
2021-09-09T00:01:20Z
2014-05-12T19:05:56Z
CONTRIBUTOR
resolved
See #1622.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1924/reactions" }
https://api.github.com/repos/psf/requests/issues/1924/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1924.diff", "html_url": "https://github.com/psf/requests/pull/1924", "merged_at": "2014-05-12T19:05:56Z", "patch_url": "https://github.com/psf/requests/pull/1924.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1924" }
true
[ "I am happy enough to take this. =)\n", "Note that this would have to go into 2.3.0.\n", "@Lukasa I assume this stalled until 2.3 is in sight? Any rough ETA?\n", "Kenneth is responsible for merging any code change, so he owns this I'm afraid. This will get merged when he has time. =)\n", "Don't you usually assign him to the PR after review so that he knows he can have a look? =)\n", "Not normally, no. =) But I will, just to make you happy. :cake:\n", "Hehe =) I have just seen this on another PR so I thought this is default...\n", "@schlamar I've been doing this for PRs that need to be merged quickly so they don't get lost. I'll assign it and then comment along the lines \"LGTM!\" so that Kenneth gets an email (I'm not sure he has emails turned on for every issue/PR). This is just my way of being certain that he receives a notification. :)\n", "Also, LGTM.\n", ":sparkles: :cake: :sparkles:\n" ]
https://api.github.com/repos/psf/requests/issues/1923
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1923/labels{/name}
https://api.github.com/repos/psf/requests/issues/1923/comments
https://api.github.com/repos/psf/requests/issues/1923/events
https://github.com/psf/requests/pull/1923
27,679,046
MDExOlB1bGxSZXF1ZXN0MTI1OTM5MjU=
1,923
The timeout is in seconds.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-02-16T19:01:11Z
2021-09-09T00:01:21Z
2014-02-16T19:02:05Z
MEMBER
resolved
Resolves #1922.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1923/reactions" }
https://api.github.com/repos/psf/requests/issues/1923/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1923.diff", "html_url": "https://github.com/psf/requests/pull/1923", "merged_at": "2014-02-16T19:02:05Z", "patch_url": "https://github.com/psf/requests/pull/1923.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1923" }
true
[ "I honestly don't know why I created this instead of committing directly. Note that this documentation change is pretending that #1801 doesn't exist for the moment.\n", ":cake: :shipit: (I know it's already shipped :P)\n" ]
https://api.github.com/repos/psf/requests/issues/1922
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1922/labels{/name}
https://api.github.com/repos/psf/requests/issues/1922/comments
https://api.github.com/repos/psf/requests/issues/1922/events
https://github.com/psf/requests/issues/1922
27,678,958
MDU6SXNzdWUyNzY3ODk1OA==
1,922
The documentation does not tell you what units timeout is specified in
{ "avatar_url": "https://avatars.githubusercontent.com/u/1394710?v=4", "events_url": "https://api.github.com/users/colons/events{/privacy}", "followers_url": "https://api.github.com/users/colons/followers", "following_url": "https://api.github.com/users/colons/following{/other_user}", "gists_url": "https://api.github.com/users/colons/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/colons", "id": 1394710, "login": "colons", "node_id": "MDQ6VXNlcjEzOTQ3MTA=", "organizations_url": "https://api.github.com/users/colons/orgs", "received_events_url": "https://api.github.com/users/colons/received_events", "repos_url": "https://api.github.com/users/colons/repos", "site_admin": false, "starred_url": "https://api.github.com/users/colons/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/colons/subscriptions", "type": "User", "url": "https://api.github.com/users/colons", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-02-16T18:56:17Z
2021-09-09T00:10:06Z
2014-02-16T19:02:05Z
NONE
resolved
I correctly assumed seconds, but not with enough confidence that I didn't have to dig about in the source until I found [this](https://github.com/kennethreitz/requests/blob/a5b3719967e685afe9e96359e69177fda0a10d44/requests/packages/urllib3/util.py#L108) to be sure, and even that isn't explicit.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1922/reactions" }
https://api.github.com/repos/psf/requests/issues/1922/timeline
null
completed
null
null
false
[ "Thanks for raising this! I just pushed a fix, the documentation should be updated shortly. =)\n", "Awesome, thanks.\n" ]
https://api.github.com/repos/psf/requests/issues/1921
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1921/labels{/name}
https://api.github.com/repos/psf/requests/issues/1921/comments
https://api.github.com/repos/psf/requests/issues/1921/events
https://github.com/psf/requests/pull/1921
27,629,545
MDExOlB1bGxSZXF1ZXN0MTI1NzQ5NzM=
1,921
Do not set headers with None value
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
[ { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" } ]
null
5
2014-02-14T22:15:56Z
2021-09-08T23:06:12Z
2014-03-03T18:13:13Z
CONTRIBUTOR
resolved
- Regardless of whether they are on the session or not - Fixes #1920
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1921/reactions" }
https://api.github.com/repos/psf/requests/issues/1921/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1921.diff", "html_url": "https://github.com/psf/requests/pull/1921", "merged_at": "2014-03-03T18:13:13Z", "patch_url": "https://github.com/psf/requests/pull/1921.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1921" }
true
[ "Assigning to @Lukasa for review and once past that, I will assign it to Kenneth.\n", "LGTM. :+1:\n", "FWIW, I think we used to do this back in 1.x but someone recently rewrote the `merge_setting` function (also they moved which file it is in so I couldn't find it at first =P) and they dropped this functionality. This is a backwards regression but I can understand if we'd rather let the user shoot themselves in the foot.\n", ":cake:\n", ":coffee: \n" ]
https://api.github.com/repos/psf/requests/issues/1920
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1920/labels{/name}
https://api.github.com/repos/psf/requests/issues/1920/comments
https://api.github.com/repos/psf/requests/issues/1920/events
https://github.com/psf/requests/issues/1920
27601965
MDU6SXNzdWUyNzYwMTk2NQ==
1920
Removing a default header of a session
{ "avatar_url": "https://avatars.githubusercontent.com/u/34607?v=4", "events_url": "https://api.github.com/users/miikka/events{/privacy}", "followers_url": "https://api.github.com/users/miikka/followers", "following_url": "https://api.github.com/users/miikka/following{/other_user}", "gists_url": "https://api.github.com/users/miikka/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/miikka", "id": 34607, "login": "miikka", "node_id": "MDQ6VXNlcjM0NjA3", "organizations_url": "https://api.github.com/users/miikka/orgs", "received_events_url": "https://api.github.com/users/miikka/received_events", "repos_url": "https://api.github.com/users/miikka/repos", "site_admin": false, "starred_url": "https://api.github.com/users/miikka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miikka/subscriptions", "type": "User", "url": "https://api.github.com/users/miikka", "user_view_type": "public" }
[]
closed
true
null
[]
null
5
2014-02-14T15:18:04Z
2021-09-09T00:10:02Z
2014-03-03T18:13:13Z
CONTRIBUTOR
resolved
[The docs](http://docs.python-requests.org/en/latest/user/advanced/#session-objects) say that you can prevent sending a session header by setting the headers value to None in the method's arguments. You would expect (as [discussed on IRC](https://botbot.me/freenode/python-requests/msg/10788170/)) that this would work for session's default headers, too: ``` python session = requests.Session() # Do not send Accept-Encoding session.headers['Accept-Encoding'] = None ``` What happens is that "None" gets sent as the value of header. ``` Accept-Encoding: None ``` For the reference, here is a way that works: ``` python del session.headers['Accept-Encoding'] ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1920/reactions" }
https://api.github.com/repos/psf/requests/issues/1920/timeline
null
completed
null
null
false
[ "We _could_ do this, but I'm actually increasingly believing that the default headers dict is the right call here.\n", "> We could do this, but I'm actually increasingly believing that the default headers dict is the right call here.\n\nI'm not sure what you're talking about.\n", "@sigmavirus24 Sorry, I had the context for this issue already. =)\n\nBasically, we allow you to temporarily unset a header like this:\n\n``` python\ns = requests.Session()\ns.get(url, headers={'Accept-Encoding': None})\n```\n\nBut if you try to permanently unset a header on a `Session` in an analogous way, you get surprising behaviour:\n\n``` python\ns = requests.Session()\ns.headers['Accept-Encoding'] = None\ns.get(url) # Sends the header \"Accept-Encoding: None\"\n```\n\nThe question is, should we allow the example above to work, or should we just continue to use the `del` behaviour?\n", "Actually, I think this is a bug in how we merge the headers before firing off a request. I'm going to send a PR in a few with a fix\n", "@Lukasa I think this is actually a regression in how we used to behave but such are the consequences when less tests are preferred to more tests. =P\n" ]
https://api.github.com/repos/psf/requests/issues/1919
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1919/labels{/name}
https://api.github.com/repos/psf/requests/issues/1919/comments
https://api.github.com/repos/psf/requests/issues/1919/events
https://github.com/psf/requests/pull/1919
27561104
MDExOlB1bGxSZXF1ZXN0MTI1MzQ5MDg=
1919
Redirection-related bugfixes.
{ "avatar_url": "https://avatars.githubusercontent.com/u/325899?v=4", "events_url": "https://api.github.com/users/zackw/events{/privacy}", "followers_url": "https://api.github.com/users/zackw/followers", "following_url": "https://api.github.com/users/zackw/following{/other_user}", "gists_url": "https://api.github.com/users/zackw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zackw", "id": 325899, "login": "zackw", "node_id": "MDQ6VXNlcjMyNTg5OQ==", "organizations_url": "https://api.github.com/users/zackw/orgs", "received_events_url": "https://api.github.com/users/zackw/received_events", "repos_url": "https://api.github.com/users/zackw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zackw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zackw/subscriptions", "type": "User", "url": "https://api.github.com/users/zackw", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-02-13T23:15:35Z
2021-09-09T00:01:22Z
2014-02-26T00:50:42Z
CONTRIBUTOR
resolved
This pull request contains the subset of #1913 which (IMNSHO) consists entirely of bugfixes, with no question about backward compatibility or what the best behavior should be. - `Session.resolve_redirects` no longer crashes, when responses are being loaded in `stream=True` mode, if `Response.iter_content` is used to consume the entire stream before advancing the generator. - `Response.history` is now always a list, not a tuple. - Each response in a chain of redirects now has a filled-out history property, consisting of all responses up to but not including itself. And I didn't bother mentioning it in HISTORY.rst, but `Session.send` doesn't create the redirection resolution generator anymore if it's not going to use it, and `resolve_redirects` doesn't extract cookies anymore that `send` has already extracted, both of which should make things ever so slightly more efficient. Some of the new tests are pretty grody, and raise the question of whether the response-modification hook should really be _allowed_ to modify history, but there's specific code in `send` to support them, so this seems to have been a desired feature...
{ "avatar_url": "https://avatars.githubusercontent.com/u/325899?v=4", "events_url": "https://api.github.com/users/zackw/events{/privacy}", "followers_url": "https://api.github.com/users/zackw/followers", "following_url": "https://api.github.com/users/zackw/following{/other_user}", "gists_url": "https://api.github.com/users/zackw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zackw", "id": 325899, "login": "zackw", "node_id": "MDQ6VXNlcjMyNTg5OQ==", "organizations_url": "https://api.github.com/users/zackw/orgs", "received_events_url": "https://api.github.com/users/zackw/received_events", "repos_url": "https://api.github.com/users/zackw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zackw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zackw/subscriptions", "type": "User", "url": "https://api.github.com/users/zackw", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1919/reactions" }
https://api.github.com/repos/psf/requests/issues/1919/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1919.diff", "html_url": "https://github.com/psf/requests/pull/1919", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1919.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1919" }
true
[ "IYNSHO? =D Are you sure you're not a Requests core developer? You certainly have the attitude of one. ;)\n\nI'll try to code review this today, but shouldn't do it this morning, I have too many bugs at work. A quick note, though: we allow Response-modifying hooks to modify history because we use them for authentication. See the `HTTPDigestAuth` handler for an example. I'm very happy to remove hooks, but we'd need to do so in a way that allows not just Requests but everyone who's ever written a third-party auth handler to continue to be able to put 401s in the History. If you've got ideas, hop in the #python-requests channel on Freenode and run them past us. :cake:\n", "New plan: I now think these bugfixes will be simpler and cleaner if I first make one API change (see pull request #1930). In particular, the weird \"`for r in gen: pass`\" thing should be unnecessary. Some replies to your comments, though:\n\nRegarding `resp.content` throwing an error if `iter_content` has been used to consume the entire response body: yeah, I realize direct access to `resp._content_consumed` is unclean, but I am at a loss for a better idea. Catching the exception struck me as inappropriate because `RuntimeError` is pretty vague: what if there were some other reason that might get thrown? I'm open to suggestions. (And I'm definitely down with making the exception more specific.) Maybe a `resp.discard_content()` method? That might be generally useful, but it would be another API change.\n\nAlso on that point, it seemed like unnecessary process hoops to file a bug that I had already fixed, when I could just file the pull request instead. ;-) I'm happy to do so if you want it in the database for tracking purposes or something, though.\n\nRegarding response-modifying hooks, I don't really object to the feature, I just couldn't figure out what it was actually _for_. Now I know, I'll fix up the tests to do something less weird.\n\nRegarding two-letter variable and argument names in tests, I will see what I can do about making the tests in general less cryptic, but sometimes a variable _should_ have a meaningless one- or two-character name: in this case, `ra` and `rb` are two currently-being-inspected entries in a list of Response objects the test is iterating over, and that is _all_ the semantics they have. Response A, Response B. Longer names would make the test _more_ cryptic, by increasing visual clutter and implying meaning where none exists.\n", "(For clarity, all of the changes in here will come back on top of #1930, but this pull no longer makes sense.)\n" ]
https://api.github.com/repos/psf/requests/issues/1918
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1918/labels{/name}
https://api.github.com/repos/psf/requests/issues/1918/comments
https://api.github.com/repos/psf/requests/issues/1918/events
https://github.com/psf/requests/pull/1918
27545275
MDExOlB1bGxSZXF1ZXN0MTI1MjUxMzE=
1918
New Response property, .is_redirect.
{ "avatar_url": "https://avatars.githubusercontent.com/u/325899?v=4", "events_url": "https://api.github.com/users/zackw/events{/privacy}", "followers_url": "https://api.github.com/users/zackw/followers", "following_url": "https://api.github.com/users/zackw/following{/other_user}", "gists_url": "https://api.github.com/users/zackw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zackw", "id": 325899, "login": "zackw", "node_id": "MDQ6VXNlcjMyNTg5OQ==", "organizations_url": "https://api.github.com/users/zackw/orgs", "received_events_url": "https://api.github.com/users/zackw/received_events", "repos_url": "https://api.github.com/users/zackw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zackw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zackw/subscriptions", "type": "User", "url": "https://api.github.com/users/zackw", "user_view_type": "public" }
[ { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" } ]
null
5
2014-02-13T19:31:51Z
2021-09-08T23:05:24Z
2014-02-13T21:03:37Z
CONTRIBUTOR
resolved
Here's a fresh pull request containing only the new `.is_redirect` property for Response objects.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1918/reactions" }
https://api.github.com/repos/psf/requests/issues/1918/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1918.diff", "html_url": "https://github.com/psf/requests/pull/1918", "merged_at": "2014-02-13T21:03:37Z", "patch_url": "https://github.com/psf/requests/pull/1918.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1918" }
true
[ "This is a totally uncontroversial change. :+1: =D\n", ":shipit: \n", ":sparkles: :cake: :sparkles:\n", "Excited about this.\n", "I've discovered the flow for getting @kennethreitz to merge things faster. /Social Coding Hacking/\n" ]
https://api.github.com/repos/psf/requests/issues/1917
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1917/labels{/name}
https://api.github.com/repos/psf/requests/issues/1917/comments
https://api.github.com/repos/psf/requests/issues/1917/events
https://github.com/psf/requests/issues/1917
27516802
MDU6SXNzdWUyNzUxNjgwMg==
1917
Documentation on the response from a streamed upload
{ "avatar_url": "https://avatars.githubusercontent.com/u/2115079?v=4", "events_url": "https://api.github.com/users/techdragon/events{/privacy}", "followers_url": "https://api.github.com/users/techdragon/followers", "following_url": "https://api.github.com/users/techdragon/following{/other_user}", "gists_url": "https://api.github.com/users/techdragon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/techdragon", "id": 2115079, "login": "techdragon", "node_id": "MDQ6VXNlcjIxMTUwNzk=", "organizations_url": "https://api.github.com/users/techdragon/orgs", "received_events_url": "https://api.github.com/users/techdragon/received_events", "repos_url": "https://api.github.com/users/techdragon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/techdragon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/techdragon/subscriptions", "type": "User", "url": "https://api.github.com/users/techdragon", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-02-13T13:14:56Z
2021-09-09T00:10:07Z
2014-02-13T23:48:25Z
NONE
resolved
When using this functionality http://docs.python-requests.org/en/latest/user/advanced/#streaming-uploads there does not appear to be a response returned. I get nothing back when I try to assign the output of the function like so ``` with open('massive-body') as f: response = requests.post('http://some.url/streamed', data=f) print response ``` How can i get the response back from a request made like this. I want to use this on some very large file uploads and was hoping to use the response to avoid having to issue get requests after the upload to confirm the file properties changed in order to know if the upload worked.
{ "avatar_url": "https://avatars.githubusercontent.com/u/2115079?v=4", "events_url": "https://api.github.com/users/techdragon/events{/privacy}", "followers_url": "https://api.github.com/users/techdragon/followers", "following_url": "https://api.github.com/users/techdragon/following{/other_user}", "gists_url": "https://api.github.com/users/techdragon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/techdragon", "id": 2115079, "login": "techdragon", "node_id": "MDQ6VXNlcjIxMTUwNzk=", "organizations_url": "https://api.github.com/users/techdragon/orgs", "received_events_url": "https://api.github.com/users/techdragon/received_events", "repos_url": "https://api.github.com/users/techdragon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/techdragon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/techdragon/subscriptions", "type": "User", "url": "https://api.github.com/users/techdragon", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1917/reactions" }
https://api.github.com/repos/psf/requests/issues/1917/timeline
null
completed
null
null
false
[ "That's unexpected, since it works fine for me:\n\n``` python\n>>> import requests\n>>> with open('get-pip.py', 'r') as f:\n... response = requests.post('http://httpbin.org/post', data=f)\n...\n>>> print response\n<Response [200]>\n```\n\nCan you print `requests.__version__` and provide the version of Python you're using?\n", "This is why I need to not post bug requests while tired. I was issuing `response.text` where i should have been issuing `response.headers`\n\nNo issue, wishing i could delete this one, at least it may be found by future novices who make the same mistake and save them some time. \n", "I'm just glad everything is working fine. =) Thanks for getting in touch!\n" ]
https://api.github.com/repos/psf/requests/issues/1916
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1916/labels{/name}
https://api.github.com/repos/psf/requests/issues/1916/comments
https://api.github.com/repos/psf/requests/issues/1916/events
https://github.com/psf/requests/pull/1916
27502464
MDExOlB1bGxSZXF1ZXN0MTI1MDA0ODg=
1916
Fix Accept-Encoding in default headers
{ "avatar_url": "https://avatars.githubusercontent.com/u/238652?v=4", "events_url": "https://api.github.com/users/schlamar/events{/privacy}", "followers_url": "https://api.github.com/users/schlamar/followers", "following_url": "https://api.github.com/users/schlamar/following{/other_user}", "gists_url": "https://api.github.com/users/schlamar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/schlamar", "id": 238652, "login": "schlamar", "node_id": "MDQ6VXNlcjIzODY1Mg==", "organizations_url": "https://api.github.com/users/schlamar/orgs", "received_events_url": "https://api.github.com/users/schlamar/received_events", "repos_url": "https://api.github.com/users/schlamar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/schlamar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schlamar/subscriptions", "type": "User", "url": "https://api.github.com/users/schlamar", "user_view_type": "public" }
[ { "color": "009800", "default": false, "description": null, "id": 44501218, "name": "Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTIxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge" }, { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" } ]
null
15
2014-02-13T08:49:54Z
2021-09-08T23:06:13Z
2014-03-12T20:37:27Z
CONTRIBUTOR
resolved
urllib3 doesn't support "compress" anyway...
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1916/reactions" }
https://api.github.com/repos/psf/requests/issues/1916/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1916.diff", "html_url": "https://github.com/psf/requests/pull/1916", "merged_at": "2014-03-12T20:37:27Z", "patch_url": "https://github.com/psf/requests/pull/1916.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1916" }
true
[ "This seems reasonable enough to me. =) :+1:\n", "No objections here.\n", ":+1: one fewer thing for me to override to match real browsers.\n", ":shipit: \n", "I think this just makes the code harder to read. Let's just remove 'compress' instead.\n", "I don't want to be too coupled to urllib3. \n", "> I don't want to be too coupled to urllib3.\n\n@kennethreitz You know that urllib3 is doing the decompression, right?\n", "I agree with your point about readability, though. What about introducing a global `ACCEPT_ENCODING` to urllib3 which can be used instead of the `make_headers`.\n", "See https://github.com/shazow/urllib3/pull/350\n", "@kennethreitz updated\n", "@schlamar yes — I wrote the code, believe it or not :)\n\nJust because urllib3 gives us a place to bind to doesn't mean that we should. As a matter of fact, it's often a good place to question ourselves in every way and learn a lot about ourselves.\n", "> yes — I wrote the code, believe it or not\n\nReally? https://github.com/kennethreitz/requests/pull/1299 ;-)\n\n> Just because urllib3 gives us a place to bind to doesn't mean that we should.\n\nWhat does that mean? You still prefer just to remove `compress` here?\n", "> yes — I wrote the code, believe it or not\n\nHe was talking about requests.\n\n> You still prefer just to remove compress here?\n\nHe did say (and never replied to the contrary):\n\n> I don't want to be too coupled to urllib3.\n\nThat made me guess that he just wanted to remove `compress` from the header value.\n", "@kennethreitz updated > just remove compress from accepted encoding.\n", ":sparkles: :cake: :sparkles:\n" ]
https://api.github.com/repos/psf/requests/issues/1915
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1915/labels{/name}
https://api.github.com/repos/psf/requests/issues/1915/comments
https://api.github.com/repos/psf/requests/issues/1915/events
https://github.com/psf/requests/issues/1915
27491365
MDU6SXNzdWUyNzQ5MTM2NQ==
1915
TypeError: getresponse() got an unexpected keyword argument 'buffering'
{ "avatar_url": "https://avatars.githubusercontent.com/u/1447160?v=4", "events_url": "https://api.github.com/users/jcea/events{/privacy}", "followers_url": "https://api.github.com/users/jcea/followers", "following_url": "https://api.github.com/users/jcea/following{/other_user}", "gists_url": "https://api.github.com/users/jcea/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jcea", "id": 1447160, "login": "jcea", "node_id": "MDQ6VXNlcjE0NDcxNjA=", "organizations_url": "https://api.github.com/users/jcea/orgs", "received_events_url": "https://api.github.com/users/jcea/received_events", "repos_url": "https://api.github.com/users/jcea/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jcea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcea/subscriptions", "type": "User", "url": "https://api.github.com/users/jcea", "user_view_type": "public" }
[]
closed
true
null
[]
null
36
2014-02-13T03:00:34Z
2017-03-16T09:35:23Z
2014-02-13T12:50:10Z
NONE
null
Requests 2.2.1. Same thing happens in 1.2.3 (I upgraded from that). I get this traceback: ``` Traceback (most recent call last): File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 313, in _make_request httplib_response = conn.getresponse(buffering=True) TypeError: getresponse() got an unexpected keyword argument 'buffering' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 480, in urlopen body=body, headers=headers) File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 315, in _make_request httplib_response = conn.getresponse() File "/usr/local/lib/python3.3/http/client.py", line 1147, in getresponse response.begin() File "/usr/local/lib/python3.3/http/client.py", line 358, in begin version, status, reason = self._read_status() File "/usr/local/lib/python3.3/http/client.py", line 328, in _read_status raise BadStatusLine(line) http.client.BadStatusLine: '' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.3/site-packages/requests/adapters.py", line 330, in send timeout=timeout File "/usr/local/lib/python3.3/site-packages/requests/packages/urllib3/connectionpool.py", line 530, in urlopen raise MaxRetryError(self, url, e) requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='heimdallr.jcea.es', port=443): Max retries exceeded with url: /PANICO (Caused by <class 'http.client.BadStatusLine'>: '') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./heimdallr.py", line 203, in <module> module.start() File "__main__", line 59, in start File "main", line 23, in start File "panic_report", line 17, in envia_tb_pendiente File "/usr/local/lib/python3.3/site-packages/requests/sessions.py", line 425, in post return self.request('POST', url, data=data, **kwargs) File "auth_http", line 48, in request File "/usr/local/lib/python3.3/site-packages/requests/sessions.py", line 383, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.3/site-packages/requests/sessions.py", line 486, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.3/site-packages/requests/adapters.py", line 378, in send raise ConnectionError(e) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='heimdallr.jcea.es', port=443): Max retries exceeded with url: /PANICO (Caused by <class 'http.client.BadStatusLine'>: '') Makefile:69: recipe for target 'run' failed make: *** [run] Error 1 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1915/reactions" }
https://api.github.com/repos/psf/requests/issues/1915/timeline
null
completed
null
null
false
[ "For my own reference: this is 100% reproductible in changeset \"2e3cbe6aed98\" in my \"heimdallr\" Mercurial project, when running on master Raspberry PI.\n", "@jcea did you search other issues on the project? Your ticket reminded me of https://github.com/kennethreitz/requests/issues/1289 but I searched for `getresponse` before I found it. The last activity on it was a month ago.\n\nTo update you, there are patches on bugs.python.org which haven't moved anywhere because no core developers have bothered reviewing them. If you want action on this, your best bet is to go find those issues on bugs.python.org and bump them. I have done that myself but perhaps the more people who do so, the faster a response.\n\nThanks for opening this, but it is a duplicate and it is _not_ a bug in requests.\n", "The relevant section of the linked issue is this one (from [this comment](https://github.com/kennethreitz/requests/issues/1289#issuecomment-31294851)):\n\n> The key is that the `TypeError` raised as the first exception is _unrelated_ to the subsequent ones. In fact, that's the standard control flow in `urllib3`. This means that the real exception that's being raised here is the`request.exceptions.ConnectionError` exception that wraps the `urllib3.exceptions.MaxRetryError` exception being raised in `urllib3`.\n\nPlease give that comment a read, it'll explain what you're actually seeing here.\n", "Hi, I get the same issue when my url has the local IP on which we are running the code on. 
If the IP is not the local IP and a remote IP, it all works fine.\n\nError:\n\n```\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/packages/urllib3/connectionpool.py\", line 319, in _make_request\n httplib_response = conn.getresponse(buffering=True)\nTypeError: getresponse() got an unexpected keyword argument 'buffering'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"./run.py\", line 241, in <module>\n main()\n File \"./run.py\", line 47, in main\n updateDoc.main()\n File \"/root/raman/db/src/srm/updateDoc.py\", line 29, in main\n updateScore()\n File \"/root/raman/db/src/srm/updateDoc.py\", line 46, in updateScore\n response = requests.get(cpu_url)\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/api.py\", line 55, in get\n return request('get', url, **kwargs)\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/api.py\", line 44, in request\n return session.request(method=method, url=url, **kwargs)\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/sessions.py\", line 456, in request\n resp = self.send(prep, **send_kwargs)\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/sessions.py\", line 559, in send\n r = adapter.send(request, **kwargs)\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/adapters.py\", line 327, in send\n timeout=timeout\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/packages/urllib3/connectionpool.py\", line 493, in urlopen\n body=body, headers=headers)\n File \"/usr/lib/python3.3/site-packages/requests-2.3.0-py3.3.egg/requests/packages/urllib3/connectionpool.py\", line 321, in _make_request\n httplib_response = conn.getresponse()\n File \"/usr/lib64/python3.3/http/client.py\", line 1143, in getresponse\n response.begin()\n File \"/usr/lib64/python3.3/http/client.py\", line 354, in begin\n 
version, status, reason = self._read_status()\n File \"/usr/lib64/python3.3/http/client.py\", line 316, in _read_status\n line = str(self.fp.readline(_MAXLINE + 1), \"iso-8859-1\")\n File \"/usr/lib64/python3.3/socket.py\", line 297, in readinto\n return self._sock.recv_into(b)\nKeyboardInterrupt\n```\n\nHere is my code -\n\n``` python\n__author__ = 'ramanr'\n\nimport cloudant\nimport requests\nimport math\nimport getInputs\n\n\ndatabase_name=''\ntest_id=''\ndb_uri=''\nperformance_view=''\n\ninputs_dict = {}\n\ndef main():\n global db_uri, database_name, test_id,performance_view\n global avg_memory_threshold, avg_cpu_threshold, avg_fs_threshold\n global inputs_dict\n\n inputs_dict = getInputs.main()\n\n database_name = inputs_dict['database_name']\n test_id = inputs_dict['test_id']\n db_uri = inputs_dict['db_uri']\n performance_view = inputs_dict['performance_view']\n\n updateScore()\n\ndef updateScore():\n\n avg_memory_threshold = int(float(inputs_dict['avg_memory_threshold']))\n avg_cpu_threshold = int(float(inputs_dict['avg_cpu_threshold']))\n avg_fs_threshold = int(float(inputs_dict['avg_fs_threshold']))\n\n # cpu_url = \"http://10.247.32.72:5984/longevity/_design/perfstats/_view/cpu?StorageResourceManagement_3.0SP1_test1\"\n # memory_url = \"http://10.247.32.72:5984/longevity/_design/perfstats/_view/memory?StorageResourceManagement_3.0SP1_test1\"\n # filesystem_url = \"http://10.247.32.72:5984/longevity/_design/perfstats/_view/filesystem?StorageResourceManagement_3.0SP1_test1\"\n\n\n cpu_url = db_uri+'/'+database_name+'/'+'_design/'+performance_view+'/_view/'+'cpu?'+test_id\n memory_url = db_uri+'/'+database_name+'/'+'_design/'+performance_view+'/_view/'+'memory?'+test_id\n filesystem_url = db_uri+'/'+database_name+'/'+'_design/'+performance_view+'/_view/'+'filesystem?'+test_id\n\n response = requests.get(cpu_url)\n #print(response.json())\n dict = response.json()\n lists = dict['rows']\n cpu_score = getScore(lists, avg_cpu_threshold)\n\n response = 
requests.get(memory_url)\n dict = response.json()\n lists = dict['rows']\n memory_score = getScore(lists,avg_memory_threshold )\n\n response = requests.get(filesystem_url)\n dict = response.json()\n lists = dict['rows']\n filesystem_score = getScore(lists,avg_fs_threshold)\n\n score = round(math.ceil(((0.6*cpu_score)+(0.3*memory_score)+(0.1*filesystem_score))*100)/100, 1)\n print(cpu_score,memory_score,filesystem_score,score)\n updateDoc_score(db_uri, database_name, score)\n\ndef updateDoc_score(db_uri, database_name, score):\n db_server = cloudant.Account(uri=db_uri)\n db = db_server.database(database_name)\n #test_id = test_id\n doc = db.document(test_id)\n doc.merge(({'score': score}))\n\n\n\ndef getScore(lists, threshold):\n avg = math.ceil(lists[0]['value']*100)/100\n #IF(G3<80,10,10-((G3-80)/2))\n\n if avg < threshold:\n score = 10\n else:\n score = 10 - (avg - threshold)/2\n return(score)\n\n\n\n#connect to couchDB and createDB (if necessory)\ndef updateDoc_errorCount():\n db_server = cloudant.Account(uri=db_uri)\n db = db_server.database(database_name)\n #test_id = test_id\n doc = db.document(test_id)\n errorcount = 100\n\n doc.merge(({'error_count': errorcount}))\n\n\n\n\nif __name__ == \"__main__\":\n main()\n```\n\nSo, I get the error on line 46(highlighted in the image below). If my url has the local IP, it doesnt work. If the url is has any other IP, it works. I am using python 3.3 and request 2.3.0\n\n![image](https://cloud.githubusercontent.com/assets/7650923/3035535/b365f050-e08f-11e3-9630-b8970ea4e214.png)\n", "@rramanadham I edited your comment to make it easier to read (using fenced code blocks).\n\nThe problem I'm having is that your traceback shows that the unhandled exception is a `KeyboardInterrupt` exception which means someone pressed Ctrl-C (assuming *nix, otherwise Ctrl-Z if I'm remembering my Windows correctly) which caused the traceback you're seeing. The problem is not passing `buffering=True`, that exception was handled. 
While it was being handled, someone killed the process.\n\nIf you can get me the actual Traceback we can look at this again. If that traceback shows something different, I'd suggest we open a new issues _but only if we determine this isn't someone killing the process_.\n", "I am killing the process because if I dont kill it, it hangs forever.\nBut, my code runs smooth when I just have a remote IP in the url rather than the local IP.\n", "Right so this seems to be a different issue totally unrelated to this one you've commented on. What is `db_uri` in the local case?\n", "db_uri = http://10.247.32.72:5984\n", "What happens when you curl the URL you generate?\n", "nothing... i dont get response because the url is a couchdb view which runs based off documents that get created and my database has 0 documents. I don't know why. \n\nlglod078:~ # curl -X GET 'http://10.247.32.78:5984/longevity/_design/perfstats/_view/cpu?SRMSuite_3.0.2_test1'\n", "I'm confused: if you don't get a response when you use curl to get it, why would you expect one with requests?\n", "You are right. I didn't know that the curl commands weren't working. The other ones worked, so assumed, this works too. Sorry to take your time. I was struggling for 2 days on this and during the conversation, figured that the problem is somewhere else. Sorry!\n", "No need to apologise, I was just genuinely confused! You were quite right to have approached us, I just thought there was something I hadn't understood. =)\n", "Getting the same thing here. I am making a lot of queries to a JSON api, and this happens occasionally. Just did a batch of 100 with no issue. Before, it happened sometimes after around 20. \n\nRunning on os-x 10.9, installed via `port` for the base stuff, python 3.4.1, etc. I had to install a custom fork of something, i don't recall what, to get 3.x support. 
I don't think it had to do with this, though, probably ldap.\n\n```\nTraceback (most recent call last):\n File \"/venv-root/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 319, in _make_request\n httplib_response = conn.getresponse(buffering=True)\nTypeError: getresponse() got an unexpected keyword argument 'buffering'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/venv-root/dpscrape/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 493, in urlopen\n body=body, headers=headers)\n File \"/venv-root/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 321, in _make_request\n httplib_response = conn.getresponse()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 1172, in getresponse\n response.begin()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 351, in begin\n version, status, reason = self._read_status()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 313, in _read_status\n line = str(self.fp.readline(_MAXLINE + 1), \"iso-8859-1\")\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/socket.py\", line 371, in readinto\n return self._sock.recv_into(b)\nConnectionResetError: [Errno 54] Connection reset by peer\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/venv-root/lib/python3.4/site-packages/requests/adapters.py\", line 327, in send\n timeout=timeout\n File \"/venv-root/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 543, in urlopen\n raise MaxRetryError(self, url, e)\nrequests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='server.com', port=80): Max retries exceeded with url: 
/some/v2/?key=abcde&city=&format=json (Caused by <class 'ConnectionResetError'>: [Errno 54] Connection reset by peer)\n```\n", "@awbacker the real problem you're seeing has nothing to do with `getresponse` receiving an extra argument. What's causing this for you is that you're getting a `ConnectionResetError` while making a request.\n", "Okie. I figured I would get an error code of some kind and not an exception for that, and since it was in the middle I wasn't sure. Thanks for clearing that up, and forgive my newness at interpreting python stacktraces. Should I remove the comment and not muddy the issue?\n", "@awbacker when the connection is reset the only sane thing to do is throw an exception because the request/response cycle cannot be completed. Returning a `Response` object would be entirely disingenuous and there's no fake status code we could use to indicate anything other than a catastrophic failure.\n\n> forgive my newness at interpreting python stacktraces\n\nNo worries. Python 3's stacktraces seem to be a source of frustration for a large number of people. This issue continues to see activity because of that exact reason.\n\n> Should I remove the comment and not muddy the issue?\n\nNo need to remove it. 
If others find this and read the entire comment history, maybe it will give them a hint as to how to find the real problem they're encountering.\n", "I'm getting a similar kind on error but with python3.4 library -\n\nFile \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 319, in _make_request\n httplib_response = conn.getresponse(buffering=True)\nTypeError: getresponse() got an unexpected keyword argument 'buffering'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 321, in _make_request\n httplib_response = conn.getresponse()\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 1172, in getresponse\n response.begin()\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 351, in begin\n version, status, reason = self._read_status()\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 313, in _read_status\n line = str(self.fp.readline(_MAXLINE + 1), \"iso-8859-1\")\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/socket.py\", line 371, in readinto\n return self._sock.recv_into(b)\nsocket.timeout: timed out\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py\", line 327, in send\n timeout=timeout\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 493, in urlopen\n body=body, headers=headers)\n File 
\"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 324, in _make_request\n self, url, \"Read timed out. (read timeout=%s)\" % read_timeout)\nrequests.packages.urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='localhost', port=8777): Read timed out. (read timeout=6)\n", "@piyushjajoo No you aren't. Please read this issue carefully. I made [this comment](https://github.com/kennethreitz/requests/issues/1915#issuecomment-34976750), which linked to [this comment](https://github.com/kennethreitz/requests/issues/1289#issuecomment-31294851) from issue #1289. Once again, reproducing the body of the comment:\n\n> The key is that the `TypeError` raised as the first exception is unrelated to the subsequent ones. In fact, that's the standard control flow in `urllib3`. This means that the real exception that's being raised here is the `request.exceptions.ConnectionError` exception that wraps the `urllib3.exceptions.MaxRetryError`exception being raised in `urllib3`.\n\nIn your case, the real exception you care about is the _last_ one: the `ReadTimeoutError`. You should pursue that one.\n", "Thanks, for pointing to correct issue.\n", "@Lukasa : wondering if the issue is related , however i am trying different option as you stated earlier , but none seems to work for me , Please suggest if you have any pointer:\n\nwhen I am making this api call to fetch the details from iCloud , it thows error \n\n> > > api.contacts.all()\n\nLibrary/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. 
See: https://urllib3.readthedocs.org/en/latest/security.html\n InsecureRequestWarning)\n/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html\n InsecureRequestWarning)\nTraceback (most recent call last):\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 372, in _make_request\n httplib_response = conn.getresponse(buffering=True)\nTypeError: getresponse() got an unexpected keyword argument 'buffering'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 544, in urlopen\n body=body, headers=headers)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 374, in _make_request\n httplib_response = conn.getresponse()\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 1162, in getresponse\n raise ResponseNotReady(self.__state)\nhttp.client.ResponseNotReady: Request-sent\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py\", line 370, in send\n timeout=timeout\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 597, in urlopen\n _stacktrace=sys.exc_info()[2])\n File 
\"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/util/retry.py\", line 245, in increment\n raise six.reraise(type(error), error, _stacktrace)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/packages/six.py\", line 309, in reraise\n raise value.with_traceback(tb)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 544, in urlopen\n body=body, headers=headers)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 374, in _make_request\n httplib_response = conn.getresponse()\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 1162, in getresponse\n raise ResponseNotReady(self.__state)\nrequests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', ResponseNotReady('Request-sent',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/gaurav/pyicloud-0.6.2/pyicloud/services/contacts.py\", line 55, in all\n self.refresh_client()\n File \"/Users/gaurav/pyicloud-0.6.2/pyicloud/services/contacts.py\", line 44, in refresh_client\n self.session1.post(self._contacts_changeset_url, params=params_refresh)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py\", line 508, in post\n return self.request('POST', url, data=data, json=json, *_kwargs)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py\", line 465, in request\n resp = self.send(prep, *_send_kwargs)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py\", line 573, in send\n r = adapter.send(request, 
**kwargs)\n File \"/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py\", line 415, in send\n raise ConnectionError(err, request=request)\nrequests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))\n\nThe code which is making this call is \n\nhost = self._service_root.split('//')[1].split(':')[0]\n self.session.headers.update({'host': host})\n params_contacts = dict(self.params)\n params_contacts.update({\n 'clientVersion': '2.1',\n 'locale': 'en_US',\n 'order': 'last,first',\n })\n req = self.session.get(\n self._contacts_refresh_url,\n params=params_contacts\n )\n self.response = req.json()\n params_refresh = dict(self.params)\n params_refresh.update({\n 'prefToken': req.json()[\"prefToken\"],\n 'syncToken': req.json()[\"syncToken\"],\n })\n self.session.post(self._contacts_changeset_url, params=params_refresh)\n req = self.session.get(\n self._contacts_refresh_url,\n params=params_contacts\n )\n self.response = req.json()\n\nAppreciate your support . I am currently using Python 3.4.3 ..\n", "This is not the same error, the error you're seeing is almost certainly #2568.\n", "Thanks for your response.\nDo we have any solution for this issue ? Also wondering if it is possible to resolve this with any other stable version of Python (say 2.7) .\n", "As you can see on the discussion for #2568, we're working on it. in the meantime, you can resolve the problem by pinning to requests 2.6.0.\n", "I am new to python and requests, if I am doing anything wrong, please help me. Thanks! \n\nI am trying to write a crawler to automatically download some files using requests module. However, I met a problem. 
\n\nI initialized a new requests session, then I used post method to login into the website, after that as long as I try to use post/get method (a simplified code below): \n\n```\ns=requests.session()\ns.post(url,data=post_data, headers=headers)\n#up to here everything is correct, the next step will report error \ns.get(url) or s.post(url) even repeat s.post(url,data=post_data, headers=headers) will report error \n```\n\nit will report error like the one below: \n\n```\nTraceback (most recent call last):\nFile\"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 372, in _make_request\nhttplib_response = conn.getresponse(buffering=True)\nTypeError: getresponse() got an unexpected keyword argument 'buffering'\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 544, in urlopen\nbody=body, headers=headers)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 374, in _make_request\nhttplib_response = conn.getresponse()\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 1162, in getresponse\nraise ResponseNotReady(self.__state)\nhttp.client.ResponseNotReady: Request-sent\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py\", line 370, in send\ntimeout=timeout\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 597, in urlopen\n_stacktrace=sys.exc_info()[2])\nFile 
\"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/util/retry.py\", line 245, in increment\nraise six.reraise(type(error), error, _stacktrace)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/packages/six.py\", line 309, in reraise\nraise value.with_traceback(tb)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 544, in urlopen\nbody=body, headers=headers)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py\", line 374, in _make_request\nhttplib_response = conn.getresponse()\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py\", line 1162, in getresponse\nraise ResponseNotReady(self.__state)\nrequests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', ResponseNotReady('Request-sent',))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\nFile \"test.py\", line 280, in <module>\ntest()\nFile \"test.py\", line 273, in test\nemuch1.getEbook()\nFile \"test.py\", line 146, in getEbook\nself.downloadEbook(ebook)\nFile \"test.py\", line 179, in downloadEbook\nfile_url=self.downloadEbookGetFileUrl(ebook).decode('gbk')\nFile \"test.py\", line 211, in downloadEbookGetFileUrl\ndownload_url=self.downloadEbookGetUrl(ebook)\nFile \"test.py\", line 200, in downloadEbookGetUrl\nrespond_ebook=self.session.get(ebook_url)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py\", line 477, in get\nreturn self.request('GET', url, **kwargs)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py\", line 465, in request\nresp = 
self.send(prep, **send_kwargs)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py\", line 573, in send\nr = adapter.send(request, **kwargs)\nFile \"/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py\", line 415, in send\nraise ConnectionError(err, request=request)\nrequests.exceptions.ConnectionError: ('Connection aborted.', ResponseNotReady('Request-sent',))\n```\n\nI have totally no idea why this happens, can anyone help me? \n", "@1a1a11a Try upgrading to requests 2.7.0 (released yesterday), which should fix your problem.\n", "@Lukasa YES! After upgrading, the problems seems to be solved!!! Thank you a lot! \n", "@1a1a11a My pleasure. =)\n", "Thank heavens!! I just came here for this and it was fixed, also!!!\n", "Yay - fixed it for me\n" ]
https://api.github.com/repos/psf/requests/issues/1914
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1914/labels{/name}
https://api.github.com/repos/psf/requests/issues/1914/comments
https://api.github.com/repos/psf/requests/issues/1914/events
https://github.com/psf/requests/pull/1914
27,478,611
MDExOlB1bGxSZXF1ZXN0MTI0ODc2MTQ=
1,914
Ensuring that the first argument to ConnectionError is a string
{ "avatar_url": "https://avatars.githubusercontent.com/u/14958?v=4", "events_url": "https://api.github.com/users/hobbeswalsh/events{/privacy}", "followers_url": "https://api.github.com/users/hobbeswalsh/followers", "following_url": "https://api.github.com/users/hobbeswalsh/following{/other_user}", "gists_url": "https://api.github.com/users/hobbeswalsh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hobbeswalsh", "id": 14958, "login": "hobbeswalsh", "node_id": "MDQ6VXNlcjE0OTU4", "organizations_url": "https://api.github.com/users/hobbeswalsh/orgs", "received_events_url": "https://api.github.com/users/hobbeswalsh/received_events", "repos_url": "https://api.github.com/users/hobbeswalsh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hobbeswalsh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hobbeswalsh/subscriptions", "type": "User", "url": "https://api.github.com/users/hobbeswalsh", "user_view_type": "public" }
[ { "color": "e11d21", "default": false, "description": null, "id": 44501305, "name": "Not Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTMwNQ==", "url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge" } ]
closed
true
null
[]
null
6
2014-02-12T22:58:35Z
2021-09-08T23:01:15Z
2014-02-13T19:26:54Z
NONE
resolved
Putting an exception (or another type) as the first argument to an Exception is confusing and makes it so that a caller's "e.message" returns an Exception type rather than a string. We spent quite a long time on this bug today -- a fix would be much appreciated. Thanks! --Robin
{ "avatar_url": "https://avatars.githubusercontent.com/u/14958?v=4", "events_url": "https://api.github.com/users/hobbeswalsh/events{/privacy}", "followers_url": "https://api.github.com/users/hobbeswalsh/followers", "following_url": "https://api.github.com/users/hobbeswalsh/following{/other_user}", "gists_url": "https://api.github.com/users/hobbeswalsh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hobbeswalsh", "id": 14958, "login": "hobbeswalsh", "node_id": "MDQ6VXNlcjE0OTU4", "organizations_url": "https://api.github.com/users/hobbeswalsh/orgs", "received_events_url": "https://api.github.com/users/hobbeswalsh/received_events", "repos_url": "https://api.github.com/users/hobbeswalsh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hobbeswalsh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hobbeswalsh/subscriptions", "type": "User", "url": "https://api.github.com/users/hobbeswalsh", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1914/reactions" }
https://api.github.com/repos/psf/requests/issues/1914/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1914.diff", "html_url": "https://github.com/psf/requests/pull/1914", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1914.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1914" }
true
[ "Thanks for raising this!\n\nWe actually quite like this behaviour, though: it means you can examine the exception object we wrapped. I'm going to let @sigmavirus24 weigh in here, but I think we're happy as-is.\n", "Some of the exception objects that are nested inside of others at this point have much more debugging information than we can simply provide with a string. Further I have seen the underlying exception be something far more nefarious than what you're handling here: an exception caught and wrapped by another which was caught and wrapped by another which was caught and wrapped by another which was caught and wrapped by urllib3 which we then catch and wrap.\n\nGranted I'm not sure we catch and reraise those consistently as `ConnectionError`s but I wouldn't be surprised if one snuck through like that.\n\nOn the other hand, I'm trying to imagine how you must have run into this and my own guess is that you did:\n\n``` python\ntry:\n # something\nexcept requests.exceptions.ConnectionError as e:\n log.debug(str(e))\n```\n\nAnd `str(e)` raised an exception that it was not returning a string as expected. Am I correct? If so a better fix, might be to change the behaviour of the exceptions a bit. It would allow old users relying on `e.message` to be an Exception class to have their cake and you to have yours as well.\n\nIf I'm wrong, please correct me so we can try to sort this out.\n", "Well, we assumed that e.message would return a string. We've got a REST API that does certain things on certain exceptions, and then has a catch-all in case of any subclass of Exception. In that case, we log and then return as JSON the exception type and the exception message, using e.message.\n\nSince we're turing the response into JSON, we expect e.message to be a string (which can be marshaled). In this case, it wasn't, which was confusing, broke our expectations, and as far as I understand (and please correct me if I'm wrong) broke a relatively strong Python convention, which is that e.message should (must?) be a string.\n\nThanks!\n\n--Robin\n", "It's only a convention because most of the time you can do `raise ValueError('message to print')`. There is nothing that says message must be a string though. It's also not that strong considering we receive errors that come from the standard library like that. That aside, what's wrong with calling `str(e.message)` to always be 100% certain it is marshal-able? If you're expecting a string you should make sure you coerce it to be safe.\n\nIf you were expecting it based on documentation of ours claiming it should be a string, I would be 100% supportive of you here, but we're not claiming that ever.\n\nI've taken a similar approach in most of my projects (coercing) to be certain I'll have the API I'm expecting. It allows me to be far more confident in what I'm doing without having to worry about someone breaking from convention. It also allows me to have people pass in (for example) a string like `'123'` or just the number `123` since I'll always call `int()` on that input and if it isn't able to be coerced they will receive a helpful error from `int` that will be easier to Google.\n", "@sigmavirus24 -- Thanks for the clarification. We were not relying on your documentation here; we were just leaning on past expectations of convention, which was clearly mistaken.\n\nThanks the for explanation, and you're right about coercing something into the format you expect it to be in, rather than just trying to marshal it and crossing your fingers.\n\nThanks the the replies!\n\n--Robin\n", "I'm glad we could help @hobbeswalsh \n\nPlease continue sending PRs as you see fit! We don't mind discussing them with you. :)\n\nCheers! :wine_glass: \n" ]
https://api.github.com/repos/psf/requests/issues/1913
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1913/labels{/name}
https://api.github.com/repos/psf/requests/issues/1913/comments
https://api.github.com/repos/psf/requests/issues/1913/events
https://github.com/psf/requests/pull/1913
27383342
MDExOlB1bGxSZXF1ZXN0MTI0MzM2NDE=
1,913
Improve API for manual redirection-following
{ "avatar_url": "https://avatars.githubusercontent.com/u/325899?v=4", "events_url": "https://api.github.com/users/zackw/events{/privacy}", "followers_url": "https://api.github.com/users/zackw/followers", "following_url": "https://api.github.com/users/zackw/following{/other_user}", "gists_url": "https://api.github.com/users/zackw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zackw", "id": 325899, "login": "zackw", "node_id": "MDQ6VXNlcjMyNTg5OQ==", "organizations_url": "https://api.github.com/users/zackw/orgs", "received_events_url": "https://api.github.com/users/zackw/received_events", "repos_url": "https://api.github.com/users/zackw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zackw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zackw/subscriptions", "type": "User", "url": "https://api.github.com/users/zackw", "user_view_type": "public" }
[ { "color": "fbca04", "default": false, "description": null, "id": 44501249, "name": "Needs BDFL Input", "node_id": "MDU6TGFiZWw0NDUwMTI0OQ==", "url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input" }, { "color": "e11d21", "default": false, "description": null, "id": 78002701, "name": "Do Not Merge", "node_id": "MDU6TGFiZWw3ODAwMjcwMQ==", "url": "https://api.github.com/repos/psf/requests/labels/Do%20Not%20Merge" } ]
closed
true
null
[]
null
24
2014-02-11T20:33:32Z
2021-09-08T23:06:27Z
2014-03-12T20:59:23Z
CONTRIBUTOR
resolved
I'd like to propose a bunch of bugfixes and API improvements to redirection resolution, particularly when it's being done manually (i.e. `allow_redirects=False` on the initial `send()`). These are all motivated by problems I encountered while trying to chase redirects for every single URL in the Alexa top million, which, as you might imagine, contains an awful lot of misconfiguredness (up to and including IMAP and SMTP servers on ports 80 and 443!) The most significant change is the new `Session.resolve_one_redirect` method, which does what it says - it resolves _one_ redirect. This turns out to be substantially more convenient for applications that need to do complicated processing on each redirect as it happens, than the existing `Session.resolve_redirects` generator. It goes along with `Response.is_redirect`, a new property that is the canonical home for the "is this a redirect" predicate. The second most significant change is that each response in a redirection chain now has an accurate `.history` property, containing all responses up to but not including itself. As a side effect, I anti-resolved issue #1898 - `.history` is now always a _list_. The third most significant change is that `resolve_one_redirect` and `resolve_redirects` do not need to be passed a bunch of arguments that were already passed to the initial `send`; concretely, all of `send`'s kwargs are cached on the `PreparedRequest` and reused thereafter. Everything else is bugfixes, generally in the service of greater robustness against Weird Shit coming off the network.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1913/reactions" }
https://api.github.com/repos/psf/requests/issues/1913/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1913.diff", "html_url": "https://github.com/psf/requests/pull/1913", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1913.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1913" }
true
[ "Wow, this is a substantial change, and an impressive bit of work. I'm not going to dive into code review at this moment for two reasons: I don't have time, and I think we'll want @kennethreitz involved early in the discussion on this pull request.\n\nNevertheless, there'll definitely be a couple of code review comments if we decide to move ahead with this. I'm mostly interested to see what Kenneth has to say about the idea though.\n", "I haven't looked into this at all, but I'm curious as to why you cannot just \ncall `next(self.resolve_redirects())` once instead of having to write a \nspecial method for that. One more reminder: I haven't looked at any of this.\n", "@sigmavirus24 re `next(self.resolve_redirects())`, the most significant reason for doing it differently is that `resolve_redirects` doesn't include the very first response in its iterable (and I didn't feel safe changing that). That means code that needs to look at each response as it comes in is most naturally structured like this:\n\n```\nr = sess.send(..., allow_redirects = False)\nwhile r.is_redirect:\n # ... do stuff with r ...\n r = sess.resolve_one_redirect(r)\n\n# ... do more stuff with r, which is now the final response ...\n```\n\nrather than like this:\n\n```\nr = sess.send(..., allow_redirects = False)\n# ... do stuff with r ...\nfor r1 in sess.resolve_redirects(r):\n # ... do stuff with r1 ...\n\n# ... do more stuff with r? or r1? blech ...\n```\n\nwhich duplicates part of the processing and has some finicky logic after the loop to get hold of the final response.\n", "I'm getting a 503 Service Unavailable when I try to see the results of the failed CI build. All tests did pass in my local environment, but it's entirely possible that I was not running them correctly (not having found any documentation on how to run them, I was just doing `PYTHONPATH=... ./test_requests.py`)\n", "> Nevertheless, there'll definitely be a couple of code review comments if we decide to move ahead with this. I'm mostly interested to see what Kenneth has to say about the idea though.\n\nI feel confident that Kenneth will be -1 as I am, but because of the size of the PR I've already started leaving code review comments in the event I'm wrong.\n\nThat said, there are several changes here that conflict with the explicit feature freeze put into place before 1.x which continues to be upheld.\n", "It occurs to me that it would be helpful if I shared the code that motivated most of these changes. The entire program is here: https://github.com/zackw/tbbscraper/blob/master/sources/canonize A great deal of it is uninteresting for this conversation; the code that does most of the interacting with Requests is [the `HTTPWorker` class](https://github.com/zackw/tbbscraper/blob/master/sources/canonize#L621), and I direct your attention specifically to [`process_one_response`](https://github.com/zackw/tbbscraper/blob/master/sources/canonize#L658), especially the bit where it says \"This logic must match requests.session's idea of what a redirect is\", and to [`chase_redirects`](https://github.com/zackw/tbbscraper/blob/master/sources/canonize#L681), which has the core loop-over-redirections in it.\n", "@zackw Also, as you reply to PR feedback (since there will be so much of it here) can you reply to each comment your fixing with \"Fixed in <sha>\". I won't mind the emails sent and it will help us keep track of what was fixed and when. It also provides more context when performing final reviews.\n", "> can you reply to each comment your fixing with \"Fixed in \"\n\nCan do.\n", "> Can do.\n\nThank you.\n", "FYI going to bed at this point, will respond to further comments tomorrow.\n", "I've glanced through the code review comments stuff so far, and wanted to talk more generally about how this relates to our API freeze.\n\n@zackw has said in one of his comments that he believes these changes would be welcome extensions to the API for anyone who has to manually handle redirects. That's probably true. However, our API freeze policy does not say that we are freezing the API \"except when it'll be useful for people if it was extended\". By default, the Requests answer to API changes will always be \"no\". The reason this issue is still open and being discussed is because we think there is potentially enough value here that we want to look at it in depth. Please don't assume that we hate your PR, @zackw, we're just starting from a very conservative position.\n\nNext, there is a question about how much affordance we should give to people who circumvent Requests' redirection policy. The general Requests policy on this sort of thing (see also: `PreparedRequest` objects) is that if you don't like the way Requests does it you should do it yourself. We have made concessions here in the past, but not many and always under substantial duress.\n\nAgain, I'm going to hold off more dramatic code review until @kennethreitz gives an idea of whether he's likely to want this change at all.\n", "@zackw can you explain the `Response.is_redirect` reasoning to me? I can understand the thought process, as redirects are a very first-class citizen in the HTTP world, but I'd love to hear your feelings on it :)\n", "Thank you _so much_ for this major contribution to the project, by the way! These look like incredible improvements to an underutilized API. I can tell you put some great care and craftsmanship into this :)\n\n---\n\nFor context — Requests is a very large project and we care a lot about the code that gets in, so we have be very selective about new features. So, let's refine what we have here and see what feels right for both of us :)\n", "I think I'm +1 for `Response.is_redirect`. Seems great. :sparkles: :cake: :sparkles:\n", "The changes to the history implementation also sound wonderful, based on your description. If they behave as described, I believe I'm +1. They'll need some heavy code review, however. There's a lot of session and socket leaking that can happen with that code if we're not careful.\n", "I gotta get other work done this afternoon, but, would it be useful for me to split this pull request into two? One strictly for bug fixes, and another for anything that changes the API even a little.\n\n@kennethreitz given that you are open to the possibility of some amount of API changes, I'd like to observe that one API change I didn't make, but rather wanted to, was to have `Session.resolve_redirects` return an iterable that _does_ include the very first response. Such a generator (under another name, for compatibility's sake) would be a workable, perhaps even preferable, alternative to the `resolve_one_redirect` API I did add.\n", "@zackw make one for `Response.is_redirect` first. I'll merge it right away :)\n", "@zackw perhaps we could give resolve redirects an argument to include the first response? \n", "e.g. `Session.resolve_redirects(r, full=True)` or something similar. \n", "(close/open for ci build)\n", "@zackw let's break each change into a new PR as we discuss them and keep this one open for discussion. Then, we can bite one thing off at a time. \n\nSo, first thing first — open a new PR for `Request.is_redirect` :)\n", "Failure with 2.6 only. Well, at least that reassures me it isn't my inability to run the test suite correctly :smile: (I can only conveniently test locally with 2.7 and 3.3.)\n\n2.6's `urlparse` apparently doesn't throw an exception when asked to parse `http://example.[^com/]`; it cheerfully treats that as valid, and we get a ConnectionError instead because, unsurprisingly, there is no top-level domain named \"[%5Ecom\". I will think about what the abstract right thing would be.\n", "+1 on using smaller PRs.\n\nAlso @zackw I'm sorry that none of us ever answered you: we use `py.test` to run the tests.\n", "Alright, made some review comments inline. They are a supplement to @sigmavirus24's. =)\n" ]
https://api.github.com/repos/psf/requests/issues/1912
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1912/labels{/name}
https://api.github.com/repos/psf/requests/issues/1912/comments
https://api.github.com/repos/psf/requests/issues/1912/events
https://github.com/psf/requests/pull/1912
27367857
MDExOlB1bGxSZXF1ZXN0MTI0MjQ2MzY=
1,912
Make a Response's history always a tuple
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
[]
closed
true
null
[]
null
5
2014-02-11T17:16:33Z
2021-09-08T23:06:21Z
2014-02-13T12:53:36Z
CONTRIBUTOR
resolved
Real fix for #1898
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1912/reactions" }
https://api.github.com/repos/psf/requests/issues/1912/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1912.diff", "html_url": "https://github.com/psf/requests/pull/1912", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1912.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1912" }
true
[ "I realize this is a bit late, but I think this is going in the wrong direction: Response.history should always be a _list_, not a tuple. I think this because it starts out as an empty list, and if you're manually processing redirections or if you set a response-modification hook, you will encounter it that way even after this change. And it is conceptually a list.\n", "@zackw actually this is the right direction. On any response that did have history previously we were returning tuples and on any without we were returning an empty list. The fact of the matter is that history on a response should be immutable. For that to be the case, it should be a tuple. I also don't quite understand your arguments. If you're manually processing redirections or using a hook you will now get a tuple after this change, as you should have in the first place. That does raise a good point that hook authors will expect a list though (if they're writing hooks dealing with manual processing of redirecitons).\n\n@Lukasa that makes this (sort of) a backwards incompatible change.\n", "@sigmavirus24 \n\n> The fact of the matter is that history on a response should be immutable. For that to be the case, it should be a tuple.\n\nIt is _conceptually_ immutable, but I do not see why the library needs to bother enforcing that. I think it's rather more important for the type of the property to be the same in all contexts. And since the response-modification hook is allowed to modify history, it needs to be a list then. Ergo it should always be a list.\n\nPlease see pull request #1913 -- I have put a good deal of thought into how this stuff needs to work, based on actual application experience.\n\n> If you're manually processing redirections or using a hook you will now get a tuple after this change, as you should have in the first place.\n\nOn reflection, I think that accurately describes the behavior after your patch for manual redirection processing, but _not_ for the response-modification hook, which fires before the conversion to a tuple.\n", "I agree with @zackw on this. Our current tuple-ness is a bit pedantic and we have to dance about it internally for no reason. Let's not do that.\n", "The tuple-ness on the contrary is meaningful and it would be no extra work for @zackw to adapt to this in his PR. Since `redirect_once` does the work for him now, he won't need to worry about munging the history (nor should anyone else). And we should keep in mind that people already expect a tuple _when there is history to be had_. Regardless, I agree it isn't worth arguing about.\n" ]
https://api.github.com/repos/psf/requests/issues/1911
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1911/labels{/name}
https://api.github.com/repos/psf/requests/issues/1911/comments
https://api.github.com/repos/psf/requests/issues/1911/events
https://github.com/psf/requests/issues/1911
27359003
MDU6SXNzdWUyNzM1OTAwMw==
1,911
Boto/Route53 + GAE Compatibility Issue
{ "avatar_url": "https://avatars.githubusercontent.com/u/858881?v=4", "events_url": "https://api.github.com/users/TFenby/events{/privacy}", "followers_url": "https://api.github.com/users/TFenby/followers", "following_url": "https://api.github.com/users/TFenby/following{/other_user}", "gists_url": "https://api.github.com/users/TFenby/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TFenby", "id": 858881, "login": "TFenby", "node_id": "MDQ6VXNlcjg1ODg4MQ==", "organizations_url": "https://api.github.com/users/TFenby/orgs", "received_events_url": "https://api.github.com/users/TFenby/received_events", "repos_url": "https://api.github.com/users/TFenby/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TFenby/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TFenby/subscriptions", "type": "User", "url": "https://api.github.com/users/TFenby", "user_view_type": "public" }
[ { "color": "fbca04", "default": false, "description": null, "id": 615414998, "name": "GAE Support", "node_id": "MDU6TGFiZWw2MTU0MTQ5OTg=", "url": "https://api.github.com/repos/psf/requests/labels/GAE%20Support" } ]
closed
true
null
[]
null
4
2014-02-11T16:00:51Z
2021-09-08T09:00:48Z
2014-02-11T16:10:50Z
NONE
resolved
The issue described [here](http://stackoverflow.com/questions/21556587/connecting-to-route53-api-from-google-app-engine-using-boto/) applies to both Boto and gtaylor/python-route53. Basically, when running Boto or Route53 on GAE, the way in which Requests extends httplib doesn't work with Google's httplib, which uses urlfetch instead of self.sock/self.connect. (Assuming I understood everything correctly in my quick debugging.) This causes the connection, despite being created as an HTTPS connection, to be established as HTTP, which Amazon refuses.
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1911/reactions" }
https://api.github.com/repos/psf/requests/issues/1911/timeline
null
completed
null
null
false
[ "We will not support GAE as we have frequently discussed on the issue tracker (most recently on https://github.com/kennethreitz/requests/issues/1905). Please search the bug tracker before opening requests in the future.\n\nCheers!\n", "Ah, my bad. Thanks.\n", "Almighty requests developer,\n\nNow that the underlying `urllib3` already claimed support for AppEngine, does AppEngine a supported platform for requests now? Thank you.\n", "It is not a supported platform. =)\n\nWe will announce if and when we make that decision, but maintaining support for it is tricky and unhelpful.\n" ]
https://api.github.com/repos/psf/requests/issues/1910
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1910/labels{/name}
https://api.github.com/repos/psf/requests/issues/1910/comments
https://api.github.com/repos/psf/requests/issues/1910/events
https://github.com/psf/requests/issues/1910
27339679
MDU6SXNzdWUyNzMzOTY3OQ==
1,910
100% processor usage during GET have to wait 60s for response
{ "avatar_url": "https://avatars.githubusercontent.com/u/1621941?v=4", "events_url": "https://api.github.com/users/e-manuel/events{/privacy}", "followers_url": "https://api.github.com/users/e-manuel/followers", "following_url": "https://api.github.com/users/e-manuel/following{/other_user}", "gists_url": "https://api.github.com/users/e-manuel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/e-manuel", "id": 1621941, "login": "e-manuel", "node_id": "MDQ6VXNlcjE2MjE5NDE=", "organizations_url": "https://api.github.com/users/e-manuel/orgs", "received_events_url": "https://api.github.com/users/e-manuel/received_events", "repos_url": "https://api.github.com/users/e-manuel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/e-manuel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-manuel/subscriptions", "type": "User", "url": "https://api.github.com/users/e-manuel", "user_view_type": "public" }
[]
closed
true
null
[]
null
39
2014-02-11T11:11:44Z
2021-09-08T23:08:01Z
2014-03-23T11:49:27Z
NONE
resolved
When GET request have to wait 60s for remote service response, processor usage increases to 100% - version 1 of "requests" worked in this case better. GET is configured with "cert" data and "timeout=120" over SSL connection.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1910/reactions" }
https://api.github.com/repos/psf/requests/issues/1910/timeline
null
completed
null
null
false
[ "Do you have a publicly-accessible URL that I can test against?\n", "More details: GET is fired by sub-process (multiprocessing) - GET is calling API exposed by GSM operator - by this API I am receiving SMS and MMS. Other tasks fired by sub-proceses are using CPU below 1% (server with 2 Xeons).\nI can expose on my server, over valid SSL, PHP script with 60s delay if this is what You mean?\nUnfortunately I can not give You access to API exposed by GSM operator :( \n", "Hmm. In that case, can you show me your code? Feel free to remove identifying information (like URLs, header values etc.)\n", "Here You are - direct call of requests.get from my library\n(which is launched by multiprocessing.Process(target=launch_function,\nargs=('arg1', 'arg2')) ):\n\np_par = {\n 'login': 'xxx',\n 'password': 'xxx',\n 'serviceId': '0000',\n 'timeout': 60000, # wait 60s for response\n 'deleteContent': 'true',\n 'manualconfirm': 'true'\n}\n\np_uri_part = 'getsms.aspx'\n\ndef __request_api2(self, p_par, p_uri_part=None):\n try:\n # GSM\n if self.__channel == '0':\n v_url = '{0}/{1}'.format(self.__API_0_URL, p_uri_part)\n v_crt = '{0}smsapi.crt'.format(self.__PATH_RES_SEC)\n v_key = '{0}smsapi.key'.format(self.__PATH_RES_SEC)\n try:\n v_rsp = requests.get(\n v_url, params=p_par\n ,cert=(v_crt, v_key)\n ,timeout=120\n\n,verify='{0}ca_root_certificates.pem'.format(self.__PATH_RES_SEC)\n )\n v_res = v_rsp.text\n except Exception as e:\n self.__add_error('Error when invoking API: {0}'.format(str(e)))\n v_res = 0\n finally:\n return v_res\n\nca_root_certificates.pem - is grabbed from Mozilla - CA bundle\n\nBest regards,\nArtur\n", "GitHub's email formatting made that pretty difficult to read, so I copy-pasted it into a code block here:\n\n``` python\np_par = {\n 'login': 'xxx',\n 'password': 'xxx',\n 'serviceId': '0000',\n 'timeout': 60000, # wait 60s for response\n 'deleteContent': 'true',\n 'manualconfirm': 'true'\n}\n\np_uri_part = 'getsms.aspx'\n\ndef __request_api2(self, p_par, p_uri_part=None):\n try:\n # GSM\n if self.__channel == '0':\n v_url = '{0}/{1}'.format(self.__API_0_URL, p_uri_part)\n v_crt = '{0}smsapi.crt'.format(self.__PATH_RES_SEC)\n v_key = '{0}smsapi.key'.format(self.__PATH_RES_SEC)\n try:\n v_rsp = requests.get(\n v_url, params=p_par\n ,cert=(v_crt, v_key)\n ,timeout=120\n ,verify='{0}ca_root_certificates.pem'.format(self.__PATH_RES_SEC)\n )\n v_res = v_rsp.text\n except Exception as e:\n self.__add_error('Error when invoking API: {0}'.format(str(e)))\n v_res = 0\n finally:\n return v_res\n```\n", "Hmm, there's nothing obviously wrong with that. When I get a moment I'll try to reproduce something similar on my machine.\n", "I am in my phone, so I'll be brief. I suspect this is pyopenssl interacting with tje timeout. Especially the makefile function in urllib3.contrib.pyopenssl busylooping when no data is being recieved\n", "Wait, does it do that? That's pretty awful.\n", "This is a guess (I think I stumbled upon this previously). I'll check it later. Maybe gevent is also an issue. Gevent and pyopenssl don't really work together.\n", "Hm, I can't reproduce this at the moment.\n\n@e-manuel Could you give more information about your environment:\n- Python version\n- Operating System\n- Using gevent? (or the like)\n- Using pyOpenSSL?\n- Does it work without multiprocessing?\n", "- Python 2.7.6\n- FreeBSD 9.0\n- installed gevent 1.0 but in this case is not used\n- I have installed py27- openssl-0.13\n- my test: GET fired from python prompt work the same way (100% CPU after a few seconds) as sub-process fired from multiprocessing library\n", "Do you also have `pyasn1` and `ndg-httpsclient` installed? (If they and `pyOpenSSL` are installed they trigger a different codepath)\n", "Yes, I have: `py27-asn1-0.1.4_1,1`, `py27-ndg_httpsclient-0.3.2`, `py-openssl 0.13` but following the FreeBSD ports tree:\n- `py27-asn1` is required by `py27-ndg_httpsclient`\n- `py27-ndg_httpsclient` is required by `py-urllib3` (mentioned by @t-8ch) and requires to run: `py-openssl` and `py-asn1`\n Of course, I can de-install any of them, but I like to keep full functionality of requests (SSL, cert. etc.). What I should to do - \"to be or not to be, this is the question...\" ;)\n", "So I managed to reproduce it:\n\nTerminal 1:\n\n``` sh\n# netcat from the nmap project, should work with all ssl capable netcats\n# We complete the SSL handshake but don't send a HTTP response line!\n$ ncat --ssl -l 0.0.0.0 4000\n```\n\nCode:\n\n``` python\nrequest.get('https://127.0.0.1:4000', verify=False, timeout=120)\n```\n\nIt seems this only happens when using PyOpenSSL, timeouts and FreeBSD.\nI can't reproduce it any other way, permutating those conditions and on Linux.\n\n@e-manuel Pick your poison :-)\n", "Manually waiting for the buffer to be ready seems to fix it:\n\n``` patch\ndiff --git a/requests/packages/urllib3/contrib/pyopenssl.py b/requests/packages/urllib3/contrib/pyopenssl.py\nindex d9bda15..67eeee4 100644\n--- a/requests/packages/urllib3/contrib/pyopenssl.py\n+++ b/requests/packages/urllib3/contrib/pyopenssl.py\n@@ -156,6 +156,7 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(rbufsize)\n except OpenSSL.SSL.WantReadError:\n+ select.select([self._sock], [], [])\n continue\n if not data:\n break\n@@ -183,6 +184,7 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(left)\n except OpenSSL.SSL.WantReadError:\n+ select.select([self._sock], [], [])\n continue\n if not data:\n break\n@@ -234,6 +236,7 @@ class fileobject(_fileobject):\n break\n buffers.append(data)\n except OpenSSL.SSL.WantReadError:\n+ select.select([self._sock], [], [])\n continue\n break\n return 
\"\".join(buffers)\n@@ -244,6 +247,7 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(self._rbufsize)\n except OpenSSL.SSL.WantReadError:\n+ select.select([self._sock], [], [])\n continue\n if not data:\n break\n@@ -271,7 +275,8 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(self._rbufsize)\n except OpenSSL.SSL.WantReadError:\n- continue\n+ select.select([self._sock], [], [])\n+ continue\n if not data:\n break\n left = size - buf_len\n```\n\n(Insert lengthy rant about platform differences, especially concerning openssl here)\n", "> (Insert lengthy rant about platform differences, especially concerning openssl here)\n\nYeah, platforms stink. Let's all develop on Windows. =P\n", "I ranted about this to my housemate last night, by the by. I don't understand why PyOpenSSL busy-waits on this socket instead of calling select like a good person. @t-8ch are you making changes in the urllib3 version, or are you going to submit your patch upstream?\n", "I would like to ask what next - there will be new release of requests ? I would not go back to pycurl :(\n", "@e-manuel all of the changes are in PyOpenSSL. Since you're using the packages from your distro, you'll need to bother the maintainer of that package when it gets released.\n\n@Lukasa your question is especially relevant since FreeBSD seems to strip out the vendored packages so it should be sent upstream if possible.\n", "Yeah, I think we want an answer on what PyOpenSSL is going to do.\n\n@e-manuel In the meantime, you can patch your copy so this stops happening.\n", "@t-8ch thank You very much - Your patch works great, now when GET starts it uses below 2% of CPU and after few seconds less :)\n", "Hurrah! Let's leave this open until we get an idea of what's happening upstream.\n\n@t-8ch saves the day again! 
I should buy him a boat.\n", "But you loose the timeouts I fear (althoug I'm not sure, if timeouts ever worked with pyopenssl).\n", "Timeouts works now in this manner:\n- when GETs target do not exists (is unreachable) timeout works like swiss quartz;\n- when target exists, but not responds, GET waits forever.\n", "Hmm. Can we pass the timeout to the select call?\n", "Yep.\nselect.select([self._sock], [], [], self._sock.gettimeout())\n", "@pasha-r That's a very neat solution, it didn't even occur to me. That, plus checking the return code from select, should add timeouts back. \n", "I think we should indeed go with our calling `select` ourself. PyOpenSSL does not have the timeout functionality and we simply set the timeout on the non ssl socket.\n\nThis should work:\n\n``` patch\ndiff --git a/requests/packages/urllib3/contrib/pyopenssl.py b/requests/packages/urllib3/contrib/pyopenssl.py\nindex d9bda15..da44a29 100644\n--- a/requests/packages/urllib3/contrib/pyopenssl.py\n+++ b/requests/packages/urllib3/contrib/pyopenssl.py\n@@ -43,7 +43,7 @@ from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName\n import OpenSSL.SSL\n from pyasn1.codec.der import decoder as der_decoder\n from pyasn1.type import univ, constraint\n-from socket import _fileobject\n+from socket import _fileobject, timeout\n import ssl\n import select\n from cStringIO import StringIO\n@@ -139,6 +139,13 @@ def get_subj_alt_name(peer_cert):\n\n class fileobject(_fileobject):\n\n+ def _wait_for_sock(self):\n+ rd, wd, ed = select.select([self._sock], [], [],\n+ self._sock.gettimeout())\n+ if not rd:\n+ raise timeout()\n+\n+\n def read(self, size=-1):\n # Use max, disallow tiny reads in a loop as they are very inefficient.\n # We never leave read() with any leftover data from a new recv() call\n@@ -156,6 +163,7 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(rbufsize)\n except OpenSSL.SSL.WantReadError:\n+ self._wait_for_sock()\n continue\n if not data:\n break\n@@ 
-183,6 +191,7 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(left)\n except OpenSSL.SSL.WantReadError:\n+ self._wait_for_sock()\n continue\n if not data:\n break\n@@ -234,6 +243,7 @@ class fileobject(_fileobject):\n break\n buffers.append(data)\n except OpenSSL.SSL.WantReadError:\n+ self._wait_for_sock()\n continue\n break\n return \"\".join(buffers)\n@@ -244,6 +254,7 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(self._rbufsize)\n except OpenSSL.SSL.WantReadError:\n+ self._wait_for_sock()\n continue\n if not data:\n break\n@@ -271,7 +282,8 @@ class fileobject(_fileobject):\n try:\n data = self._sock.recv(self._rbufsize)\n except OpenSSL.SSL.WantReadError:\n- continue\n+ self._wait_for_sock()\n+ continue\n if not data:\n break\n left = size - buf_len\n```\n\nThe code seems suboptimal but I'd like to try keeping this close to the version from the stdlib (where 99% of the fileobject are taken from)\n\nPS: It seems I was totally confused when not being able to reproduce this on Linux.\nIt is just the same there.\n", "Suits me. =)\n", "@t-8ch You are great - now timeout works fine in every case - and may CPUs can \"sleep\" calmly ;)\n" ]
https://api.github.com/repos/psf/requests/issues/1909
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1909/labels{/name}
https://api.github.com/repos/psf/requests/issues/1909/comments
https://api.github.com/repos/psf/requests/issues/1909/events
https://github.com/psf/requests/issues/1909
27,312,448
MDU6SXNzdWUyNzMxMjQ0OA==
1,909
Header output when debugging is poorly formatted and missing values
{ "avatar_url": "https://avatars.githubusercontent.com/u/308610?v=4", "events_url": "https://api.github.com/users/jaraco/events{/privacy}", "followers_url": "https://api.github.com/users/jaraco/followers", "following_url": "https://api.github.com/users/jaraco/following{/other_user}", "gists_url": "https://api.github.com/users/jaraco/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jaraco", "id": 308610, "login": "jaraco", "node_id": "MDQ6VXNlcjMwODYxMA==", "organizations_url": "https://api.github.com/users/jaraco/orgs", "received_events_url": "https://api.github.com/users/jaraco/received_events", "repos_url": "https://api.github.com/users/jaraco/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jaraco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaraco/subscriptions", "type": "User", "url": "https://api.github.com/users/jaraco", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-02-10T23:36:19Z
2021-09-09T00:10:09Z
2014-02-10T23:59:59Z
CONTRIBUTOR
resolved
Consider this script: ``` import logging import requests def setup_requests_logging(level): requests_log = logging.getLogger("requests.packages.urllib3") requests_log.setLevel(level) requests_log.propagate = True # enable debugging at httplib level requests.packages.urllib3.connectionpool.HTTPConnection.debuglevel = level <= logging.DEBUG logging.basicConfig(level=logging.DEBUG) setup_requests_logging(logging.getLogger().level) requests.get('http://google.com') ``` If I run that script on Python 3.4.0rc1, the output is clumsy and less than fully helpful: ``` INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): google.com send: b'GET / HTTP/1.1\r\nHost: google.com\r\nAccept-Encoding: gzip, deflate, compress\r\nAccept: */*\r\nUser-Agent: python-requests/2.2.1 CPython/3.4.0b3 Windows/8\r\n\r\n' reply: 'HTTP/1.1 301 Moved Permanently\r\n' DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 301 219 INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): www.google.com header: Location header: Content-Type header: Date header: Expires header: Cache-Control header: Server header: Content-Length header: X-XSS-Protection header: X-Frame-Options header: Alternate-Protocol send: b'GET / HTTP/1.1\r\nHost: www.google.com\r\nUser-Agent: python-requests/2.2.1 CPython/3.4.0b3 Windows/8\r\nAccept: */*\r\nAccept-Encoding: gzip, deflate, compress\r\n\r\n' reply: 'HTTP/1.1 200 OK\r\n' DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 None header: Date header: Expires header: Cache-Control header: Content-Type header: Set-Cookie header: Set-Cookie header: P3P header: Server header: X-XSS-Protection header: X-Frame-Options header: Alternate-Protocol header: Transfer-Encoding ``` Note, that formatting is how the output is logged. Header names are printed but not their values, and there's only a space following the name. 
It would be much preferable if the headers were printed line-by-line with their values, or if they were printed as a repr(dict) or json representation. As they are, the output isn't very helpful.
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1909/reactions" }
https://api.github.com/repos/psf/requests/issues/1909/timeline
null
completed
null
null
false
[ "Requests is 2.2.1.\n", "If you look at the output closely you'll see that all of the information is being printed by `requests.packages.urllib3.connectionpool` and none of it is actually produced by something inside of `requests` proper. This is because `requests` does no logging in its code at all. This was removed when Kenneth rewrote the library for v1.0. We recently removed the last of the extraneous imports of the `logging` module. It is most likely that the place you really want to report this is [shazow/urllib3](/shazow/urllib3).\n" ]
https://api.github.com/repos/psf/requests/issues/1908
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1908/labels{/name}
https://api.github.com/repos/psf/requests/issues/1908/comments
https://api.github.com/repos/psf/requests/issues/1908/events
https://github.com/psf/requests/pull/1908
27,206,730
MDExOlB1bGxSZXF1ZXN0MTIzNDQ0NTg=
1,908
Removed unnecessary if-statements
{ "avatar_url": "https://avatars.githubusercontent.com/u/174994?v=4", "events_url": "https://api.github.com/users/benediktkr/events{/privacy}", "followers_url": "https://api.github.com/users/benediktkr/followers", "following_url": "https://api.github.com/users/benediktkr/following{/other_user}", "gists_url": "https://api.github.com/users/benediktkr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benediktkr", "id": 174994, "login": "benediktkr", "node_id": "MDQ6VXNlcjE3NDk5NA==", "organizations_url": "https://api.github.com/users/benediktkr/orgs", "received_events_url": "https://api.github.com/users/benediktkr/received_events", "repos_url": "https://api.github.com/users/benediktkr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benediktkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benediktkr/subscriptions", "type": "User", "url": "https://api.github.com/users/benediktkr", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2014-02-08T19:20:40Z
2021-09-08T22:01:19Z
2014-02-08T19:27:24Z
NONE
resolved
Passes all unit tests like this.
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1908/reactions" }
https://api.github.com/repos/psf/requests/issues/1908/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1908.diff", "html_url": "https://github.com/psf/requests/pull/1908", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1908.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1908" }
true
[ "Those conditionals are 100% necessary. `verify` can be (and frequently is) `False` or a path to a certificate bundle. We must conditionally set the values. Consider the paths through this code: \n1. `verify` is `True`: then `cert_loc` will be the default bundle\n2. `verify` is `False`: then `cert_loc` will be `False`\n3. `verify` is `/path/to/other/bundle`: then `cert_loc` will be `/path/to/other/bundle`\n\nThis code breaks a great deal of features that are apparently untested. This is more of an alarm that those features are not tested rather than a reason for removing the conditionals here. Thanks for your help @benediktkr !\n" ]
https://api.github.com/repos/psf/requests/issues/1907
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1907/labels{/name}
https://api.github.com/repos/psf/requests/issues/1907/comments
https://api.github.com/repos/psf/requests/issues/1907/events
https://github.com/psf/requests/pull/1907
27,139,403
MDExOlB1bGxSZXF1ZXN0MTIzMTQ0MTM=
1,907
Use by default SSL CA certificate bundle from the platform
{ "avatar_url": "https://avatars.githubusercontent.com/u/1174343?v=4", "events_url": "https://api.github.com/users/ticosax/events{/privacy}", "followers_url": "https://api.github.com/users/ticosax/followers", "following_url": "https://api.github.com/users/ticosax/following{/other_user}", "gists_url": "https://api.github.com/users/ticosax/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ticosax", "id": 1174343, "login": "ticosax", "node_id": "MDQ6VXNlcjExNzQzNDM=", "organizations_url": "https://api.github.com/users/ticosax/orgs", "received_events_url": "https://api.github.com/users/ticosax/received_events", "repos_url": "https://api.github.com/users/ticosax/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ticosax/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ticosax/subscriptions", "type": "User", "url": "https://api.github.com/users/ticosax", "user_view_type": "public" }
[]
closed
true
null
[]
null
20
2014-02-07T14:50:28Z
2021-09-08T23:06:02Z
2014-02-08T13:35:06Z
NONE
resolved
If pyopenssl is available and you give an empty ca_certs to urllib3, it will use the default CA of the current platform. Not ready for merging as this feature requires a bleeding edge version of urllib3 (https://github.com/shazow/urllib3/commit/5c25a73dfb48e4260c44e19e3a50fb5d46832c52) I post this PR now, to discuss about its implementation and testing improvements. About testing we could run the project with tox and iterate among different versions of python and with or without dependencies of urllib3.contrib.pyopenssl. ``` pyOpenSSL ndg-httpsclient pyasn1 ``` thx
{ "avatar_url": "https://avatars.githubusercontent.com/u/1174343?v=4", "events_url": "https://api.github.com/users/ticosax/events{/privacy}", "followers_url": "https://api.github.com/users/ticosax/followers", "following_url": "https://api.github.com/users/ticosax/following{/other_user}", "gists_url": "https://api.github.com/users/ticosax/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ticosax", "id": 1174343, "login": "ticosax", "node_id": "MDQ6VXNlcjExNzQzNDM=", "organizations_url": "https://api.github.com/users/ticosax/orgs", "received_events_url": "https://api.github.com/users/ticosax/received_events", "repos_url": "https://api.github.com/users/ticosax/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ticosax/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ticosax/subscriptions", "type": "User", "url": "https://api.github.com/users/ticosax", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1907/reactions" }
https://api.github.com/repos/psf/requests/issues/1907/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1907.diff", "html_url": "https://github.com/psf/requests/pull/1907", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1907.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1907" }
true
[ "Ok, let's think about how this works from the perspective of the standard Requests API. It's intended to be used in one of these ways:\n\n``` python\n# Verify certs using the bundled cacerts.\nr = requests.get('https://somesslurl.com/')\n\n# Use your own cacerts.\nr = requests.get('https://somesslurl.com/', verify='/path/to/cacerts')\n\n# Don't verify.\nr = requests.get('https://somesslurl.com/', verify=False)\n```\n\nYour PR in its current form changes this behaviour in an alarming and subtle way:\n\n``` python\n# Verify certs using the bundled cacerts OR the built-in ones,\n# depending on whether you have pyopenssl installed or not.\nr = requests.get('https://somesslurl.com/')\n\n# Use your own cacerts.\nr = requests.get('https://somesslurl.com/', verify='/path/to/cacerts')\n\n# Don't verify.\nr = requests.get('https://somesslurl.com/', verify=False)\n```\n\nI don't think we should change which certificates we use based on whether or not you install an _optional_ dependency. Our current behaviour WRT pyopenssl is to enable _additional_ features without changing our current behaviour. That's what you should be aiming for here.\n", "Hi @Lukasa ,\nToday if a user import `requests` from global python, and if this package has been patched (like it is intended and encouraged) the user will get the platform CA and not the bundle CA.\nThis pull request aim to give to the user to the same behaviour when he uses `requests` from its own virtualenv, not from global python. In this regard, I think, this PR is pretty consistent.\n\nNow if we want to make progress. 
Do you suggest to go in a way that is more explicit for the user ?\n\n``` python\n# with pyopenssl\nr = requests.get('https://somesslurl.com/', verify=requests.DEFAULT_CA)\nr = requests.get('https://somesslurl.com/', verify=requests.BUNDLE_CA)\n\n# without pyopenssl\nr = requests.get('https://somesslurl.com/', verify=requests.DEFAULT_CA)\n Traceback\n [...]\n SSLError()\nr = requests.get('https://somesslurl.com/', verify=requests.BUNDLE_CA)\n```\n\nor you completely reject the idea ?\n\nthx\n", "Woah, woah. Your statement that we \"intend and encourage\" people to patch Requests to use the system CAs is not true. We _allow_ people to do that, but we have no particular opinions about whether they should. In fact, we now rigorously maintain our own CA bundle, since `pip` uses it.\n\nWhat you're saying is that on systems where the site-package install of `requests` came from `apt` or `yum` it uses the system CAs. That is the decision of the person who built that package for that repository, and should _not_ be viewed as the viewpoint of the Requests project. We expect people to install requests from `pip`, and when you do that you get the bundled CAs by default.\n\nI don't know whether I completely reject the idea or not. I'm not convinced that this is better than just asking the user to specify the path to the system CAs if they want to use them, which we already allow.\n", "Unfortunately, there is no canonical way to get the System CAs. So, as a user of `requests`, specifying the path of the system CAs is not an option, if I want to provide a library that is platform agnostic. \nToday `requests` does not give real other choice than using the bundle CA.\nThis is the problem I'm trying to fix.\n", "I don't have a good handle on what the use-case is here. If you were providing a platform-agnostic library, why would you want your users to all be using different CA certs? 
Surely you'd want to ensure that all your users have the _exact same_ certs on all platforms, to avoid annoying platform-specific bugs?\n", "Basically, we are self signing our certificates. So we declare ourself as trustable CA globally for the system.\nThen all ours tools can access our repos.\nOur toolkit that needs access to our online resources are for instance `docker`, `docker-py` and `curl`.\nHere `Go`, and `libcurl` are by default reading the platform CAs but `docker-py`, that uses `requests` underneath, is not. \nIt means I need to patch somehow the setup of all my venv that uses `requests`.\nIt is too ugly and too much work for me.\n\nAs a side note: In terms of security level, the practice of self signing certificates might be considered more secure than relying on well known CA.\n", "OK, so this is my understanding of your situation:\n1. You have a number of SSL certificates that have been signed by your own internal CA.\n2. Those are distributed to your systems (which are over a number of OSes) via some automated deployment system and are installed to the system CA.\n3. You have a number of tools that access your TLS-encrypted endpoints, some of which are based on Requests.\n4. You cannot make local code changes to affect the certificate validation of your Requests-based tools on those systems (or you can, but it's going to be a very costly thing to do that you cannot bear that cost).\n\nDoes that seem right?\n\nThat is a legitimate problem, but I feel like it can be solved by environment variables. Requests exposes the REQUESTS_CA_BUNDLE environment variable which overrides the location of the CA certificates. Is that variable suitable for your use-case? If not, why not?\n", "You perfectly understood my use-case.\nI was completely ignorant about REQUESTS_CA_BUNDLE. 
\nIt seems perfect to feel the gap.\n\nI'm very sorry for not having see it before.\nAnd I'm grateful for the time you spent to understand my needs.\nI'm closing the PR.\n", "That's not a problem at all, I'm glad we were able to come up with a solution that was good for both of us. I'm glad you're getting lots of use out of Requests!\n", "_Use by default SSL CA certificate bundle from the platform_ is what OP proposes in this issue and that's something I totally agree with. It feels so natural to me that I find it difficult someone could oppose this, yet alone argue that not doing this is better experience. Yet this is what I see here and that's why I decided to share my opinion with you.\n@Lukasa states\n\n> I don't think we should change which certificates we use based on whether or not you install an optional dependency. Our current behaviour WRT pyopenssl is to enable additional features without changing our current behaviour. That's what you should be aiming for here.\n\nThis argument is misguided here as pretty much everyone agrees that when you're dealing with security you should **by default** be as secure as you are able to be, given the environment you operate in. This means using system CA certificates by default (and fallback to bundled ones if it's not possible or very hard to do) and not bundled ones. I think one might argue that security begs for policy of graceful \ndegradation.\n\nAlso @Lukasa states\n\n> Surely you'd want to ensure that all your users have the exact same certs on all platforms, to avoid annoying platform-specific bugs?\n\nSurely, if the only goal is to _avoid annoying platform-specific bugs_ then yes. 
But if the goal is to make requests secure by default (and this is in line with _HTTP library for humans_ motto) then surely you would want to ensure that all your users have _the best certs available on their systems_.\n\nAdditional bonus is that you save packagers from having to patch this (mis)feature.\n\nI'm curious what @dstuff and @t-8ch think.\n", "@piotr-dobrogost I'm pretty neutral on this, but I suspect Kenneth will be against it. It adds code complexity (we need a fallback path) and project management overhead (extra bugs filed, both when we change which certificates we use to verify and later when people have failures on specific platforms that do not reproduce on others). Given that we're doing a pretty good job of securing the system already, and that we make it possible to use other certificate bundles, you've got a hell of a job proving that this is worth the switch.\n", "> (...) when people have failures on specific platforms that do not reproduce on others.\n\nI doubt there would be many such failures i.e. caused due to difference in CA certs alone. For instance I do not recall many issues raised in this project by people using requests patched to use system CA certs.\n", "> I do not recall many issues raised in this project by people using requests patched to use system CA certs.\n\nYou mean when requests would use a set of locations where the certificates _might_ be? I can remember several and I'm sure a search of issues would result in several issues about it. Most of what I remember are people using distributions of linux that did not use the de facto defaults of other more popular distributions. These people then sent PRs to extend the list of possible locations to include the specific locations for the different versions of that distribution they happened to use. That is exactly what we're trying to avoid in vendoring the certificates. 
If the SSL library or really anything gave us the information needed to programmatically determine the location without having a hard-coded list of them ourselves, that'd be great.\n", "@sigmavirus24, That's the point of this PR, pyopenssl takes care to provide the default platform CA.\n\nhttps://github.com/shazow/urllib3/commit/5c25a73dfb48e4260c44e19e3a50fb5d46832c52\nhttp://pyopenssl.sourceforge.net/pyOpenSSL.html/openssl-context.html#l2h-132\n", "@ticosax The section of the documentation you linked to quite literally says 'This method may not work properly on OS X'. Such an admission is tantamount to saying 'This method only sometimes works'. \n\nAdditionally, I can tell you that this doesn't work properly because I've hit exactly this bug in my own project, [hyper](https://github.com/Lukasa/hyper). See issue Lukasa/hyper#9, but in summary, `set_default_verify_paths()` only works on some systems, and otherwise fails _without any way to detect it_, as you can see from [the Python documentation](http://docs.python.org/3.4/library/ssl.html#ssl.SSLContext.set_default_verify_paths). Given that PyOpenSSL is only a thin wrapper around SSL, much like the stdlib version, I will assume that PyOpenSSL has exactly the same problems with its method.\n\nBTW, one of those systems is Windows. =)\n", "@Lukasa summed up everything I was about to say in a much nicer way. Count this as a +1 for what he just said.\n", "@sigmavirus24 \n\n> These people then sent PRs to extend the list of possible locations to include the specific locations for the different versions of that distribution they happened to use.\n\nThen all you have to do per @ticosax's remark – _(...) pyopenssl takes care to provide the default platform CA._ – is to direct these people to pyopenssl and close issue right away :)\n\nThe theme of this project is that you care about what most people do and you try to make their lives easier. 
Now, I bet most people use only a handful of systems so there's no problem for pyopenssl project to take care of those. When someone uses unpopular system he should not expect that every piece of software on earth would handle his system. In this case telling him he has to point requests to his CA certs is perfectly fine.\n\n@Lukasa \n\n> The section of the documentation you linked to quite literally says 'This method may not work properly on OS X'. Such an admission is tantamount to saying 'This method only sometimes works'. \n\nOS X is a popular OS (unfortunately as I don't like Apple at all) so if it really does not work for it then it's unacceptable. However I don't believe OS X actively hides its CA certs :)\n", "@piotr-dobrogost Gotta read my whole answer. I have personal evidence that this does not work on Windows (though OS X was actually fine), where I got SSL certificate errors when talking to Twitter's servers. Given that there's no error response from the method, we just can't rely on it.\n", "Yup, they're [in the registry](http://msdn.microsoft.com/en-us/library/windows/desktop/aa388136%28v=vs.85%29.aspx), which makes getting at the certs a nightmare. OpenSSL does appear to provide an engine for getting at Microsoft's CryptoAPI stuff, but it's essentially undocumented.\n\nFor those who want an illuminating OpenSSL email trail, [here's one](http://openssl.6102.n7.nabble.com/SSL-CTX-set-default-verify-paths-and-Windows-td25299.html). 
The summary is this:\n\n> `set_default_verify` is effectively `_load_verify_locations` \n> using env vars SSL_CERT_FILE SSL_CERT_DIR if they exist \n> and otherwise `X509_get_default_cert_{file,dir}()` which return \n> a compiled-in file and directory normally file \"cert.pem\" and \n> subdir \"certs\" under OPENSSLDIR, which is configurable at build \n> time and can be seen with commandline openssl version -d .\n\nPut another way, there's no intelligence here.\n\nI hereby speak on behalf of the requests project: there's no chance of us using the default system certs, at least not via OpenSSL. Too easy to break, too hard to detect when it does.\n", "> (...) at least not via OpenSSL. Too easy to break, too hard to detect when it does.\n\nFair enough.\n\n> Put another way, there's no intelligence here.\n\nI believe it's official interface of OpenSSL which uses 3 _well known_ locations when looking for certs. That's not much but it's something.\nHowever this\n\n> Given that there's no error response from the method, we just can't rely on it.\n\nis a blocker, indeed.\n" ]
https://api.github.com/repos/psf/requests/issues/1906
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1906/labels{/name}
https://api.github.com/repos/psf/requests/issues/1906/comments
https://api.github.com/repos/psf/requests/issues/1906/events
https://github.com/psf/requests/issues/1906
27,125,706
MDU6SXNzdWUyNzEyNTcwNg==
1,906
OpenSSL.SSL.Error: [('SSL routines', 'SSL3_GET_RECORD', 'decryption failed or bad record mac')]
{ "avatar_url": "https://avatars.githubusercontent.com/u/102495?v=4", "events_url": "https://api.github.com/users/ssbarnea/events{/privacy}", "followers_url": "https://api.github.com/users/ssbarnea/followers", "following_url": "https://api.github.com/users/ssbarnea/following{/other_user}", "gists_url": "https://api.github.com/users/ssbarnea/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ssbarnea", "id": 102495, "login": "ssbarnea", "node_id": "MDQ6VXNlcjEwMjQ5NQ==", "organizations_url": "https://api.github.com/users/ssbarnea/orgs", "received_events_url": "https://api.github.com/users/ssbarnea/received_events", "repos_url": "https://api.github.com/users/ssbarnea/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ssbarnea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ssbarnea/subscriptions", "type": "User", "url": "https://api.github.com/users/ssbarnea", "user_view_type": "public" }
[ { "color": "e11d21", "default": false, "description": null, "id": 136589914, "name": "Needs Info", "node_id": "MDU6TGFiZWwxMzY1ODk5MTQ=", "url": "https://api.github.com/repos/psf/requests/labels/Needs%20Info" }, { "color": "f7c6c7", "default": false, "description": null, "id": 167537670, "name": "Propose Close", "node_id": "MDU6TGFiZWwxNjc1Mzc2NzA=", "url": "https://api.github.com/repos/psf/requests/labels/Propose%20Close" } ]
closed
true
null
[]
null
49
2014-02-07T10:31:07Z
2021-09-08T08:00:38Z
2015-01-19T09:22:04Z
CONTRIBUTOR
resolved
It seems that latest requests (2.2.1) is also affected by bug: OpenSSL.SSL.Error: [('SSL routines', 'SSL3_GET_RECORD', 'decryption failed or bad record mac')] There seems to be a workaround here http://stackoverflow.com/questions/21497591/urllib2-reading-https-url-failure but I don't know how to apply it to requests.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1906/reactions" }
https://api.github.com/repos/psf/requests/issues/1906/timeline
null
completed
null
null
false
[ "Thanks for this!\n\nYeah, this isn't really a request bug, as the SO question highlights: it's a Debian or OpenSSL bug.\n\nWith that said, a possible workaround would be an extension of the transport adapter demonstrated on my blog, here: https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/\n", "It's a real problem and I do not have the confirmation that the workaround works. What is even more awkward is that my automation script that used to query a server every hour for few hundred requests started to fail suddenly, even without me changing anything on the machine. I guess they may have changed the configuration of the web server.\n\nStill, still problems occurs on latest distro of Ubuntu, with all patches and the last version of OpenSSL is one year old. We need to implement an workaround for this. \n\nAlso, I tried the workaround specified on OpenSSL forums but it doesn't work, I will try your approach and see.\n", "I agree that it's a real problem. I do not necessarily agree that Requests needs a workaround for every bug in any of our dependencies. \n\nWhat is not clear to me at this time is how severe this bug is, how widespread it is, and how easy it is to work around. It's also not easy for me to find those things out today: I'm at a conference and won't have time near a laptop. It would be useful if @sigmavirus24 could take a look: otherwise I'll have to dig into it tomorrow. \n\nThe key thing to know is that OpenSSL has had other bugs in the past that we haven't worked around in the core Requests code. Most notably, 0.9.8f (I think) that ships on OS X 10.8 has a similar bug in it that we have never worked around. \n\nWe are not required to fix every bug in all of our dependencies, especially implicit ones. We _may_ fix this one, but I don't know yet. \n\nNote that another workaround is to use a different version of OpenSSL. Just saying. \n", "Your workaround didn't work this time, I tried it and same error. 
And regarding using a new openssl, that's not easy at all openssl is very well tied into the system and I imagine that installing another one will only bring more problems.\n\nIf we would have a working-workaround it would be fine, I don't mind applying a patch to this script, but it doesn't work. I tried with ssl_version=ssl.PROTOCOL_TLSv1\n\n... I am trying now to build a minimal test-case.\n", "Oops, it seems that it was a red herring, it's not related to this. Instead it seems to be related to multiprocessing. I was using multiprocessing in order to perform requests on 10 threads, to speedup the process. This worked well for weeks but suddenly stopped working a couple of days ago... with this error.\n", "What does your multiprocessing code look like? If you're sharing `Session` objects across processes, bad stuff will happen. =)\n", "I was not sharing any session, in fact I was calling a function on each thread, passing a parameter and inside this function I has my code which was creating a session, performing 2-3 requests and returning. I know, strange.\n", "That's very odd. Is it easily reproducible?\n", "@ssbarnea can you at least share the URL so that we can attempt to reproduce it? So far I think we have almost sufficient information, we just need the URL or characteristics of the server to reproduce it:\n- Multiprocessing using _10 threads_\n- Function that instantiates a Session and makes more than one request (ostensibly to the same server) with it.\n", "Started getting this as well running https://github.com/edx/dyno-slayer\n\nProblem started when changing:\n\n``` python\nDEFAULT_MIN_TIMING_WINDOW = 60\nDEFAULT_MAX_TIMING_WINDOW = 120\n```\n\nto:\n\n``` python\nDEFAULT_MIN_TIMING_WINDOW = 20\nDEFAULT_MAX_TIMING_WINDOW = 30\n```\n", "@crizCraig I'm not familiar with that project. What is the significance of those values? 
Are they:\n- Configuration file values?\n- Environment values?\n- Other?\n", "Sorry, those just define a length of time to sample heroku logs via their API. The entire program is this file:\n\nhttps://github.com/edx/dyno-slayer/blob/master/scripts/slayer.py\n", "We need substantially better diagnostics than that I'm afraid. =) Can you provide us a traceback, for instance?\n", "I wonder if someone who works @heroku could give us a hand. They might have some insight. @catsby can you give us some info about the SSL configuration on a default Heroku box? Can you point us at someone who can?\n", "@crizCraig Ping. =)\n", "I'm experiencing this issue in Windows 7 with Python 2.7.9 x32\n\n```\nSSLError: [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:581)\n```\n", "@zvodd Have you tried forcing SSL negotiation at different versions, as per [this article](https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/)?\n", "Closed for inactivity.\n", "Ran into this issue using multiprocessing. Have not been able to fix it.\n", "@maxcountryman can you provide any of the details we've been asking for?\n", "@sigmavirus24 well, I'm not using Heroku. What other details did you want?\n", "@maxcountryman \n- Your version of python and openssl\n- You operatingsystem and version\n- A minimal breaking code snippet\n- If possible a public URL which triggers the bug (together with the mentioned code snippet)\n", "Hi @Lukasa @sigmavirus24 @t-8ch ,\nIs your article (https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/) compatible with Python 3? Is it right that the current issue will be solved by updating python to version 2.7.9?\n", "@ulandj The article should be Python 3 compatible. Upgrading to Python 2.7.9 will solve a lot of problems.\n", "@Lukasa i.e. after upgrading to Python 2.7.9 I don't need use your adapter in the article, right? 
And \n\n``` python\nimport requests\nconn = requests.Session()\nconn.put(url, data=body, headers=headers)\nconn.delete(url, data=body, headers=headers)\n```\n\nwill work with any count of multiprocesses?\n", "If you're encountering this problem with multiprocessing, I've never been given a repro scenario for it, so I don't actually know what's happening. If you can demonstrate the problem with a bit of sample code I'd like to see it.\n", "well, we first upgrade to python 2.7.9 and will try to run with multiprocessing. if this error appears again, I will let you know. Thanks.\n", "@Lukasa our customer says that they have the following error: \n\n```\nTraceback (most recent call last): \nFile \"/opt/SketchSync/WorkersApp.py\", line 230, in run self.producer.release_job(self.worker_id, payload, done) \nFile \"/opt/SketchSync/SketchSyncProducer.py\", line 88, in release_job self.queue.delete_job(payload) \nFile \"/opt/SketchSync/SketchSyncProducer.py\", line 177, in delete_job self.queue.delete(mid) \nFile \"/usr/local/lib/python2.7/site-packages/iron_mq.py\", line 58, in delete result = self.client.delete(url) \nFile \"/usr/local/lib/python2.7/site-packages/iron_core.py\", line 233, in delete retry=retry, body=body) \nFile \"/usr/local/lib/python2.7/site-packages/iron_core.py\", line 152, in request r = self._doRequest(url, method, body, headers) \nFile \"/usr/local/lib/python2.7/site-packages/iron_core.py\", line 117, in _doRequest r = self.conn.delete(url, data=body, headers=headers) \nFile \"/usr/local/lib/python2.7/site-packages/requests/sessions.py\", line 527, in delete return self.request('DELETE', url, **kwargs) \nFile \"/usr/local/lib/python2.7/site-packages/requests/sessions.py\", line 456, in request resp = self.send(prep, **send_kwargs) \nFile \"/usr/local/lib/python2.7/site-packages/requests/sessions.py\", line 559, in send r = adapter.send(request, **kwargs) \nFile \"/usr/local/lib/python2.7/site-packages/requests/adapters.py\", line 382, in send raise 
SSLError(e, request=request) \nSSLError: [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:1750) \n```\n\nDoes it say about something?\n\nUsing:\npython (2.7.9)\nrequests (2.3.0)\n", "@ulandj sounds like you're running into http://stackoverflow.com/a/3724938/1953283\n", "@sigmavirus24 \nyou think that if I move this line of code (from constructor of class) - https://github.com/iron-io/iron_core_python/blob/master/iron_core.py#L162 here - https://github.com/iron-io/iron_core_python/blob/master/iron_core.py#L188, then it will work without errors?\n" ]
https://api.github.com/repos/psf/requests/issues/1905
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1905/labels{/name}
https://api.github.com/repos/psf/requests/issues/1905/comments
https://api.github.com/repos/psf/requests/issues/1905/events
https://github.com/psf/requests/issues/1905
27,056,980
MDU6SXNzdWUyNzA1Njk4MA==
1,905
HTTPS not working on Google App Engine
{ "avatar_url": "https://avatars.githubusercontent.com/u/2834052?v=4", "events_url": "https://api.github.com/users/cpavon/events{/privacy}", "followers_url": "https://api.github.com/users/cpavon/followers", "following_url": "https://api.github.com/users/cpavon/following{/other_user}", "gists_url": "https://api.github.com/users/cpavon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cpavon", "id": 2834052, "login": "cpavon", "node_id": "MDQ6VXNlcjI4MzQwNTI=", "organizations_url": "https://api.github.com/users/cpavon/orgs", "received_events_url": "https://api.github.com/users/cpavon/received_events", "repos_url": "https://api.github.com/users/cpavon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cpavon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cpavon/subscriptions", "type": "User", "url": "https://api.github.com/users/cpavon", "user_view_type": "public" }
[ { "color": "fbca04", "default": false, "description": null, "id": 615414998, "name": "GAE Support", "node_id": "MDU6TGFiZWw2MTU0MTQ5OTg=", "url": "https://api.github.com/repos/psf/requests/labels/GAE%20Support" } ]
closed
true
null
[]
null
13
2014-02-06T15:22:35Z
2021-09-08T09:00:47Z
2014-02-06T17:12:38Z
NONE
resolved
Take a look here: http://stackoverflow.com/questions/21605328/python-requests-on-google-app-engine-not-working-for-https Thanks!
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1905/reactions" }
https://api.github.com/repos/psf/requests/issues/1905/timeline
null
completed
null
null
false
[ "Unfortunately, Requests explicitly doesn't support GAE. It's just not Python. =)\n", "I guess directing OP to some details on why/how _GAE is not Python_ would be appreciated :)\n", "Heh, this is a fair point.\n\nThe short answer is that GAE doesn't use the same standard library as core Python does. This is particularly significant with the networking sections (e.g. `socket`, `ssl`, `httplib`) which are radically different from their standard library counterparts.\n\nThe number of differences is substantial, the differences themselves can often be subtle or only found after substantial use (e.g. rate limiting), and testing on GAE is relatively expensive for the core dev team to do.\n\nFor those reasons, it's easier for us simply to not support GAE. We're prepared to make small code changes if they fix an obvious problem for GAE, but we are disinclined to radically change the codebase for any non-standard Python version (see the recent pull request modifying the codebase for Jython, which was rejected).\n\nIf Requests doesn't work on GAE, GAE is bugged. We work fine on Python 3.3, Python 2.7, Python 2.6 and PyPy: GAE can sort itself out. =)\n", "Thanks for the explanation!\n", "I understand fixing AppEngine is not something you intend to pursue. In case it helps anyone else, though, I think I have found the cause: line 172 [here](https://github.com/kennethreitz/requests/pull/1892/files#diff-28e67177469c0d36b068d68d9f6043bfR172). It's my impression that [get_netrc_auth essentially fails on AppEngine](https://github.com/kennethreitz/requests/pull/1709/files#diff-5956087d5835a57d9ef6fff974f6fd9bL97), and without that, authentication will not survive redirects, resulting in infinite recursive redirects to any endpoint that requires Auth (not just POSTs). \n\nI'm not sure, but the absence of the feature mentioned here: https://github.com/kennethreitz/requests/pull/1892#issuecomment-33730591 , which at the time seemed like a \"nice-to-have\", may be a factor. 
\n\n@kennethreitz had seemed [concerned](https://github.com/kennethreitz/requests/pull/1892#issuecomment-33821403) that making this design decision for the developer might have adverse consequences... I wonder if there are others outside of the GAE ecosystem who would prefer Authentication tokens survive across redirects. \n\nPerhaps an option to allow this would assuage the GAE community as well as those in similar situations?\n", "There are absolutely people who would prefer that we keep the auth tokens on over redirects. However, both @sigmavirus24 and I very strongly believe that being insecure by default is dangerous. @kennethreitz has a more nuanced view than we do, and I acknowledge that. However, I think that #1892 is a good change.\n\nHowever, this bug cannot be #1892 because we haven't released a version with #1892 in it yet. =)\n", "Well that's embarrassing =) \nThanks for your patience Lukasa. \n\n(for the record, what I intended to suggest was opt-out for #1892, not opt-in). \n", "There's no need to thank me, it's an easy mistake to make. I have the advantage of having written that particular fix, so I'm keeping track of it. ;)\n\nThe fix currently is opt-out: if you pass the argument `allow_redirects=False`, you can handle the redirect logic yourself and deal with authentication however you like. \n", "Hi @rattrayalex Your patch does not seem to work anymore, in particular to connect to Google IP addresses, as such IP's are blocked in GAE's new sockets API: https://cloud.google.com/appengine/docs/python/sockets/\n", "/CC @jonparrott\n", "urllib3 now has [contrib support](http://urllib3.readthedocs.org/en/latest/contrib.html#google-app-engine) for GAE. I haven't personally tested this out, but if you configure requests to use `AppEngineManager` instead of the normal `PoolManager` it should (in theory) work. 
I'll be happy to try to take on any issues with getting that to work.\n", "@Lukasa think that should go into the toolbelt?\n", "Seems reasonable to me.\n" ]
https://api.github.com/repos/psf/requests/issues/1904
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1904/labels{/name}
https://api.github.com/repos/psf/requests/issues/1904/comments
https://api.github.com/repos/psf/requests/issues/1904/events
https://github.com/psf/requests/pull/1904
26,981,151
MDExOlB1bGxSZXF1ZXN0MTIyMzE4NzE=
1,904
Document the `Response.reason` attribute.
{ "avatar_url": "https://avatars.githubusercontent.com/u/46775?v=4", "events_url": "https://api.github.com/users/mjpieters/events{/privacy}", "followers_url": "https://api.github.com/users/mjpieters/followers", "following_url": "https://api.github.com/users/mjpieters/following{/other_user}", "gists_url": "https://api.github.com/users/mjpieters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mjpieters", "id": 46775, "login": "mjpieters", "node_id": "MDQ6VXNlcjQ2Nzc1", "organizations_url": "https://api.github.com/users/mjpieters/orgs", "received_events_url": "https://api.github.com/users/mjpieters/received_events", "repos_url": "https://api.github.com/users/mjpieters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mjpieters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mjpieters/subscriptions", "type": "User", "url": "https://api.github.com/users/mjpieters", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2014-02-05T17:31:31Z
2021-09-08T23:11:09Z
2014-02-05T18:32:16Z
CONTRIBUTOR
resolved
Made `.status_code` and `.reason` consistent with one another, adding some examples. Addresses #1225.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1904/reactions" }
https://api.github.com/repos/psf/requests/issues/1904/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1904.diff", "html_url": "https://github.com/psf/requests/pull/1904", "merged_at": "2014-02-05T18:32:16Z", "patch_url": "https://github.com/psf/requests/pull/1904.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1904" }
true
[ "This is a purely documentation change, so I'm going to go ahead and merge it.\n\nThanks! :cake: Thanks for all your work over on SO as well. =3\n" ]
https://api.github.com/repos/psf/requests/issues/1903
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1903/labels{/name}
https://api.github.com/repos/psf/requests/issues/1903/comments
https://api.github.com/repos/psf/requests/issues/1903/events
https://github.com/psf/requests/issues/1903
26,869,676
MDU6SXNzdWUyNjg2OTY3Ng==
1,903
Requests can't handle HTTPS proxy requests.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5655555?v=4", "events_url": "https://api.github.com/users/richmilne/events{/privacy}", "followers_url": "https://api.github.com/users/richmilne/followers", "following_url": "https://api.github.com/users/richmilne/following{/other_user}", "gists_url": "https://api.github.com/users/richmilne/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richmilne", "id": 5655555, "login": "richmilne", "node_id": "MDQ6VXNlcjU2NTU1NTU=", "organizations_url": "https://api.github.com/users/richmilne/orgs", "received_events_url": "https://api.github.com/users/richmilne/received_events", "repos_url": "https://api.github.com/users/richmilne/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richmilne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richmilne/subscriptions", "type": "User", "url": "https://api.github.com/users/richmilne", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-02-04T09:45:52Z
2021-09-08T23:08:02Z
2014-02-04T10:35:02Z
NONE
resolved
Consider this snippet of code: ``` python import requests from requests.auth import HTTPProxyAuth auth = HTTPProxyAuth('username', 'password') proxy = {'https': 'https://192.168.0.1/'} req = requests.get('http://www.google.com', proxies=proxy, auth=auth) ``` What I expect the code to do is to send an encrypted request (as implied by http_S_ to the proxy URL (192.168.0.1), the payload of which is the URL I want to retrieve (www.google.com). What Requests does instead is send a query directly to the target (Google), with the proxy credentials, Base64 encoded, in the header! The request does not go near the proxy, which you can check by entering invalid credentials in the snippet above (which should result in some 40\* error, but passes silently.) If the proxy URL is changed to "HTTP", the transfer works as expected (credentials sent to proxy, proxy fetches target URL), only the conversation is not encrypted. This was tested under Requests v1.2.0, and the latest version, 2.2.1 Am I just using the module incorrectly, or is this a bug in Requests?
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1903/reactions" }
https://api.github.com/repos/psf/requests/issues/1903/timeline
null
completed
null
null
false
[ "The answer is complicated, let me address it in stages.\n\nFirst, you're misusing the library. Consider the proxies dictionary. It has this form: `{scheme: proxyURL}`. When you made a request, we look at the scheme for the URL you've asked for, and see if you've asked for proxies on that scheme. In your example above, you've said you want all HTTPS requests proxied, but not HTTP requests. This leads to us not routing via the proxy (since you didn't ask for one). If you want to avoid accidentally leaking your auth, you should place it in the proxy URL: that is, for your current case, your proxy dict should be:\n\n``` python\nproxy = {'http': 'https://username:[email protected]/'}\n```\n\nYour other issue (about Requests not making HTTPS connections to your proxy) is discussed at phenomenal length in #1622. The current state of play is: Requests never makes TLS connections to proxies. This is in line, roughly, with what browsers are doing (from the Squid documentation [here](http://wiki.squid-cache.org/Features/HTTPS)):\n\n> Unfortunately, popular modern browsers do not permit configuration of TLS/SSL encrypted proxy connections. There are open bug reports against most of those browsers now, waiting for support to appear. If you have any interest, please assist browser teams with getting that to happen.\n\nI am open to a feature request for making connections via HTTPS to proxies, but it's not a priority at this time, unless you can provide a compelling reason it ought to be. =)\n", "Wow! That was a very quick, and helpful, response. As I suspected, I was using requests incorrectly. Thanks for clearing that up for me, and apologies for wasting your time...\n", "That's not a problem at all. =) In the future, if you're worried about whether a question is a bug or simply a misunderstanding, you can email either myself or @sigmavirus24. =) We both publish our email addresses on GitHub, and we're always happy to take questions.\n" ]
https://api.github.com/repos/psf/requests/issues/1902
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1902/labels{/name}
https://api.github.com/repos/psf/requests/issues/1902/comments
https://api.github.com/repos/psf/requests/issues/1902/events
https://github.com/psf/requests/pull/1902
26,803,306
MDExOlB1bGxSZXF1ZXN0MTIxMzMyMDk=
1,902
Remove unused loggers.
{ "avatar_url": "https://avatars.githubusercontent.com/u/46775?v=4", "events_url": "https://api.github.com/users/mjpieters/events{/privacy}", "followers_url": "https://api.github.com/users/mjpieters/followers", "following_url": "https://api.github.com/users/mjpieters/following{/other_user}", "gists_url": "https://api.github.com/users/mjpieters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mjpieters", "id": 46775, "login": "mjpieters", "node_id": "MDQ6VXNlcjQ2Nzc1", "organizations_url": "https://api.github.com/users/mjpieters/orgs", "received_events_url": "https://api.github.com/users/mjpieters/received_events", "repos_url": "https://api.github.com/users/mjpieters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mjpieters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mjpieters/subscriptions", "type": "User", "url": "https://api.github.com/users/mjpieters", "user_view_type": "public" }
[ { "color": "009800", "default": false, "description": null, "id": 44501218, "name": "Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTIxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge" }, { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
4
2014-02-03T13:45:16Z
2021-09-08T23:05:05Z
2014-02-07T02:32:41Z
CONTRIBUTOR
resolved
Logging has been removed long ago, the import and `log` object are dead code to be pruned with a vengeance.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1902/reactions" }
https://api.github.com/repos/psf/requests/issues/1902/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1902.diff", "html_url": "https://github.com/psf/requests/pull/1902", "merged_at": "2014-02-07T02:32:41Z", "patch_url": "https://github.com/psf/requests/pull/1902.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1902" }
true
[ ":+1:\n", ":+1: \n", "Dear Jenkins: please rebuild this pull request.\n", "Well that failed spectacularly.\n" ]
https://api.github.com/repos/psf/requests/issues/1901
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1901/labels{/name}
https://api.github.com/repos/psf/requests/issues/1901/comments
https://api.github.com/repos/psf/requests/issues/1901/events
https://github.com/psf/requests/pull/1901
26,803,055
MDExOlB1bGxSZXF1ZXN0MTIxMzMwNTg=
1,901
One last Charade reference to remove here.
{ "avatar_url": "https://avatars.githubusercontent.com/u/46775?v=4", "events_url": "https://api.github.com/users/mjpieters/events{/privacy}", "followers_url": "https://api.github.com/users/mjpieters/followers", "following_url": "https://api.github.com/users/mjpieters/following{/other_user}", "gists_url": "https://api.github.com/users/mjpieters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mjpieters", "id": 46775, "login": "mjpieters", "node_id": "MDQ6VXNlcjQ2Nzc1", "organizations_url": "https://api.github.com/users/mjpieters/orgs", "received_events_url": "https://api.github.com/users/mjpieters/received_events", "repos_url": "https://api.github.com/users/mjpieters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mjpieters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mjpieters/subscriptions", "type": "User", "url": "https://api.github.com/users/mjpieters", "user_view_type": "public" }
[]
closed
true
null
[]
null
5
2014-02-03T13:40:24Z
2021-09-08T23:05:07Z
2014-02-04T08:53:53Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1901/reactions" }
https://api.github.com/repos/psf/requests/issues/1901/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1901.diff", "html_url": "https://github.com/psf/requests/pull/1901", "merged_at": "2014-02-04T08:53:53Z", "patch_url": "https://github.com/psf/requests/pull/1901.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1901" }
true
[ "This is an obvious :+1:. =D\n", "This amounts to essentially a documentation change. I don't see why it shouldn't be merged immediately (or as soon as the build passes).\n", "Looks like the CI server is having a tough time, I haven't seen any of these builds pass.\n", "It notifies people in IRC right? Freenode has been having some trouble since last night. It could be that the server is blocked trying to connect to IRC to notify of older jobs that have passed/failed.\n", "Anyway, the build is an irrelevance here.\n" ]
https://api.github.com/repos/psf/requests/issues/1900
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1900/labels{/name}
https://api.github.com/repos/psf/requests/issues/1900/comments
https://api.github.com/repos/psf/requests/issues/1900/events
https://github.com/psf/requests/pull/1900
26,797,484
MDExOlB1bGxSZXF1ZXN0MTIxMzAxOTQ=
1,900
Reinstate falling back to self.text for JSON responses
{ "avatar_url": "https://avatars.githubusercontent.com/u/46775?v=4", "events_url": "https://api.github.com/users/mjpieters/events{/privacy}", "followers_url": "https://api.github.com/users/mjpieters/followers", "following_url": "https://api.github.com/users/mjpieters/following{/other_user}", "gists_url": "https://api.github.com/users/mjpieters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mjpieters", "id": 46775, "login": "mjpieters", "node_id": "MDQ6VXNlcjQ2Nzc1", "organizations_url": "https://api.github.com/users/mjpieters/orgs", "received_events_url": "https://api.github.com/users/mjpieters/received_events", "repos_url": "https://api.github.com/users/mjpieters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mjpieters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mjpieters/subscriptions", "type": "User", "url": "https://api.github.com/users/mjpieters", "user_view_type": "public" }
[]
closed
true
null
[]
null
7
2014-02-03T12:01:10Z
2021-09-08T23:05:09Z
2014-02-11T16:55:21Z
CONTRIBUTOR
resolved
A JSON response that has no encoding specified will be decoded with a detected UTF codec (compliant with the JSON RFC), but if that fails, we guessed wrong and need to fall back to charade character detection (via `self.text`). Kenneth removed this functionality (by accident?) in 1451ba0c6d395c41f86da35036fa361c3a41bc90, this reinstates it again and adds a log warning. Fixes #1674
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1900/reactions" }
https://api.github.com/repos/psf/requests/issues/1900/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1900.diff", "html_url": "https://github.com/psf/requests/pull/1900", "merged_at": "2014-02-11T16:55:21Z", "patch_url": "https://github.com/psf/requests/pull/1900.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1900" }
true
[ "Hooray! :+1: :cake:\n", "> Nowhere else in the file is logging used. Can you remove this line and the import at the top of the file please?\n\nRest of the `logging` references excised in #1902. \n", "Thanks @mjpieters! :cake: \n", "Lets kick the CI server and get this rebuilt shall we?\n", "@Lukasa for the record, i only got notified when someone directly mentions me ;)\n", "Looks like they're all stuck on the IRC notifier. Sigh.\n", "(close/open for build retrigger, sorry)\n" ]
https://api.github.com/repos/psf/requests/issues/1899
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1899/labels{/name}
https://api.github.com/repos/psf/requests/issues/1899/comments
https://api.github.com/repos/psf/requests/issues/1899/events
https://github.com/psf/requests/issues/1899
26,765,137
MDU6SXNzdWUyNjc2NTEzNw==
1,899
PEP8 Compliance
{ "avatar_url": "https://avatars.githubusercontent.com/u/1371925?v=4", "events_url": "https://api.github.com/users/cli248/events{/privacy}", "followers_url": "https://api.github.com/users/cli248/followers", "following_url": "https://api.github.com/users/cli248/following{/other_user}", "gists_url": "https://api.github.com/users/cli248/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cli248", "id": 1371925, "login": "cli248", "node_id": "MDQ6VXNlcjEzNzE5MjU=", "organizations_url": "https://api.github.com/users/cli248/orgs", "received_events_url": "https://api.github.com/users/cli248/received_events", "repos_url": "https://api.github.com/users/cli248/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cli248/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cli248/subscriptions", "type": "User", "url": "https://api.github.com/users/cli248", "user_view_type": "public" }
[]
closed
true
null
[]
null
12
2014-02-02T17:28:56Z
2021-09-08T23:07:56Z
2014-10-05T17:24:31Z
NONE
resolved
I checked `PEP8` compliance using following command, ``` pep8 --statistics --ignore=E501 --exclude='requests/packages/*' requests test_requests.py ``` The stats are - 2 **E125** continuation line does not distinguish itself from next logical line - 6 **E126** continuation line over-indented for hanging indent - 30 **E128** continuation line under-indented for visual indent - 1 **E203** whitespace before ':' - 2 **E226** missing whitespace around arithmetic operator - 2 **E231** missing whitespace after ',' - 2 **E241** multiple spaces after ',' - 22 **E251** unexpected spaces around keyword / parameter equals - 1 **E261** at least two spaces before inline comment - 3 **E303** too many blank lines (2) - 1 **W293** blank line contains whitespace I plan to make `requests` more `PEP8` compliant, and I am wondering which errors should I ignore beside **E501**.
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1899/reactions" }
https://api.github.com/repos/psf/requests/issues/1899/timeline
null
completed
null
null
false
[ "This has been proposed in the past and @kennethreitz vastly dislikes most of pep8 from my experience. I've never been able to tell exactly what style he likes to follow but it's very much his call. Until he can get around to replying to this, I would hold off on your efforts so you do not expend too much energy.\n\nFWIW, As the maintainer of Flake8 as well as this project, I appreciate your desire to make this project more compliant.\n", "Correct. At Requests we view PEP8 more as [a set of guidelines](http://youtu.be/b6kgS_AwuH0). I wouldn't get too invested in this idea. =)\n", "Ok, let's wait @kennethreitz's rely. It is interesting that he does't like PEP8 style.\n", "There are things he likes, there are things he doesn't. =)\n", "You should talk to some of the folks who maintain Twisted about their feelings on PEP8. :wink: \n", "Haha, that reminds me Shakespeare's famous remark, **There are a thousand Hamlets in a thousand people's eyes**\n\nI will wait @kennethreitz's reply and see what I should do next. \n\nBTW: I think @kennethreitz should come up with a style guide for `requests` if he doesn't like pep8. \n", "He's begun a similar project here: http://docs.python-guide.org/en/latest/\n", "@rattrayalex sorry?\n", "While it's not a style guide for requests specifically, @kennethreitz 's guide linked above documents many of his opinions on python style in general. Though he does seem to say nicer things about PEP8 there: http://docs.python-guide.org/en/latest/writing/style/#pep-8\n", "You may find observing the [blames on the PEP8 section](https://github.com/kennethreitz/python-guide/blame/master/docs/writing/style.rst) of interest.\n", "Ah! Not a lot of @kennethreitz in that ;-P\n(tl;dr, none of the recent blames are his)\nThanks!\n", "Since this is highly unlikely to be done, I'm closing this. Thanks everyone for your interest.\n" ]
https://api.github.com/repos/psf/requests/issues/1898
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1898/labels{/name}
https://api.github.com/repos/psf/requests/issues/1898/comments
https://api.github.com/repos/psf/requests/issues/1898/events
https://github.com/psf/requests/pull/1898
26,763,596
MDExOlB1bGxSZXF1ZXN0MTIxMTU4NDc=
1,898
Made .history type consistent (tuples only)
{ "avatar_url": "https://avatars.githubusercontent.com/u/81353?v=4", "events_url": "https://api.github.com/users/zopieux/events{/privacy}", "followers_url": "https://api.github.com/users/zopieux/followers", "following_url": "https://api.github.com/users/zopieux/following{/other_user}", "gists_url": "https://api.github.com/users/zopieux/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zopieux", "id": 81353, "login": "zopieux", "node_id": "MDQ6VXNlcjgxMzUz", "organizations_url": "https://api.github.com/users/zopieux/orgs", "received_events_url": "https://api.github.com/users/zopieux/received_events", "repos_url": "https://api.github.com/users/zopieux/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zopieux/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zopieux/subscriptions", "type": "User", "url": "https://api.github.com/users/zopieux", "user_view_type": "public" }
[ { "color": "e11d21", "default": false, "description": null, "id": 44501305, "name": "Not Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTMwNQ==", "url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge" } ]
closed
true
null
[]
null
5
2014-02-02T16:09:22Z
2021-09-08T23:06:24Z
2014-02-11T16:56:32Z
NONE
resolved
An empty history was represented by an empty list whereas a non-empty history was a tuple. Now the type of .history is always a tuple, so it does not break code such as `req.history + (req,)`.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1898/reactions" }
https://api.github.com/repos/psf/requests/issues/1898/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1898.diff", "html_url": "https://github.com/psf/requests/pull/1898", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1898.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1898" }
true
[ "Six test cases failed. See details on this [page](http://ci.kennethreitz.org/job/requests-pr/PYTHON=3.3/lastBuild/console)\n\n```\n> _r.history.append(r)\nE AttributeError: 'tuple' object has no attribute 'append'\n\nrequests/auth.py:179: AttributeError\n```\n", "This is not a totally unreasonable change, but you've fixed it in the wrong place. If you look at `sessions.py` line 533, you can see where we convert this to a tuple.\n", "The exact line that @Lukasa is referring to is [here](https://github.com/kennethreitz/requests/blob/master/requests/sessions.py#L533). Just out-dent that line and you'll be fine. Please revert the change you've made here before making this change. Also please continue working on this PR and do not open a new one @Zopieux \n\nPlease leave a comment when you've updated the PR.\n", "This breaks the codebase :)\n", "I am sorry I was not reactive enough to submit a working PR. I am pleased to see that other open PRs include this bugfix!\n" ]
https://api.github.com/repos/psf/requests/issues/1897
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1897/labels{/name}
https://api.github.com/repos/psf/requests/issues/1897/comments
https://api.github.com/repos/psf/requests/issues/1897/events
https://github.com/psf/requests/pull/1897
26,757,766
MDExOlB1bGxSZXF1ZXN0MTIxMTM3MTk=
1,897
Document requirements for SNI support on Python2
{ "avatar_url": "https://avatars.githubusercontent.com/u/48501?v=4", "events_url": "https://api.github.com/users/aliles/events{/privacy}", "followers_url": "https://api.github.com/users/aliles/followers", "following_url": "https://api.github.com/users/aliles/following{/other_user}", "gists_url": "https://api.github.com/users/aliles/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aliles", "id": 48501, "login": "aliles", "node_id": "MDQ6VXNlcjQ4NTAx", "organizations_url": "https://api.github.com/users/aliles/orgs", "received_events_url": "https://api.github.com/users/aliles/received_events", "repos_url": "https://api.github.com/users/aliles/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aliles/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aliles/subscriptions", "type": "User", "url": "https://api.github.com/users/aliles", "user_view_type": "public" }
[]
closed
true
null
[]
null
11
2014-02-02T09:18:43Z
2021-09-08T23:10:55Z
2014-02-11T16:55:33Z
NONE
resolved
Adds a section to Request's advanced usage guide on what Server Name Indication is, its purpose, and how to enable it on Python2.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1897/reactions" }
https://api.github.com/repos/psf/requests/issues/1897/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1897.diff", "html_url": "https://github.com/psf/requests/pull/1897", "merged_at": "2014-02-11T16:55:33Z", "patch_url": "https://github.com/psf/requests/pull/1897.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1897" }
true
[ "Thanks for this!\n\nHowever, I'm on the fence. So far we've made a conscious decision not to _officially_ document this support in Requests, because we don't adequately test it. We don't have an SNI environment in our CI system, we've regressed this support in the past and we'll probably do it again.\n\nWe should combine that with the fact that Requests is meant to be _for humans_. This SNI support is a frankly-fairly-ugly workaround for a serious failing in Python 2. I'm happy to suggest this as a workaround for people everywhere else on the internet (and I do), but I'm reluctant to add it to the core documentation.\n\nHowever, if the rest of the team wants to I'm happy to let them, so I'll leave this open until @kennethreitz and @sigmavirus24 take a look.\n", "Would any number of caveats, warnings, or flashing red text indicating the tenuous level of support convince you otherwise? ;-)\n\nFor humans, receiving an error message like:\n\n```\nrequests.exceptions.SSLError: hostname 'haveibeenpwned.com' doesn't match either of '*.azurewebsites.net', '*.scm.azurewebsites.net', '*.azure-mobile.net', '*.scm.azure-mobile.net'\n```\n\nCan be pretty daunting if you're not familiar with some of the more esoteric aspects of SSL. And frustrating if you are. Even if the actual _hack_ is not described within the Requests documentation, what about documenting this variation between Python2 an 3 support and (possibly) pointing to external information?\n\nThe most likely alternatives (as I see them) for humans encountering this error is to:\n- Give up on using SSL.\n- Disable host name verification.\n Neither of these are particularly desirable outcomes. Including some coverage of SNI in the documentation may help to avoid these.\n", "Hmm.\n\nI wonder if what we actually need is an FAQ or help section, which could then contain something a bit more like this:\n\n#### Help! What's this \"hostname doesn't match\" error mean and how do I fix it?\n\nSome brief discussion about SNI and hostname verification.\n", "That's a very good idea. (At least in my opinion)\n\nTo where should a FAQ entry direct the reader for further information? Unfortunately the current [urllib3 documentation for SNI](http://urllib3.readthedocs.org/en/latest/contrib.html#sni-support-for-python-2) is empty. I'd be very happy to write a FAQ entry once I have a better understanding of what does and doesn't belong in the main Requests docs. :-)\n\nI did find [this blog entry](http://ibofobi.dk/blog/archive/2013/03/sni-support-for-requests/) on SNI support in Requests, but it was written before Requests would optimistically attempt to enable it.\n", "I'm actually quite enamoured by the idea of having an FAQ section. I think there's a bit too much in the Advanced docs that is really documenting some weird edge case behaviour.\n\nI'll take a look at this at some point fairly soon: it'd be nice to break the advanced docs down a little bit (they're huge!).\n\nMy canonical SNI reference is [this Stack Overflow answer](https://stackoverflow.com/questions/18578439/using-requests-with-tls-doesnt-give-sni-support/18579484#18579484).\n", "Excellent, I'll update this pull request to move documentation to the FAQ, referring to the Stack Overflow answer details on enabling Python2 support rather than including it locally.\n", "I've updated this pull request, removing the new section from the advanced docs and adding a new question to the FAQ.\n", "Awesome, that's a good start.\n\nI think we want to move the fact that this answer applies only to Python 2.X earlier in that section. Talking at length about Requests' lack of SNI support before saying \"actually it's fine in 3.X\" is a bit sad (and makes us look bad!). Maybe the first paragraph could be:\n\n> These errors occur when :ref:`SSL certificate verification <verification>` fails to match the certificate the server responds with to the hostname Requests thinks it's contacting. If you're certain the server's SSL setup is correct (for example, because you can visit the site with your browser) and you're using Python 2.6 or 2.7, a possible explanation is that you need SNI support.\n\nThoughts?\n", "I certainly had no intent of making Requests look bad. ;-)\n\nI've adjusted that first paragraph accordingly. I also like mentioning the\nPython2 restriction early as it helps the reader diagnose more quickly if\nthis issue could be affecting them.\n", "Excellent, I'm happy with that.\n\nNormally I'd merge documentation changes myself, but this changes the layout a little bit and I want to chat with @kennethreitz a bit before we go ahead and merge. It will definitely get merged though. =D\n", "Looks good! We can always refactor later. \n" ]
https://api.github.com/repos/psf/requests/issues/1896
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1896/labels{/name}
https://api.github.com/repos/psf/requests/issues/1896/comments
https://api.github.com/repos/psf/requests/issues/1896/events
https://github.com/psf/requests/pull/1896
26,680,468
MDExOlB1bGxSZXF1ZXN0MTIwNzc0ODA=
1,896
Provide a pythonic way to inject your own where() function
{ "avatar_url": "https://avatars.githubusercontent.com/u/1174343?v=4", "events_url": "https://api.github.com/users/ticosax/events{/privacy}", "followers_url": "https://api.github.com/users/ticosax/followers", "following_url": "https://api.github.com/users/ticosax/following{/other_user}", "gists_url": "https://api.github.com/users/ticosax/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ticosax", "id": 1174343, "login": "ticosax", "node_id": "MDQ6VXNlcjExNzQzNDM=", "organizations_url": "https://api.github.com/users/ticosax/orgs", "received_events_url": "https://api.github.com/users/ticosax/received_events", "repos_url": "https://api.github.com/users/ticosax/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ticosax/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ticosax/subscriptions", "type": "User", "url": "https://api.github.com/users/ticosax", "user_view_type": "public" }
[ { "color": "e11d21", "default": false, "description": null, "id": 78002701, "name": "Do Not Merge", "node_id": "MDU6TGFiZWw3ODAwMjcwMQ==", "url": "https://api.github.com/repos/psf/requests/labels/Do%20Not%20Merge" } ]
closed
true
null
[]
null
12
2014-01-31T13:18:33Z
2021-09-08T23:01:08Z
2014-01-31T14:16:47Z
NONE
resolved
If a module called `requests_extension` define a function called where(); this function will be used instead of requests.certs.where()
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1896/reactions" }
https://api.github.com/repos/psf/requests/issues/1896/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1896.diff", "html_url": "https://github.com/psf/requests/pull/1896", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1896.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1896" }
true
[ "Thanks for this!\n\nI'm -0.5 on this change, because I believe it to be a security risk. It is upsettingly easy to create a situation where such a module is installed that defines a `where()` function that returns arbitrary certificates, and such behaviour would not be noticed. Monkeypatching makes this behaviour harder (though not impossible).\n\nI'm going to wait until @sigmavirus24 weighs in. =)\n", "Today requests.get() accept a parameter `verify` that allows to pass arbitrary ca-certs.\nSo the vulnerability is somehow already there isn't it ?\n", "The difference is how the vulnerability is attacked. For `verify`, you have to control the parameters being passed to a single `requests` call. This means if I write code myself that calls `requests.get()`, I _know_ whether or not my certs are getting validated.\n\nWith this patch, I would _not_ know that unless I scoured the imported packages before I called `requests.get()` to confirm that `requests_extension` package existed. If one was installed on my system for any reason, even if I set `verify=True` explicitly I would still tacitly accept certs that I shouldn't.\n", "If you let your user write python modules like request_extensions.py, then they can call `request.get(verify=False)` easily.\nIf you do not allow your user to write code, then they can not write nor install nor load request_extensions.py.\nI mean, it is either secure or not. There is no \"in between\" situation like you describe, where you allow user to write python code and assume that all `request.get(verify=True)` will be honoured.\n", "I think you're totally misunderstanding me.\n\nSuppose you install `pip`. `pip` uses Requests and verifies all TLS certificates. In a clean Python install, you know that all is well and that you cannot be Man-in-the-Middle attacked.\n\nWith your suggestion, if I can get access to your computer I can install my own `requests_extension` package that adds my own self-signed root certificate to your list of certificates. This will immediately start affecting `pip`, and indeed any other script on your machine that uses Requests: they'll all start loading my own certificates.\n\nObviously, for me to do that I would have required access to your machine, so let me provide a worse example. Suppose I'm Generic Evil Corp, and I want to ship a Python API client. I build it on top of requests, but I don't want to bother having to buy a TLS certificate, so I self-sign one and then add my own `requests_extension` module to my dependencies. Anyone who installs my module now quietly accepts my TLS certificate as 'trusted', and _they don't know they're doing it_.\n\nIn essence, the difference is scope. In your case, I have to verify that I've never installed a `requests_extension` package, because if I have it affects _everything_. There is no compelling use-case for this that I can see that justifies this enormous risk.\n\nI'm actually upgrading my objection to -1. The more I think about it the worse it sounds.\n", "Oh man, I just keep thinking of ways this can be made worse. The surface area of ways to attack certificate validation here is _enormous_. Just think of the sheer number of ways Python modules can be imported or installed: this is totally terrifying. I'm going further than -1, I'm closing this. Thanks for providing the work, but I think the danger inherent in this totally outweighs any advantage it has.\n", "Beyond the security implications that @Lukasa has already outlined, I am 100% opposed to the hard coding of an import statement of a module that is not maintained by one of us. This provides a rich opportunity for someone to make a package on PyPI that subtly hides this and installs a package named `requests_extension`. Let's say I added that to a package I already distribute, then I could, by someone installing an otherwise innocuous package, totally take control of certificate discovery on their installation without their knowledge.\n\nYou claim requests is insecure already but this would introduce the largest of backdoors into requests as it exists now.\n\nThank you for your contribution but we can not accept it in good conscience.\n\nFurthermore, I don't feel there is further need for discussion on this issue.\n", "Actually, one last comment:\n\nThis is not pythonic by any means. The pythonic way to handle this would still be unacceptable.\n", "thank you @Lukasa I understand the use case you pointed now,\nand agreed it was a bad idea to do it that way.\n\nWhat about adding built-in operating system flavor instead ?\nMost package maintainer of requests for major linux distributions just hardcode the path to the main cert file like `/etc/ssl/certs/ca-certificates.crt` for debian based distributions for instance.\n\nThen `requests` could provide out of the box some of those locations, if we can guess reliably the type of distribution requests is running on.\n\nHow does it sounds ?\n", "@ticosax we experimented with including a list of popular locations and just kept receiving pull requests to add it for more and more obscure repositories so we removed them entirely and rely on what we have now. It is highly unlikely we'll be moving back to that model\n", "Fair enough.\nthx for replying\n", "Thank you for contributing @ticosax \n" ]
https://api.github.com/repos/psf/requests/issues/1895
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1895/labels{/name}
https://api.github.com/repos/psf/requests/issues/1895/comments
https://api.github.com/repos/psf/requests/issues/1895/events
https://github.com/psf/requests/issues/1895
26,645,863
MDU6SXNzdWUyNjY0NTg2Mw==
1,895
PreparedRequest.prepare_body should not add a Transfer-Encoding header when manually supplied with a Content-Length
{ "avatar_url": "https://avatars.githubusercontent.com/u/92943?v=4", "events_url": "https://api.github.com/users/gholms/events{/privacy}", "followers_url": "https://api.github.com/users/gholms/followers", "following_url": "https://api.github.com/users/gholms/following{/other_user}", "gists_url": "https://api.github.com/users/gholms/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gholms", "id": 92943, "login": "gholms", "node_id": "MDQ6VXNlcjkyOTQz", "organizations_url": "https://api.github.com/users/gholms/orgs", "received_events_url": "https://api.github.com/users/gholms/received_events", "repos_url": "https://api.github.com/users/gholms/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gholms/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gholms/subscriptions", "type": "User", "url": "https://api.github.com/users/gholms", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-01-30T23:58:13Z
2021-09-09T00:10:05Z
2014-02-17T14:39:25Z
NONE
resolved
I need to use a PUT request to upload sys.stdin to Amazon S3, but since S3 does not support chunked transfer encoding I have to supply a `Content-Length` header myself. Doing so seemed to successfully prevent HTTPAdapter.send from trying to do chunking, but S3 rejected the request anyway because PreparedRequest.prepare_body noticed that the body was iterable and set a `Transfer-Encoding` header on its own. A user-supplied `Content-Length` should prevent the latter from happening as well.
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1895/reactions" }
https://api.github.com/repos/psf/requests/issues/1895/timeline
null
completed
null
null
false
[ "Thanks for raising this!\n\nThere's an open issue that contains a substantial discussion on this point, #1648. The short of it, however, is that we almost never expect people to provide their own Content-Length header.\n\nI have a question though: how are you able to set Content-Length when reading from sys.stdin?\n", "Closing to centralize discussion on #1648.\n" ]
https://api.github.com/repos/psf/requests/issues/1894
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1894/labels{/name}
https://api.github.com/repos/psf/requests/issues/1894/comments
https://api.github.com/repos/psf/requests/issues/1894/events
https://github.com/psf/requests/pull/1894
26,597,677
MDExOlB1bGxSZXF1ZXN0MTIwMzQ4MTk=
1,894
Make 'raise_for_status' return the response object
{ "avatar_url": "https://avatars.githubusercontent.com/u/28710?v=4", "events_url": "https://api.github.com/users/vmalloc/events{/privacy}", "followers_url": "https://api.github.com/users/vmalloc/followers", "following_url": "https://api.github.com/users/vmalloc/following{/other_user}", "gists_url": "https://api.github.com/users/vmalloc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vmalloc", "id": 28710, "login": "vmalloc", "node_id": "MDQ6VXNlcjI4NzEw", "organizations_url": "https://api.github.com/users/vmalloc/orgs", "received_events_url": "https://api.github.com/users/vmalloc/received_events", "repos_url": "https://api.github.com/users/vmalloc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vmalloc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vmalloc/subscriptions", "type": "User", "url": "https://api.github.com/users/vmalloc", "user_view_type": "public" }
[]
closed
true
null
[]
null
10
2014-01-30T13:26:25Z
2017-02-10T13:31:28Z
2014-01-30T17:25:23Z
NONE
null
This would make simple usages simpler: ``` python result = requests.get("http://api.server.com/path").raise_for_status().json() ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1894/reactions" }
https://api.github.com/repos/psf/requests/issues/1894/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1894.diff", "html_url": "https://github.com/psf/requests/pull/1894", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1894.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1894" }
true
[ "Thanks for this @vmalloc! I do not think we'll accept it at this time though. I'm on my phone at the moment but I'll be happy to give a more detailed explanation in a short while.\n", "Ok, is it because it's incomplete or because this behaviour isn't desirable? \n", "This is a pretty drastic API change, and we're under an \"API Freeze\" at the moment. That API has been in place for almost two years now, and it changing it would break a lot of people's code :)\n\nAlso, Raise == Exception.\n\nHowever, thanks so much for contributing! Keep 'em coming!\n", "I'm just curious - how would it break people's code? Is anyone asserting it returns None?—\nRotem\n\nOn Thu, Jan 30, 2014 at 7:25 PM, Kenneth Reitz [email protected]\nwrote:\n\n> This is a pretty drastic API change, and we're under an \"API Freeze\" at the moment. That API has been in place for almost two years now, and it changing it would break a lot of people's code :)\n> Also, Raise == Exception.\n> \n> ## However, thanks so much for contributing! Keep 'em coming!\n> \n> Reply to this email directly or view it on GitHub:\n> https://github.com/kennethreitz/requests/pull/1894#issuecomment-33710628\n", "@vmalloc that isn't really the issue. There's a semantic meaning in the name `raise_for_status`. The meaning in it is that you want to raise an Exception for something that is not an okay status code (vaguely some status code < 400). There should be no explicit return from that method. There is an implicit return of `None` because any Python function (or method) written without a `return` returns `None`. The function has one purpose: raise an exception. Returning the response makes the API inconsistent -- which is why it is a breaking API change -- and it makes the meaning of the function nebulous.\n\nI understand why you feel this change is desirable but it is not a change we can accept. We also will not accept an addition of a method to provide this behaviour. 
It isn't something that fits in with the overall design of requests.\n\nThanks for contributing though! :cake:\n", "i know this PR is quite old, but i really like it!... mostly because IMO it is completely in sync with the `requests` package's main objective of reducing clutter. without this change, the following is required:\n\n```\nresult = requests.get(\"http://api.server.com/path\")\nresult.raise_for_status()\nresult = result.json()\n```\n\nisn't @vmalloc's version much cleaner? if the problem is the naming of `raise_for_status`, how about renaming it (with a deprecating alias)? e.g.:\n\n```\nresult = requests.get(\"http://api.server.com/path\").require_ok().json()\n```\n\n(or whatever you want) or how about a new option to all of the request methods:\n\n```\nresult = requests.get(\"http://api.server.com/path\", require_ok=True).json()\n```\n\nbtw, i currently always monkey-patch `Response.raise_for_status` to do exactly this. ugly!\n", "I frankly still don't understand the reason this was rejected. I don't\nthink this qualifies for \"API breakage\" per se, and the utility of this\nchange is enormous.\n\nI suspect the pattern you mentioned is prevalent in 99% of the use cases.\nOn Fri, 13 May 2016 at 0:08 metagriffin [email protected]\nwrote:\n\n> i know this PR is quite old, but i really like it!... mostly because IMO\n> it is completely in sync with the requests package's main objective of\n> reducing clutter. without this change, the following is required:\n> \n> result = requests.get(\"http://api.server.com/path\")\n> result.raise_for_status()\n> result = result.json()\n> \n> isn't @vmalloc https://github.com/vmalloc's version much cleaner? if\n> the problem is the naming of raise_for_status, how about renaming it\n> (with a deprecating alias)? 
e.g.:\n> \n> result = requests.get(\"http://api.server.com/path\").require_ok().json()\n> \n> (or whatever you want) or how about a new option to all of the request\n> methods:\n> \n> result = requests.get(\"http://api.server.com/path\", require_ok=True).json()\n> \n> btw, i currently always monkey-patch Response.raise_for_status to do\n> exactly this. ugly!\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly or view it on GitHub\n> https://github.com/kennethreitz/requests/pull/1894#issuecomment-218886371\n", "+1 for this change, it would be very handy! \r\n`myjson = requests.get(\"http://api.server.com/path\").raise_for_status().json()` is exactly what you want to write when dealing with REST API returning JSON objects.\r\nI don't see the point of `raise_for_status()` renaming, because making it a chainable method does not change its semantics.", "@guillp I agree. It's funny to me how this library willy-nilly broke compatibility time and again (e.g. with `json` turning to `json()`) but adding a single return value where there was none is a major backward compatibility issue...", "Hi @vmalloc.\r\n\r\n*Please* try to extend charity to the maintainers of OSS projects you use. We have not willy-nilly broken compatibility. The move from `json` to `json()` was accompanied by a major version bump and a substantial internal refactoring. Other breakages of compatibility have been considered to be bugs and errors and treated as such.\r\n\r\nIronically, I was previously inclined to look much more warmly on this proposal than my fellow maintainers. Now I'm not. The answer is no." ]
https://api.github.com/repos/psf/requests/issues/1893
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1893/labels{/name}
https://api.github.com/repos/psf/requests/issues/1893/comments
https://api.github.com/repos/psf/requests/issues/1893/events
https://github.com/psf/requests/pull/1893
26,545,712
MDExOlB1bGxSZXF1ZXN0MTIwMDU2OTA=
1,893
Avoid breaking crappy distribution methods.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[ { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
4
2014-01-29T19:23:36Z
2021-09-08T23:11:10Z
2014-01-30T17:23:57Z
MEMBER
resolved
Apparently RPM doesn't like us having the full license text in the 'license' section, as in #1878. Seems innocuous to change this, because realistically who the hell cares?
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1893/reactions" }
https://api.github.com/repos/psf/requests/issues/1893/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1893.diff", "html_url": "https://github.com/psf/requests/pull/1893", "merged_at": "2014-01-30T17:23:57Z", "patch_url": "https://github.com/psf/requests/pull/1893.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1893" }
true
[ "LGTM :+1: \n", "Before, this was important because it was a lovely feature of crate.io. \n\nLong live crate.io.\n", "One day, crate.io will return as pypi.python.org. Unless @dstufft explodes in a ball of caremad.\n", "Oh, but he _is_ a ball of caremad! That's his secret power.\n" ]
https://api.github.com/repos/psf/requests/issues/1892
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1892/labels{/name}
https://api.github.com/repos/psf/requests/issues/1892/comments
https://api.github.com/repos/psf/requests/issues/1892/events
https://github.com/psf/requests/pull/1892
26,545,045
MDExOlB1bGxSZXF1ZXN0MTIwMDUyODQ=
1,892
Repopulate ~/.netrc auth.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[ { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
21
2014-01-29T19:15:37Z
2021-09-08T23:07:28Z
2014-01-31T17:19:53Z
MEMBER
resolved
This should be a fix for #1885. @sigmavirus24, can you give me some code review? =)
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1892/reactions" }
https://api.github.com/repos/psf/requests/issues/1892/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1892.diff", "html_url": "https://github.com/psf/requests/pull/1892", "merged_at": "2014-01-31T17:19:53Z", "patch_url": "https://github.com/psf/requests/pull/1892.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1892" }
true
[ "One comment otherwise LGTM.\n\nYou don't need to write a test for this, but I have an idea that I'd like to test it with.\n", "Done and done.\n", ":shipit: \n", "I need to think about this. \n", "@kennethreitz we absolutely cannot continue reusing authorizations on redirects to sites that are not the same host. With that in mind we almost certainly need to issue a CVE. I'll happily work on that though.\n\nWe've been leaking credentials and we need to at least address that. Whether we repopulate the auth after stopping the leak or not is more of a feature decision. I'm sure one of the security experts, like @dstufft would back up @Lukasa and I on that.\n", "@kennethreitz , call me the outsider but @sigmavirus24 got a point. As the new guy 'round these parts. You have 1.1million downloads. Even if half that still use, we are talking about a **major** security failure in the code base. :no_good: \n", "I'm happy to remove the repopulation of `~/.netrc`, but the only reason this shouldn't get merged I just fixed (I wasn't respecting `Session.trust_env`).\n", "@Lukasa So in that sense, are you suggesting that the session itself may be invalidated and then a restart is required?\n", "No. =)\n\nAll I was saying is that the previous version of this fix didn't respect the flag that controls whether we should look at `~/.netrc`. Now it does.\n\nThe barest minimum we have to do is remove Authorization headers on redirects to new hosts. That's pretty much mandatory. Everything else is gravy.\n", "Oh well alrighty then! :) :+1: \n", "@Sirenity it isn't a big deal but requests should have far more than 8 million downloads last I checked (we're probably upwards of 9 million) and that's only from PyPI. We have no way of measuring how many downloads installations came from distribution repositories (i.e., people installing aptitude, yum, or pacman).\n", "@sigmavirus24 this was an explicit design decision, and it has been stated as such many times before. 
I've considered implementing a patch much like many times before, and this was a minor concern in the back of my mind when I did \"the big refactor\". As I said, I need to think about it.\n\nFurther comments about are neither helpful nor welcome. :)\n", "Oof, I just noticed this. This really should change FWIW, carrying authentication data across access boundaries is a bad idea generally. It'd be up to Mitre but a CVE is probably appropriate as well. If you do get a CVE and you do change this, include the CVE number in the changelog please :]\n", "This simple patch is not a technical change, but a fairly large philosophical one. Traditionally, I have recommended that all security-concious users disable redirection support, and follow them on their own. This patch would make security the something that Requests does for you (something that I already did for the Python world when I verified SSL by default). This would take it to the next level.\n\nI'm not judging it as either good or bad. I just need need to think about it. \n\nAgain, no further input needed. Will be back to you shortly.\n", "Also, if we could refactor this, it would be great. You ruined my beautiful code. :)\n", "\\o/\n\n> On Jan 31, 2014, at 12:21 PM, Kenneth Reitz [email protected] wrote:\n> \n> Also, if we could refactor this, it would be great. You ruined my beautiful code. :)\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "I have a CVE Identifier for this when we release the next version. I also have one for the Proxy-Authorization exposure bug.\n", "What is the CVE? Useful for discussion even prior to release. \n\n> On Jan 31, 2014, at 12:40 PM, Ian Cordasco [email protected] wrote:\n> \n> I have a CVE Identifier for this when we release the next version. 
I also have one for the Proxy-Authorization exposure bug.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "For this one: CVE-2014-1829\n\n_Edit_ Just got it from MITRE this morning so it may not show up in searches yet.\n", "I cleaned up the ugly code. :P\n", "We've learned a valuable lesson today: the easiest way to get Kenneth to write more code is to put really ugly code in front of him and let his natural instincts take over. ;)\n" ]
https://api.github.com/repos/psf/requests/issues/1891
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1891/labels{/name}
https://api.github.com/repos/psf/requests/issues/1891/comments
https://api.github.com/repos/psf/requests/issues/1891/events
https://github.com/psf/requests/pull/1891
26,493,847
MDExOlB1bGxSZXF1ZXN0MTE5NzU5NTc=
1,891
Add request to RequestException
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
[ { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
7
2014-01-29T02:16:48Z
2021-09-08T23:06:23Z
2014-02-11T16:58:23Z
CONTRIBUTOR
resolved
Pass request objects in `HTTPAdapter`. Fixes #1890
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1891/reactions" }
https://api.github.com/repos/psf/requests/issues/1891/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1891.diff", "html_url": "https://github.com/psf/requests/pull/1891", "merged_at": "2014-02-11T16:58:23Z", "patch_url": "https://github.com/psf/requests/pull/1891.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1891" }
true
[ "LGTM, I'm happy with this. :cake:\n", "Uh, maybe.\n", "@kennethreitz I have no strong opinion on this either way frankly. #1890 seemed like a reasonable feature request though. The decision is all yours.\n", "In the context of grequests, I think this will be nice. Let's just not make a habit of this :)\n", "I'm assuming this is the PreparedRequest. \n", "It is. =)\n", "@kennethreitz you assume correctly. And this was a one time idea anyway. It seemed reasonable enough to allow it.\n" ]
https://api.github.com/repos/psf/requests/issues/1890
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1890/labels{/name}
https://api.github.com/repos/psf/requests/issues/1890/comments
https://api.github.com/repos/psf/requests/issues/1890/events
https://github.com/psf/requests/issues/1890
26,490,372
MDU6SXNzdWUyNjQ5MDM3Mg==
1,890
include Request object as attribute on RequestExceptions
{ "avatar_url": "https://avatars.githubusercontent.com/u/83819?v=4", "events_url": "https://api.github.com/users/keturn/events{/privacy}", "followers_url": "https://api.github.com/users/keturn/followers", "following_url": "https://api.github.com/users/keturn/following{/other_user}", "gists_url": "https://api.github.com/users/keturn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/keturn", "id": 83819, "login": "keturn", "node_id": "MDQ6VXNlcjgzODE5", "organizations_url": "https://api.github.com/users/keturn/orgs", "received_events_url": "https://api.github.com/users/keturn/received_events", "repos_url": "https://api.github.com/users/keturn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/keturn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keturn/subscriptions", "type": "User", "url": "https://api.github.com/users/keturn", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-01-29T00:53:11Z
2021-09-09T00:10:08Z
2014-02-11T16:58:23Z
NONE
resolved
It'd be handy if you could check RequestException.request to find out about the request that caused the error.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1890/reactions" }
https://api.github.com/repos/psf/requests/issues/1890/timeline
null
completed
null
null
false
[ "Isn't the response included? That should have the request on it already.\n", "Whoops. That's only for `HTTPError`s. The issue is that you do not necessarily always have a request object that can be passed in. I'm working on a PR to add the attribute when possible. (Same for response objects.)\n" ]
https://api.github.com/repos/psf/requests/issues/1889
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1889/labels{/name}
https://api.github.com/repos/psf/requests/issues/1889/comments
https://api.github.com/repos/psf/requests/issues/1889/events
https://github.com/psf/requests/issues/1889
26,480,608
MDU6SXNzdWUyNjQ4MDYwOA==
1,889
logging KeyError: '<key>'
{ "avatar_url": "https://avatars.githubusercontent.com/u/4558966?v=4", "events_url": "https://api.github.com/users/dkavraal/events{/privacy}", "followers_url": "https://api.github.com/users/dkavraal/followers", "following_url": "https://api.github.com/users/dkavraal/following{/other_user}", "gists_url": "https://api.github.com/users/dkavraal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dkavraal", "id": 4558966, "login": "dkavraal", "node_id": "MDQ6VXNlcjQ1NTg5NjY=", "organizations_url": "https://api.github.com/users/dkavraal/orgs", "received_events_url": "https://api.github.com/users/dkavraal/received_events", "repos_url": "https://api.github.com/users/dkavraal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dkavraal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkavraal/subscriptions", "type": "User", "url": "https://api.github.com/users/dkavraal", "user_view_type": "public" }
[]
closed
true
null
[]
null
5
2014-01-28T22:04:04Z
2021-09-08T12:01:00Z
2014-01-28T22:06:12Z
NONE
resolved
If the main logger of the application has "extra" fields on, the error loggings inside urllib3 makes the error "KeyError" to occur. Here is a sample snippet to try: ``` python import requests import logging logger = logging.getLogger() logStdOut = logging.StreamHandler() LOGFORMATCNSL=logging.Formatter("%(asctime)s %(message)s %(aVar)s %(bVar)s") logStdOut.setFormatter(LOGFORMATCNSL) logStdOut.setLevel(logging.DEBUG) logger.setLevel(logging.NOTSET) logger.addHandler(logStdOut) def tryThis(): logger.error("deneme", extra={"aVar": "aVal", "bVar": "bVal"}) conn = requests.get("http://www.google.com") conn.close() tryThis() ``` And I get this result: ``` 2014-01-28 23:54:16,270 deneme aVal bVal Traceback (most recent call last): File "/usr/lib/python3.3/logging/__init__.py", line 937, in emit msg = self.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 808, in format return fmt.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 549, in format s = self.formatMessage(record) File "/usr/lib/python3.3/logging/__init__.py", line 518, in formatMessage return self._style.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 364, in format return self._fmt % record.__dict__ KeyError: 'aVar' Logged from file connectionpool.py, line 172 Traceback (most recent call last): File "/usr/lib/python3.3/logging/__init__.py", line 937, in emit msg = self.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 808, in format return fmt.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 549, in format s = self.formatMessage(record) File "/usr/lib/python3.3/logging/__init__.py", line 518, in formatMessage return self._style.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 364, in format return self._fmt % record.__dict__ KeyError: 'aVar' Logged from file connectionpool.py, line 345 Traceback (most recent call last): File "/usr/lib/python3.3/logging/__init__.py", line 937, in emit msg = 
self.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 808, in format return fmt.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 549, in format s = self.formatMessage(record) File "/usr/lib/python3.3/logging/__init__.py", line 518, in formatMessage return self._style.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 364, in format return self._fmt % record.__dict__ KeyError: 'aVar' Logged from file connectionpool.py, line 172 Traceback (most recent call last): File "/usr/lib/python3.3/logging/__init__.py", line 937, in emit msg = self.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 808, in format return fmt.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 549, in format s = self.formatMessage(record) File "/usr/lib/python3.3/logging/__init__.py", line 518, in formatMessage return self._style.format(record) File "/usr/lib/python3.3/logging/__init__.py", line 364, in format return self._fmt % record.__dict__ KeyError: 'aVar' Logged from file connectionpool.py, line 345 [Finished in 1.6s] ``` However, if we change the formatting line of the test snippet into this, there is no exception like the above: ``` LOGFORMATCNSL=logging.Formatter("%(asctime)s %(message)s") ``` So, does the NullHandler() for the libraries logging make this, or there is sth else going on. Because, a library must not try to log anything into my application logging, without extra effort of mine I guess. 
Here is my system info: ``` # uname -a Linux myhostname 3.8.0-35-generic #50-Ubuntu SMP Tue Dec 3 01:24:59 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux # lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=13.04 DISTRIB_CODENAME=raring DISTRIB_DESCRIPTION="Ubuntu 13.04" # python Python 3.3.1 (default, Sep 25 2013, 19:29:01) [GCC 4.7.3] on linux # pip3 list -lxc (0.1) apturl (0.5.2ubuntu1) Brlapi (0.5.7) cchardet (0.3.5) chardet (2.2.1) command-not-found (0.3) defer (1.0.6) dirspec (4.2.0) distribute (0.6.34) friends (0.1) httplib2 (0.7.7) jusText (2.0.0) language-selector (0.1) louis (2.4.1) lxml (3.1.0) nose (1.3.0) oauthlib (0.3.7) onboard (0.99.0-alpha1-tr1190) oneconf (0.3.3) piston-mini-client (0.7.5) pycrypto (2.6) pygobject (3.8.2) pymongo (2.6.3) python-apt (0.8.8ubuntu6) python-debian (0.1.21-nmu2ubuntu1) pyxdg (0.25) requests (2.2.1) six (1.2.0) software-center-aptd-plugins (0.0.0) thin-client-config-agent (0.7) ubuntu-drivers-common (0.0.0) ufw (0.33-0ubuntu3) unattended-upgrades (0.1) unity-scope-gdrive (0.7) usb-creator (0.2.23) virtkey (0.63.0) Werkzeug (0.9.4) xdiagnose (3.5.1) xkit (0.0.0) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1889/reactions" }
https://api.github.com/repos/psf/requests/issues/1889/timeline
null
completed
null
null
false
[ "Hi there! Thanks for raising this!\n\nWe bring urllib3 in to Requests wholesale without changes, so can I ask you to open this issue over there? That will lead to a fix for Requests _and_ a fix for urllib3. =)\n", "BTW,\n\n``` python\nlogger.setLevel(logging.ERROR)\n```\n\nmakes that exceptions disappear, I know. However, this limits my usage as you may understand.\n", "Writing for next commers with same problem... After 3 days with this problem, I just (seconds ago) figured out \n\n1) First solution would be wrapping the requests.get() mehtod inside a try/except block itself.\n2) Would be with this patch for requests library 2.2.1:\n\n``` diff\n+++ __init__.py 2014-01-29 00:15:44.000000000 +0200\n@@ -74,4 +74,5 @@ except ImportError:\n def emit(self, record):\n pass\n\n-logging.getLogger(__name__).addHandler(NullHandler())\n+logger = logging.getLogger(__name__)\n+logger.addHandler(NullHandler())\n```\n\nand therefore, with small tweak in my own application:\n\n``` python\nimport requests\nimport python\nrequests.logger.setLevel(logging.ERROR)\n[...]\n```\n\nhope helps anyone.\n", "What is the status on this issue?", "Nothing has changed, but as discussed, this is technically a urllib3 bug and needs to be reproduced directly using that library." ]
https://api.github.com/repos/psf/requests/issues/1888
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1888/labels{/name}
https://api.github.com/repos/psf/requests/issues/1888/comments
https://api.github.com/repos/psf/requests/issues/1888/events
https://github.com/psf/requests/pull/1888
26,459,018
MDExOlB1bGxSZXF1ZXN0MTE5NTU1MDY=
1,888
Fix for 301 redirect and latest PyOpenSSL.
{ "avatar_url": "https://avatars.githubusercontent.com/u/456007?v=4", "events_url": "https://api.github.com/users/kouk/events{/privacy}", "followers_url": "https://api.github.com/users/kouk/followers", "following_url": "https://api.github.com/users/kouk/following{/other_user}", "gists_url": "https://api.github.com/users/kouk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kouk", "id": 456007, "login": "kouk", "node_id": "MDQ6VXNlcjQ1NjAwNw==", "organizations_url": "https://api.github.com/users/kouk/orgs", "received_events_url": "https://api.github.com/users/kouk/received_events", "repos_url": "https://api.github.com/users/kouk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kouk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kouk/subscriptions", "type": "User", "url": "https://api.github.com/users/kouk", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-01-28T17:18:12Z
2021-09-08T23:10:57Z
2014-01-28T20:14:38Z
CONTRIBUTOR
resolved
Fixes #1887
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1888/reactions" }
https://api.github.com/repos/psf/requests/issues/1888/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1888.diff", "html_url": "https://github.com/psf/requests/pull/1888", "merged_at": "2014-01-28T20:14:38Z", "patch_url": "https://github.com/psf/requests/pull/1888.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1888" }
true
[ "This seems like a reasonable fix to me. Unfortunately, the unit test is not a particularly good one because most people don't use the SNI build of Requests in their development systems. Additionally, we don't do it in our CI server (though we probably should, I'll bring it up with @kennethreitz). Leave it there for now: if we can get it into the CI server it'll be fine, otherwise we'll need another test.\n", "Yeah I figured you would say that. Makes sense actually. Funny thing is the main reason I get this is trying to use a SNI enabled private pypi server from my CI server. One CI server needs another :-)\n" ]
https://api.github.com/repos/psf/requests/issues/1887
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1887/labels{/name}
https://api.github.com/repos/psf/requests/issues/1887/comments
https://api.github.com/repos/psf/requests/issues/1887/events
https://github.com/psf/requests/issues/1887
26,458,175
MDU6SXNzdWUyNjQ1ODE3NQ==
1,887
301 redirect broken with latest pyopenssl/SNI
{ "avatar_url": "https://avatars.githubusercontent.com/u/456007?v=4", "events_url": "https://api.github.com/users/kouk/events{/privacy}", "followers_url": "https://api.github.com/users/kouk/followers", "following_url": "https://api.github.com/users/kouk/following{/other_user}", "gists_url": "https://api.github.com/users/kouk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kouk", "id": 456007, "login": "kouk", "node_id": "MDQ6VXNlcjQ1NjAwNw==", "organizations_url": "https://api.github.com/users/kouk/orgs", "received_events_url": "https://api.github.com/users/kouk/received_events", "repos_url": "https://api.github.com/users/kouk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kouk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kouk/subscriptions", "type": "User", "url": "https://api.github.com/users/kouk", "user_view_type": "public" }
[]
closed
true
null
[]
null
0
2014-01-28T17:07:07Z
2021-09-08T23:10:39Z
2014-01-28T20:14:38Z
CONTRIBUTOR
resolved
With the latest pyopenssl on Windows 64bit: ``` cryptography==0.2.dev1 ndg-httpsclient==0.3.2 pyOpenSSL==0.13 pyasn1==0.1.7 ``` I get an exception raised when `GET`ing a `301` response to a HTTPS request. I see that after the redirect is received the returned URL is [decoded to a Unicode string](https://github.com/kennethreitz/requests/blob/master/requests/adapters.py#L181). Then requests passes the response to `resolve_redirects` which uses the url to make a new request. This leads to a Unicode string being passed to urllib3 and eventually pyopenssl. And because in pyopenssl they now check that the data is of type bytes, an exception is thrown. I Wrote this test: ``` def test_pyopenssl_redirect(self): requests.get('https://httpbin.org/status/301') ``` and this is the result of py.test: ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <OpenSSL.SSL.Connection object at 0x000000000345CC50> buf = u'GET /redirect/1 HTTP/1.1\r\nHost: httpbin.org\r\nAccept-Encoding: gzip, defl...cept: */*\r\nUser-Agent: python-r equests/2.2.1 CPython/2.7.6 Windows/8\r\n\r\n' flags = 0 def sendall(self, buf, flags=0): """ Send "all" data on the connection. This calls send() repeatedly until all data is sent. If an error occurs, it's impossible to tell how much data has been sent. :param buf: The string to send :param flags: (optional) Included for compatibility with the socket API, the value is ignored :return: The number of bytes written """ if isinstance(buf, _memoryview): buf = buf.tobytes() if not isinstance(buf, bytes): > raise TypeError("buf must be a byte string") E TypeError: buf must be a byte string ..\testreq\lib\site-packages\OpenSSL\SSL.py:968: TypeError =================================== 117 tests deselected by '-kpyopenssl_redirect' ==================================== ====================================== 1 failed, 117 deselected in 4.47 seconds ======================================= ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1887/reactions" }
https://api.github.com/repos/psf/requests/issues/1887/timeline
null
completed
null
null
false
[]
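The traceback in issue 1887 above shows pyOpenSSL's `Connection.sendall()` rejecting a request buffer that became a `unicode` string after the redirect URL was decoded to text. The fix merged in #1888 keeps the URL in a byte-safe form; as a standalone illustration of the coercion involved (the `ensure_bytes` helper below is hypothetical, not requests' actual code):

```python
def ensure_bytes(data, encoding="utf-8"):
    """Coerce text to bytes before it reaches a socket layer that,
    like modern pyOpenSSL's Connection.sendall(), rejects text strings."""
    if isinstance(data, bytes):
        return data
    return data.encode(encoding)

# A redirect target that was decoded to a text string:
url = u"https://httpbin.org/redirect/1"
assert isinstance(ensure_bytes(url), bytes)
```

The same coercion applied to the whole serialized request line is what prevents the `TypeError: buf must be a byte string` seen in the test output.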
https://api.github.com/repos/psf/requests/issues/1886
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1886/labels{/name}
https://api.github.com/repos/psf/requests/issues/1886/comments
https://api.github.com/repos/psf/requests/issues/1886/events
https://github.com/psf/requests/pull/1886
26,424,623
MDExOlB1bGxSZXF1ZXN0MTE5Mzc3NjM=
1,886
Added info about posted files headers
{ "avatar_url": "https://avatars.githubusercontent.com/u/975689?v=4", "events_url": "https://api.github.com/users/meteozond/events{/privacy}", "followers_url": "https://api.github.com/users/meteozond/followers", "following_url": "https://api.github.com/users/meteozond/following{/other_user}", "gists_url": "https://api.github.com/users/meteozond/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meteozond", "id": 975689, "login": "meteozond", "node_id": "MDQ6VXNlcjk3NTY4OQ==", "organizations_url": "https://api.github.com/users/meteozond/orgs", "received_events_url": "https://api.github.com/users/meteozond/received_events", "repos_url": "https://api.github.com/users/meteozond/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meteozond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meteozond/subscriptions", "type": "User", "url": "https://api.github.com/users/meteozond", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2014-01-28T09:04:11Z
2021-09-08T23:08:14Z
2014-01-28T09:07:24Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1886/reactions" }
https://api.github.com/repos/psf/requests/issues/1886/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1886.diff", "html_url": "https://github.com/psf/requests/pull/1886", "merged_at": "2014-01-28T09:07:24Z", "patch_url": "https://github.com/psf/requests/pull/1886.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1886" }
true
[ "LGTM, thanks for this! :cake:\n" ]
https://api.github.com/repos/psf/requests/issues/1885
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1885/labels{/name}
https://api.github.com/repos/psf/requests/issues/1885/comments
https://api.github.com/repos/psf/requests/issues/1885/events
https://github.com/psf/requests/issues/1885
26,373,742
MDU6SXNzdWUyNjM3Mzc0Mg==
1,885
Redirect can expose netrc password
{ "avatar_url": "https://avatars.githubusercontent.com/u/212279?v=4", "events_url": "https://api.github.com/users/eriol/events{/privacy}", "followers_url": "https://api.github.com/users/eriol/followers", "following_url": "https://api.github.com/users/eriol/following{/other_user}", "gists_url": "https://api.github.com/users/eriol/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eriol", "id": 212279, "login": "eriol", "node_id": "MDQ6VXNlcjIxMjI3OQ==", "organizations_url": "https://api.github.com/users/eriol/orgs", "received_events_url": "https://api.github.com/users/eriol/received_events", "repos_url": "https://api.github.com/users/eriol/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eriol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eriol/subscriptions", "type": "User", "url": "https://api.github.com/users/eriol", "user_view_type": "public" }
[]
closed
true
null
[]
null
20
2014-01-27T17:10:13Z
2021-09-08T23:08:08Z
2014-09-12T20:03:14Z
CONTRIBUTOR
resolved
Hello, Jakub Wilk reported this on the Debian Bug Tracker[¹]: ``` If site A redirects to site B, and the user had a password for site A in their ~/.netrc, then requests would send authorization information both to site A and to site B. ``` Jakub also wrote some tests to show the issue; you can find them attached on the Debian Bug Tracker. I have already updated the requests Debian package (it's ready for upload) and I can confirm that the issue is also present in requests 2.2.1. Cheers! [¹] http://bugs.debian.org/733108
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1885/reactions" }
https://api.github.com/repos/psf/requests/issues/1885/timeline
null
completed
null
null
false
[ "Thanks for this!\n\nRight now we don't strip authentication information of any kind on redirects. We could, but we can't ask the user for new credentials, we can only really go to `~/. Thoughts @sigmavirus24?\n", "[We delete any Cookies set.](https://github.com/kennethreitz/requests/blob/master/requests/sessions.py#L152) I'm not sure I see the benefit in sending the old credentials to the site we're redirected to. Is there anything in the RFC about this? I would guess it could be considered safe if we're redirecting to the same domain, but otherwise, I don't see a point in sending it along as well.\n\nThat said, we would be breaking backwards compatibility. Someone relying on this will need to do extra work.\n", "Agreed, this is backwards incompatible. I'd rather be secure(r) by default, though. Open question: do we reapply `~/.netrc` auth?\n", "The more accurate question is: Do we `del headers['Authorization']`?\n", "Yes, absolutely yes. What about Proxy-Authorization?\n\n**EDIT**: To be clear, we should _conditionally_ delete the Authorization header, only if we're being redirected to a new host.\n", "> What about Proxy-Authorization?\n\nEhhhhhhhh maybe? That's a grey area. My typical mental model of a proxy is on \nyour end so you shouldn't be concerned about redirects and shouldn't be \nconcerned about your proxy authentication being broken on redirects.\n\nOne thing that's nice is we don't have to worry about times when authorization \nis specified in the URL itself (e.g., `http://user:[email protected]`).\n", "> EDIT: To be clear, we should conditionally delete the Authorization header, only if we're being redirected to a new host.\n\n@Lukasa I agree!\n", "> To be clear, we should conditionally delete the Authorization header, only if we're being redirected to a new host.\n\nThis is going to be tricky. Some domains will not want you to keep them when redirecting to sub-domains. 
It's a safer thing to do than to preserve authorization for redirecting to different domains but it may still be tricky.\n", "> Some domains will not want you to keep them when redirecting to sub-domains.\n\nI think we should consider a sub-domain as a new host[¹]. Maybe we can check if the user's .netrc has a stanza about the sub-domain when redirecting. Only if we find a stanza about the sub-domain, we send the credentials for the sub-domain we are redirected to. What do you think?\n\n[¹] For example, think about *.neocities.org: we can't presume that the owner of two sub-domains is the same.\n", "Yeah I talked to some of the security guys at Bendyworks about this and they agree with you @eriolv. So a simple check of \n\n``` python\noriginal_parsed = urlparse(response.request.url)\nredirect_parsed = urlparse(url)\nif original_parsed.hostname != redirect_parsed.hostname:\n    del headers['Authorization']\n```\n\nHave we decided about `Proxy-Authorization`?\n", "Agreed, subdomains have to be treated as new hosts.\n\nWe can be smart about proxy auth as well, I think. First thing to note is that our proxy model uses one proxy per scheme: if you don't get redirected to a different scheme you can safely keep sending `Proxy-Authorization` headers. Unfortunately, I don't think we can be smarter than that. I've considered whether we could attach the correct auth string for any proxy the user configures for the other scheme but I don't think we can: we need to know what came in on the original `Session.request()` call to be sure.\n", "Ok, take a look at #1892 for a possible fix.\n", "@eriolv do you have experience with CVEs? This was reported to Debian but is not a Debian-specific issue. 
This is our responsibility to report to MITRE, right?\n", "@sigmavirus24 unfortunately I don't have experience with CVEs, but it's well described here:\nhttp://people.redhat.com/kseifrie/CVE-OpenSource-Request-HOWTO.html\n\nSending a request to [email protected] should be enough,\nbut maybe security folks at Red Hat made a request for a CVE since they are following the issue too: https://bugzilla.redhat.com/show_bug.cgi?id=1046626\n\nAs you can see Endi Sukma Dewata also asked me if there are tests for the Proxy-Authorization case.\nI'm going to reply on Red Hat tracker asking if they already sent a request for a CVE.\n", "Thanks for managing everything @eriolv. I'm going to guess none of you have submitted a CVE so I'm going to request one from MITRE since the CVE suggests just asking them.\n\nFWIW, if you would like you can tell Endi that we're going to address the Proxy-Authorization case before the next release but in a separate PR probably.\n", "Ok, let's summarise the Proxy-Authorization case while we're here and talking about it.\n\n**Problem**: Proxy-Authorization is never re-evaluated when we are redirected. This means that if we're redirected to a new proxy, or away from proxies entirely, we'll keep sending the Proxy-Authorization header. This is bad.\n\n**Interesting Notes**: A few things.\n1. In the primary Requests API, proxies are scheme-specific: that is, they apply to all requests to a given scheme. In principle this allows us to be clever and avoid doing anything unless a redirect takes us across a scheme.\n2. Unfortunately, our use of proxy environment variables (and particularly the NO_PROXY variable) breaks that limitation.\n3. #1727 is tracking the fact that we don't re-evaluate NO_PROXY on redirects. A fix for this is not yet made, but requires us re-evaluating our proxy configuration each time we're redirected. 
Since we expect proxy auth information to be in the proxy URL, we're probably able to fix both bugs in one move.\n\n**Interim Fix**: If we need to rush a 'fix' out the door, we can start by unconditionally removing the Proxy-Authorization header in all cases on redirect. This covers all cases where we don't hit NO_PROXY.\n\n**Better Fix**: Comes in two parts:\n1. Remove Proxy-Authorization during redirect.\n2. Re-evaluate proxy environment variables, fixing up the proxy dictionary as required.\n\nThat should be all we need: the HTTPAdapter will handle regenerating the Proxy-Authorization header as needed.\n", "I would _like_ to have everything fixed in one release but if we have to. Stripping out the headers on redirect is enough for me. That at least removes the exposure and makes our users safer even if it temporarily inconveniences them. I mean it's a freaking open source library. Not having a file parsed and used to authenticate for you is the very definition of a First World Problem and for that I'm a bit unsympathetic. I'm also grumpy after a conversation on twitter, so feel free to ignore my bitterness.\n", "I'm pretty convinced re-evaluating the proxy configuration is a minor amount of work, I'll take a swing at it this weekend.\n", "Is this issue done, given that re-evaluating is in place now, or is there something missing that has been discussed?\n(https://github.com/kennethreitz/requests/blob/master/requests/sessions.py#L175-177)\n", "Good catch @blueyed. This was fixed in [v2.3.0](https://github.com/kennethreitz/requests/releases/tag/v2.3.0).\n" ]
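The host check sketched in the thread above can be written as a runnable helper. This illustrates the policy the maintainers converged on (strip `Authorization` whenever the redirect crosses hosts, treating subdomains as distinct); the `strip_sensitive_headers` name and the example URLs are hypothetical, and the real logic lives in requests' `Session.resolve_redirects`:

```python
from urllib.parse import urlparse

def strip_sensitive_headers(headers, old_url, new_url):
    """Drop credential-bearing headers when a redirect changes host.
    Subdomains count as different hosts, per the discussion above."""
    old_host = urlparse(old_url).hostname
    new_host = urlparse(new_url).hostname
    if old_host != new_host:
        headers.pop("Authorization", None)
    return headers

headers = {"Authorization": "Basic c2VjcmV0", "Accept": "*/*"}
strip_sensitive_headers(headers, "https://site-a.example/login",
                        "https://www.site-a.example/landing")
assert "Authorization" not in headers  # subdomain redirect strips the credential
```

Using `hostname` (rather than `netloc`) ignores port and userinfo components while still distinguishing `www.site-a.example` from `site-a.example`.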
https://api.github.com/repos/psf/requests/issues/1884
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1884/labels{/name}
https://api.github.com/repos/psf/requests/issues/1884/comments
https://api.github.com/repos/psf/requests/issues/1884/events
https://github.com/psf/requests/pull/1884
26,341,350
MDExOlB1bGxSZXF1ZXN0MTE4OTQ4Njc=
1,884
Add initial Jython support
{ "avatar_url": "https://avatars.githubusercontent.com/u/737634?v=4", "events_url": "https://api.github.com/users/darjus-amzn/events{/privacy}", "followers_url": "https://api.github.com/users/darjus-amzn/followers", "following_url": "https://api.github.com/users/darjus-amzn/following{/other_user}", "gists_url": "https://api.github.com/users/darjus-amzn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/darjus-amzn", "id": 737634, "login": "darjus-amzn", "node_id": "MDQ6VXNlcjczNzYzNA==", "organizations_url": "https://api.github.com/users/darjus-amzn/orgs", "received_events_url": "https://api.github.com/users/darjus-amzn/received_events", "repos_url": "https://api.github.com/users/darjus-amzn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/darjus-amzn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darjus-amzn/subscriptions", "type": "User", "url": "https://api.github.com/users/darjus-amzn", "user_view_type": "public" }
[]
closed
true
null
[]
null
17
2014-01-27T09:52:43Z
2021-09-08T10:01:24Z
2014-01-27T17:38:53Z
CONTRIBUTOR
resolved
Currently only works for http as ssl module is not yet fully functional. Ran the tests, with some (expectedly) failing on ssl support
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1884/reactions" }
https://api.github.com/repos/psf/requests/issues/1884/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1884.diff", "html_url": "https://github.com/psf/requests/pull/1884", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1884.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1884" }
true
[ "Thanks for this! Jython is weird. =)\n\nI think before we merge this we should set up our CI server with a Jython build. If we're going to support Jython, let's do it properly. @kennethreitz do you want to do that yourself, or would you rather one of @sigmavirus24 or I did it?\n", "@kennethreitz controls the Jenkins CI server so I would guess he'd be able to set up Jython if desired. FWIW, I would like to support Jython, but I'm curious about some things in the PR. None are really merge blockers.\n\nThe only thing that _is_ a merge blocker to me is failing HTTPS tests. If we're going to support Jython, given that we're the only package that does SSL as correctly as possible, I would want that to be fully functional before we ship anything advertising that we support Jython. Otherwise, the greatest draw to requests (beyond its API) is invalid and horribly broken. That raises the question:\n\nHow far out is SSL support?\n", "SSL won't work with Jython 2.7b1 (http://bugs.jython.org/issue2016)\nI don't think Requests would support Jython 2.5.3.\n", "Yes, SSL support is broken in Jython, and should not require any changes to requests, once written/fixed.\nJim Baker has a branch for SSL support in Jython, but it's not yet complete: https://bitbucket.org/jimbaker/jython-ssl\nI'm not targeting Jython 2.5 for obvious reasons.\n", "Uhhhh. That makes me uncomfortable.\n\nAre we interested in raising a Warning on import?\n", "Thanks so much for the patch, and the work, but I think this is a step in the wrong direction for us at the moment. I'll continue to think about it, however. Thanks again :)\n", "Hey Kenneth,\nThanks for looking at it. Could you please elaborate a bit more on the \"wrong direction\"? I'd love to get Jython working for more libraries, but without knowing what's wrong with this patch, I can't fix it :(\n\nThanks!\nDarjus\n", "I'm generally not against making a few tweaks here and there for compatibility reasons, but these seems like really odd changes that taint a generally pure code base, and they are being made to support an obscure interpreter that doesn't even support SSL, one of our biggest features.\n\nIt's just not a good fit :)\n", "e.g. at this point, I'm afraid to say that all of these changes should be made on Jython's end :)\n", "FWIW, Jython 2.7 will soon have proper SSL support, both blocking and nonblocking; see https://github.com/jimbaker/socket-reboot, which reworks Jython's implementation of socket/select/ssl by layering on top of Netty 4. Supporting requests, and indirectly pip, is an explicit goal of this work.\n\nSome other work that should support requests without modification:\n- idna.encodings was added a few months ago to 2.7, as of 7137:a9283b590960\n- Large collection literals, to work around cases where a dict, set, tuple, or list was being initialized from a literal, causing the generated Java bytecode to exceed classfile limits on method size (~ 32K bytecodes). IIRC, used in character determination for certain east asian encodings in requests.\n\nThe one issue that may come up is that Jython does not support unicode literals that are not valid for UTF-16 (or for that matter any attempt to construct such strings - but in practice it's seen in literals in large sophisticated libraries that are doing this to efficiently detect invalid usage!). IIRC, this only came up in pip, not requests directly. The particular case this arises is the use of isolated surrogates.\n", "@jimbaker correct me if I am misunderstanding you but basically requests should not need to do anything after Jython 2.7 is released to support it. It sounds like the IDNA changes that were made here will be made unnecessary and that ssl will not be an issue. Also, requests does not use unicode string literals in the library at all, only in the tests. Can you ping us when Jython 2.7 is released so we can start testing with it then?\n", "@sigmavirus24 As of a few minutes ago, 100% of the unit tests for requests master now pass when run on this branch of Jython, which supports SSL and other goodies: https://bitbucket.org/jimbaker/jython-socket-reboot\n\nPlease note that the above branch still needs to be cleaned up (at the very least, remove copious print debugging!), but that's all pretty obvious in the FIXMEs and prints.\n\nI will ping again when we have this merged against Jython trunk, in prep for beta 3 - it would be great for us to have Jython as part of your testing.\n", "Jython 2.7 trunk now supports requests. This will be part of the forthcoming beta 3.\n", ":+1: That's awesome! If you let me know when you release beta 3, I'll tweet and blog and generally make positive noises everywhere I can.\n", "You're doing great work @jimbaker !\n", "Just because we can do something, doesn't mean we should. \n", "@kennethreitz sorry?\n" ]
https://api.github.com/repos/psf/requests/issues/1883
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1883/labels{/name}
https://api.github.com/repos/psf/requests/issues/1883/comments
https://api.github.com/repos/psf/requests/issues/1883/events
https://github.com/psf/requests/pull/1883
26,319,895
MDExOlB1bGxSZXF1ZXN0MTE4ODQ3Mjk=
1,883
Update PyOpenSSL contrib DEFAULT_SSL_CIPHER_LIST
{ "avatar_url": "https://avatars.githubusercontent.com/u/161495?v=4", "events_url": "https://api.github.com/users/reaperhulk/events{/privacy}", "followers_url": "https://api.github.com/users/reaperhulk/followers", "following_url": "https://api.github.com/users/reaperhulk/following{/other_user}", "gists_url": "https://api.github.com/users/reaperhulk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/reaperhulk", "id": 161495, "login": "reaperhulk", "node_id": "MDQ6VXNlcjE2MTQ5NQ==", "organizations_url": "https://api.github.com/users/reaperhulk/orgs", "received_events_url": "https://api.github.com/users/reaperhulk/received_events", "repos_url": "https://api.github.com/users/reaperhulk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/reaperhulk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reaperhulk/subscriptions", "type": "User", "url": "https://api.github.com/users/reaperhulk", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-01-26T19:01:39Z
2021-09-08T22:01:11Z
2014-01-26T19:03:16Z
NONE
resolved
The default cipher suites set in the pyopenssl contrib are now slightly out of date. SSLLabs has updated their recommendations and now suggests not using RC4 (see: [Is BEAST still a threat?](http://blog.ivanristic.com/2013/09/is-beast-still-a-threat.html)). This particular suite has been lifted from the [current master of twisted](https://github.com/twisted/twisted/blob/trunk/twisted/internet/_sslverify.py#L995), but provides a whitelist that prefers forward secrecy for key exchange, AES-GCM for encryption (with fallbacks for CBC and 3DES if AES is unavailable), and disables null auth, etc. This was derived by @hynek, so if there are questions about it we can drag him in to answer.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1883/reactions" }
https://api.github.com/repos/psf/requests/issues/1883/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1883.diff", "html_url": "https://github.com/psf/requests/pull/1883", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1883.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1883" }
true
[ "Thanks for this! This is an excellent addition, but you're providing it in the wrong place. We vendor that file straight in from urllib3. Can I get you to open this Pull Request over there?\n", "Whoops :) Will do.\n", "Thanks so much! :cake:\n" ]
https://api.github.com/repos/psf/requests/issues/1882
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1882/labels{/name}
https://api.github.com/repos/psf/requests/issues/1882/comments
https://api.github.com/repos/psf/requests/issues/1882/events
https://github.com/psf/requests/issues/1882
26,299,355
MDU6SXNzdWUyNjI5OTM1NQ==
1,882
ResourceWarning in python 3.2+
{ "avatar_url": "https://avatars.githubusercontent.com/u/48100?v=4", "events_url": "https://api.github.com/users/bboe/events{/privacy}", "followers_url": "https://api.github.com/users/bboe/followers", "following_url": "https://api.github.com/users/bboe/following{/other_user}", "gists_url": "https://api.github.com/users/bboe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bboe", "id": 48100, "login": "bboe", "node_id": "MDQ6VXNlcjQ4MTAw", "organizations_url": "https://api.github.com/users/bboe/orgs", "received_events_url": "https://api.github.com/users/bboe/received_events", "repos_url": "https://api.github.com/users/bboe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bboe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bboe/subscriptions", "type": "User", "url": "https://api.github.com/users/bboe", "user_view_type": "public" }
[]
closed
true
null
[]
null
15
2014-01-25T19:31:08Z
2021-09-08T04:00:41Z
2014-11-12T17:34:14Z
CONTRIBUTOR
resolved
Requests issues a ResourceWarning in python 3.2+ as sockets are not explicitly closed before garbage collection occurs. While ResourceWarnings are not displayed by default, it can be a distraction to some developers when working with warnings enabled. File: test.py ``` python import requests def make_request(): resp = requests.get('http://google.com') resp.close() # this appears to have no effect, even though the function exists make_request() ``` ``` $ python -Wall test.py test.py:7: ResourceWarning: unclosed <socket.socket object, fd=4, family=2, type=1, proto=6> make_request() test.py:7: ResourceWarning: unclosed <socket.socket object, fd=3, family=2, type=1, proto=6> make_request() ``` It would be great if there was a way to prevent the ResourceWarning from occurring, without issuing a `Connection:close` header.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/psf/requests/issues/1882/reactions" }
https://api.github.com/repos/psf/requests/issues/1882/timeline
null
completed
null
null
false
[ "Thanks for this!\n\nThis is actually a behaviour in urllib3. I'm going to summon @shazow into this conversation, but I believe this is a deliberate design decision. We can often reuse the socket object to reconnect to the same host without the overhead of recreating the object. This makes this deliberate. I wonder if we can silence the warning.\n", "I think the problem might be fixed (in urllib3's end) if when the connection pool is garbage collected, it explicitly closes all of its sockets, rather than using the previously okay (didn't issue warnings in python < 3.2) method of allowing garbage collection to close them.\n", "urllib3 has a [`pool.close()`](https://github.com/shazow/urllib3/blob/master/urllib3/connectionpool.py#L347) which does this. We decided explicit is better than automatically doing it on garbage collection because you may be manually hanging on to connections/requests outside of the pool (e.g. when streaming).\n", "I also ran into this in writing unit tests.\n\nAs a workaround, here are [docs on temporarily suppressing warnings](https://docs.python.org/3/library/warnings.html#temporarily-suppressing-warnings) on a case by case basis.\n\n```\nimport warnings\n\ndef fxn():\n warnings.warn(\"deprecated\", DeprecationWarning)\n\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n fxn()\n```\n", "Thanks @here \n", "after for some wired reason <a href=\"https://docs.python.org/3/library/warnings.html#temporarily-suppressing-warnings\" > suppressing warnings </a> didnt work for me (still dont know why)\ni came across this:\n\n<pre>\n req=requests.get('http://google.com')\n req.connection.close()\n</pre>\n\nand no more \"ResourceWarning\"....\nis it an actual solution? dose it work for you too?\n", "Is this fixed? I'm using \n\n```\nclosing(requests.post(\"something.com\", stream=True)) as r:\n # do stuff\n```\n\nand i'm still getting (while using this within a unittest.TestCase method)\n\n`/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/case.py:574: ResourceWarning: unclosed <socket.socket fd=7, family=AddressFamily.AF_INET, type=SocketType.SOCK_STREAM, proto=6, laddr=('someip', 57353), raddr=('someip', 81)>\n testMethod()`\n", "This is not fixed, because we're not entirely confident it's a bug. =) The warning is annoying but it's not symptomatic of a problem.\n", "@Lukasa: if this is not a bug, then what exactly is it? It seems to be that its complaining that a connection was left open after the unit tests ended, but how can that be if I'm wrapping the requests.post() call with contextlib.closing() like the documentation mentions here? http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow am I doing something incorrect?\n", "Nope, you're just running into the magic of requests. =)\n\nWe use connection pooling under the covers. This means we attempt to recycle socket objects as much as possible, by leaving them connected. This is an efficiency gain, as creating socket objects is expensive and connecting them is even more expensive.\n\nWhen you use `requests.X`, that creates a `Session` and `ConnectionPool` under the covers and then throws them away. This behaviour means the socket object owned by the `ConnectionPool` is cleaned up by the garbage collector, which will close the socket when it does so.\n\nWhat's important is that `response.close()` does not close all open sockets on the underlying `ConnectionPool` because that would be ridiculous, it just returns the socket to the pool. To get the behaviour you want (please close all sockets), you need to use an explicit `Session` object and then clean up with `Session.close()`.\n", "Would it be possible to expose a `requests.close()` method to access the global, shared `Session` hiding in the module? Or perhaps `requests.set_session()`? Some better alternative than plumbing a `Session` through everything and creating a leaky abstraction?\n", "@thejohnfreeman we create a new session every time you make a request with `requests.{get,post,put,delete,etc.}`. See [the relevant code in `requests/api.py`](https://github.com/kennethreitz/requests/blob/master/requests/api.py#L48)\n", "@here suppressing the warning with `warnings.catch_warnings()` is not guaranteed to work because the warning is issued when the connection pool is garbage collected which can happen after the context manager has closed.\n", "@siebenschlaefer That's correct, which is why we recommend you use explicit `Session` objects. =)\n", "Related: #3912 (_ResourceWarning: unclosed socket.socket when I run a unittest?_)" ]
https://api.github.com/repos/psf/requests/issues/1881
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1881/labels{/name}
https://api.github.com/repos/psf/requests/issues/1881/comments
https://api.github.com/repos/psf/requests/issues/1881/events
https://github.com/psf/requests/pull/1881
26,289,205
MDExOlB1bGxSZXF1ZXN0MTE4NzI0MTc=
1,881
s/soley/solely
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2014-01-25T08:21:26Z
2021-09-08T11:00:50Z
2014-01-25T13:38:39Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1881/reactions" }
https://api.github.com/repos/psf/requests/issues/1881/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1881.diff", "html_url": "https://github.com/psf/requests/pull/1881", "merged_at": "2014-01-25T13:38:39Z", "patch_url": "https://github.com/psf/requests/pull/1881.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1881" }
true
[ ":cake: Thank you!\n" ]
https://api.github.com/repos/psf/requests/issues/1880
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1880/labels{/name}
https://api.github.com/repos/psf/requests/issues/1880/comments
https://api.github.com/repos/psf/requests/issues/1880/events
https://github.com/psf/requests/issues/1880
26,288,980
MDU6SXNzdWUyNjI4ODk4MA==
1,880
Document how to test Requests without going over the wire
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[ { "color": "fad8c7", "default": false, "description": null, "id": 136616769, "name": "Documentation", "node_id": "MDU6TGFiZWwxMzY2MTY3Njk=", "url": "https://api.github.com/repos/psf/requests/labels/Documentation" } ]
closed
true
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" } ]
null
9
2014-01-25T08:03:00Z
2023-03-31T00:02:59Z
2022-03-30T17:57:01Z
CONTRIBUTOR
resolved
If I am writing unit tests that exercise HTTP requests, it may be useful to mock or stub the external service call, so that the unit tests don't go over the network and introduce latency or fail on a train or a plane. It would be useful if Requests provided best practice documentation for: - how to intercept the request before it goes over the wire, and return a custom response - how to determine what Requests was actually ready to send over the wire (potentially by mocking urllib3.connectionpool.HTTP(S)Connection.urlopen?) Betamax may be appropriate here but it would also be good to explain how to do this without introducing dependencies.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5271761?v=4", "events_url": "https://api.github.com/users/nateprewitt/events{/privacy}", "followers_url": "https://api.github.com/users/nateprewitt/followers", "following_url": "https://api.github.com/users/nateprewitt/following{/other_user}", "gists_url": "https://api.github.com/users/nateprewitt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateprewitt", "id": 5271761, "login": "nateprewitt", "node_id": "MDQ6VXNlcjUyNzE3NjE=", "organizations_url": "https://api.github.com/users/nateprewitt/orgs", "received_events_url": "https://api.github.com/users/nateprewitt/received_events", "repos_url": "https://api.github.com/users/nateprewitt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateprewitt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateprewitt/subscriptions", "type": "User", "url": "https://api.github.com/users/nateprewitt", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/psf/requests/issues/1880/reactions" }
https://api.github.com/repos/psf/requests/issues/1880/timeline
null
completed
null
null
false
[ "Unless you're on 3.3+ mock is an external dependency so your last sentence is quite wrong. Frankly, if you're performing any parsing of the response and the service is subject to change, e.g. the GitHub API, then mocking is error prone and an awful idea. That aside I still use mocking but not to stub out responses in tests.\n", "Also regarding:\n\n> it may be useful to mock or stub the external service call, so that the unit tests don't go over the network and introduce latency or fail on a train or a plane\n\nYou can run HTTPBin locally and tell the tests to use that with an environment variable.\n\nI agree in principle though that by default it shouldn't be talking to the network though. I just thought I'd point out a separate alternative.\n", "I've written this package to test my network libraries: https://github.com/lorien/test_server\nIt is simple HTTP server based on tornado. It is running in separate python thread. You can configure response (headers, cookies, response body) for any request that your HTTP client sends to the server. Also you have access to details of request that the server received from HTTP client.\nYou can check out examples of tests here:\n- https://github.com/lorien/iob/tree/master/test\n- https://github.com/lorien/grab/tree/v06/test\n", "It took me a bit to figure this out too by looking through a lot of different tools while developing my own HTTP mock service framework (https://github.com/BenjamenMeyer/stackInABox) for testing.\n\nSo I do agree it could be useful to document how to use the Adapters to intercept stuff this way for testing. They are documented (http://docs.python-requests.org/en/latest/api/#requests.adapters.HTTPAdapter) but having a couple complete examples specific to how tests function would probably be a good thing.\n\nFWIW...:moneybag: \n", "i'd like this but not for unit tests.\r\n\r\nin local dev, we don't want our apiclient (using requests) to visit our \"hard to containerise api server\"\r\n", "@airtonix not sure what you mean.\r\n\r\nFor unit tests you don't want it to hit a real service, so mocking or using the HTTPAdapter to incept (ala requests-mock, responses) is a good thing. You can then control the test and the API Contract being tested against.\r\n\r\nFor integration/api testing, then yes you want it to go against some kind of services - at one level probably testing against a controlled local environment (e.g docker) but also against live services (staging, production).\r\n\r\nIn either respect, what you do in the testing - especially unit testing - shouldn't interfere with a developer's ability to run it against live infrastructure (dev, staging, prod environments). They're different use-cases and will be triggered in different manners. Testing needs to be controlled or you lose all value of the test, especially over long term. Dev can be whatever you need.\r\n\r\nSo if you have a \"hard to containerise api server\"...\r\n\r\nYou'll want a tool like StackInABox - or even just using requests-mock, responses, HTTPretty directly - to do your unit testing; but then you'll also have a dev environment running the server for devs to work against.\r\n\r\nHTH, and hope I didn't ramble too much.", "@BenjamenMeyer Nah sometimes things don't have to be dogmatic. It's actually really stiffling and annoying.\r\n\r\nWe're doing this instead now... since it's obviously too hard to mess with requests;\r\n\r\nin prod:\r\n```\r\nfrontend view > mobxStore.getFoos > networkClient.getFoos `GET \"/foos\"` > DjangoApiView `/foos` > Django.ServerToServerClient.getFoos > requests `GET https://remoteserver/foos`\r\n```\r\n\r\nin staging/dev:\r\n```\r\nfrontend view > mobxStore.getFoos > networkClient.getFoos `GET \"/foos\"` > DjangoApiView `/foos` > Django.ServerToServerClient.getFoos > FakeResponse FactoryBoy.generateFoo\r\n```\r\n\r\nWe do this by having a module name in `settings.dev` like : `settings.API_CLIENT_STRATEGY='apiclient.mocks' ` which is a bunch of actions that as you see above returns http like responses with factoryboy or faker.\r\n\r\nIn `settings.prod` its `settings.API_CLIENT_STRATEGY='apiclient.http'` which is a bunch of actions that use `requests` and returns real responses from our 'hard-to-containerise' service.\r\n\r\nAnd in our `app.apiclient` we use importlib to \r\n\r\nBut hey, thanks for trying but you missed the part where I **do not want to do this in a unit test**. \r\n\r\nalso not every reply is about you.", "We have well supported framework for mocking up responses:\r\nhttps://github.com/getsentry/responses\r\n\r\n@nateprewitt \r\nI believe this issue could be closed\r\nor if you think this information should be needed in the docs, then I would be glad to submit a PR", "I think there are multiple options at this point. The original task here was to look at documenting how to do this through the adapters interface. Given there's been no progress on this in ~8 years though, we'll resolve this in favor of the alternatives. Thanks everyone!" ]
https://api.github.com/repos/psf/requests/issues/1879
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1879/labels{/name}
https://api.github.com/repos/psf/requests/issues/1879/comments
https://api.github.com/repos/psf/requests/issues/1879/events
https://github.com/psf/requests/issues/1879
26,274,964
MDU6SXNzdWUyNjI3NDk2NA==
1,879
params no longer work for non-http URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/1754002?v=4", "events_url": "https://api.github.com/users/ibuildthecloud/events{/privacy}", "followers_url": "https://api.github.com/users/ibuildthecloud/followers", "following_url": "https://api.github.com/users/ibuildthecloud/following{/other_user}", "gists_url": "https://api.github.com/users/ibuildthecloud/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ibuildthecloud", "id": 1754002, "login": "ibuildthecloud", "node_id": "MDQ6VXNlcjE3NTQwMDI=", "organizations_url": "https://api.github.com/users/ibuildthecloud/orgs", "received_events_url": "https://api.github.com/users/ibuildthecloud/received_events", "repos_url": "https://api.github.com/users/ibuildthecloud/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ibuildthecloud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ibuildthecloud/subscriptions", "type": "User", "url": "https://api.github.com/users/ibuildthecloud", "user_view_type": "public" }
[]
closed
true
null
[]
null
26
2014-01-24T22:09:40Z
2021-09-09T00:10:15Z
2014-01-27T00:05:23Z
NONE
resolved
Commit b149be5d specifically removed all processing of non-HTTP(s) URLs. Previous to this commit if you were to pass in params, it would be appended to the URL. Since this commit it no longer works. Specifically I ran into this issue when running docker-py against a unix socket. I understand that for non standard URLs you don't want to parse them. It seems that if the params were passed, then you should still append them. I'll gladly put in a pull request for this if it seems like a valid issue. A simple fix would be the below\n\n``` python\n# Don't do any URL preparation for oddball schemes\nif ':' in url and not url.lower().startswith('http'):\n    encoded_params = self._encode_params(params)\n    if len(encoded_params) > 0:\n        if url.find('?') == -1:\n            url = '%s?%s' % (url, encoded_params)\n        else:\n            url = '%s&%s' % (url, encoded_params)\n    self.url = url\n    return\n```
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1879/reactions" }
https://api.github.com/repos/psf/requests/issues/1879/timeline
null
completed
null
null
false
[ "Hmmm.\n\nHmmm.\n\nI'm very much on the fence here. On the one hand, I can see your argument, and it makes plenty of sense. There's definitely utility in being able to pass 'params' to requests and have it do the right thing.\n\nUnfortunately, it violates the distinction we drew when we accepted that change. The nature of that case was to say \"if this URL isn't for HTTP, we have no idea what it means and we shouldn't touch it\". That's because the URL syntax is not very heavily specified. In particular, there's no well-defined syntax for how the query string should look. It is _generally_ written as a set of kvps, like in HTTP, but [RFC 3986 does not go into detail](https://tools.ietf.org/html/rfc3986#section-3.4) about how these should look, instead leaving them to be scheme specific.\n\nWith that in mind, I think I'm -0.5 on this. We just can't be sure that we're doing the right thing with parameters.\n", "@ibuildthecloud can you provide an example of the URL you were requesting with docker-py?\n", "Totally understand the perspective, but since the url and params are both\nspecified by the users, it seems like acceptable behaviour. Basically\nrequests is just saying, \"I have no clue what this funky URL is, but you\ntold me to add params, so I'll do the documented behaviour of adding ?k=v\"\nIf your funky non-standard URL is not compatible with params, then the user\nshouldn't pass them in.\n\nThe problem I specifically have is that between versions 2.0.1 and 2.1.0\nthis behaviour changed. So either requests should respect the old\nbehaviour, or consumers of requests need to change. Specifically I need to\nget docker-py to change. But that change for them doesn't see to pretty\nIMO. docker-py can run against HTTP or a unix socket. So the code is\nrelatively agnostic to which it is using. 
But asking them to change this\nmeans that every time they use params they must do \"if http: params={} else:\nurlencode(...)\"\n\nOn Fri, Jan 24, 2014 at 3:24 PM, Cory Benfield [email protected]:\n\n> Hmmm.\n> \n> Hmmm.\n> \n> I'm very much on the fence here. On the one hand, I can see your argument,\n> and it makes plenty of sense. There's definitely utility in being able to\n> pass 'params' to requests and have it do the right thing.\n> \n> Unfortunately, it violates the distinction we drew when we accepted that\n> change. The nature of that case was to say \"if this URL isn't for HTTP, we\n> have no idea what it means and we shouldn't touch it\". That's because the\n> URL syntax is not very heavily specified. In particular, there's no\n> well-defined syntax for how the query string should look. It is\n> _generally_ written as a set of kvps, like in HTTP, but RFC 3986 does not\n> go into detail https://tools.ietf.org/html/rfc3986#section-3.4 about\n> how these should look, instead leaving them to be scheme specific.\n> \n> With that in mind, I think I'm -0.5 on this. We just can't be sure that\n> we're doing the right thing with parameters.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1879#issuecomment-33267892\n> .\n", "The URL looks like 'unix://var/run/docker.sock/v1.6/containers/json' and the code is passing in params so that the url becomes 'unix://var/run/docker.sock/v1.6/containers/json?all=1&limit=-1&trunc_cmd=0'\n", "I absolutely see that problem, but I feel very uncomfortable with the behaviour of assuming a certain logic for params.\n\nWhat are docker-py doing to route over the unix domain socket? At the very least they'll have to be mounting a Transport adapter.\n", "> What are docker-py doing to route over the unix domain socket? At the very \n> least they'll have to be mounting a Transport adapter.\n\nI was thinking the same thing, but the Transport adapter doesn't handle \nparams. 
That can only be handled by a prepared request object. I'm not sure \nthey could pass on the params they need to tack on with the adapter in this \ncase. Unless you had another idea how to go about that @Lukasa, I think that \nmay be a dead end. I agree though that the parameter handling should only be \nfor HTTP(S) URLs since those are the only ones we are really positioned or \nlikely to support.\n\ndocker-py could use URI templates to bypass having to use the `params` \nparameter to requests, but that would likely introduce a new dependency which \nthey might not want.\n", "What I was asking, actually, was whether they had a `TransportAdapter` subclass. If they do, it's not the end of the world to subclass/monkeypatch the `PreparedRequest` object.\n", "This is getting a bit past my knowledge of the internals of requests, but they seem to have extended requests.adapters.HTTPAdapter and created a UNIX socket one. When you create a docker Client object, that extends requests.Session and calls mount() to register the UNIXAdapter. You can look at the specific details at https://github.com/dotcloud/docker-py/blob/master/docker/unixconn/unixconn.py\n\nSo maybe this is a good thing, I wasn't aware that docker-py was doing specific logic to make unix sockets work. Perhaps there's a different way to go about this that docker-py can do. I'll have to look further into docker-py or pull in one of the maintainers. Let me dig a bit into the internals and see what they are doing because obviously they are ignoring the appended query string when opening the actual socket, but I think the current issue is that the \"GET /url\" also doesn't have the query string. So I'm assuming there needs to be some way to get the params passed to requests into the GET line.\n", "So, the issue is that as @sigmavirus24 mentioned, Requests handles parsing the URL about two layers higher than the Transport Adapter (in the PreparedRequest object). 
Certainly docker-py _can_ work around this by playing about with that object (either monkeypatching it or using the [explicit PreparedRequest flow](http://docs.python-requests.org/en/latest/user/advanced/#prepared-requests)), but the real meat of this issue is whether they should have to.\n\nI remain as I was before, at -0.5. It feels like we'd be doing the wrong thing. I am, however, open to being convinced on this issue.\n", "I took a look to see how simple that would be for them to implement but their code seems quite full of misdirection. That said, the way they're currently doing everything, I find it hard to believe they'll be up for using the explicit flow @Lukasa linked to.\n", "So, million-dollar question: is it our job to un-break docker-py, or should we apologise for not spotting that this could happen but assert that we shouldn't change our current behaviour?\n\nMy worry is that `unix:` as the URI scheme actually doesn't seem to specify _anything_ about how to encode the URI: at least, I can't find a doc. This isn't unexpected, since there's no requirement that unix domain sockets use HTTP as the application-layer protocol, they can do whatever the hell they want. This is a rare situation where the application-layer isn't specified in the name. For instance, LDAP appears to define an `ldapi:` scheme that is for use over unix domain sockets, and has defined semantics. Sadly, HTTP does not.\n\nWith that in mind, we can do a number of things.\n1. Assume that whenever the user points Requests to a unix domain socket they want to run HTTP over it, and behave accordingly. That's not totally unreasonable, but it runs the risk of breaking third-party libraries for other application layer protocols that may _also_ want to use bare unix sockets through Requests. This is the reason we stopped parsing the URLs in the first place (see #1717).\n2. 
Allow users the possibility of shooting themselves in the foot by saying that we will always prepare `params` as though they're HTTP-like, even when we don't know what the hell you're doing (i.e. non `https?` URI).\n3. Don't change our current behaviour: apologise, but say that guessing is bad.\n\nOn balance, my order of preference is 3 -> 1 -> 2. I think 2 is dangerous, violates the Zen of Python and undoes the genuinely good change that was in #1717. I think that 1 is OK: it's not unreasonable to do that, but I'm _convinced_ we'll break another third-party library. That might not be the end of the world, but it's not ideal either.\n\n3 continues to be the argument that convinces me most. We made a decision that we should do a bit less guessing about what the user wants from us in situations we don't understand. I think that was a good decision, and I think we should stick with it. I can certainly be convinced towards 1, but right now I'm inclined towards doing 3.\n", "I'm in favor of 3. I just looked at docker-py to see if I could give @ibuildthecloud a hand in preparing a PR. That said, 1 is plenty dangerous in my opinion. Just because someone is using requests to talk to something does not mean we can assume it is not translating a PreparedRequest into some other format. I'm not convinced that it is reasonable (or not totally unreasonable) to assume that they're using `HTTP/1.1`.\n", "How about a fourth option. I totally agree that requests should treat non-HTTP URLs as opaque strings and allowing somebody to shoot themselves in the foot by appending ?k=v might not be desirable either. My hang up though is that if you use a non-HTTP URL the params argument to requests becomes useless and ignored. It's nice that you can plug in a TransportAdapter for a non-standard transport but the side effect now is that if your TransportAdapter works off of non-HTTP URLs the params argument can't be used anymore. 
Instead of effectively dropping the params argument, what if it was saved and passed onto the TransportAdapter to do with it what it wants. So the change I'm thinking is basically to add the below\n\n``` python\n    def prepare_url(self, url, params):\n        \"\"\"Prepares the given HTTP URL.\"\"\"\n\n        enc_params = self._encode_params(params)\n\n        # Save params for non-HTTP URL based TransportAdapters\n        self.params = params\n        self.encoded_params = enc_params\n```\n\nSo basically just save both the url and the params in the PreparedRequest. Now I already tried to modify docker-py to use this approach but then realized the logic to encode the params is in an internal method in PreparedRequest. So in order to not duplicate the logic (and not call an internal method), I thought prepare_url could save the encoded params. Or move PreparedRequest._encode_params to be some non-internal util method.\n", "@ibuildthecloud the short answer is no.\n\nThe long answer is this: Transport Adapters _only_ handle the _transmission_ of requests and responses. Prepared Requests are the representation of a request ready to be sent. Sessions encapsulate the logic (and only the logic) of a user-agent session (lightly speaking) including handling of cookies, and determining which transport adapter to use to perform an interaction.\n\nThe pattern is this: Each object only knows what it has to know about. Parameters are related to requests only, transport adapters should care not what parameters are sent. They should concern themselves only with processing a request and generating a response.\n\nPerhaps the best solution for docker-py is to monkeypatch the `PreparedRequest` object's `prepare_url` method to not check the scheme of the URL. With that in mind, you will be duplicating some code unfortunately.\n", "Hmmm...\n\nWhose responsibility is it to generate and write the \"GET /x?k=b\"\nline in the HTTP request? 
Isn't that the transport that does that?\n\nOn Sun, Jan 26, 2014 at 2:42 PM, Ian Cordasco [email protected]:\n\n> @ibuildthecloud https://github.com/ibuildthecloud the short answer is\n> no.\n> \n> The long answer is this: Transport Adapters _only_ handle the\n> _transmission_ of requests and responses. Prepared Requests are the\n> representation of a request ready to be sent. Sessions encapsulate the\n> logic (and only the logic) of a user-agent session (lightly speaking)\n> including handling of cookies, and determining which transport adapter to\n> use to perform an interaction.\n> \n> The pattern, is this: Each object only knows what it has to know about.\n> Parameters are related to requests only, transport adapters should care not\n> what parameters are sent. They should concern themselves only with\n> processing a request and generating a response.\n> \n> Perhaps the best solution for docker-py is to monkeypatch the\n> PreparedRequest object's prepare_url method to not check the scheme of\n> the URL. With that in mind, you will be duplicating some code unfortunately.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1879#issuecomment-33331429\n> .\n", "Yes, but that job is trivial. It's _not_ the transport's job to work out what the URL should be, just to put it in between the word GET and the word HTTP/1.1. It's the job of higher layers to work out what the URL should be.\n", "But for non-HTTP URLs you are already saying that requests isn't responsible for understanding the URL, so who is then? Can't there be a hook for this? Again, it doesn't seem right that params is just ignored and dropped. 
That means the caller to requests is not fully abstracted away from the TransportAdapter because in a non-HTTP URL based approach it changes the nature of the params argument.\n", "Additionally, it's not trivial to generate \"GET /x?k=b\". In order to do that you must parse the URL, so it does require specific knowledge of the URL format to create the GET line.\n", "Right. The transport adapter expects to only be talking to HTTP services so it is fair to make assumptions about the URL being passed to it. One of those assumptions is that the URL can be parsed to find the path. No such assumptions can be made by the HTTP adapter about the unix protocol. You would also note that we are not generating that line in requests or in the adapter but instead that takes place in urllib3.\n", "You've misunderstood my argument, I think. =)\n\nRequests isn't responsible for understanding URLs on non-HTTP schemes because Requests is an HTTP library. The `HTTPAdapter` is not the only part of Requests that understands HTTP: the whole Requests stack does. However, people want the freedom to use the Requests API on non-HTTP schemes, which Requests allows (I've written one for FTP, for example).\n\nAllow me to break out Requests design. There are three layers: Request/PreparedRequest/Response, Session, TransportAdapter. Each of these knows about HTTP in a different way. \n\nThe first thinks in terms of individual HTTP messages. It builds up the structure for these messages, but in a way that can be mutated. Headers are a dictionary, URLs are components, etc. The PreparedRequest is an intermediate step that does some parsing of data as HTTP, but not much. These layers understand HTTP _messages_.\n\nThe Session layer thinks in terms of HTTP as state: in essence, it acts a bit like a user agent (though it isn't). It knows how to maintain cookies and when to apply them, and it provides room for persistent attributes. 
It understands the persistence layer of HTTP.\n\nThe TransportAdapter layer thinks about HTTP in terms of connections and connection pools. It knows how HTTP messages are structured on the wire, and how to handle chunked data (which is in effect an unusual way of representing HTTP bodies on the wire). This layer also handles TLS and connection maintenance. This layer understands how to serialize HTTP messages into their three basic components: request header, header block, body.\n\nIn this model, building URLs from their component structures is a job for the top layer. It knows how HTTP URLs work and provides convenience attributes for working with them. The Session object doesn't touch them at all, and all the TransportAdapter does is work out whether or not it needs to include the full URL in the request URI or whether it can omit everything before the path. Doing that is _trivial_: look for the first '/' character and split on that. This is not the same as parsing a URL.\n\nFor non-HTTP URLs the _user_ of Requests is responsible for understanding how to build the URL, because it's the only party involved that knows how.\n", "Okay, let me digest this a bit. Let me point out that what docker-py is doing is HTTP. The only difference is that instead of sending the request over a TCP socket, it is sending it over a UNIX socket. So we're not talking about a non-HTTP service. The problem though is that the URL is overloaded in HTTP. It is used as both a means of describing the transport (IP host and TCP port or SSL) and the HTTP path to be put in the request line. So for docker-py a unix scheme URL is used because we are not looking for IP host and TCP port, but instead the file system path to the unix socket.\n\nWould it make sense to have docker-py use the url format http+unix://socketpath/path?k=v to indicate that this is an HTTP URL but has a custom transport. 
(I have no clue if the http+unix scheme will work with urllib and things, but I've seen that format used in other things)\n", "That scheme may work but the Transport Adapter would likely need to strip off the `http+`.\n\nAlso you're in luck because it seems urlparse will handle it well:\n\n``` pycon\n>>> import requests\n>>> requests.compat\n<module 'requests.compat' from '/usr/lib64/python2.7/site-packages/requests/compat.pyc'>\n>>> requests.compat.urlparse\n<function urlparse at 0x7f3d60b9ec80>\n>>> requests.compat.urlparse('http+unix://host/path')\nParseResult(scheme='http+unix', netloc='host', path='/path', params='', query='', fragment='')\n```\n", "The UnixAdapter in docker-py already messes with the url to determine the correct path_url and socket path. I actually quickly modified docker-py and http+unix:// seemed to work. I'll go down the path of putting in a PR for docker-py to use http+unix. So hopefully that will be accepted and I'll stop bothering you.\n\nI still think it's a gap that for non-http scheme URLs you just toss params in the trash. There should be a nice hook to provide custom logic to deal with params. But I'm pretty satisfied that http+unix seems to work. \n", "> There should be a nice hook to provide custom logic to deal with params.\n\nI think it's fairly obvious that @Lukasa and I disagree with this sentiment. I'm not sure we can say it enough times but requests is an HTTP library. While docker-py is using it to perform HTTP over a unix socket, not every client using it on a Unix socket will be doing HTTP necessarily. 
There is no reasoning beyond some extraordinary uses for us to enable a hook or any other simpler means when there are already documented ways around this.\n\nFrankly, it is far from impossible to accomplish but just because we make it possible does not mean we should encourage it by making it simple.\n\nI'm glad you've found one way around it and are working on a pull request to docker-py.\n\nCheers!\n", "Thanks for your help and quick replies. I should have clarified from the\nbeginning that I was talking about HTTP over a custom transport. It never\neven occurred to me that requests could be used for non-HTTP.\n\nOn Sun, Jan 26, 2014 at 5:05 PM, Ian Cordasco [email protected]:\n\n> There should be a nice hook to provide custom logic to deal with params.\n> \n> I think it's fairly obvious that @Lukasa https://github.com/Lukasa and\n> I disagree with this sentiment. I'm not sure we can say it enough times but\n> requests is an HTTP library. While docker-py is using it to perform HTTP\n> over a unix socket, not every client using it on a Unix socket will be\n> doing HTTP necessarily. There is no reasoning beyond some extraordinary\n> uses for us to enable a hook or any other simpler means when there are\n> already documented ways around this.\n> \n> Frankly, it is far from impossible to accomplish but just because we make\n> it possible does not mean we should encourage it by making it simple.\n> \n> I'm glad you've found one way around it and are working on a pull request\n> to docker-py.\n> \n> Cheers!\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1879#issuecomment-33335146\n> .\n", "@ibuildthecloud we love to help when possible. \n\n> It never even occurred to me that requests could be used for non-HTTP.\n\nAs @Lukasa mentioned, he wrote an FTP adapter. Users can implement whatever backend they want with Transport Adapters. They do not have to use urllib3. 
They can also process a Prepared Request however they like. We cannot facilitate all of their needs even when they're performing HTTP over a different transport.\n" ]
https://api.github.com/repos/psf/requests/issues/1878
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1878/labels{/name}
https://api.github.com/repos/psf/requests/issues/1878/comments
https://api.github.com/repos/psf/requests/issues/1878/events
https://github.com/psf/requests/issues/1878
26,238,927
MDU6SXNzdWUyNjIzODkyNw==
1,878
Used method of including license breaks distutils bdist
{ "avatar_url": "https://avatars.githubusercontent.com/u/2114993?v=4", "events_url": "https://api.github.com/users/verwilst/events{/privacy}", "followers_url": "https://api.github.com/users/verwilst/followers", "following_url": "https://api.github.com/users/verwilst/following{/other_user}", "gists_url": "https://api.github.com/users/verwilst/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/verwilst", "id": 2114993, "login": "verwilst", "node_id": "MDQ6VXNlcjIxMTQ5OTM=", "organizations_url": "https://api.github.com/users/verwilst/orgs", "received_events_url": "https://api.github.com/users/verwilst/received_events", "repos_url": "https://api.github.com/users/verwilst/repos", "site_admin": false, "starred_url": "https://api.github.com/users/verwilst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/verwilst/subscriptions", "type": "User", "url": "https://api.github.com/users/verwilst", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-01-24T12:56:31Z
2021-09-09T00:10:10Z
2014-02-03T11:03:09Z
NONE
resolved
Hi, Because the LICENSE file is included in its total form in setup.py, "python setup.py bdist --format=rpm" for example bails out with the following errors:\n\n```\ncopying dist/requests-2.2.1.tar.gz -> build/bdist.linux-x86_64/rpm/SOURCES\nbuilding RPMs\nerror: line 14: Unknown tag: Licensed under the Apache License, Version 2.0 (the "License");\nerror: query of specfile build/bdist.linux-x86_64/rpm/SPECS/requests.spec failed, can't parse\nerror: Failed to execute: "rpm -q --qf '%{name}-%{version}-%{release}.src.rpm %{arch}/%{name}-%{version}-%{release}.%{arch}.rpm\n' --specfile 'build/bdist.linux-x86_64/rpm/SPECS/requests.spec'"\n```\n\nThe fix is to 'hardcode' the license to a small string, and not the whole text. Fix for same problem in other project: https://github.com/nedap/pypuppetdb/commit/4ae1338f39a220bc96f8c5ee33bc3100428bebda
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1878/reactions" }
https://api.github.com/repos/psf/requests/issues/1878/timeline
null
completed
null
null
false
[ "Thanks for this! We'll fix it up. =)\n", "Resolved by #1893.\n" ]
https://api.github.com/repos/psf/requests/issues/1877
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1877/labels{/name}
https://api.github.com/repos/psf/requests/issues/1877/comments
https://api.github.com/repos/psf/requests/issues/1877/events
https://github.com/psf/requests/issues/1877
26,208,046
MDU6SXNzdWUyNjIwODA0Ng==
1,877
Add a .multipart() method to Response for auto-decoding multipart/* responses
{ "avatar_url": "https://avatars.githubusercontent.com/u/55704?v=4", "events_url": "https://api.github.com/users/mythguided/events{/privacy}", "followers_url": "https://api.github.com/users/mythguided/followers", "following_url": "https://api.github.com/users/mythguided/following{/other_user}", "gists_url": "https://api.github.com/users/mythguided/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mythguided", "id": 55704, "login": "mythguided", "node_id": "MDQ6VXNlcjU1NzA0", "organizations_url": "https://api.github.com/users/mythguided/orgs", "received_events_url": "https://api.github.com/users/mythguided/received_events", "repos_url": "https://api.github.com/users/mythguided/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mythguided/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mythguided/subscriptions", "type": "User", "url": "https://api.github.com/users/mythguided", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-01-23T23:30:44Z
2021-09-09T00:10:16Z
2014-01-24T06:52:54Z
NONE
resolved
It would be useful if `Response` objects provided a `.multipart()` method analogous to the `.json()` method for dealing with responses whose `Content-Type` is `multipart/*`, such as is common for geospatial web services.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1877/reactions" }
https://api.github.com/repos/psf/requests/issues/1877/timeline
null
completed
null
null
false
[ "Considering you're the first person in over 2 years to request this, it seems as though this is not something that is popular enough to belong in the core of requests. A feature like this would need to be requested a whole lot more for us to even consider adding this method.\n\nOn the other hand though, you could totally get this into the [requests-toolbelt](https://github.com/sigmavirus24/requests-toolbelt) which is designed for exactly this kind of thing. This is something that would be awesome to have and if you can put together a PR to handle the parsing, that would be awesome! I'd love to merge it over there provided you also add tests around it.\n\nI'll leave this open to make sure that @Lukasa has a chance to weigh in, but I have a hunch he'll have the same opinion as me.\n", "Fair enough.\n", "@sigmavirus24 is exactly right.\n\nThe `Response.json()` method has caused us a lot of trouble. In almost every situation we reject features that involve manipulating content on the grounds that requests is an HTTP library, not an \"anything-else\" library, and every time we do that `Response.json()` gets pointed out. The fact is, we only implemented that because it's an overwhelmingly common use-case, which parsing multipart data is not.\n\nThat said, I encourage you to to whack it in the toolbelt. That would be an excellent place for it.\n\nThanks for raising this issue!\n" ]
https://api.github.com/repos/psf/requests/issues/1876
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1876/labels{/name}
https://api.github.com/repos/psf/requests/issues/1876/comments
https://api.github.com/repos/psf/requests/issues/1876/events
https://github.com/psf/requests/pull/1876
26,115,734
MDExOlB1bGxSZXF1ZXN0MTE3NzE1Mjk=
1,876
Update urllib3 to 9346c5c
{ "avatar_url": "https://avatars.githubusercontent.com/u/145979?v=4", "events_url": "https://api.github.com/users/dstufft/events{/privacy}", "followers_url": "https://api.github.com/users/dstufft/followers", "following_url": "https://api.github.com/users/dstufft/following{/other_user}", "gists_url": "https://api.github.com/users/dstufft/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dstufft", "id": 145979, "login": "dstufft", "node_id": "MDQ6VXNlcjE0NTk3OQ==", "organizations_url": "https://api.github.com/users/dstufft/orgs", "received_events_url": "https://api.github.com/users/dstufft/received_events", "repos_url": "https://api.github.com/users/dstufft/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dstufft/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dstufft/subscriptions", "type": "User", "url": "https://api.github.com/users/dstufft", "user_view_type": "public" }
[ { "color": "009800", "default": false, "description": null, "id": 44501218, "name": "Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTIxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge" }, { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
2
2014-01-22T19:28:54Z
2021-09-08T23:08:28Z
2014-01-23T18:22:21Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1876/reactions" }
https://api.github.com/repos/psf/requests/issues/1876/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1876.diff", "html_url": "https://github.com/psf/requests/pull/1876", "merged_at": "2014-01-23T18:22:21Z", "patch_url": "https://github.com/psf/requests/pull/1876.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1876" }
true
[ "Awesome, let's do this.\n", "@kennethreitz :cake: :+1: :shipit: \n" ]
https://api.github.com/repos/psf/requests/issues/1875
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1875/labels{/name}
https://api.github.com/repos/psf/requests/issues/1875/comments
https://api.github.com/repos/psf/requests/issues/1875/events
https://github.com/psf/requests/issues/1875
26,092,793
MDU6SXNzdWUyNjA5Mjc5Mw==
1,875
Provide 2.2.1 release.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
7
2014-01-22T14:23:52Z
2021-09-08T23:10:48Z
2014-01-23T18:22:35Z
MEMBER
resolved
Needed for `pip`, because they need [this](https://github.com/shazow/urllib3/pull/318) to fix [this](https://github.com/pypa/pip/issues/1488). Mandatory: - [x] Bring in a version of urllib3 that contains the above fix ~~(revision not yet known)~~. This is as simple as merging #1876 - [x] Update changelog using [this](https://github.com/kennethreitz/requests/compare/v2.2.0...master). (done in 2a6b835a5e08c7365b8a4d3bfea4dcd98ad2d205).
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1875/reactions" }
https://api.github.com/repos/psf/requests/issues/1875/timeline
null
completed
null
null
false
[ ":heart: our release manager @Lukasa \n", "Awesome, thanks. Y'all are awesome.\n", "https://github.com/kennethreitz/requests/pull/1876 verified that it fixes the pip issue as well.\n", "Edited the issue to make it clear that the first TODO could be accomplished by merging #1876.\n", "Unofficial official \n", ":cake:\n", "Great job! Thanks everyone :)\n" ]
https://api.github.com/repos/psf/requests/issues/1874
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1874/labels{/name}
https://api.github.com/repos/psf/requests/issues/1874/comments
https://api.github.com/repos/psf/requests/issues/1874/events
https://github.com/psf/requests/pull/1874
26,021,269
MDExOlB1bGxSZXF1ZXN0MTE3MjQ0ODA=
1,874
Typo in History
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2014-01-21T18:18:22Z
2021-09-08T22:01:10Z
2014-01-21T18:24:37Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1874/reactions" }
https://api.github.com/repos/psf/requests/issues/1874/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1874.diff", "html_url": "https://github.com/psf/requests/pull/1874", "merged_at": "2014-01-21T18:24:37Z", "patch_url": "https://github.com/psf/requests/pull/1874.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1874" }
true
[ "Wait, credentions isn't a word?\n\nThanks! :cake:\n" ]
https://api.github.com/repos/psf/requests/issues/1873
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1873/labels{/name}
https://api.github.com/repos/psf/requests/issues/1873/comments
https://api.github.com/repos/psf/requests/issues/1873/events
https://github.com/psf/requests/pull/1873
26,020,979
MDExOlB1bGxSZXF1ZXN0MTE3MjQzMTY=
1,873
Disable TCP_NODELAY for requests to non-proxies
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[ { "color": "009800", "default": false, "description": null, "id": 44501218, "name": "Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTIxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge" }, { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
4
2014-01-21T18:14:14Z
2021-09-08T22:01:10Z
2014-01-22T19:31:01Z
CONTRIBUTOR
resolved
Adds a short discussion of the change to HISTORY.rst
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1873/reactions" }
https://api.github.com/repos/psf/requests/issues/1873/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1873.diff", "html_url": "https://github.com/psf/requests/pull/1873", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1873.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1873" }
true
[ "I have no objection to bringing this in. =) LGTM. :+1:\n", "This is actually going to be superseded by #1876, and we don't really need the substantial header comment, so I'm closing this.\n\nThanks for your work on urllib3 though! :cake:\n", "I suppose, though anyone doing a tcpdump or ngrep on a requests request might be surprised at the difference in packets going over the wire. Maybe this is not a common case though.\n", "I think realistically the difference is irrelevant to most people, and advantageous to a small number. =) I'm not concerned.\n" ]
https://api.github.com/repos/psf/requests/issues/1872
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1872/labels{/name}
https://api.github.com/repos/psf/requests/issues/1872/comments
https://api.github.com/repos/psf/requests/issues/1872/events
https://github.com/psf/requests/pull/1872
25,959,662
MDExOlB1bGxSZXF1ZXN0MTE2OTc0NTg=
1,872
Manage urllib3 and chardet with git submodule
{ "avatar_url": "https://avatars.githubusercontent.com/u/1488134?v=4", "events_url": "https://api.github.com/users/douglarek/events{/privacy}", "followers_url": "https://api.github.com/users/douglarek/followers", "following_url": "https://api.github.com/users/douglarek/following{/other_user}", "gists_url": "https://api.github.com/users/douglarek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/douglarek", "id": 1488134, "login": "douglarek", "node_id": "MDQ6VXNlcjE0ODgxMzQ=", "organizations_url": "https://api.github.com/users/douglarek/orgs", "received_events_url": "https://api.github.com/users/douglarek/received_events", "repos_url": "https://api.github.com/users/douglarek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/douglarek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/douglarek/subscriptions", "type": "User", "url": "https://api.github.com/users/douglarek", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-01-21T01:54:24Z
2021-09-08T22:01:10Z
2014-01-21T02:33:18Z
NONE
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/1488134?v=4", "events_url": "https://api.github.com/users/douglarek/events{/privacy}", "followers_url": "https://api.github.com/users/douglarek/followers", "following_url": "https://api.github.com/users/douglarek/following{/other_user}", "gists_url": "https://api.github.com/users/douglarek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/douglarek", "id": 1488134, "login": "douglarek", "node_id": "MDQ6VXNlcjE0ODgxMzQ=", "organizations_url": "https://api.github.com/users/douglarek/orgs", "received_events_url": "https://api.github.com/users/douglarek/received_events", "repos_url": "https://api.github.com/users/douglarek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/douglarek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/douglarek/subscriptions", "type": "User", "url": "https://api.github.com/users/douglarek", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1872/reactions" }
https://api.github.com/repos/psf/requests/issues/1872/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1872.diff", "html_url": "https://github.com/psf/requests/pull/1872", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1872.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1872" }
true
[ "Ok, ignore it; git can not checkout directory\n", "Thanks for trying at least. For what it is worth, we probably would not have accepted the change but we appreciate your effort! :cake: \n" ]
https://api.github.com/repos/psf/requests/issues/1871
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1871/labels{/name}
https://api.github.com/repos/psf/requests/issues/1871/comments
https://api.github.com/repos/psf/requests/issues/1871/events
https://github.com/psf/requests/issues/1871
25,912,107
MDU6SXNzdWUyNTkxMjEwNw==
1,871
Our use of urllib3's ConnectionPools is not threadsafe.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[ { "color": "e10c02", "default": false, "description": null, "id": 117744, "name": "Bug", "node_id": "MDU6TGFiZWwxMTc3NDQ=", "url": "https://api.github.com/repos/psf/requests/labels/Bug" }, { "color": "e102d8", "default": false, "description": null, "id": 117745, "name": "Planned", "node_id": "MDU6TGFiZWwxMTc3NDU=", "url": "https://api.github.com/repos/psf/requests/labels/Planned" } ]
closed
true
null
[]
{ "closed_at": null, "closed_issues": 29, "created_at": "2013-11-17T11:29:34Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }, "description": null, "due_on": null, "html_url": "https://github.com/psf/requests/milestone/20", "id": 487518, "labels_url": "https://api.github.com/repos/psf/requests/milestones/20/labels", "node_id": "MDk6TWlsZXN0b25lNDg3NTE4", "number": 20, "open_issues": 12, "state": "open", "title": "3.0.0", "updated_at": "2024-05-19T18:43:00Z", "url": "https://api.github.com/repos/psf/requests/milestones/20" }
27
2014-01-20T11:09:18Z
2021-08-30T00:06:12Z
2018-02-24T14:55:18Z
MEMBER
resolved
We use a `PoolManager` to provide us with connection pools from urllib3. However, we don't actually run our requests through it: instead we get a suitable `ConnectionPool` and then use `urlopen` on that instead. This _can_ lead to bugs when using threads and hitting a large number of hosts from one `Session` object. Each host creates a new `ConnectionPool` in the `PoolManager`: if too many are created, the least recently used one is evicted. If that eviction happens in between us getting hold of the `ConnectionPool` and us actually trying to send a request, we can try to send on a closed connection pool. That's pretty lame. You can see the discussion on IRC that led to this issue [here](https://botbot.me/freenode/python-requests/msg/9980059/). We can either try to make that section thread safe, or try to reconnect when we hit closed pool errors. Thoughts?
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/psf/requests/issues/1871/reactions" }
https://api.github.com/repos/psf/requests/issues/1871/timeline
null
completed
null
null
false
[ "Note that we may not want to do anything.\n\nMy general advice has always been \"one session per thread\". If that advice remains true we don't have to do anything.\n", "Another option is to use a `threading.local` internally for some critical state, that way you effectively, automaticaly and trivially have one `Session` per tread.\n", "Oh, this probably explains why I got Pool closed errors in pip before I ripped the threading out.\n", "> Oh, this probably explains why I got Pool closed errors in pip before I ripped the threading out.\n\nThis makes me wonder if we should be doing something.\n", "When I was trying to integrate requests into pip before I ripped threading out, I was able to get the Pool closed errors almost constantly. Pip's threading was a little jacked up is it created a new pool for every dependency and used it to do parallel discovery of the links from /simple/ and external sites.I just assumed we were doing something wrong and didn't feel like debugging it. Add in the changes we had been making to where, for a lot of packages, there would be no external sites I just ended up ripping out the threading code instead.\n\nBut I figured I'd comment so y'all know I did run into that problem, and we did use one global session.\n", "Yeah, I'm not sure we should be encouraging using one session object across threads, but I can see how sharing cookies across threads would be useful to a user.\n", "To be honest I didn't really decide to do that explicitly. Just it was the most obvious thing to do with the pip code base so it's what I did. It's possible that's not a reasonable thing to do too ;)\n", "Realistically, fixing up the threadsafety of our `Session` object is substantial work. Realistically, one `Session` per thread is probably the best way to go at this stage.\n\nThe architectural cost of fixing up the thread safety of a `Session` is probably quite significant, at least if we want to do it well. At the very least, we need to lock on `cookie` access. Not sure how much more work would be required.\n", "Perhaps that should be documented? Or if it is I missed it :)\n", "Is thread safety only an issue when one session connects to multiple hosts? I'm curious if multi-threaded applications would be safe if they use a policy of one Session object per host.\n", "One session per host may not be safe if it's shared across threads, because I don't believe the stdlib cookie jar is thread safe. \n", "@Lukasa If the issue is limited to the stdlib cookie jar, then using a requests.Session across multiple threads should be fine for APIs (without cookies).\n\nIs this correct?\n", "@pior It's my belief that that should be safe.\n", "Thanks!\n", "I think using a Session is only safe in the very specific single host case, and only because there will be only one connection in the pool. As soon as the pool grows too large, you run into race conditions where a connection might be dropped before it is used.\n", "@pepijndevos That should not happen. The connection pool is thread-safe, and when a connection is removed from the pool it is owned entirely by the object that withdrew it. As a result, the connection should not be dropped before use. It is _possible_ that there is a TCP FIN packet in flight when the connection is being handled before use, and as a result the connection is torn down: in that situation, a simple retry is a good idea (and something that should be being done anyway).\n", "According to [this StackOverflow answer](https://stackoverflow.com/a/20457621/5705174), it seems that it is thread-safe in certain cookielib implementations. Is it true? @Lukasa ", "I would not depend on any undocumented thread-safety. If cookielib is thread-safe and it says so in the docs, then yes, we should be mostly fine. ", "The main problem with thread safety (other than perhaps the cookiejar) seems to be the PoolManager's use of RecentlyUsedContainer which ignores whether or not the pool is in use when deciding to throw away pools. When it does dispose of a pool it aggressively calls pool.close(). If, instead, the PoolManager didn't call close itself and the pool implemented __del__ to close itself, wouldn't this theoretically solve the problem? I realize that relying on the GC means we can't make any assertions about the number of TCP connections that might still be open, but this seems like the simplest potential solution to this particular problem.", "More discussion of thread safety on https://github.com/reversefold/urllib3/commit/f10336da3340cbd56257d89217e6dcb44930f734\r\n\r\nThe issue with the connection pool manager appears to be one in urllib3, not in requests proper.", "It doesn't appear there's any actual work for us to do here. Closing this for now.", "I'm sorry if this is the wrong place for the question, but this appears to be the only official place that such a thing is discussed that I can find. \r\n\r\nIs requests.Session thread-safe or not?\r\n\r\nThe [frontpage](http://docs.python-requests.org/en/v2.1.0/#feature-support) says that **requests** is thread safe, but does not mention anything about Session specifically.\r\n\r\nI understand that urllib3 connection pool is thread safe. Is Session?", "> I'm sorry if this is the wrong place for the question, but this appears to be the only official place that such a thing is discussed that I can find.\r\n> \r\n> Is requests.Session thread-safe or not?\r\n> \r\n> The [frontpage](http://docs.python-requests.org/en/v2.1.0/#feature-support) says that **requests** is thread safe, but does not mention anything about Session specifically.\r\n> \r\n> I understand that urllib3 connection pool is thread safe. Is Session?\r\n\r\nI can confirm that requests.Session() is not thread safe.\r\nI am running one session per thread (on 3-4 threads) and it is still corrupting data. I think you are limited to about one session per Python instance.\r\n\r\nGranted I was running 11 sessions for one host, but I ran all the sessions through a proxy, and only ran one session per thread, so it should have been thread safe.\r\n\r\nBe extremely careful when using threading with requests.Session().\r\n\r\nr = requests.get() is threadsafe, but, r = requests.Sesssion().get() is not threadsafe.", "@Arbi717 the only difference between those two is that you're not closing the session properly in the latter https://github.com/requests/requests/blob/master/requests/api.py#L59", "Forget everything I said. I had a global variable (a dictionary) that should have been local. Requests is amazing.", "Note you can use triple-backticks to surround code segments in Markdown.", "I think there are other race conditions when I do multithread calls with Python requests I get a ClosedPoolError and had to patch to return a new connection\r\n\r\nhttps://github.com/urllib3/urllib3/issues/951#issuecomment-685450123\r\n\r\nany clue on the potential impacts of doing this ?" ]
https://api.github.com/repos/psf/requests/issues/1870
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1870/labels{/name}
https://api.github.com/repos/psf/requests/issues/1870/comments
https://api.github.com/repos/psf/requests/issues/1870/events
https://github.com/psf/requests/pull/1870
25,874,498
MDExOlB1bGxSZXF1ZXN0MTE2NTM5NzE=
1,870
Add a small note about requests-toolbelt
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-01-19T04:48:47Z
2021-09-09T00:01:14Z
2014-01-23T18:54:08Z
CONTRIBUTOR
resolved
We can add more info to discuss the inclusion of @lukasa's `SSLAdapter`. Are there any objections to adding this information to requests' documentation?
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1870/reactions" }
https://api.github.com/repos/psf/requests/issues/1870/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1870.diff", "html_url": "https://github.com/psf/requests/pull/1870", "merged_at": "2014-01-23T18:54:08Z", "patch_url": "https://github.com/psf/requests/pull/1870.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1870" }
true
[ "None from me.\n", ":cake:\n", "We may want to have a better \"community projects\" showcase in the future, but this looks great for now. \n" ]
https://api.github.com/repos/psf/requests/issues/1869
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1869/labels{/name}
https://api.github.com/repos/psf/requests/issues/1869/comments
https://api.github.com/repos/psf/requests/issues/1869/events
https://github.com/psf/requests/issues/1869
25,819,295
MDU6SXNzdWUyNTgxOTI5NQ==
1,869
Why don't we send `strict` to urllib3?
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[ { "color": "fbca04", "default": false, "description": null, "id": 44501249, "name": "Needs BDFL Input", "node_id": "MDU6TGFiZWw0NDUwMTI0OQ==", "url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input" }, { "color": "eb6420", "default": false, "description": null, "id": 44501256, "name": "Breaking API Change", "node_id": "MDU6TGFiZWw0NDUwMTI1Ng==", "url": "https://api.github.com/repos/psf/requests/labels/Breaking%20API%20Change" }, { "color": "5319e7", "default": false, "description": null, "id": 67760318, "name": "Fixed", "node_id": "MDU6TGFiZWw2Nzc2MDMxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Fixed" } ]
closed
true
null
[]
{ "closed_at": null, "closed_issues": 29, "created_at": "2013-11-17T11:29:34Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }, "description": null, "due_on": null, "html_url": "https://github.com/psf/requests/milestone/20", "id": 487518, "labels_url": "https://api.github.com/repos/psf/requests/milestones/20/labels", "node_id": "MDk6TWlsZXN0b25lNDg3NTE4", "number": 20, "open_issues": 12, "state": "open", "title": "3.0.0", "updated_at": "2024-05-19T18:43:00Z", "url": "https://api.github.com/repos/psf/requests/milestones/20" }
12
2014-01-17T16:37:25Z
2021-09-08T23:06:10Z
2015-01-18T20:36:17Z
MEMBER
resolved
I was trawling the Python core bugtracker when I spotted [this](http://bugs.python.org/issue17849) beauty. It looks like it should be impossible to hit this if `strict` is set to `True`, but we don't pass that parameter to `urllib3`. Is there a good design reason we don't do this? If there isn't, are we open to a PR that does set that argument?
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1869/reactions" }
https://api.github.com/repos/psf/requests/issues/1869/timeline
null
completed
null
null
false
[ "If it doesn't cause issues, I don't see why not.\n", "`strict` was deprecated in Python3. With that the `LineAndFileWrapper` was removed because HTTP/1.x is assumed. The fact that the `LineAndFileWrapper` is created means that the proxy must be returning some malformed status line or is using HTTP/0.9. Or at least that sounds correct to me.\n\nI might be a bit thick but what are people using to easily setup a proxy? I'd like to take a look at the raw response.\n", "@jschneier I'm not so worried about that particular issue: we've never hit it in Requests. I'm just wondering whether there's a good technical reason we shouldn't just limit ourselves to well-formed HTTP/1.1.\n", "I think printing an annoying deprecation warning on python3 is the only drawback if you only want to deal with HTTP/1.x\n", "Deprecation warnings don't print by default on 3.x I don't think. So it's not even that bad.\n", "Let's do it unless @kennethreitz has an object to restricting the library to HTTP/1.1\n", "I don't see how this is useful.\n", "I don't see how this is unuseful.\n", "You baffle me\n", "I suppose I'm now +0 (while before I was -0) :)\n", "Doing this would solve #2322.\n", "Alrighty, patch in #2323.\n" ]
https://api.github.com/repos/psf/requests/issues/1868
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1868/labels{/name}
https://api.github.com/repos/psf/requests/issues/1868/comments
https://api.github.com/repos/psf/requests/issues/1868/events
https://github.com/psf/requests/issues/1868
25,768,601
MDU6SXNzdWUyNTc2ODYwMQ==
1,868
Cookie Not Set on 303
{ "avatar_url": "https://avatars.githubusercontent.com/u/1467590?v=4", "events_url": "https://api.github.com/users/creese/events{/privacy}", "followers_url": "https://api.github.com/users/creese/followers", "following_url": "https://api.github.com/users/creese/following{/other_user}", "gists_url": "https://api.github.com/users/creese/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/creese", "id": 1467590, "login": "creese", "node_id": "MDQ6VXNlcjE0Njc1OTA=", "organizations_url": "https://api.github.com/users/creese/orgs", "received_events_url": "https://api.github.com/users/creese/received_events", "repos_url": "https://api.github.com/users/creese/repos", "site_admin": false, "starred_url": "https://api.github.com/users/creese/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/creese/subscriptions", "type": "User", "url": "https://api.github.com/users/creese", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2014-01-16T21:54:24Z
2021-09-09T00:10:16Z
2014-01-18T22:31:26Z
NONE
resolved
I'm making a request to a server that sets an httponly cookie and returns a 303. When I look in `response.headers`, I see the cookie. When I look in `session.cookies`, I don't see the cookie. Why isn't the cookie getting set?
{ "avatar_url": "https://avatars.githubusercontent.com/u/1467590?v=4", "events_url": "https://api.github.com/users/creese/events{/privacy}", "followers_url": "https://api.github.com/users/creese/followers", "following_url": "https://api.github.com/users/creese/following{/other_user}", "gists_url": "https://api.github.com/users/creese/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/creese", "id": 1467590, "login": "creese", "node_id": "MDQ6VXNlcjE0Njc1OTA=", "organizations_url": "https://api.github.com/users/creese/orgs", "received_events_url": "https://api.github.com/users/creese/received_events", "repos_url": "https://api.github.com/users/creese/repos", "site_admin": false, "starred_url": "https://api.github.com/users/creese/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/creese/subscriptions", "type": "User", "url": "https://api.github.com/users/creese", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1868/reactions" }
https://api.github.com/repos/psf/requests/issues/1868/timeline
null
completed
null
null
false
[ "Thanks for raising this issue!\n\nCan you be a bit more specific? There are a number of things that could affect this behaviour. We need to know what version of Requests you're using and whether you're setting `allow_redirects` to False. If you aren't, can you also provide the output of `for h in r.history: print(h.headers)`?\n", "@creese is it possible that you could share the URL with us as well so we can see if we can reproduce the behaviour? It would also be very helpful if you could answer @Lukasa's questions. Thanks for helping us help you!\n", "@sigmavirus24 There was an issue with the form data in the initial request. It's resolved.\n", "Thanks for updating us @creese \n" ]
https://api.github.com/repos/psf/requests/issues/1867
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1867/labels{/name}
https://api.github.com/repos/psf/requests/issues/1867/comments
https://api.github.com/repos/psf/requests/issues/1867/events
https://github.com/psf/requests/pull/1867
25,710,070
MDExOlB1bGxSZXF1ZXN0MTE1NjkzODk=
1,867
Document contextlib.closing.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2014-01-16T08:39:35Z
2021-09-09T00:01:24Z
2014-01-16T22:16:05Z
MEMBER
resolved
From #1493. Provides a way to use the `Response` object as a context manager without adding the conceptual overhead to the `Response` class. I'm still on the fence about whether we should document this or just make `Response`s context managers, but this seems the least controversial option so lets do this first. Review: @sigmavirus24, @pepijndevos.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1867/reactions" }
https://api.github.com/repos/psf/requests/issues/1867/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1867.diff", "html_url": "https://github.com/psf/requests/pull/1867", "merged_at": "2014-01-16T22:16:05Z", "patch_url": "https://github.com/psf/requests/pull/1867.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1867" }
true
[ "I dig it. \n" ]
https://api.github.com/repos/psf/requests/issues/1866
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1866/labels{/name}
https://api.github.com/repos/psf/requests/issues/1866/comments
https://api.github.com/repos/psf/requests/issues/1866/events
https://github.com/psf/requests/issues/1866
25,662,134
MDU6SXNzdWUyNTY2MjEzNA==
1,866
Digest auth for proxy
{ "avatar_url": "https://avatars.githubusercontent.com/u/38861?v=4", "events_url": "https://api.github.com/users/oinopion/events{/privacy}", "followers_url": "https://api.github.com/users/oinopion/followers", "following_url": "https://api.github.com/users/oinopion/following{/other_user}", "gists_url": "https://api.github.com/users/oinopion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/oinopion", "id": 38861, "login": "oinopion", "node_id": "MDQ6VXNlcjM4ODYx", "organizations_url": "https://api.github.com/users/oinopion/orgs", "received_events_url": "https://api.github.com/users/oinopion/received_events", "repos_url": "https://api.github.com/users/oinopion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/oinopion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oinopion/subscriptions", "type": "User", "url": "https://api.github.com/users/oinopion", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2014-01-15T16:57:55Z
2021-09-09T00:10:14Z
2014-01-29T19:24:15Z
NONE
resolved
It would be nice to have digest auth support for proxies. I've hit same problem as described in this SO post: http://stackoverflow.com/questions/13506455/how-to-pass-proxy-authentication-requires-digest-auth-by-using-python-requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1866/reactions" }
https://api.github.com/repos/psf/requests/issues/1866/timeline
null
completed
null
null
false
[ "Hi @oinopion, thanks for raising this issue!\n\nIt's an understandable thing to want. However, putting this in Requests by default is a trade-off: each additional authentication option added to Requests is an additional bit of code and maintenance cost. It also adds confusion to the API.\n\nFor this reason, Requests undertook to remove some of its authentication helpers in V1.0, as you can see in the [changelog](https://github.com/kennethreitz/requests/blob/1720e4bb87f5db29e9d9f42da1b0cbb1f52a7171/HISTORY.rst#100-2012-12-17). These got moved to helper modules under the requests organisation ([OAuth](https://github.com/requests/requests-oauthlib), [Kerberos](https://github.com/requests/requests-kerberos) and [NTLM](https://github.com/requests/requests-ntlm)), indicating their status as 'blessed' first-party solutions, while reducing the weight and complexity of the main library.\n\nCandidates for the main library need two things: they need to be simple, and they need to be popular. Digest auth for proxies is simple, but it's not popular. I'd argue that it's dramatically less popular than OAuth, and probably less popular than Kerberos. For this reason, I'm disinclined to add support to the main Requests library. Interestingly, because it's so simple, I'm also disinclined to add a whole additional package for it under the requests organisation. I think the best place for it is actually the nascent [requests toolbelt](https://github.com/sigmavirus24/requests-toolbelt), which is the intended home for useful utilities like this kind of thing.\n\nDoes that sound like an acceptable approach?\n", "I have a feeling that it should be either included in the core or both proxy and non-proxy digest auth's should be a separate package. Having one here and there is, in my opinion, not... \"symmetrical\", if you will. This however is just me. I will applaud any solution that saves me from digging into digest spec.\n\nIn any case, the most important thing is discoverability: wherever it ends up, there should be mention of it in in core library documentation. \n", "Yeah I'm thinking of sending a PR about the toolbelt.\n", "We've agreed that core requests isn't taking any action here, so I'll close this down.\n" ]
https://api.github.com/repos/psf/requests/issues/1865
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1865/labels{/name}
https://api.github.com/repos/psf/requests/issues/1865/comments
https://api.github.com/repos/psf/requests/issues/1865/events
https://github.com/psf/requests/issues/1865
25,648,329
MDU6SXNzdWUyNTY0ODMyOQ==
1,865
Support Certificate Revocation Lists where possible
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[ { "color": "02e10c", "default": false, "description": null, "id": 76800, "name": "Feature Request", "node_id": "MDU6TGFiZWw3NjgwMA==", "url": "https://api.github.com/repos/psf/requests/labels/Feature%20Request" }, { "color": "e102d8", "default": false, "description": null, "id": 117745, "name": "Planned", "node_id": "MDU6TGFiZWwxMTc3NDU=", "url": "https://api.github.com/repos/psf/requests/labels/Planned" } ]
closed
true
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" } ]
null
14
2014-01-15T13:51:50Z
2021-09-09T00:10:16Z
2014-01-23T18:57:09Z
MEMBER
resolved
In 3.4 the standard library's previously woeful `ssl` module is being vastly improved, and one of the nice things it is getting is [support for Certificate Revocation Lists](http://docs.python.org/3.4/library/ssl.html#ssl.SSLContext.load_verify_locations). I'd like us to investigate supporting CRLs in versions of Python where they are available. If we can do this, it'll continue the ongoing trend of Requests being the single most secure way to do HTTP in Python. It's highly probable that this will need to be a two-pronged approach: urllib3 will probably want to support some genericised CRL options, with Requests pasting over the genericness with a built-in "correct" behaviour. For that reason, I want to start a discussion with the relevant people to work out how we can do this. Requests people: @kennethreitz, @sigmavirus24 urllib3 people: @shazow Our SSL guy: @t-8ch. People who might be interested in either helping or providing insight and code review (to make sure we don't get this wrong) are @tiran (heavily involved with the standard library's `ssl` module) and @alex (currently on a big crypto kick). If either of you two are actually not interested that's totally fine. =) Initial views: how should we split this work up, and how safe can Requests' defaults be?
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1865/reactions" }
https://api.github.com/repos/psf/requests/issues/1865/timeline
null
completed
null
null
false
[ "Also, which version(s) of python are we targetting? Just 3.4? Perhaps the better question to ask is: \"Which version(s) of Python can we reasonably target?\" Is 2.6 an option?\n", "With the standard library we're limited to 3.4.\n", "With the caveat that I know nothing of how the CRLs have been (or should be) implemented, is there no possible way to backport them via a package like certifi?\n", "I don't _think_ so. I'm pretty sure that we need to pass the [`VERIFY_CRL_CHECK_CHAIN`](http://docs.python.org/3.4/library/ssl.html#ssl.VERIFY_CRL_CHECK_CHAIN) to the [`SSLContext.verify_flags`](http://docs.python.org/3.4/library/ssl.html#ssl.SSLContext.verify_flags). That option was only added in 3.4, so unless we think we can work around that I don't think OpenSSL will check the CRLs.\n", "Correct! You need to set that option.\n\nI'm currently traveling and will reply in a couple of days.\n\nCory Benfield [email protected] schrieb:\n\n> I don't _think_ so. I'm pretty sure that we need to pass the\n> [`VERIFY_CRL_CHECK_CHAIN`](http://docs.python.org/3.4/library/ssl.html#ssl.VERIFY_CRL_CHECK_CHAIN)\n> to the\n> [`SSLContext.verify_flags`](http://docs.python.org/3.4/library/ssl.html#ssl.SSLContext.verify_flags).\n> That option was only added in 3.4, so unless we think we can work\n> around that I don't think OpenSSL will check the CRLs.\n> \n> ---\n> \n> Reply to this email directly or view it on GitHub:\n> https://github.com/kennethreitz/requests/issues/1865#issuecomment-32693638\n\nSent from my Android phone with K-9 Mail.\n", "No, although you can load CRLs with standard methods, the SSL module has no way to enable CRL checks prior to 3.4.\n\nIan Cordasco [email protected] schrieb:\n\n> With the caveat that I know nothing of how the CRLs have been (or\n> should be) implemented, is there no possible way to backport them via a\n> package like certifi?\n> \n> ---\n> \n> Reply to this email directly or view it on GitHub:\n> https://github.com/kennethreitz/requests/issues/1865#issuecomment-32693484\n\nSent from my Android phone with K-9 Mail.\n", "You might be able to backport it via pyopenssl, much like how SNI and TLS Compression being disabled was done in urllib3.\n", "I recommend that you don't invest too much time into CRL. I implemented it for Python 3.4 because it was a low hanging fruit. I only had to add the distribution points from X509v3 and expose one simple getter/setter.\n\nNowadays OCSP is used in favor of CRL because OCSP is much more lightweight. A TLS server can acquire a temporary OCSP token from its CA and send this OCSP info to its client. It's cheap and fast. OCSP is harder to handle on the client side, thought. :/\n\nDonald Stufft [email protected] schrieb:\n\n> You might be able to backport it via pyopenssl, much like how SNI and\n> TLS Compression being disabled was done in urllib3.\n> \n> ---\n> \n> Reply to this email directly or view it on GitHub:\n> https://github.com/kennethreitz/requests/issues/1865#issuecomment-33153172\n\nSent from my Android phone with K-9 Mail.\n", "Mm, though OCSP requires us to trust the server. ;)\n", "CRLs kinda suck in general. \n- They require access to the CRL lists, most implementations will \"soft\" fail, so if you can't reach the CRL lists it just won't validate against the CRL, making it trivially easy for a MITM attacker to block access to a CRL verification in a lot of cases.\n- I think there have been problems with the CRL hosts not being on the greatest connections and going down/being slow but I can't find a source for this.\n", "All agreed, but do they suck to a degree that outweighs their usefulness? =D\n", "Well Firefox deprecated them in favor of OCSP in some version of Firefox (28?), I don't think Google Chrome has had CRL checking on by default in a long time if ever, No idea about Safari or IE.\n", "In that case, this is totally not worth it. =D\n\nThanks all!\n", "No, the OCSP response is signed by the CA not the TLS server. The server just retrieves and caches the OCSP response for a short time and forwards the token as part of the handshake. The client needs to verify the time stamp and CA signature. The client is free to ignore the token and retrieve a new one from the CA's OCSP server.\n\nCory Benfield [email protected] schrieb:\n\n> Mm, though OCSP requires us to trust the server. ;)\n> \n> ---\n> \n> Reply to this email directly or view it on GitHub:\n> https://github.com/kennethreitz/requests/issues/1865#issuecomment-33155170\n\nSent from my Android phone with K-9 Mail.\n" ]
https://api.github.com/repos/psf/requests/issues/1864
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1864/labels{/name}
https://api.github.com/repos/psf/requests/issues/1864/comments
https://api.github.com/repos/psf/requests/issues/1864/events
https://github.com/psf/requests/issues/1864
25,624,842
MDU6SXNzdWUyNTYyNDg0Mg==
1,864
HTTPDigestAuth Does not send Authorization headers
{ "avatar_url": "https://avatars.githubusercontent.com/u/6405968?v=4", "events_url": "https://api.github.com/users/luflores/events{/privacy}", "followers_url": "https://api.github.com/users/luflores/followers", "following_url": "https://api.github.com/users/luflores/following{/other_user}", "gists_url": "https://api.github.com/users/luflores/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luflores", "id": 6405968, "login": "luflores", "node_id": "MDQ6VXNlcjY0MDU5Njg=", "organizations_url": "https://api.github.com/users/luflores/orgs", "received_events_url": "https://api.github.com/users/luflores/received_events", "repos_url": "https://api.github.com/users/luflores/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luflores/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luflores/subscriptions", "type": "User", "url": "https://api.github.com/users/luflores", "user_view_type": "public" }
[]
closed
true
null
[]
null
10
2014-01-15T03:54:34Z
2021-09-09T00:28:13Z
2014-01-15T14:30:47Z
NONE
resolved
Hi I am working on a simple program to get a token-id from a router using REST API. The problem that I am facing, is that I do not see the Authorization headers when I use HTTPDigestAuth. When I use the Google App POSTMAN, I can see the headers and it work. What I am missing in my code? My code: ``` import requests from requests.auth import HTTPBasicAuth, HTTPDigestAuth user = 'pod1u1' passwd = 'pass' url = 'https://10.0.236.188/api/v1/auth/token-services' auth = HTTPDigestAuth(user, passwd) r = requests.post(url, auth=auth, verify=False) print 'Request headers:', r.request.headers print 'Status Code: ', r.status_code print 'response Headers: ', r.headers print '######################################' auth = HTTPBasicAuth(user, passwd) r = requests.post(url, auth=auth, verify=False) print 'Request headers:', r.request.headers print 'Status Code: ', r.status_code print 'response Headers: ', r.headers ``` Shell commands w/ output: My script -- ``` $python digest.py Request headers: CaseInsensitiveDict({'Content-Length': '0', 'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.0 CPython/2.7.5 Darwin/13.0.0'}) Status Code: 401 response Headers: CaseInsensitiveDict({'date': 'Tue, 14 Jan 2014 00:28:27 GMT', 'content-length': '83', 'content-type': 'application/json', 'connection': 'keep-alive', 'server': 'nginx/1.4.2'}) ###################################### Request headers: CaseInsensitiveDict({'Accept': '*/*', 'Content-Length': '0', 'Accept- Encoding': 'gzip, deflate, compress', 'Authorization': u'Basic cG9kMXUxOkMxc2NvTDF2Mw==', 'User-Agent': 'python-requests/2.2.0 CPython/2.7.5 Darwin/13.0.0'}) Status Code: 401 response Headers: CaseInsensitiveDict({'date': 'Tue, 14 Jan 2014 00:28:27 GMT', 'content-length': '448', 'content-type': 'text/html', 'connection': 'keep-alive', 'server': 'nginx/1.4.2'}) POSTMAN POST /api/v1/auth/token-services HTTP/1.1 Host: 10.0.236.188 Authorization: Digest username="pod1u1", realm="[email protected]", nonce="", uri="/api/v1/auth/token-services", response="08ac88b7f5e0533986e9fc974f132258", opaque="" Cache-Control: no-cache { "kind": "object#auth-token", "expiry-time": "Tue Jan 14 00:09:27 2014", "token-id": "Vj7mYUMTrsuljaiXEPoNJNiXLzf8UeDsRnEgh3DvQcU=", "link": "https://10.0.236.188/api/v1/auth/token-services/9552418862" } ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1864/reactions" }
https://api.github.com/repos/psf/requests/issues/1864/timeline
null
completed
null
null
false
[ "Since we can't examine the service you're running, can you provide similar logging information like so:\n\n``` python\nimport requests\nfrom requests.auth import HTTPDigestAuth\n\nuser = 'pod1u1'\npasswd = 'pass'\n\ndef log_info(response):\n print 'Request headers: ', r.request.headers\n print 'Status Code: ', r.status_code\n print 'Response Headers: ', r.headers\n\nurl = 'https://10.0.236.188/api/v1/auth/token-services'\nauth = HTTPDigestAuth(user, passwd)\nr = requests.post(url, auth=auth, verify=False)\nlog_info(r)\n\nprint 'History:\nfor response in r.history:\n log_info(response)\n```\n\nThe server could be doing some bizarre things that we have no knowledge of without being able to examine it.\n", "I ran the following and this is what I got:\n\nimport requests\nfrom requests.auth import HTTPDigestAuth\n\nuser = 'pod1u1'\npasswd = 'passwd'\n\ndef log_info(response):\n print 'Request headers: ', r.request.headers\n print 'Status Code: ', r.status_code\n print 'Response Headers: ', r.headers\n print 'Response Text: ', r.text\n\nurl = 'https://10.0.236.188/api/v1/auth/token-services'\nauth = HTTPDigestAuth(user, passwd)\nr = requests.post(url, auth=auth, verify=False)\nlog_info(r)\n\nprint 'History:'\nfor response in r.history:\n log_info(response)\n\n/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python /Volumes/luflores-data/Users/luflores/Copy/Cisco/CiscoLive/Scripts/RESTAPI/test.py\nRequest headers: CaseInsensitiveDict({'Content-Length': '0', 'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '_/_', 'User-Agent': 'python-requests/2.2.0 CPython/2.7.5 Darwin/13.0.0'})\nStatus Code: 401\nResponse Headers: CaseInsensitiveDict({'date': 'Wed, 15 Jan 2014 14:10:53 GMT', 'content-length': '83', 'content-type': 'application/json', 'connection': 'keep-alive', 'server': 'nginx/1.4.2'})\nResponse Text: {\"error-code\": -1, \"error-message\": \"No username or password found\", \"detail\": \" \"}\nHistory:\n\nProcess finished with exit code 0\n\n-luis\n", "That doesn't look like a service that requires Digest Auth. If Digest Auth is required, the 401 should contain a header like this: `WWW-Authenticate: Digest qop=\"auth\"`. This does not. Instead, you're being returned a JSON body that contains an error message.\n\nYou need to reread the documentation for the webservice you're accessing to see how you're expected to get a token.\n", "Why it is not sending the Authorization headers to begin with and why it works when I use Google Postman ?\n\n-luis\n\nOn Jan 15, 2014, at 9:31 AM, Cory Benfield [email protected] wrote:\n\n> That doesn't look like a service that requires Digest Auth. If Digest Auth is required, the 401 should contain a header like this: WWW-Authenticate: Digest qop=\"auth\". This does not. Instead, you're being returned a JSON body that contains an error message.\n> \n> You need to reread the documentation for the webservice you're accessing to see how you're expected to get a token.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "Digest Auth should not send headers on the initial message, because the server needs to inform you how to generate the digest. I invite you to open up the [section of code](https://github.com/kennethreitz/requests/blob/master/requests/auth.py#L69-L150) that generates the digest. We require the realm, nonce and qop from the server before we can correctly generate the header.\n\nAs to why it works when you use Postman, I advise you either capture the HTTP message Postman generates (using a tool like Wireshark or tcpdump), or you demonstrate to us how you're configuring Postman so that we can attempt to reproduce the behaviour on our own machines.\n", "On Jan 15, 2014, at 12:03 PM, Cory Benfield [email protected] wrote:\n\n> Digest Auth should not send headers on the initial message, because the server needs to inform you how to generate the digest. I invite you to open up the section of code that generates the digest. We require the realm, nonce and qop from the server before we can correctly generate the header.\n\nThanks, I did not knew that.\n\n> As to why it works when you use Postman, I advise you either capture the HTTP message Postman generates (using a tool like Wireshark or tcpdump), or you demonstrate to us how you're configuring Postman so that we can attempt to reproduce the behaviour on our own machines.\n\nI could try to do wireshark, but it us https :( . I just got Postman on google chrome and :\n\n```\n1. fill url with : https://10.0.236.188/api/v1/auth/token-services\n2. Click on Digest Auth and fill the following\n Username: myusername\n Real: [email protected]\n Password: mypassword\n Algorithm: MD5\n The rest I left blank\n3. Click Refresh Headers\n4. Click Preview\n\n POST /api/v1/auth/token-services HTTP/1.1\n Host: 10.0.236.188\n Authorization: Digest username=\"myusername\", realm=\"[email protected]\", \n nonce=\"\", uri=\"/api/v1/auth/token-services\", \n response=\"5a309d4b4fdfd18168c84bcecde10971\", opaque=\"\"\n Cache-Control: no-cache\n\n5. Click send\n\n {\n \"kind\": \"object#auth-token\",\n \"expiry-time\": \"Tue Jan 14 23:31:47 2014\",\n \"token-id\": \"//EBXzohJKOVFV/1BIeWBPf59GAkrvHgCZFOtslzB78=\",\n \"link\": \"https://10.0.236.188/api/v1/auth/token-services/8812089588\"\n }\n```\n\n-luis\n\n> —\n> Reply to this email directly or view it on GitHub.\n", "See, that's really interesting: you're providing Postman with all the header information we expect to get from the server on the 401. If you know who's running the server, you need to try to find out why the server isn't responding with a proper 401 challenge.\n", "Thanks Cory, the server is on a Cisco router running REST API.\n\n-luis\n\nOn Jan 15, 2014, at 4:09 PM, Cory Benfield [email protected] wrote:\n\n> See, that's really interesting: you're providing Postman with all the header information we expect to get from the server on the 401. If you know who's running the server, you need to try to find out why the server isn't responding with a proper 401 challenge.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "I know someone who's an expert with Cisco routers. Can you give me some more details, he might be able to help me debug this.\n", "I am in contact with Cisco DEs and will followup with them. I think the problem is that it does not support Digest Auth, just Basic Auth. http://www.cisco.com/en/US/docs/routers/csr1000/software/restapi/RESTAPIclient.html . I will keep this thread updated.\n\n-luis\n\nOn Jan 15, 2014, at 10:08 PM, Ian Cordasco [email protected] wrote:\n\n> I know someone who's an expert with Cisco routers. Can you give me some more details, he might be able to help me debug this.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n" ]
https://api.github.com/repos/psf/requests/issues/1863
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1863/labels{/name}
https://api.github.com/repos/psf/requests/issues/1863/comments
https://api.github.com/repos/psf/requests/issues/1863/events
https://github.com/psf/requests/issues/1863
25,527,437
MDU6SXNzdWUyNTUyNzQzNw==
1,863
Error catch/reraise loses some info about the stack
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2014-01-13T20:55:35Z
2021-09-09T00:28:14Z
2014-01-14T05:31:01Z
CONTRIBUTOR
resolved
On my mac, ``` python import requests r = requests.get("https://www.howsmyssl.com/a/check") ``` generates this stack trace: ``` python Traceback (most recent call last): File "<stdin>", line 1, in <module> File "requests/api.py", line 55, in get return request('get', url, **kwargs) File "requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "requests/sessions.py", line 383, in request resp = self.send(prep, **send_kwargs) File "requests/sessions.py", line 486, in send r = adapter.send(request, **kwargs) File "requests/adapters.py", line 389, in send raise SSLError(e) requests.exceptions.SSLError: [Errno 1] _ssl.c:507: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version ``` which seems to skip a few lines that raised the exception in urllib3. Perhaps we could use `six.reraise()` here for python2/python3 compatibility?
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1863/reactions" }
https://api.github.com/repos/psf/requests/issues/1863/timeline
null
completed
null
null
false
[ "Thanks for raising this @kevinburke!\n\nThat area is very carefully designed. We aren't reraising exceptions, we're wrapping them. This means we don't want the urllib3 exception to be raised, we want to raise the `requests.exceptions.SSLError`. In principle we could attach the traceback from the previous exception to this one, but that's unfortunately somewhat misleading. I don't really see any particular problem with this traceback, if I'm honest.\n\nDoes anyone disagree?\n", "I'm in complete agreement with @Lukasa. FYI @kevinburke there has been a recent push to catch the last few exceptions that we didn't realize were spuriously being raised by urllib3 and bleeding through.\n\n+1 to close this.\n", "ok\n" ]
https://api.github.com/repos/psf/requests/issues/1862
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1862/labels{/name}
https://api.github.com/repos/psf/requests/issues/1862/comments
https://api.github.com/repos/psf/requests/issues/1862/events
https://github.com/psf/requests/pull/1862
25,494,344
MDExOlB1bGxSZXF1ZXN0MTE0NDk1MDM=
1,862
Fixed parsing of username and password encoded in the URI
{ "avatar_url": "https://avatars.githubusercontent.com/u/1894878?v=4", "events_url": "https://api.github.com/users/sybeck2k/events{/privacy}", "followers_url": "https://api.github.com/users/sybeck2k/followers", "following_url": "https://api.github.com/users/sybeck2k/following{/other_user}", "gists_url": "https://api.github.com/users/sybeck2k/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sybeck2k", "id": 1894878, "login": "sybeck2k", "node_id": "MDQ6VXNlcjE4OTQ4Nzg=", "organizations_url": "https://api.github.com/users/sybeck2k/orgs", "received_events_url": "https://api.github.com/users/sybeck2k/received_events", "repos_url": "https://api.github.com/users/sybeck2k/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sybeck2k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sybeck2k/subscriptions", "type": "User", "url": "https://api.github.com/users/sybeck2k", "user_view_type": "public" }
[ { "color": "009800", "default": false, "description": null, "id": 44501218, "name": "Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTIxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge" } ]
closed
true
null
[]
null
6
2014-01-13T12:39:59Z
2021-09-08T23:08:28Z
2014-01-14T19:56:04Z
CONTRIBUTOR
resolved
As per #1856, fixes the parsing of URI encoded usernames and password. Also fixes a bogus test where the username in the URI was not correctly encoded.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1862/reactions" }
https://api.github.com/repos/psf/requests/issues/1862/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1862.diff", "html_url": "https://github.com/psf/requests/pull/1862", "merged_at": "2014-01-14T19:56:04Z", "patch_url": "https://github.com/psf/requests/pull/1862.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1862" }
true
[ "Did you not notice #1858 \n", "This looks fine to me. =)\n", "thanks @Lukasa sorry I didn't notice your weekend work..!\n", "@sybeck2k It's really not a problem at all. =) Ego is a dangerous thing in open source so I try to check mine; and besides, you did the very generous thing of merging my changes with yours, so I think you handled this very well. Thankyou. =)\n", "You two rock! :metal: I'll give this another once over when I get into work\n", "Could we just get some comments around the test you added with the percent encoded chars?\n" ]
https://api.github.com/repos/psf/requests/issues/1861
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1861/labels{/name}
https://api.github.com/repos/psf/requests/issues/1861/comments
https://api.github.com/repos/psf/requests/issues/1861/events
https://github.com/psf/requests/pull/1861
25,474,818
MDExOlB1bGxSZXF1ZXN0MTE0Mzk0Njc=
1,861
Guess content type
{ "avatar_url": "https://avatars.githubusercontent.com/u/290496?v=4", "events_url": "https://api.github.com/users/lepture/events{/privacy}", "followers_url": "https://api.github.com/users/lepture/followers", "following_url": "https://api.github.com/users/lepture/following{/other_user}", "gists_url": "https://api.github.com/users/lepture/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lepture", "id": 290496, "login": "lepture", "node_id": "MDQ6VXNlcjI5MDQ5Ng==", "organizations_url": "https://api.github.com/users/lepture/orgs", "received_events_url": "https://api.github.com/users/lepture/received_events", "repos_url": "https://api.github.com/users/lepture/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lepture/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lepture/subscriptions", "type": "User", "url": "https://api.github.com/users/lepture", "user_view_type": "public" }
[ { "color": "02e10c", "default": false, "description": null, "id": 76800, "name": "Feature Request", "node_id": "MDU6TGFiZWw3NjgwMA==", "url": "https://api.github.com/repos/psf/requests/labels/Feature%20Request" }, { "color": "fbca04", "default": false, "description": null, "id": 44501249, "name": "Needs BDFL Input", "node_id": "MDU6TGFiZWw0NDUwMTI0OQ==", "url": "https://api.github.com/repos/psf/requests/labels/Needs%20BDFL%20Input" }, { "color": "eb6420", "default": false, "description": null, "id": 44501256, "name": "Breaking API Change", "node_id": "MDU6TGFiZWw0NDUwMTI1Ng==", "url": "https://api.github.com/repos/psf/requests/labels/Breaking%20API%20Change" }, { "color": "e11d21", "default": false, "description": null, "id": 44501305, "name": "Not Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTMwNQ==", "url": "https://api.github.com/repos/psf/requests/labels/Not%20Ready%20To%20Merge" } ]
closed
true
null
[]
null
27
2014-01-13T01:59:16Z
2021-09-08T22:01:09Z
2014-01-16T12:53:31Z
NONE
resolved
Guess content type for posting files, just like what urllib3 does.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1861/reactions" }
https://api.github.com/repos/psf/requests/issues/1861/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1861.diff", "html_url": "https://github.com/psf/requests/pull/1861", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1861.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1861" }
true
[ "Thanks for this!\n\nIn principle this looks fine. My only worry is that technically this changes what our multipart requests look like on the wire, and I worry about breaking currently functional code. I'm interested to see if @sigmavirus24 is as worried about that as me.\n", "@Lukasa \n\n> My only worry is that technically this changes what our multipart requests look like on the wire\n\nI don't think it changed anything. A proper guess of file content type shouldn't break anything (I hope).\n\nping @sigmavirus24 for a code review.\n", "I mean, in the literal sense it definitely changed something.\n\nThis:\n\n```\nContent-Disposition: form-data; name=\"file\"; filename=\"name\"\n```\n\n is not equal to this:\n\n```\nContent-Disposition: form-data; name=\"file\"; filename=\"name\"\nContent-Type: text/plain\n```\n\nYou're right, this _shouldn't_ break things, but this is the web we're talking about, it definitely has a chance to. =) That's why I want @sigmavirus24 to take a look.\n", "@Lukasa Yes, you are definitely right. But it has a chance to do such thing:\n\n```\n{'file': (filename, data, None)}\n```\n\nI think we are here to discuss which is the best default way to handle file uploading.\n", "I'm totally in agreement, and you'll note that right now I'm sitting on the fence until another of the core developers joins the discussion. =) However, in the interest of openness, I should note that my default position would be to _not_ change things unless necessary. I'm prepared to be swayed, but right now I'm -0.\n", "@lepture are you suggesting that in order to achieve the old behaviour (that probably most of our users rely on) they'll have to pass a tuple of `(filename, data, None)`? To me that is entirely backwards incompatible behaviour and would indicate this change would need to wait until 3.0.0.\n\nI share @Lukasa's concern that while this is technically valid and should not break anything that it might. I'm trying to imagine a file type for which `mimetools` could return the wrong `Content-Type` header. All I can imagine is something entirely proprietary that isn't necessary publicly available and that a corporate client is posting to a service they own. It's entirely plausible for them to not use a `Content-Type` header in that case because their server will know how to handle it. In that case, since they control both ends, I can't imagine this would necessarily break anything but it would certainly cause them a headache.\n\nThat aside, my larger sticking point is that this is a backwards incompatible change. To achieve the same behaviour as before, users have to pass extra parameters that they didn't before. That's not good.\n", "@sigmavirus24 I am not so sure now. I think it would be better for a smart guessing of content type. Because when I first use this lib for posting files, I thought it should handle smarter.\n\nI had a look at the code. And I found that urllib3 handles smart, but requests not. I thought maybe requests just missed it. I didn't expect that you intended to handle it this way.\n\nHowever, if you find this patch is meaningless, just ignore it.\n", "@lepture like I said, it _should not_ break anything and I'm sure it will improve some user's experience with requests. The issue is that it is not backwards compatible because to keep the same behaviour a user has to pass extra parameters. The key part is that I **never** said this pull request was \"meaningless\". I think @Lukasa and I both agree that it is an improvement, but it is an improvement we cannot make until we start considering a 3.0.0 release which seems to me to be very far off.\n", "Mm, 3.0 is a very long way away indeed. I'm trying to work out how we can keep track of this idea without having Kenneth accidentally merge it. Maybe raise an issue to keep track of \"things we'd like in a hypothetical 3.0 release\"?\n", "The pre-3.0 solution of course would be to use the toolbelt, but I still haven't released that yet.\n", "This seems like a lovely feature to me, and when I think of backwards compatibility, I mostly consider API changes (of which this isn't). \n\nI can go either way on this. \n", "@shazow how accurate is this detection?\n", "@kennethreitz As accurate as Python's stdlib mimetype guessing. http://docs.python.org/2/library/mimetypes.html#mimetypes.guess_type\n", "It does get some stuff laughably wrong: for instance, files whose filenames end in `.jpg` or `.png` get detected as `image/pjpeg` and `image/x-png` on Windows.\n", "@Lukasa Time for another Python core patch? :P\n", "[There already is one.](http://bugs.python.org/issue15207). =)\n", "The summary of that issue, for those who don't want to wade through it:\n\n`guess_type` is really stupid on Windows on versions of Python before 2.7.6/3.3.3rc1, and gets a number of quite common types wrong. This situation makes me officially -1 on this patch _until_ we drop support for Python 2.6. Once that happens we can say that modern versions of Python will do the right thing on all platforms.\n", "@Lukasa That's weird. I never knew it would be a disaster on Windows.\n", "@lepture No worries, I'd have been amazed if you did know. I only happen to know because I bumped into this totally by accident [when working on urllib3](https://github.com/shazow/urllib3/issues/256#issuecomment-26619884).\n", "I'm with @Lukasa until we can drop 2.6, let's leave this feature out. A point of inquiry though - hasn't 2.6 seen the last of it's bug/security releases? In other words, I think 2.6 has reached its end of life. We could start planning 3.0 to abandon 2.6 and introduce this feature.\n", "@sigmavirus24 Is python 2.7 the default python on every linux distribution now? If so, I think 2.6 has reached its end of life.\n", "2.6 _has_ stopped receiving security releases, but sadly we can't drop support for it. This is for three reasons. Firstly, the more conservative OSes still ship with it (hello RHEL and CentOS!). Secondly, even if they start shipping with 2.7, they'll still have LTS releases that have 2.6. Finally, now that we're vendored into `pip` we need to do our best to support what `pip` supports to avoid leaving ourselves between a rock and a hard place. Sadly, that includes 2.6.\n", "Actually, that's a point. @dstufft, can you give us an idea of what `pip`s release cycle looks like from the perspective of supporting 2.6?\n", "That isn't the definition of EOL and the answer to that question is no. RHEL still ships with a version prior to 2.7. Others do too but I just woke up\n", "Historically we've dropped support for a language whenever someone suggested we should and the rest of the core team agreed on it. Actually quite recently we've started work on putting all download events on PyPI into a SQL database so it can be queried, here's the results of that:\n\n```\nSELECT substring(python_version for 3), COUNT(*) FROM downloads WHERE package_name = 'pip' GROUP BY substring(python_version for 3) ORDER BY COUNT(*) DESC;\n\n substring | count\n-----------+--------\n 2.7 | 177841\n | 109074\n 2.6 | 73848\n 3.3 | 9736\n 3.2 | 1840\n 2.5 | 921\n 1.1 | 569\n 2.4 | 278\n 3.4 | 81\n 3.1 | 42\n 3.0 | 1\n 2.1 | 1\n(12 rows)\n```\n\nAnd because this is requests!\n\n```\nSELECT substring(python_version for 3), COUNT(*) FROM downloads WHERE package_name = 'requests' GROUP BY substring(python_version for 3) ORDER BY COUNT(*) DESC;\n\n substring | count\n-----------+--------\n 2.7 | 377312\n 2.6 | 53098\n 3.3 | 9057\n | 3798\n 3.2 | 3044\n 1.1 | 314\n 2.5 | 87\n 3.4 | 69\n 2.4 | 62\n 3.1 | 11\n 3.0 | 1\n(11 rows)\n```\n\nThe above is starting on 1/2 and through 1/15. I can give different information if it's wanted FWIW but I doubt pip is going to be willing to drop 2.6 with numbers like that being put up.\n", "Agreed. So the consensus is that we need to support 2.6 for the foreseeable future. This means we'll close this issue rather than have it sit around indefinitely. Thanks for your contributions everyone!\n", "Lol, I never would have let you guys do this. Nice try though :P\n\n## \n\nKenneth Reitz\n\n> On Jan 16, 2014, at 4:53 AM, Cory Benfield [email protected] wrote:\n> \n> Closed #1861.\n> \n> —\n> Reply to this email directly or view it on GitHub.\n" ]
https://api.github.com/repos/psf/requests/issues/1860
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1860/labels{/name}
https://api.github.com/repos/psf/requests/issues/1860/comments
https://api.github.com/repos/psf/requests/issues/1860/events
https://github.com/psf/requests/pull/1860
25,469,172
MDExOlB1bGxSZXF1ZXN0MTE0MzcxMDc=
1,860
Use calendar.timegm when calculating cookie expiration
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
[]
closed
true
null
[]
null
12
2014-01-12T20:28:11Z
2021-09-08T05:00:59Z
2015-12-18T09:18:26Z
CONTRIBUTOR
resolved
Fixes #1859 Credit: @lukasa
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1860/reactions" }
https://api.github.com/repos/psf/requests/issues/1860/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1860.diff", "html_url": "https://github.com/psf/requests/pull/1860", "merged_at": "2015-12-18T09:18:26Z", "patch_url": "https://github.com/psf/requests/pull/1860.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1860" }
true
[ "We need to confirm that this will match the actual parsing logic of cookies before we merge this: we don't want to start messing up cookies created in this way.\n", "That's a good point @Lukasa. I'm guessing @gazpachoking might have a good idea. I'm just too tired to think this hard right now =P\n", "Me too, I'll take a look at this tomorrow.\n", "Any update?\n", "Sorry I never got around to that. @Lukasa how about you?\n", "Uh, no, totally dropped the ball here. I'll put it on my list. =)\n", "@Lukasa how should we make sure this doesn't mess anything up?\n", "I'm not sure, to be honest. I'm wondering if there's a good way we can confirm our cookie-parsing code works in all locales.\n", "Closing due to inactivity.\n", "Ok, I'm reopening this because I've finally validated that the stdlib does this. That suggests that it's the right thing to do. I'll have to do the merge manually, but I'd like to have this.\n\n@sigmavirus24, one question: could we meaningfully add this to a 2.9.1, or should it wait for 2.10.0?\n", "I think this could be included in 2.9.1\n", "\\o/ Finally fixed, nearly two years after the original patch was proposed!\n" ]
https://api.github.com/repos/psf/requests/issues/1859
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1859/labels{/name}
https://api.github.com/repos/psf/requests/issues/1859/comments
https://api.github.com/repos/psf/requests/issues/1859/events
https://github.com/psf/requests/issues/1859
25,459,684
MDU6SXNzdWUyNTQ1OTY4NA==
1,859
Brittle test
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
25
2014-01-12T09:52:49Z
2021-09-08T23:10:55Z
2014-02-03T13:18:46Z
MEMBER
resolved
The test `test_expires_valid_str` fails on my OS X box, in Python 2.7: ``` python ============================= test session starts ============================== platform darwin -- Python 2.7.5 -- pytest-2.3.4 plugins: cov collected 116 items test_requests.py .................................................................................................................F.. =================================== FAILURES =================================== _______________ TestMorselToCookieExpires.test_expires_valid_str _______________ self = <test_requests.TestMorselToCookieExpires testMethod=test_expires_valid_str> def test_expires_valid_str(self): """Test case where we convert expires from string time.""" morsel = Morsel() morsel['expires'] = 'Thu, 01-Jan-1970 00:00:01 GMT' cookie = morsel_to_cookie(morsel) > assert cookie.expires == 1 E AssertionError: assert -3599 == 1 E + where -3599 = Cookie(version=0, name=None, value=None, port=None, port_specified=False, domain='', domain_specified=False, domain_in...False, secure=False, expires=-3599, discard=False, comment='', comment_url=False, rest={'HttpOnly': ''}, rfc2109=False).expires test_requests.py:1111: AssertionError ==================== 1 failed, 115 passed in 23.32 seconds ===================== ``` I've not yet got a good theory for this, though I think it's telling that the error is one hour. I don't know _what_ it's telling though, because time is complicated. Anyway, this test needs to be rewritten to be more accepting of breakage. It's also possible that the intermittent failure of this test represents a problem with the `morsel_to_cookie` function itself, in which case that needs rewriting.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1859/reactions" }
https://api.github.com/repos/psf/requests/issues/1859/timeline
null
completed
null
null
false
[ "Is that test case fails anytime in your OS X box?\n", "Currently it fails consistently on my OS X box. I'll try my other systems as well to see if it's OS X-specific or something more general.\n", "It passes for me on OSX right now, and I'll try my Linux box later. I expect it passes on Jenkins since we've had several successful PR builds.\n", "Passes for me on my Windows box. Hmmmmmm.\n", "Passes for me in OS X, Linux, Windows box. It may not OS dependent problem.\n", "Weeeeeird. Weird. I'll rebuild my virtualenv and try again.\n", "Nope, still failing with a new virtualenv.\n", "I wonder if this is related to the `locale` variables:\n\n```\nLANG=\"en_GB.UTF-8\"\nLC_COLLATE=\"en_GB.UTF-8\"\nLC_CTYPE=\"en_GB.UTF-8\"\nLC_MESSAGES=\"en_GB.UTF-8\"\nLC_MONETARY=\"en_GB.UTF-8\"\nLC_NUMERIC=\"en_GB.UTF-8\"\nLC_TIME=\"en_GB.UTF-8\"\nLC_ALL=\n```\n", "Probably not, `time.localtime()` is the same on my OS X box (that exhibits the problem) and on my Windows box, that does not.\n", "What the fuck OS X?\n\n``` python\n>>> time.mktime((1970, 1, 1, 0, 0, 1, 3, 1, 0))\n-3599.0\n>>> time.gmtime(1)\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=1, tm_wday=3, tm_yday=1, tm_isdst=0)\n```\n\nIt doesn't roundtrip!\n\n``` python\n>>> time.gmtime(time.mktime((1970, 1, 1, 0, 0, 1, 3, 1, 0)))\ntime.struct_time(tm_year=1969, tm_mon=12, tm_mday=31, tm_hour=23, tm_min=0, tm_sec=1, tm_wday=2, tm_yday=365, tm_isdst=0)\n```\n", "``` pycon\nPython 2.7.5 (default, Aug 25 2013, 00:04:04)\n[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import time\n>>> time.mktime((1970, 1, 1, 0, 0, 1, 3, 1, 0))\n21601.0\n>>> time.gmtime(1)\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=1, tm_wday=3, tm_yday=1, tm_isdst=0)\n>>> time.gmtime(21601.0)\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=6, tm_min=0, tm_sec=1, tm_wday=3, tm_yday=1, tm_isdst=0)\n>>> time.mktime((1970, 1, 1, 0, 0, 1, 3, 1, 0))\n21601.0\n>>> time.gmtime(time.mktime((1970, 1, 1, 0, 0, 1, 3, 1, 0)))\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=6, tm_min=0, tm_sec=1, tm_wday=3, tm_yday=1, tm_isdst=0)\n```\n", "Yeah, that's what I thought, there's some horrible timezone related stuff here. This makes me very sad. =(\n", "When i set timezone of my OS X machine to `Europe/London`, my test fails eventually\nit exactly is timezone related problem :(\n", "So while the test doesn't use `'maxage'` I wonder if [this line](https://github.com/kennethreitz/requests/ac4e05874a1a983ca126185a0e4d4e74915f792e/master/requests/cookies.py#L393) should account for `time.timezone`. If it should not, then why does [the line related to the test](https://github.com/kennethreitz/requests/blob/ac4e05874a1a983ca126185a0e4d4e74915f792e/requests/cookies.py#L397) accounting for timezones?\n\nI'm very skeptical that our morsel handling here is correct.\n\nFWIW not handling timezones seems like the right thing in the second link I posted:\n\n``` pycon\nPython 2.7.5 (default, Aug 25 2013, 00:04:04)\n[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import time\n>>> time.time()\n1389554633.231499\n>>> time.time() - time.timezone\n1389533037.127436\n>>> time.strptime('Thu, 01-Jan-1970 00:00:01 GMT', '%a, %d-%b-%Y %H:%M:%S GMT')\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=1, tm_wday=3, tm_yday=1, tm_isdst=-1)\n>>> time.gmtime(1)\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=1, tm_wday=3, tm_yday=1, tm_isdst=0)\n```\n\nThe only difference I see between `time.strptime` and `time.gmtime` is that the former has a different value for `tm_isdst`.\n", "FWIW, the change I'm referencing was introduced around [this discussion](https://github.com/kennethreitz/requests/pull/1772#issuecomment-29958406)\n", "Should we get someone in here who knows something about how the stdlib handles time? I just don't know anything like enough about what the correct behaviour is here.\n\nBTW, `time.timezone` on my system is currently `0`, as it should be.\n", "Can you post the result of the `time.mktime(time.strptime(...))` call from morsel_to_cookie?\n", "Nevermind you just answered me in IRC, it's `-3599.0`.\n", "Based on reading the `stdlib`, it looks like we're overcomplicating this. `time.mktime()` \"expresses the time in _local_ time, not UTC\", from which we subtract the timezone. Why not just use `time.gmtime()` and remove the timezone subtraction?\n", "Ah, because they aren't the same, that's why. =P\n", "This seems relevant:\n\n``` python\n>>> time.gmtime(0)\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=1, tm_isdst=0)\n>>> time.localtime(0)\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=1, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=1, tm_isdst=0)\n```\n", "Why the hell is the relevant function [here](http://docs.python.org/2/library/calendar.html#calendar.timegm)?\n\n``` python\n>>> import calendar\n>>> import time\n>>> calendar.timegm(time.strptime('Thu, 01-Jan-1970 00:00:01 GMT', '%a, %d-%b-%Y %H:%M:%S GMT'))\n1\n```\n\n@sigmavirus24 Can you confirm that works OK for you too?\n", "Confirmation successful. All lights green wildcard\n", "Should we close this to centralize discussion on #1860 ?\n", "Yes. =)\n" ]
https://api.github.com/repos/psf/requests/issues/1858
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1858/labels{/name}
https://api.github.com/repos/psf/requests/issues/1858/comments
https://api.github.com/repos/psf/requests/issues/1858/events
https://github.com/psf/requests/pull/1858
25,440,856
MDExOlB1bGxSZXF1ZXN0MTE0MjYyNTk=
1,858
Unquote the auth after splitting the url.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
10
2014-01-11T10:00:52Z
2021-09-08T22:01:09Z
2014-01-13T13:10:10Z
MEMBER
resolved
This _should_ resolve #1856. /cc @sigmavirus24 @t-8ch.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1858/reactions" }
https://api.github.com/repos/psf/requests/issues/1858/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1858.diff", "html_url": "https://github.com/psf/requests/pull/1858", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1858.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1858" }
true
[ "Just left one note that's a matter of stylistic preference more than anything else. That is not a reason to prevent anyone from merging this though. \n\n:+1: :shipit: :cake: :beer: :smiley_cat: \n", "#rewrite\n", "So as part of my research I found this: https://github.com/kennethreitz/requests/blob/ac4e05874a1a983ca126185a0e4d4e74915f792e/requests/adapters.py#L289..L290 which means that we can fix up `adapters.py` to no longer need to unquote the username/password pair. Granted, unquoting something that is already unquoted should return the exact same thing, but I bet by removing the calls to `unquote` will grant ever the slightest overall performance benefit. Also, that comment is just plain wrong now. =P\n", "And the other place it's used is here: https://github.com/kennethreitz/requests/blob/ac4e05874a1a983ca126185a0e4d4e74915f792e/requests/models.py#L456 and notice that there's an optional param to that method that is never used if it is ever passed. We should either use it or remove it (not necessarily in this PR though).\n\nFinally, given that the call to `HTTPAdapter#proxy_headers` is wrapped inside an `if proxy:` block, the param it sends to `get_auth_from_url` should never be `None` or `''`. And `prepare_auth` on the `PreparedRequest` is called after `prepare_url` which should blow up if `url` is not a valid `url`. I think we're safe making `get_auth_from_url` a bit less paranoid. As with all of my reviews though, this is totally up to the discretion of @kennethreitz and @Lukasa \n", "Thanks for that @sigmavirus24, that's a really helpful set of information! I'll update this PR to reflect it.\n", "Hooray more changes!\n", "@kennethreitz :shipit: :+1:\n", "@Lukasa @sybeck2k found a problem with this test: https://github.com/kennethreitz/requests/pull/1862/files#diff-56c2d754173a4a158ce8f445834c8fe8R705 can you fix it here too?\n", "@sigmavirus24 @Lukasa sorry I didn't notice the issue was tracked also here. I've tried to merge all the changes of Lukasa into my pull request - the main difference is about the test `test_get_auth_from_url_percent_chars` that on my opinion was flawed as the input url was not url-encoded.\n", "There's no reason to have this PR open twice, so I'll close this one. =)\n" ]
https://api.github.com/repos/psf/requests/issues/1857
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1857/labels{/name}
https://api.github.com/repos/psf/requests/issues/1857/comments
https://api.github.com/repos/psf/requests/issues/1857/events
https://github.com/psf/requests/issues/1857
25,412,094
MDU6SXNzdWUyNTQxMjA5NA==
1,857
Disable SSL compression for security reasons.
{ "avatar_url": "https://avatars.githubusercontent.com/u/717901?v=4", "events_url": "https://api.github.com/users/t-8ch/events{/privacy}", "followers_url": "https://api.github.com/users/t-8ch/followers", "following_url": "https://api.github.com/users/t-8ch/following{/other_user}", "gists_url": "https://api.github.com/users/t-8ch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/t-8ch", "id": 717901, "login": "t-8ch", "node_id": "MDQ6VXNlcjcxNzkwMQ==", "organizations_url": "https://api.github.com/users/t-8ch/orgs", "received_events_url": "https://api.github.com/users/t-8ch/received_events", "repos_url": "https://api.github.com/users/t-8ch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/t-8ch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/t-8ch/subscriptions", "type": "User", "url": "https://api.github.com/users/t-8ch", "user_view_type": "public" }
[ { "color": "f7c6c7", "default": false, "description": null, "id": 167537670, "name": "Propose Close", "node_id": "MDU6TGFiZWwxNjc1Mzc2NzA=", "url": "https://api.github.com/repos/psf/requests/labels/Propose%20Close" } ]
closed
true
null
[]
null
15
2014-01-10T18:14:42Z
2021-09-08T23:06:06Z
2015-01-19T09:22:27Z
CONTRIBUTOR
resolved
After the recent discussion about SSL compression both in shazow/urllib3#109 and kennethreitz/requests#1583 (this issue was about performance reason, it got me to look at other projects, performance is _not_ the reason for this issue) I looked at the behaviour of other popular open source projects. It turned out that the following projects disable the compression by default for [security reasons](https://en.wikipedia.org/wiki/CRIME_(security_exploit\)): - Nginx, July 2012 ([version 1.2.2](http://nginx.org/en/CHANGES-1.2), [version 1.3.2, the development version to 1.4](http://nginx.org/en/CHANGES-1.4) - Apache2, ([version 2.2.25](https://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslcompression), [version 2.4.4](https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslcompression)) - The upcoming version 2.7 of the Pound reverse proxy (the `CHANGELOG` file in the source archive: http://www.apsis.ch/pound/Pound-2.7b.tgz) - [Curl 7.28.1](http://curl.haxx.se/changes.html#7_28_1) July 2012 - CPython >= 3.4 (http://hg.python.org/cpython/rev/98eb88d3d94e) This are the only projects I have looked at. I am sure we can find more if necessary. As the stdlib `ssl` module does not allow us to change this parameter before 3.3 I propose to raise the issue on the CPython bug tracker, so that SSL compression will be disabled by default (with the possibility to manually enable it on 3.3 and later). The current handlich of CPython 3.4 and up only disables compression on openssl 1.0 and up, as the relevant constant has not introduced before. However the nginx changelog claims to also disable compression on earlier versions. I will look into this. This issue is meant to gather feedback and momentum before raising the issue with CPython (and maybe also the other implementations) /cc @lukasa @sigmavirus24 @shazow @alex @jmhodges
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1857/reactions" }
https://api.github.com/repos/psf/requests/issues/1857/timeline
null
completed
null
null
false
[ "Fwiw, urllib3 _just_ merged a PR which disables this by default in Py32+ and PyOpenSSL, big thanks to @dbrgn for bringing it up and writing the patch. https://github.com/shazow/urllib3/pull/309\n", "Ah, I linked the wrong urllib3 issue. This should of course have been issue 309.\n", "I'm 100% in favour of this. 2.7 is clearly accepting some SSL related changes in the next version (see [20207](http://bugs.python.org/issue20207) from @alex), so it doesn't seem unreasonable to make a push for this as well.\n\nUnfortunately, it doesn't save requests/urllib3 _entirely_, because both projects support 2.6 and 2.6 is deader-than-dead.\n", "If the enterprise distributions still supporting 2.6 see that the (security) changes are applied to the still maintained versions of Python they know they should backport those patches.\n", "I've resolved to letting 2.6 get whatever subset of functionality we're able to reasonably provide, until it becomes too much of a hassle and we have to drop it.\n", "Fine then. So we just need to work out how to do it. It's clear that @t-8ch has some plans to investigate nginx, so I'm open to doing that for now.\n", "+1 on disabling TLS compression.\n\nOn Fri, Jan 10, 2014 at 10:54 AM, Cory Benfield [email protected]:\n\n> Fine then. So we just need to work out how to do it. It's clear that\n> @t-8ch https://github.com/t-8ch has some plans to investigate nginx, so\n> I'm open to doing that for now.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1857#issuecomment-32054215\n> .\n\n## \n\n\"I disapprove of what you say, but I will defend to the death your right to\nsay it.\" -- Evelyn Beatrice Hall (summarizing Voltaire)\n\"The people's good is the highest law.\" -- Cicero\nGPG Key fingerprint: 125F 5C67 DFE9 4084\n", "Here is [what nginx is doing](http://trac.nginx.org/nginx/browser/nginx/src/event/ngx_event_openssl.c?rev=4aa64f6950313311e0d322a2af1788edeb7f036c#L106):\n(if `SSL_OP_NO_COMPRESSION` is not available, openssl < 1.0.0)\n\n``` c\nint n;\nSTACK_OF(SSL_COMP) *ssl_comp_methods;\n\nssl_comp_methods = SSL_COMP_get_compression_methods();\nn = sk_SSL_COMP_num(ssl_comp_methods);\n\nwhile (n--) {\n (void) sk_SSL_COMP_pop(ssl_comp_methods);\n}\n```\n\n(Nginx is 2-clause BSD licensed, so we should be fine)\n", "Yeah, that looks reasonable to me. Any idea when the relevant functions were introduced to OpenSSL?\n", "Nginx gates this code behind\n\n```\n#if OPENSSL_VERSION_NUMBER >= 0x0090800fL\n```\n\nIt seems compression itself has been implemented some versions earlier but openssl >= 0.9.8 should cover most installations.\n", "Am I right to understand that a CPython 2.7 ticket needs to be made to disable TLS compression?\n", "From OS X 10.9 with Python HEAD I get:\n\nu'tls_compression_supported': False\n\nOn Fri, Jan 10, 2014 at 2:32 PM, Jeff Hodges [email protected]:\n\n> Am I right to understand that a CPython 2.7 ticket needs to be made to\n> disable TLS compression?\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1857#issuecomment-32073294\n> .\n\n## \n\n\"I disapprove of what you say, but I will defend to the death your right to\nsay it.\" -- Evelyn Beatrice Hall (summarizing Voltaire)\n\"The people's good is the highest law.\" -- Cicero\nGPG Key fingerprint: 125F 5C67 DFE9 4084\n", "@jmhodges This is the plan.\n@alex I guess your are running it with openssl > 1.0.0. If yes, the last point of my initial list applies.\n", "What's the state of this?\n", "Closed for massive inactivity. =D\n" ]
https://api.github.com/repos/psf/requests/issues/1856
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1856/labels{/name}
https://api.github.com/repos/psf/requests/issues/1856/comments
https://api.github.com/repos/psf/requests/issues/1856/events
https://github.com/psf/requests/issues/1856
25,400,819
MDU6SXNzdWUyNTQwMDgxOQ==
1,856
Broken parsing of authenticated URI for Proxy
{ "avatar_url": "https://avatars.githubusercontent.com/u/1894878?v=4", "events_url": "https://api.github.com/users/sybeck2k/events{/privacy}", "followers_url": "https://api.github.com/users/sybeck2k/followers", "following_url": "https://api.github.com/users/sybeck2k/following{/other_user}", "gists_url": "https://api.github.com/users/sybeck2k/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sybeck2k", "id": 1894878, "login": "sybeck2k", "node_id": "MDQ6VXNlcjE4OTQ4Nzg=", "organizations_url": "https://api.github.com/users/sybeck2k/orgs", "received_events_url": "https://api.github.com/users/sybeck2k/received_events", "repos_url": "https://api.github.com/users/sybeck2k/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sybeck2k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sybeck2k/subscriptions", "type": "User", "url": "https://api.github.com/users/sybeck2k", "user_view_type": "public" }
[]
closed
true
null
[]
null
10
2014-01-10T15:18:35Z
2021-09-09T00:10:17Z
2014-01-16T09:07:50Z
CONTRIBUTOR
resolved
Hello, when passing a proxy with authentication, and where the password contains the character '#', the parsing of the URI is broken. For instance: ``` proxy = "http://user:%23Cpassword%23C:server:port" ParseResult(scheme='http', netloc='user:', path='', params='', query='', fragment=password#@server:port0') ``` This is caused by the `url = unquote(url)` done before the parsing in `utils.py`. I believe the URI should not be unquoted before the parsing, rather the username and password should be quoted after the parsing. What do you think? Thank you
{ "avatar_url": "https://avatars.githubusercontent.com/u/1894878?v=4", "events_url": "https://api.github.com/users/sybeck2k/events{/privacy}", "followers_url": "https://api.github.com/users/sybeck2k/followers", "following_url": "https://api.github.com/users/sybeck2k/following{/other_user}", "gists_url": "https://api.github.com/users/sybeck2k/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sybeck2k", "id": 1894878, "login": "sybeck2k", "node_id": "MDQ6VXNlcjE4OTQ4Nzg=", "organizations_url": "https://api.github.com/users/sybeck2k/orgs", "received_events_url": "https://api.github.com/users/sybeck2k/received_events", "repos_url": "https://api.github.com/users/sybeck2k/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sybeck2k/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sybeck2k/subscriptions", "type": "User", "url": "https://api.github.com/users/sybeck2k", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/psf/requests/issues/1856/reactions" }
https://api.github.com/repos/psf/requests/issues/1856/timeline
null
completed
null
null
false
[ "Thanks for raising this issue!\n\nIt's hard to see how unquoting before parsing the URL is really helping us. We can almost certainly defer the unquote step until after parsing the URL, applying unquote only to the username and password. @sigmavirus24, can you think of any reason that's not going to be ok?\n", "I can't think of a reason it wouldn't be okay and in fact it's the only way I can see of doing this correctly.\n", "Fine then, let's do that. =) I'll fix this up.\n", "Ok, so `urlparse` chokes like a champ on a sample URL, and so does urllib3's superior `parse_url` function:\n\n``` python\n>>> urlparse('http://user:pass#[email protected]/path?query=yes')\nParseResult(scheme='http', netloc='user:pass', path='', params='', query='', fragment='[email protected]/path?query=yes')\n```\n\n``` python\n>>> parse_url('http://user:pass#[email protected]/path?query=yes')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"requests/packages/urllib3/util.py\", line 397, in parse_url\n raise LocationParseError(\"Failed to parse: %s\" % url)\nrequests.packages.urllib3.exceptions.LocationParseError: Failed to parse: Failed to parse: user:pass\n```\n\nThoughts? /cc @shazow\n", "@Lukasa I think the unquoted `#` is not valid at this position:\n\n[RFC 3986](https://tools.ietf.org/html/rfc3986)\n\n```\nuserinfo = *( unreserved / pct-encoded / sub-delims / \":\" )\n```\n\nWhile `#` is in the `gen-delims` group. Encoding it as `%23` makes both parse functions work for me.\n", "Oh, it's definitely not valid in a URL. I just need to be sure that, if a user provides it to us, it's in an acceptable state at that stage in Requests' execution flow.\n", "@Lukasa Could make urllib3's parser more forgiving if that's what you really want, or just encode things properly. :)\n", "@shazow something something if you change your parsers to accept bad URLs you're going to have a bad time something something. =P\n", "Nah @shazow, you're fine. We'll always have encoded the URL at the point we call `get_auth_from_url`, so we should never have a hash that doesn't represent the fragment. All is well. =)\n", "Fixed in #1856\n" ]
https://api.github.com/repos/psf/requests/issues/1855
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1855/labels{/name}
https://api.github.com/repos/psf/requests/issues/1855/comments
https://api.github.com/repos/psf/requests/issues/1855/events
https://github.com/psf/requests/issues/1855
25,371,488
MDU6SXNzdWUyNTM3MTQ4OA==
1,855
When Content-Length differs from received message body length, an exception should be raised
{ "avatar_url": "https://avatars.githubusercontent.com/u/577694?v=4", "events_url": "https://api.github.com/users/patricklaw/events{/privacy}", "followers_url": "https://api.github.com/users/patricklaw/followers", "following_url": "https://api.github.com/users/patricklaw/following{/other_user}", "gists_url": "https://api.github.com/users/patricklaw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patricklaw", "id": 577694, "login": "patricklaw", "node_id": "MDQ6VXNlcjU3NzY5NA==", "organizations_url": "https://api.github.com/users/patricklaw/orgs", "received_events_url": "https://api.github.com/users/patricklaw/received_events", "repos_url": "https://api.github.com/users/patricklaw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patricklaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patricklaw/subscriptions", "type": "User", "url": "https://api.github.com/users/patricklaw", "user_view_type": "public" }
[]
closed
true
null
[]
null
9
2014-01-10T02:26:22Z
2021-09-09T00:10:13Z
2014-01-29T21:03:30Z
NONE
resolved
The HTTP 1.1 RFC specifies: "When a Content-Length is given in a message where a message-body is allowed, its field value MUST exactly match the number of OCTETs in the message-body. HTTP/1.1 user agents MUST notify the user when an invalid length is received and detected." See: http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4 Here is a simple repro. It seems like the call to `requests.get` should raise an exception rather than silently succeed. Note that python 2.x `urllib2` has identical behavior to `requests` here, but I believe that constitutes a bug in `urllib2`. `curl`, on the other hand, outputs the message body but has a return code of 18 ("Partial file. Only a part of the file was transferred."). In python 3.3 (and presumably earlier releases of python 3.x) `urllib.request` raises `http.client.IncompleteRead: IncompleteRead(5 bytes read, 15 more expected)`, which is in line with the spec. ``` python import SocketServer as socketserver import threading import requests import time class MyTCPHandler(socketserver.BaseRequestHandler): def handle(self): self.data = self.request.recv(1024) self.request.sendall('HTTP/1.1 200 OK\r\n' 'Server: truncator/0.0\r\n' 'Content-Length: 20\r\n' 'Connection: close\r\n\r\n' '12345') server = None def background_server(): global server HOST, PORT = "localhost", 9999 server = socketserver.TCPServer((HOST, PORT), MyTCPHandler) server.serve_forever() if __name__ == "__main__": t = threading.Thread(target=background_server) t.daemon = True t.start() time.sleep(1) r = requests.get('http://localhost:9999') print(r.content) server.shutdown() ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/psf/requests/issues/1855/reactions" }
https://api.github.com/repos/psf/requests/issues/1855/timeline
null
completed
null
null
false
[ "Thanks for raising this issue!\n\nI have mixed feelings. Firstly, I'd argue that Requests is not technically a user-agent, it's a library. This frees us from some of the constraints of user-agent behaviour (and in fact we take that liberty elsewhere in the library, like with our behaviour on redirects).\n\nSecondly, if we throw an exception we irrevocably destroy the data we read. It becomes impossible to access. This means that situations where the user might want to 'muddle through', taking as much of the data as they were able to read and keeping hold of it, becomes a little bit harder. (They'd have to use the streaming API, on which we should not enforce this logic.)\n\nFinally, even if we did want this logic we'd need to implement it in urllib3. `Content-Length` refers to the number of bytes on the wire, not the decoded length, so if we get a gzipped (or DEFLATEd) response, we'd need to know how many bytes there were _before_ decoding. This is not typically information we have at the Requests level. So if you're still interested in having this behaviour, I suggest you open an issue over on [shazow/urllib3](https://github.com/shazow/urllib3).\n\nWith that said, I'm not totally opposed to doing it, it just feels excessive to throw an exception for what is a minor and easily detectable error.\n", "@Lukasa I agree wholeheartedly with everything you said. Your reservations (regarding data loss), however, make me wonder if we shouldn't be attaching the response object to the exceptions we throw. That's a different discussion which should take place on a different issue but I thought I might raise that idea before I forget it.\n\nOn the topic of this issue, I can understand that people may want this and that there are corner cases where the Content Length is not exactly a match for the response body (and in those cases it is okay). I can especially see the benefit where users use requests to perform tests on their servers and web apps. With that in mind, would something similar to `raise_for_status` be a reasonable compromise? We won't break backwards compatibility for existing users who do not realize their depending on our decision to not be strict and we give these extraordinary users (that I'm probably completely imagining) a way to reasonably achieve their goal.\n\nOne other alternative is to provide this functionality via the toolbelt. This way users have confidence in the implementation and they just need to import one extra thing.\n", "We do explicitly attach the response to a `raise_for_status()`-caused `HTTPError`, but for nothing else. It's possible that we could do that, but there are a number of ways that things could go wrong that would lead to that being a bad idea. In general I think that when exceptions are hit data _should_ be lost: after all, the exception will cause execution to abort in a way that can leave the state ill-defined. (`raise_for_status()` is an obvious exception [heh] to that statement.)\n\nAdditionally, I'm -1 on all extensions to the requests interface without good justification. This one doesn't have one, so a new `raise_for_status()`-like thing is out unless someone can strongly convince me of its value. The toolbelt is likely to be the best place for something. I'm not going to discuss implementation too much here, but either of these could work:\n\n``` python\nimport requests\nimport requests_toolbelt\n\nr = requests.get('http://stupid_buggy_server.com/')\nrequests_toolbelt.validate_response(r) # Throws exception, or returns some relevant return code.\n```\n\n``` python\nimport requests\nimport requests_toolbelt\n\nrequests_toolbelt.use_checked_responses() # Monkeypatches requests.response\nr = requests.get('http://stupid_buggy_server.com/')\nr.raise_if_invalid() # Throws exception.\n```\n", "`urllib3` does the correct thing on python 2.7:\n\n`*** MaxRetryError: HTTPConnectionPool(host='localhost', port=9999): Max retries exceeded with url: / (Caused by <class 'httplib.IncompleteRead'>: IncompleteRead(5 bytes read, 15 more expected))`\n\nIs this being suppressed somewhere in requests?\n", "Nope, we don't stop `MaxRetryError`s being thrown. What we _do_ do is use the `.stream()` method of the `HTTPResponse` instead of the `.read()` method. Give that a shot and see if you still hit the problem: I'm prepared to believe you don't.\n", "I get that exception during the call to `request()`, before I get back an `HTTPResponse` object. Is there a way to force urllib3 to give me back an `HTTPResponse`?\n", "Yeah, set the `preload_content` keyword argument to `False`. \n\n> On 10 Jan 2014, at 20:50, patricklaw [email protected] wrote:\n> \n> I get that exception during the call to request(), before I get back an HTTPResponse object. Is there a way to force urllib3 to give me back an HTTPResponse?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "Got it. You're right, the exception isn't raised when using `stream()`. I'll file an issue on urllib3.\n", "I'm closing this for the same reason we closed shazow/urllib3#311.\n" ]
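The `raise_for_status()`-style compromise floated in this thread can be sketched without touching requests at all. The helper and exception names below are hypothetical (neither requests nor the toolbelt grew this API), and the thread's own caveat applies: `Content-Length` counts bytes on the wire, so the check is only meaningful for responses that were not transparently decompressed.

```python
# Hypothetical validator in the spirit of the discussion above.
# Assumes the caller has the raw (undecoded) body bytes in hand.

class IncompleteBodyError(Exception):
    """Raised when a body is shorter/longer than its declared length."""


def validate_content_length(headers, body):
    """Raise IncompleteBodyError if Content-Length disagrees with body size.

    headers: a dict-like mapping of response headers
    body:    the raw response body as bytes
    """
    declared = headers.get("Content-Length")
    if declared is None:
        return  # nothing declared, nothing to check
    declared = int(declared)
    if len(body) != declared:
        raise IncompleteBodyError(
            "expected %d bytes, got %d" % (declared, len(body))
        )


# Matches the repro in the issue body: 20 declared, 5 received.
validate_content_length({"Content-Length": "5"}, b"12345")  # passes silently
```

Against the truncating test server from the issue body, `validate_content_length({"Content-Length": "20"}, b"12345")` would raise, which is exactly the signal `urllib.request` gives on Python 3.x via `IncompleteRead`.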
https://api.github.com/repos/psf/requests/issues/1854
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1854/labels{/name}
https://api.github.com/repos/psf/requests/issues/1854/comments
https://api.github.com/repos/psf/requests/issues/1854/events
https://github.com/psf/requests/pull/1854
25,343,918
MDExOlB1bGxSZXF1ZXN0MTEzNzM1MzE=
1,854
Changelog for 2.2.0.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2014-01-09T19:11:21Z
2021-09-08T23:05:06Z
2014-01-09T19:14:45Z
MEMBER
resolved
As discussed in IRC.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1854/reactions" }
https://api.github.com/repos/psf/requests/issues/1854/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1854.diff", "html_url": "https://github.com/psf/requests/pull/1854", "merged_at": "2014-01-09T19:14:45Z", "patch_url": "https://github.com/psf/requests/pull/1854.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1854" }
true
[ ":100: \n" ]
https://api.github.com/repos/psf/requests/issues/1853
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1853/labels{/name}
https://api.github.com/repos/psf/requests/issues/1853/comments
https://api.github.com/repos/psf/requests/issues/1853/events
https://github.com/psf/requests/issues/1853
25,329,134
MDU6SXNzdWUyNTMyOTEzNA==
1,853
Allow to disable SSL compression.
{ "avatar_url": "https://avatars.githubusercontent.com/u/98980?v=4", "events_url": "https://api.github.com/users/chmouel/events{/privacy}", "followers_url": "https://api.github.com/users/chmouel/followers", "following_url": "https://api.github.com/users/chmouel/following{/other_user}", "gists_url": "https://api.github.com/users/chmouel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chmouel", "id": 98980, "login": "chmouel", "node_id": "MDQ6VXNlcjk4OTgw", "organizations_url": "https://api.github.com/users/chmouel/orgs", "received_events_url": "https://api.github.com/users/chmouel/received_events", "repos_url": "https://api.github.com/users/chmouel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chmouel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chmouel/subscriptions", "type": "User", "url": "https://api.github.com/users/chmouel", "user_view_type": "public" }
[]
closed
true
null
[]
null
14
2014-01-09T15:54:45Z
2021-09-09T00:28:15Z
2014-01-09T22:12:26Z
NONE
resolved
For certain use cases disabling SSL compression gets us better performances while staying secure, it would be nice if requests allow to do that
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1853/reactions" }
https://api.github.com/repos/psf/requests/issues/1853/timeline
null
completed
null
null
false
[ "I don't believe we offer SSL compression.\n\nI tested the following scenarios on my local machine (Windows 7) against a webserver I control ([here](https://pyrc-web.com/), running nginx v1.4.1 with default settings):\n- Python 2.7 default settings: offer uses SSLv2, server negotiates to TLSv1, chooses the null compression algorithm.\n- Python 2.7 using the SSLAdapter to force TLSv1: we only offer the null compression algorithm, server chooses it.\n- Python 3.3 using default settings: offer uses TLSv1, server negotiates to TLSv1.2, we only offer the null compression algorithm, server chooses it.\n\nSo it looks like we never negotiate TLS compression. Can you confirm that you've ever seen a situation where we _are_ using TLS compression, ideally with a reproducible test case?\n", "This finding is backed up by the uber-bitchy [HowsMySSL](https://howsmyssl.com/) website that also states that Requests doesn't offer TLS compression on my machine.\n", "Also, @t-8ch, our resident TLS guru, can you confirm that this is/isn't related to specific installs of OpenSSL?\n", "This has spun up from this openstack swiftclient review: https://review.openstack.org/#/c/33473/ I have pinged thomas if he can look over it and there is actually a pull request on urllib3 (that i have just seen before opening the bug here) to actually disable it https://github.com/shazow/urllib3/pull/309 but we may need the requests part as well\n", "Here's some numbers relating TLS compression tunings and speed-ups. It's targetted at OpenStack Swift but the principles are applicable. http://tomleaman.co.uk/swift_perf6.pdf\n", "To be clear, I accept the numbers, my question is about whether we have the problem to begin with. =)\n", "To follow up with the linked `urllib3` issue, I see `tls_compression_supported == False` for my copies of Requests on Windows.\n", "I also get `False` in Python 2.6 on my CentOS box.\n", "Nginx disables SSL compression since 1.2.2 (07/2012).\nLooking at the SSL client hello with wireshark requests announces support for `DEFLATE` compression. (openssl version 1.0.1f).\n\nGiven the fact, that SSL compression is a security risk (CRIME) we are trying to disable it altogether in urllib3, see the linked issue.\nUnfortunately the standard ssl module on older pythons doesn't allow us to do this.\n\n@chmouel @bugsduggan Wouldn't it be useful to disable SSL compression on the openstack-swift server-side as the performance impact would also hit other API clients besides the official one?\n", "@t-8ch Brilliant, if urllib3 disables it altogether that'll make me very happy, as we won't have to worry about the API. =) Give me a shout on that issue if you think I can help.\n", "Yeah I was reading the bug report, when we are talking old version of python how old is that?\n", "@t-8ch sorry my answer was a bit short, we could indeed disable SSL comprssion in swift (and openstack in general) but that's not how people usally runs it they would have usually things like (since this is how we advise to run it) pound/nginx doing that for us. \n", "On Arch Linux and both Python 3.3 and 2.7:\n\n``` python\n>>> import requests\n>>> r = requests.get('https://www.howsmyssl.com/a/check')\n>>> data = r.json()\n>>> data['tls_compression_supported']\nTrue\n```\n\nI think this is something that has to do with OpenSSL defaults. Unfortunately it doesn't seem to be easily possible to change the defaults before Python 3.2... For py3.2+ this should now soon be fixed in urllib3, where it's explicitly disabled. (I actually created that PR because of requests...)\n\nI'm not sure if we can do something about it from a higher level. But the default should be \"off\".\n", "I doubt this is something Requests can do any easier that urllib3. For the moment, we'll close this issue, but everyone should keep a close eye on urllib3 and revisit this when a decision has been made over there.\n" ]
https://api.github.com/repos/psf/requests/issues/1852
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1852/labels{/name}
https://api.github.com/repos/psf/requests/issues/1852/comments
https://api.github.com/repos/psf/requests/issues/1852/events
https://github.com/psf/requests/pull/1852
25,296,608
MDExOlB1bGxSZXF1ZXN0MTEzNTAwODU=
1,852
TA example for SSL version.
{ "avatar_url": "https://avatars.githubusercontent.com/u/715626?v=4", "events_url": "https://api.github.com/users/dsoprea/events{/privacy}", "followers_url": "https://api.github.com/users/dsoprea/followers", "following_url": "https://api.github.com/users/dsoprea/following{/other_user}", "gists_url": "https://api.github.com/users/dsoprea/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dsoprea", "id": 715626, "login": "dsoprea", "node_id": "MDQ6VXNlcjcxNTYyNg==", "organizations_url": "https://api.github.com/users/dsoprea/orgs", "received_events_url": "https://api.github.com/users/dsoprea/received_events", "repos_url": "https://api.github.com/users/dsoprea/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dsoprea/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dsoprea/subscriptions", "type": "User", "url": "https://api.github.com/users/dsoprea", "user_view_type": "public" }
[]
closed
true
null
[]
null
12
2014-01-09T05:44:17Z
2021-09-08T22:01:08Z
2014-01-09T19:46:49Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1852/reactions" }
https://api.github.com/repos/psf/requests/issues/1852/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1852.diff", "html_url": "https://github.com/psf/requests/pull/1852", "merged_at": "2014-01-09T19:46:49Z", "patch_url": "https://github.com/psf/requests/pull/1852.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1852" }
true
[ "Thanks for this!\n\nI've made a couple of suggestions inline in the diff. Can you take a look at them for me? =)\n", "You can also rebase all of those commits to one. If you're unfamiliar with rebasing I can send instructions \n", "I didn't think I could rebase if I already pushed. Not the case?\n\nDustin\n\nOn Thu, Jan 9, 2014 at 7:38 AM, Ian Cordasco [email protected]:\n\n> You can also rebase all of those commits to one. If you're unfamiliar with\n> rebasing I can send instructions\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/pull/1852#issuecomment-31926780\n> .\n", "Not the case. You can safely rebase so long as no-one has pulled from your branch. We have not. =)\n", "I thought that me pushing to GitHub could also be considered to have the\nsame effect as GitHub pulling from me, in which cases rebases should not be\ndone. Why does this apply to GitHub client repos and not GitHub itself?\n\nDustin\nOn Jan 9, 2014 10:59 AM, \"Cory Benfield\" [email protected] wrote:\n\n> Not the case. You can safely rebase so long as no-one has pulled from your\n> branch. We have not. =)\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/pull/1852#issuecomment-31945935\n> .\n", "To be clear, you _will_ have to force push to GitHub, but that's totally fine. =)\n", "Done. I didn't know what else you were referring to by \"style\", other than\nthe column width (which I missed since I was working in text and not code).\nSo, if there's something else, then let me know.\n\nOn Thu, Jan 9, 2014 at 11:19 AM, Cory Benfield [email protected]:\n\n> To be clear, you _will_ have to force push to GitHub, but that's totally\n> fine. =)\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/pull/1852#issuecomment-31948301\n> .\n", "That was exactly what I meant, so this looks good to me. =) I'll merge when I get home and can deal with the conflicts.\n", "Much appreciated! Will merge as soon as the conflicts are resolved :)\n", "Thanks, Ken. I'm not trying to be a burden. I just always find SSL-related\nhiccups to be a unwelcomed headache.\n\nI appreciate it.\nOn Jan 9, 2014 2:14 PM, \"Kenneth Reitz\" [email protected] wrote:\n\n> Much appreciated! Will merge as soon as the conflicts are resolved :)\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/pull/1852#issuecomment-31965690\n> .\n", "Oh not at all! This is a fantastic pull request, and one that I've wanted us to have for a long time. \n\n:sparkles: :cake: :sparkles:\n", "Brilliant, I've merged it in. =) All is well with the world. Thanks so much!\n" ]
https://api.github.com/repos/psf/requests/issues/1851
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1851/labels{/name}
https://api.github.com/repos/psf/requests/issues/1851/comments
https://api.github.com/repos/psf/requests/issues/1851/events
https://github.com/psf/requests/pull/1851
25,215,633
MDExOlB1bGxSZXF1ZXN0MTEzMTE0MDI=
1,851
implement "send" hook for right before sending a Request
{ "avatar_url": "https://avatars.githubusercontent.com/u/2734?v=4", "events_url": "https://api.github.com/users/eklitzke/events{/privacy}", "followers_url": "https://api.github.com/users/eklitzke/followers", "following_url": "https://api.github.com/users/eklitzke/following{/other_user}", "gists_url": "https://api.github.com/users/eklitzke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eklitzke", "id": 2734, "login": "eklitzke", "node_id": "MDQ6VXNlcjI3MzQ=", "organizations_url": "https://api.github.com/users/eklitzke/orgs", "received_events_url": "https://api.github.com/users/eklitzke/received_events", "repos_url": "https://api.github.com/users/eklitzke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eklitzke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eklitzke/subscriptions", "type": "User", "url": "https://api.github.com/users/eklitzke", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2014-01-08T04:33:20Z
2021-09-08T22:01:08Z
2014-01-08T07:54:40Z
NONE
resolved
This implements a `send` hook, which is called right before a `Request` object is sent. My motivation here is for a profiler I've written that automatically records request timing information (by recording a timestamp before a request is sent, and after), but I think there are probably a lot of other use cases, such as automagically munging request headers or whatnot.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1851/reactions" }
https://api.github.com/repos/psf/requests/issues/1851/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1851.diff", "html_url": "https://github.com/psf/requests/pull/1851", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1851.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1851" }
true
[ "Thanks for this work!\n\nWe used to have a `send` hook (and a number of others), but we got rid of almost all of them. This is because the recommended way of implementing things like this is now to use [custom Transport Adapters](http://docs.python-requests.org/en/latest/user/advanced/#transport-adapters). If you need an example of how to implement one, I've written a number of examples: [here](https://lukasa.co.uk/2012/12/Writing_A_Transport_Adapter/), [here](https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/) and [here](https://lukasa.co.uk/2013/05/Caching_In_Python_Requests/).\n\nThis new position means we no longer want to have `send` hooks. Unfortunately, this means that we won't merge this PR. Thanks so much for doing the work though!\n", "We also already provide the `elapsed_time` attribute on a response object that does exactly what you want\n", "So is the general idea that I'd subclass `HTTPAdapter`, and do something approximately like this?\n\n``` python\nclass MyHTTPAdapter(requests.adapters.HTTPAdapter):\n\n def send(self, request, **kwargs):\n my_custom_thing_here()\n response = super(MyHTTPAdapter, self).send(request, **kwargs)\n my_other_custom_thing()\n return response\n\ns = requests.Session()\na = MyHTTPAdapter()\ns.mount('http://', a)\ns.mount('https://', a)\n```\n\n(And in response to @sigmavirus24 , it's a bit more complicated than just elapsed time, since the profiler is also profiling memory usage and things from `resource.getrusage()`.)\n", "Absolutely right, yes. =)\n" ]
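The pre/post instrumentation that motivated the rejected `send` hook (recording a timestamp before and after each request) can be written as a plain wrapper around any send callable, which is essentially what the `HTTPAdapter.send` override in the last comment does. `TimedSender` and `fake_send` below are hypothetical names for illustration, with no network involved:

```python
import time


class TimedSender:
    """Wrap a send callable and record how long each call took."""

    def __init__(self, send):
        self._send = send
        self.timings = []  # seconds per call, in call order

    def __call__(self, *args, **kwargs):
        start = time.monotonic()
        try:
            return self._send(*args, **kwargs)
        finally:
            # Record timing even if the wrapped send raises.
            self.timings.append(time.monotonic() - start)


def fake_send(request):
    """Stand-in for HTTPAdapter.send: pretend to do some I/O."""
    time.sleep(0.01)
    return "response for %s" % request


send = TimedSender(fake_send)
print(send("GET /"))       # -> response for GET /
print(len(send.timings))   # -> 1
```

In a real adapter subclass, `self._send` would be `super().send`, and the recorded values could feed the memory/`resource.getrusage()` profiling mentioned above just as easily as wall-clock time.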
https://api.github.com/repos/psf/requests/issues/1850
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1850/labels{/name}
https://api.github.com/repos/psf/requests/issues/1850/comments
https://api.github.com/repos/psf/requests/issues/1850/events
https://github.com/psf/requests/pull/1850
25,195,426
MDExOlB1bGxSZXF1ZXN0MTEyOTk0NTE=
1,850
Force SSLv3.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
9
2014-01-07T20:38:25Z
2021-09-09T00:01:12Z
2014-01-07T20:50:21Z
MEMBER
resolved
This is a _possible_ fix for #1847.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1850/reactions" }
https://api.github.com/repos/psf/requests/issues/1850/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1850.diff", "html_url": "https://github.com/psf/requests/pull/1850", "merged_at": null, "patch_url": "https://github.com/psf/requests/pull/1850.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1850" }
true
[ "According to the table on http://docs.python.org/2/library/ssl.html#ssl.wrap_socket this is going to blow up on more mordern servers. (At least it is supposed to)\n", "Is that true? http://docs.python.org/2/library/ssl.html#ssl.PROTOCOL_SSLv3\n", "Based on my reading of that table, selecting `SSLv3` will work on servers that don't offer `SSLv2`, but `SSLv23` will not work. Conversely, `SSLv3` will not work on servers that don't offer anything anything better than `SSLv2`, and the prevailing attitude seems to be \"who cares about those people\".\n", "Please try: `https://t-8ch.de` I disabled everything except TLS.\nUsing requests with `SSLv3` doesn't work for me.\n", "Yup, you're right. Goddamn I hate SSL so much.\n", "And `SSLv23` works with all version of python (still openssl 1.0.1f)\n", "Doesn't with old versions of OpenSSL though.\n", "Those probably don't support TLS at all\n", "Thanks @t-8ch \n" ]
https://api.github.com/repos/psf/requests/issues/1849
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1849/labels{/name}
https://api.github.com/repos/psf/requests/issues/1849/comments
https://api.github.com/repos/psf/requests/issues/1849/events
https://github.com/psf/requests/pull/1849
25,195,068
MDExOlB1bGxSZXF1ZXN0MTEyOTkyNTA=
1,849
Update urllib3 to 232f496
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2014-01-07T20:32:39Z
2021-09-08T11:00:50Z
2014-01-07T23:22:24Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1849/reactions" }
https://api.github.com/repos/psf/requests/issues/1849/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1849.diff", "html_url": "https://github.com/psf/requests/pull/1849", "merged_at": "2014-01-07T23:22:24Z", "patch_url": "https://github.com/psf/requests/pull/1849.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1849" }
true
[ "This fixes issue #1842.\n", "This isn't necessary since @kennethreitz pulls in a fresh copy of urllib3 before each release I think. But thanks, this will hopefully ensure he pulls in the right version again.\n", "No, this is great! I normally get urllib3 updates from PRs actually :)\n", "Whoops! I misspoke! :cake:\n" ]
https://api.github.com/repos/psf/requests/issues/1848
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1848/labels{/name}
https://api.github.com/repos/psf/requests/issues/1848/comments
https://api.github.com/repos/psf/requests/issues/1848/events
https://github.com/psf/requests/issues/1848
25,193,046
MDU6SXNzdWUyNTE5MzA0Ng==
1,848
API for reporting failing URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
[ { "color": "f7c6c7", "default": false, "description": null, "id": 167537670, "name": "Propose Close", "node_id": "MDU6TGFiZWwxNjc1Mzc2NzA=", "url": "https://api.github.com/repos/psf/requests/labels/Propose%20Close" } ]
closed
true
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" } ]
null
9
2014-01-07T20:02:23Z
2021-09-08T18:00:52Z
2016-04-16T04:16:15Z
CONTRIBUTOR
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1848/reactions" }
https://api.github.com/repos/psf/requests/issues/1848/timeline
null
completed
null
null
false
[ "Hm?\n", "Like a place to POST to http://report.python-requests.org/ when you have a URL that doesn't behave properly.\n", "Ah okay\n", "Failing in what sense? Non-conformant, serving 500's...\n", "On Fri, Jan 10, 2014 at 04:48:02PM -0800, Kevin Burke wrote:\n\n> Failing in what sense? Non-conformant, serving 500's...\n\nWe're not looking for twilio's list of horrible servers and websites that \ncause y'all headaches :P\n\nI think this issue was motivated by the fact that we have been getting a lot \nof bug reports which focus more around one specific URL misbehaving and ends \nwith us debugging the service for the user.\n", "How would this be implemented? Is it for Requests developers, users, or someone else?\n", "How about adding a report functionality, where you specify the reporting, which defaults to report.python-requests.org ? I would like to implement this, once we have a clear idea on exactly what we want to report.\n", "I mean where we specify the reporting URL.\n", "The primary use-case here is to avoid Kenneth getting spammed with bug reports in his inbox that basically go: \"This website doesn't work for me in Requests\". I'm open to this being a more general service like @shshank suggested, but right now our interest is in getting that crud off Kenneth's plate and onto mine and @sigmavirus24's. =)\n" ]
https://api.github.com/repos/psf/requests/issues/1847
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1847/labels{/name}
https://api.github.com/repos/psf/requests/issues/1847/comments
https://api.github.com/repos/psf/requests/issues/1847/events
https://github.com/psf/requests/issues/1847
25,192,314
MDU6SXNzdWUyNTE5MjMxNA==
1,847
SSL error when trying to open a webpage
{ "avatar_url": "https://avatars.githubusercontent.com/u/772?v=4", "events_url": "https://api.github.com/users/alex/events{/privacy}", "followers_url": "https://api.github.com/users/alex/followers", "following_url": "https://api.github.com/users/alex/following{/other_user}", "gists_url": "https://api.github.com/users/alex/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex", "id": 772, "login": "alex", "node_id": "MDQ6VXNlcjc3Mg==", "organizations_url": "https://api.github.com/users/alex/orgs", "received_events_url": "https://api.github.com/users/alex/received_events", "repos_url": "https://api.github.com/users/alex/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex/subscriptions", "type": "User", "url": "https://api.github.com/users/alex", "user_view_type": "public" }
[]
closed
true
null
[]
null
31
2014-01-07T19:53:54Z
2021-09-08T23:00:49Z
2014-01-07T21:26:03Z
MEMBER
resolved
``` pycon >>> requests.get("https://www.howsmyssl.com/a/check") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/alex_gaynor/.virtualenvs/tempenv-6827228677281/lib/python2.7/site-packages/requests/api.py", line 55, in get return request('get', url, **kwargs) File "/Users/alex_gaynor/.virtualenvs/tempenv-6827228677281/lib/python2.7/site-packages/requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "/Users/alex_gaynor/.virtualenvs/tempenv-6827228677281/lib/python2.7/site-packages/requests/sessions.py", line 382, in request resp = self.send(prep, **send_kwargs) File "/Users/alex_gaynor/.virtualenvs/tempenv-6827228677281/lib/python2.7/site-packages/requests/sessions.py", line 485, in send r = adapter.send(request, **kwargs) File "/Users/alex_gaynor/.virtualenvs/tempenv-6827228677281/lib/python2.7/site-packages/requests/adapters.py", line 379, in send raise SSLError(e) requests.exceptions.SSLError: [Errno 1] _ssl.c:504: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version ``` This page loads fine in my browser or when using `treq` with Twisted.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1847/reactions" }
https://api.github.com/repos/psf/requests/issues/1847/timeline
null
completed
null
null
false
[ "Yeah, discussed this on Twitter already. It's worth noting that I'm fairly sure that anyone who uses SSL in Python from the stdlib will encounter this problem. I'll confirm by testing with httplib.\n", "Yes, you can reproduce this with `urllib` as well -- I believe this does not affect Python 3.\n", "@kennethreitz has previously stated that he views requests as a product and anything else as an implementation detail, so I assumed he'd want to know about this to address this in requests, if at all possible.\n", "Sorry, you're right, Python 3 is immune. And I didn't mean \"we don't care about this bug\", just that I was aware of it. =P Opening this is the right call, Kenneth will want to know about it.\n", "You know, I should build an API for people to report failing URLs to us. \n", "So, @Lukasa what's causing this? \n", "Based on the discussion I had on Twitter, it's probably because in Python 2.7 we attempt to establish a SSLv2 connection first, and negotiate up. The endpoint server is refusing the SSLv2 connection entirely.\n\nIt's possible that fixing this would mean setting a higher minimum SSL/TLS version. I'd be interested to know if we can force everyone to use SSLv3 without breaking anything for anyone.\n", "I'll also grab some tcpdump to confirm that that's the difference between SSLv2 and SSLv3.\n", "As far as I know, there is no good reason to be attempting to use SSLv2 on\nthe internet now, it's completely broken.\n\nOn Tue, Jan 7, 2014 at 12:05 PM, Cory Benfield [email protected]:\n\n> I'll also grab some tcpdump to confirm that that's the difference between\n> SSLv2 and SSLv3.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1847#issuecomment-31773798\n> .\n\n## \n\n\"I disapprove of what you say, but I will defend to the death your right to\nsay it.\" -- Evelyn Beatrice Hall (summarizing Voltaire)\n\"The people's good is the highest law.\" -- Cicero\nGPG Key fingerprint: 125F 5C67 DFE9 4084\n", "Ah, this always bites me in the ass. I thought we implemented something for auto-negotiation ~1 year ago?\n", "@alex In principle I agree, but currently Requests just uses whatever the Python version has as the default in the `ssl` module.\n\n@kennethreitz We _are_ autonegotiating (I'm pretty sure), but the remote server refuses the initial version we try.\n", "Fucking C extensions ruin everything :)\n", "Yup, in Python 2 we initially send a SSLv2 request, in Python 3 we send TLSv1.0.\n", "I'll investigate whether forcing to SSLv3 improves any of this\n", "It depends on the version of openssl in use. For me the reported url works just fine with Python 2.7.6 and OpenSSL 1.0.1f.\n", "Goddamn it I hate SSL so much.\n\nOk, so forcing SSLv3 improves _this particular website_, so that fix will work. That's probably the safest fix we can make in this situation.\n", "I'm on OS X -- which uses OpenSSL 0.9.8<some letter>\n\nOn Tue, Jan 7, 2014 at 12:30 PM, Thomas Weißschuh\[email protected]:\n\n> It depends on the version of openssl in use. For me the reported url works\n> just fine with Python 2.7.6 and OpenSSL 1.0.1f.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1847#issuecomment-31776093\n> .\n\n## \n\n\"I disapprove of what you say, but I will defend to the death your right to\nsay it.\" -- Evelyn Beatrice Hall (summarizing Voltaire)\n\"The people's good is the highest law.\" -- Cicero\nGPG Key fingerprint: 125F 5C67 DFE9 4084\n", "Same as me, which is why I can reproduce it. Gotta love Apple's totally bass-ackwards attitude to upgrading critical security software.\n", "Nope, that's not going to fly. Are we really going to have to implement fallback in requests/urllib3?\n", "This is going to suck with the servers that do not send any response, triggering the 30 second timeout.\nIf we do the automatic fallback and no exception/feedback people will see really slow requests if they hit this.\n", "Can we mimic the Python 3 functionality? Shouldn't be that hard. Web browsers do it.\n", "MONKEYPATCH THE SYSTEM\n", "We're really in between a rock and a hard place. Is our current solution of pointing people to [here](https://lukasa.co.uk/2013/01/Choosing_SSL_Version_In_Requests/) really unacceptable?\n\n@kennethreitz I think it's harder than that, I just realised that my Python 3 is using a more recent OpenSSL:\n\n``` python\nPython 2.7.5 (default, Aug 29 2013, 18:40:14) \n[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import ssl\n>>> ssl.OPENSSL_VERSION\n'OpenSSL 0.9.8y 5 Feb 2013'\n```\n\n``` python\nPython 3.3.3 (default, Nov 20 2013, 19:35:16) \n[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import ssl\n>>> ssl.OPENSSL_VERSION\n'OpenSSL 1.0.1e 11 Feb 2013'\n```\n", "Yeah, that's almost always what the problem is. Honestly, I'm extremely proud of my SSL implementation and I think it's one of the most seamless SSL experiences a developer can ever have. Many users have absolutely no idea that they're even verifying SSL when they're using Requests until a verification fails, and we're making the world a much better and secure place because of it. \n\nA URL that fails every now and again because of an odd openssl + python combination is unfortunate, but there's always room for improvement :)\n", "I'm prepared to accept that, in general. We might want to document the `SSLAdapter` fallback. It's being implemented in `requests-toolbelt`, so it'll be easy to get hold of it if needed. Does that sound like an acceptable route?\n", "I think this particular server simply misbehaves and refuses connections from clients which announce _support_ for `SSLv2`.\n\nThis does not work:\n\n```\n$ openssl s_client -connect www.howsmyssl.com:443 -cipher 'ALL'\n```\n\nWhile this works:\n\n```\n$ openssl s_client -connect www.howsmyssl.com:443 -cipher 'ALL:!SSLv2'\n```\n\nIf I try to connect with `ALL` to my own server I end up using TLSv1.2 which looks rather fine.\n", "Sounds like an intentionally bitchy server. The domain name is telling :)\n", "It's definitely intentionally bitchy. Observe the twitter conversation I had earlier [here](https://twitter.com/jmhodges/status/420614636606468096).\n", "Cross referencing the generic monkey-patching solution here: https://github.com/kennethreitz/requests/issues/606#issuecomment-45704671\n", "Hi @Lukasa ,\n\nI am trying to use python requests to send a request to a gunicorn server that accepts tls_v1 protocol which means that the client will use tls_v1 only. I create an adapter like:\nclass TLSAdapter(HTTPAdapter):\n def init_poolmanager(self, connections, maxsize, block=False):\n self.poolmanager = PoolManager(num_pools=connections,\n maxsize=maxsize,\n block=block,\n ssl_version=ssl.PROTOCOL_TLSv1)\n\nAnd use it like:\nrequests_session = requests.Session()\nrequests_session.mount('https://', TLSAdapter())\n\nNow when I try to send a request to my server like:\nresp = requests_session.put(request.full_uri, data=request.data, headers=request_headers, verify=verify, cert=(cert, key))\n\nThen I get an error like:\nrequests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine('',))\n\nCould you please let me know if this issue has been seen before.\nI am running server on my mac and sending the request from the same mac.\n\nI am using python 2.6.9\n" ]
https://api.github.com/repos/psf/requests/issues/1846
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1846/labels{/name}
https://api.github.com/repos/psf/requests/issues/1846/comments
https://api.github.com/repos/psf/requests/issues/1846/events
https://github.com/psf/requests/pull/1846
25,176,672
MDExOlB1bGxSZXF1ZXN0MTEyOTA2NTk=
1,846
get_netrc_auth should handle os.path.expanduser failing
{ "avatar_url": "https://avatars.githubusercontent.com/u/46565?v=4", "events_url": "https://api.github.com/users/acdha/events{/privacy}", "followers_url": "https://api.github.com/users/acdha/followers", "following_url": "https://api.github.com/users/acdha/following{/other_user}", "gists_url": "https://api.github.com/users/acdha/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/acdha", "id": 46565, "login": "acdha", "node_id": "MDQ6VXNlcjQ2NTY1", "organizations_url": "https://api.github.com/users/acdha/orgs", "received_events_url": "https://api.github.com/users/acdha/received_events", "repos_url": "https://api.github.com/users/acdha/repos", "site_admin": false, "starred_url": "https://api.github.com/users/acdha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/acdha/subscriptions", "type": "User", "url": "https://api.github.com/users/acdha", "user_view_type": "public" }
[]
closed
true
null
[]
null
8
2014-01-07T16:05:34Z
2021-09-01T00:11:41Z
2014-01-08T18:57:09Z
CONTRIBUTOR
resolved
https://github.com/toastdriven/django-haystack/issues/924 has a problem report from a user who appears to be running inside a process which does not have `$HOME` defined and is running under a UID which is either not in /etc/passwd or does not have permission to access it. This causes [utils.get_netrc_auth](https://github.com/kennethreitz/requests/blob/v2.1.0/requests/utils.py#L73) to raise an unexpected `KeyError`. The easiest fix would be to simply add that to the except block at the bottom but that's probably too dangerous – I'd probably handle the `KeyError` right at the source until this can be fixed upstream.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1846/reactions" }
https://api.github.com/repos/psf/requests/issues/1846/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1846.diff", "html_url": "https://github.com/psf/requests/pull/1846", "merged_at": "2014-01-08T18:57:09Z", "patch_url": "https://github.com/psf/requests/pull/1846.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1846" }
true
[ "Upstream Python bug report: http://bugs.python.org/issue20164\n", "Well this makes me very sad. Thanks for raising this!\n\nGiven that upstream is not going to fix this on 2.6 _for sure_, even if they do implement a fix elsewhere, I think we need to handle this. The correct behaviour is that `get_netrc_auth` should fail and return nothing.\n\nI'm also inclined to be cautious, so we should restrict the scope of the `try...except` block.\n", "@Lukasa I took a first pass at a patch in 0b41cec\n", "Alright, I'm happy with this. You can ignore TravisCI (I have no idea why it's still running), the relevant CI results are these (all of which passed):\n\nhttp://ci.kennethreitz.org/job/requests-pr/PYTHON=2.6/167/\nhttp://ci.kennethreitz.org/job/requests-pr/PYTHON=2.7/167/\nhttp://ci.kennethreitz.org/job/requests-pr/PYTHON=3.3/167/\nhttp://ci.kennethreitz.org/job/requests-pr/PYTHON=pypy-1.8/167/\n\nWaiting on @sigmavirus24 to review.\n", "Great - thanks for reviewing it!\n", "WHY IS TRAVIS RUNNING AGAIN OH MY GOD\n", "I'm very sad that this requires so many lines of code, but so be it. \n", "This will at some point become unnecessary once the upstream fix has been widely deployed – hopefully in less than another 5 years: \r\n\r\nhttps://bugs.python.org/issue10496" ]
https://api.github.com/repos/psf/requests/issues/1845
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1845/labels{/name}
https://api.github.com/repos/psf/requests/issues/1845/comments
https://api.github.com/repos/psf/requests/issues/1845/events
https://github.com/psf/requests/pull/1845
25,156,352
MDExOlB1bGxSZXF1ZXN0MTEyNzg5NDA=
1,845
Resolve proxy_bypass problems.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2014-01-07T09:40:58Z
2021-09-09T00:01:18Z
2014-01-08T18:51:08Z
MEMBER
resolved
This resolves the sad, sad 2.6.x problems in `proxy_bypass` as reported in #1844 and #1841. The nature of this fix is that, fundamentally, we don't care enough to let this call stop us. Assume that if it fails we aren't bypassing the proxy (usually we won't be) and move on with our lives. I'm only catching the specific exceptions we've seen. I'm open to wrapping this in a generic `except` block if we want to.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1845/reactions" }
https://api.github.com/repos/psf/requests/issues/1845/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1845.diff", "html_url": "https://github.com/psf/requests/pull/1845", "merged_at": "2014-01-08T18:51:08Z", "patch_url": "https://github.com/psf/requests/pull/1845.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1845" }
true
[ "Will get the issue reporter to test this asap\n", "So the issue reporter said he got a different error, but the other old 2.6 issue (the _tunnel_host one). So I'd say that this is probably fixed with this change, and this should be fixed in entirety once both changes happen. I'm happy to have him test once this and the urllib3 update mentioned in #1842 land to make sure it's all kosher.\n", "Sounds good to me.\n\nFor the record, this fix can still lead to failures. If the user is actually using any of the `proxy_bypass` function on their Mac with 2.6.[0-5], we can potentially not bypass the proxy. That sucks, but it sucks far less than this being broken.\n", "This looks pretty safe to me. I'm going to merge it, just so it's out of my queue. \n\nComment if it doesn't solve the problem :)\n" ]
https://api.github.com/repos/psf/requests/issues/1844
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1844/labels{/name}
https://api.github.com/repos/psf/requests/issues/1844/comments
https://api.github.com/repos/psf/requests/issues/1844/events
https://github.com/psf/requests/issues/1844
25,154,275
MDU6SXNzdWUyNTE1NDI3NQ==
1,844
Proxy Bypass Fails in Python <=2.6.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/145979?v=4", "events_url": "https://api.github.com/users/dstufft/events{/privacy}", "followers_url": "https://api.github.com/users/dstufft/followers", "following_url": "https://api.github.com/users/dstufft/following{/other_user}", "gists_url": "https://api.github.com/users/dstufft/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dstufft", "id": 145979, "login": "dstufft", "node_id": "MDQ6VXNlcjE0NTk3OQ==", "organizations_url": "https://api.github.com/users/dstufft/orgs", "received_events_url": "https://api.github.com/users/dstufft/received_events", "repos_url": "https://api.github.com/users/dstufft/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dstufft/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dstufft/subscriptions", "type": "User", "url": "https://api.github.com/users/dstufft", "user_view_type": "public" }
[]
closed
true
null
[]
null
9
2014-01-07T08:46:47Z
2021-09-09T00:28:15Z
2014-01-10T11:13:14Z
CONTRIBUTOR
resolved
It appears that using a proxy bypass with an IP without a netmask causes an exception to be raised from requests in python <= 2.6.5. This is due to a bug in the stdlib which was fixed in 2.6.6. Python bugs: http://bugs.python.org/issue10643 http://bugs.python.org/issue8883 More information: https://github.com/pypa/pip/issues/1429 This is another issue that pip hit with the recent 1.5 release that is occurring on older 2.6 releases.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1844/reactions" }
https://api.github.com/repos/psf/requests/issues/1844/timeline
null
completed
null
null
false
[ "For ease of reading, here's the exception:\n\n```\n/Users/jerith/.virtualenvs/txTwitter/bin/pip run on Mon Jan 6 21:15:03 2014\nDownloading/unpacking Twisted\n Getting page https://pypi.python.org/simple/Twisted/\nCleaning up...\n Removing temporary dir /Users/jerith/.virtualenvs/txTwitter/build...\nException:\nTraceback (most recent call last):\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/basecommand.py\", line 122, in main\n status = self.run(options, args)\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/commands/install.py\", line 270, in run\n requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/req.py\", line 1157, in prepare_files\n url = finder.find_requirement(req_to_install, upgrade=self.upgrade)\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/index.py\", line 202, in find_requirement\n page = self._get_page(main_index_url, req)\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/index.py\", line 576, in _get_page\n session=self.session,\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/index.py\", line 678, in get_page\n resp = session.get(url)\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/_vendor/requests/sessions.py\", line 394, in get\n return self.request('GET', url, **kwargs)\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/download.py\", line 236, in request\n return super(PipSession, self).request(method, url, *args, **kwargs)\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/_vendor/requests/sessions.py\", line 355, in request\n env_proxies = get_environ_proxies(url) or {}\n File \"/Users/jerith/.virtualenvs/txTwitter/lib/python2.6/site-packages/pip/_vendor/requests/utils.py\", line 490, in get_environ_proxies\n if proxy_bypass(netloc):\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py\", line 1567, in proxy_bypass\n return proxy_bypass_macosx_sysconf(host)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py\", line 1437, in proxy_bypass_macosx_sysconf\n mask = int(m.group(2)[1:])\nTypeError: 'NoneType' object is unsubscriptable\n```\n", "Yeah, this is the same as #1841. If `pip` wasn't supporting old point-releases of 2.6 I'd be happy just to call this \"not a problem\", but you are, so we'll have to do something about it.\n", "So, we already know that this buggy function _can_ throw `TypeError`s and `socket.gaierror`s. I don't know whether that scares me enough to just catch everything.\n", "Fix is in #1845. Feel free to take a look, @dstufft.\n", "Ok @dstufft, I think all the things you needed are merged (though I can't find the comment where you listed them).\n", "@Lukasa Looks good to me. I had the reporter of the bugs test it out and he was able to successfully install stuff without error. So once @kennethreitz cuts a new release (hopefully soon!) pip 1.5.1 can vendor that release and close out those bugs.\n", "Awesome, sounds good. =)\n", "Ok, Requests 2.2.0 is out the door! pip 1.5.1 should be able to vendor that safely. As always, check the changelog (it's not very long).\n", "It's already vendored :D I just forgot to close this sorry!\n" ]
https://api.github.com/repos/psf/requests/issues/1843
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1843/labels{/name}
https://api.github.com/repos/psf/requests/issues/1843/comments
https://api.github.com/repos/psf/requests/issues/1843/events
https://github.com/psf/requests/issues/1843
25153520
MDU6SXNzdWUyNTE1MzUyMA==
1843
Allow bypassing decoding with iter_content
{ "avatar_url": "https://avatars.githubusercontent.com/u/145979?v=4", "events_url": "https://api.github.com/users/dstufft/events{/privacy}", "followers_url": "https://api.github.com/users/dstufft/followers", "following_url": "https://api.github.com/users/dstufft/following{/other_user}", "gists_url": "https://api.github.com/users/dstufft/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dstufft", "id": 145979, "login": "dstufft", "node_id": "MDQ6VXNlcjE0NTk3OQ==", "organizations_url": "https://api.github.com/users/dstufft/orgs", "received_events_url": "https://api.github.com/users/dstufft/received_events", "repos_url": "https://api.github.com/users/dstufft/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dstufft/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dstufft/subscriptions", "type": "User", "url": "https://api.github.com/users/dstufft", "user_view_type": "public" }
[]
closed
true
null
[]
null
6
2014-01-07T08:24:19Z
2021-09-09T00:28:16Z
2014-01-07T09:38:41Z
CONTRIBUTOR
resolved
When servers send a `Content-Encoding` header, requests (actually urllib3, via the `decode_content=True` kwarg that requests passes) will automatically decode the response. In pip this is causing a breakage for servers that serve a .tar.gz with a `Content-Encoding: gzip` header. Would it be at all possible to have iter_content support a `content_decode` kwarg that just gets passed through to urllib3?
{ "avatar_url": "https://avatars.githubusercontent.com/u/145979?v=4", "events_url": "https://api.github.com/users/dstufft/events{/privacy}", "followers_url": "https://api.github.com/users/dstufft/followers", "following_url": "https://api.github.com/users/dstufft/following{/other_user}", "gists_url": "https://api.github.com/users/dstufft/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dstufft", "id": 145979, "login": "dstufft", "node_id": "MDQ6VXNlcjE0NTk3OQ==", "organizations_url": "https://api.github.com/users/dstufft/orgs", "received_events_url": "https://api.github.com/users/dstufft/received_events", "repos_url": "https://api.github.com/users/dstufft/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dstufft/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dstufft/subscriptions", "type": "User", "url": "https://api.github.com/users/dstufft", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1843/reactions" }
https://api.github.com/repos/psf/requests/issues/1843/timeline
null
completed
null
null
false
[ "For reference this is in pip's issues as https://github.com/pypa/pip/issues/1419 and https://github.com/pypa/pip/issues/1421\n", "I'm obviously tempted to argue that `pip` should be able to spot this and handle it (that is, any compliant server would signal `Content-Encoding: gzip` and `Content-Type: application/x-tar`), but that's not a very helpful way to approach this problem.\n\nIn principle we could do that, it would be very easy from our side. @kennethreitz, the API is your call: are you happy to do this?\n", "I'm perfectly happy reaching in and using the raw response myself FWIW, I just figured if requests was OK with exposing this option it'd be easier and might be useful for other people. But either way it's ok with me.\n", "Yeah, that was basically the decision I want @kennethreitz to make: do we just say \"tough, use the raw response\", or do we provide the kwarg.\n", "The current intended API design is to use .raw for this.\n\n## \n\nKenneth Reitz\n\n> On Jan 7, 2014, at 3:52 AM, Cory Benfield [email protected] wrote:\n> \n> I'm obviously tempted to argue that pip should be able to spot this and handle it (that is, any compliant server would signal Content-Encoding: gzip and Content-Type: application/x-tar), but that's not a very helpful way to approach this problem.\n> \n> In principle we could do that, it would be very easy from our side. @kennethreitz, the API is your call: are you happy to do this?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "Ok that's fine! I'll use raw then.\n" ]
https://api.github.com/repos/psf/requests/issues/1842
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1842/labels{/name}
https://api.github.com/repos/psf/requests/issues/1842/comments
https://api.github.com/repos/psf/requests/issues/1842/events
https://github.com/psf/requests/issues/1842
25128192
MDU6SXNzdWUyNTEyODE5Mg==
1842
HTTPS connections with requests incompatible with python 2.6.(0-2)
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[]
closed
true
null
[]
null
8
2014-01-06T20:41:32Z
2021-09-09T00:28:15Z
2014-01-06T20:57:50Z
CONTRIBUTOR
resolved
Creating this for tracking purposes... urllib3 attempts to access an attribute that was added in 2.6.3, breaking older Python 2.6 releases. https://github.com/shazow/urllib3/pull/307 fixes this issue; once that's merged, the dependency in requests should probably be updated.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1842/reactions" }
https://api.github.com/repos/psf/requests/issues/1842/timeline
null
completed
null
null
false
[ "@sigmavirus24 and I are in favour of not supporting very old patch releases of 2.6. For perspective, 2.6.0 was released in 2008 and 2.6.2 was released in April 2009.\n\nObviously, we'll take the fix in urllib3, but I see no reason for Requests to rush into supporting a five year old release of Python. Anyone who felt the need to be on Python 2.6 should have been taking security patches and be at 2.6.9. \n", "Agreed, was mostly creating this so people can hopefully find it on Google\nif they run into this problem.\n\n## \n\nKevin Burke | Twilio\nphone: 925.271.7005 | kev.inburke.com\n\nOn Mon, Jan 6, 2014 at 12:58 PM, Cory Benfield [email protected]:\n\n> @sigmavirus24 https://github.com/sigmavirus24 and I are in favour of\n> not supporting very old patch releases of 2.6. For perspective, 2.6.0 was\n> released in 2008 and 2.6.2 was released in April 2009.\n> \n> Obviously, we'll take the fix in urllib3, but I see no reason for Requests\n> to rush into supporting a five year old release of Python. Anyone who felt\n> the need to be on Python 2.6 should have been taking security patches and\n> be at 2.6.9.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1842#issuecomment-31686263\n> .\n", "Thanks for that. =)\n", "Would it be at all possible to get you to reconsider releasing this relatively quickly? Especially if it's possible to get #1843 into the same release as well. Older versions of OSX ship with Python 2.6.1 or so with no mechanism for updating besides updating the entire OS. This has affected pip on those older OSX releases.\n", "I'm fine with releasing relatively soon but that responsibility lies entirely on @kenmethreitz's shoulders.\n", "Yeah, when we get this and #1845 resolved I'll push for a release.\n", "You all are awesome :sparkles: \n\nOnce things are merged I can make a test branch for pip and verify it works before the release if you'd like.\n", "@dstufft I'd greatly appreciate that. :)\n" ]
https://api.github.com/repos/psf/requests/issues/1841
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1841/labels{/name}
https://api.github.com/repos/psf/requests/issues/1841/comments
https://api.github.com/repos/psf/requests/issues/1841/events
https://github.com/psf/requests/issues/1841
25098751
MDU6SXNzdWUyNTA5ODc1MQ==
1841
URL with port causing socket.gaierror
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2014-01-06T12:19:16Z
2021-09-08T04:00:47Z
2014-01-06T16:56:39Z
NONE
resolved
The following is not working: ``` import requests requests.post('http://localhost:8080/', data={"hello": "world"}) ``` This works: ``` import requests requests.post('http://localhost/', data={"hello": "world"}) ``` This works too: ``` $ export HTTP_PROXY="http://localhost:8080" $ python import requests requests.post('http://localhost/', data={"hello": "world"}) ``` Requests 2.1.0 Python 2.6.1 Mac OS X 10.6.8 httpie debug: ``` requests.request({'allow_redirects': False, 'auth': None, 'data': {}, 'files': {}, 'headers': CaseInsensitiveDict({'User-Agent': 'HTTPie/0.8.0-dev'}), 'method': 'post', 'params': {}, 'proxies': {}, 'stream': True, 'timeout': 30, 'url': 'http://localhost:8080/xxx/rest/product/', 'verify': True}) Traceback (most recent call last): File "/Users/andi/.virtualenvs/xxxx/bin/http", line 8, in <module> load_entry_point('httpie==0.8.0-dev', 'console_scripts', 'http')() File "/Users/andi/.virtualenvs/xxxx/lib/python2.6/site-packages/httpie/core.py", line 95, in main response = get_response(args, config_dir=env.config.directory) File "/Users/andi/.virtualenvs/xxxx/lib/python2.6/site-packages/httpie/client.py", line 27, in get_response response = requests.request(**requests_kwargs) File "/Users/andi/.virtualenvs/xxxx/lib/python2.6/site-packages/requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "/Users/andi/.virtualenvs/xxxx/lib/python2.6/site-packages/requests/sessions.py", line 355, in request env_proxies = get_environ_proxies(url) or {} File "/Users/andi/.virtualenvs/xxxx/lib/python2.6/site-packages/requests/utils.py", line 490, in get_environ_proxies if proxy_bypass(netloc): File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 1567, in proxy_bypass return proxy_bypass_macosx_sysconf(host) File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 1433, in proxy_bypass_macosx_sysconf hostIP = socket.gethostbyname(host) socket.gaierror: [Errno 8] nodename nor servname provided, or not known ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1841/reactions" }
https://api.github.com/repos/psf/requests/issues/1841/timeline
null
completed
null
null
false
[ "Hmm, that stack trace is interesting. I don't have a copy of Python 2.6.1 sitting around, but in 2.7 that code is subtly different, and explicitly splits the host and port. I wonder if early versions of 2.6 had a bug in their `proxy_bypass` code.\n", "2.6.1 _is_ ancient. I think the latest release is 2.6.9 or so and that's the last security release it'll see.\n\nThe reality though is that this seems to be in the standard lib and I'm not sure there's very much we can do about it.\n", "Sorry, did not see the forrest from the trees. Of course i should have been using 2.7, I forgot to pass that option to mkvirtualenv. Damn open space office ;) Sorry for the noise and thanks for pointing it out.\n", "\r\n File \"/usr/lib/python3.5/socket.py\", line 732, in getaddrinfo\r\n for res in _socket.getaddrinfo(host, port, family, type, proto, flags):\r\n\r\ngaierror: [Errno -2] Name or service not known" ]
https://api.github.com/repos/psf/requests/issues/1840
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1840/labels{/name}
https://api.github.com/repos/psf/requests/issues/1840/comments
https://api.github.com/repos/psf/requests/issues/1840/events
https://github.com/psf/requests/issues/1840
25064220
MDU6SXNzdWUyNTA2NDIyMA==
1840
server cookies (not set by me) not saved within a Session
{ "avatar_url": "https://avatars.githubusercontent.com/u/6320683?v=4", "events_url": "https://api.github.com/users/dvasseur/events{/privacy}", "followers_url": "https://api.github.com/users/dvasseur/followers", "following_url": "https://api.github.com/users/dvasseur/following{/other_user}", "gists_url": "https://api.github.com/users/dvasseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dvasseur", "id": 6320683, "login": "dvasseur", "node_id": "MDQ6VXNlcjYzMjA2ODM=", "organizations_url": "https://api.github.com/users/dvasseur/orgs", "received_events_url": "https://api.github.com/users/dvasseur/received_events", "repos_url": "https://api.github.com/users/dvasseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dvasseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvasseur/subscriptions", "type": "User", "url": "https://api.github.com/users/dvasseur", "user_view_type": "public" }
[]
closed
true
null
[]
null
18
2014-01-05T01:59:08Z
2021-09-09T00:28:16Z
2014-01-05T09:21:01Z
NONE
resolved
Using a Session, when a server is setting some cookie on a 301/302 response, the cookie is not saved Update: could not find a public site to show the issue, hope this will be enough! Update 2: see here http://blog.dubbelboer.com/2012/11/25/302-cookie.html ``` python http_session = requests.Session() http_session.headers.update({'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64;$ url = 'http://dubbelboer.com/302cookie.php' r = http_session.get(url) print(r.history) print(r.url) print(r.text) for cookie in r.cookies: print(cookie) # --- (Response [302],) http://dubbelboer.com/302cookie.php?show=1388887790.4324 Cookie was: NOT SET<br><a href="?a=1388887790.4493">try again</a> <Cookie test= for dubbelboer.com/> ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1840/reactions" }
https://api.github.com/repos/psf/requests/issues/1840/timeline
null
completed
null
null
false
[ "@dvasseur I updated your issue to use code blocks so that it's highlighted properly.\n\nCan you let us know which version of requests you're using?\n", "FWIW, On the latest requests this is not an issue for me:\n\n``` pycon\n>>> import requests\n>>> s = requests.Session()\n>>> r = s.get('http://dubbelboer.com/302cookie.php')\n>>> r.history\n(<Response [302]>,)\n>>> s.cookies\n<<class 'requests.cookies.RequestsCookieJar'>[Cookie(version=0, name='test', value='', port=None, port_specified=False, domain=u'dubbelboer.com', domain_specified=False, domain_initial_dot=False, path='/', path_specified=False, secure=False, expires=None, discard=True, comment=None, comment_url=None, rest={}, rfc2109=False)]>\n>>> r.text\nu'Cookie was: SET<br><a href=\"?a=1388890217.6877\">try again</a>'\n```\n", "Agreed, this problem does not manifest on my machine on 2.1.0. Try updating to 2.1.0 and try again.\n", "@sigmavirus24 thanks for editing, didn't know how to do it !\n\nI have tested with 2.1.0 (arch version) and installed it again just to be sure\n\nOn my machine:\n\n``` python\nPython 3.3.3 (default, Nov 26 2013, 13:33:18)\n[GCC 4.8.2] on linux\n>>> import requests\n>>> s = requests.Session()\n>>> r = s.get('http://dubbelboer.com/302cookie.php')\n>>> r.history\n(<Response [302]>,)\n>>> s.cookies\n<<class 'requests.cookies.RequestsCookieJar'>[Cookie(version=0, name='test', value='', port=None, port_specified=False, domain='dubbelboer.com', domain_specified=False, domain_initial_dot=False, path='/', path_specified=False, secure=False, expires=None, discard=True, comment=None, comment_url=None, rest={}, rfc2109=False)]>\n>>> r.text\n'Cookie was: NOT SET<br><a href=\"?a=1388927785.835\">try again</a>'\n>>> print(requests.__version__)\n2.1.0\n```\n", "Oooh, weird.\n\n``` python\nPython 3.3.3 (default, Nov 20 2013, 19:35:16) \n[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import requests\n>>> requests.__version__\n'2.1.0'\n>>> s = requests.Session()\n>>> r = s.get('http://dubbelboer.com/302cookie.php')\n>>> r.history\n(<Response [302]>,)\n>>> r.text\n'Cookie was: SET<br><a href=\"?a=1388928255.3903\">try again</a>'\n```\n\nThis is weird. Time to investigate!\n", "Ubuntu also works fine:\n\n``` python\nPython 3.3.2+ (default, Oct 9 2013, 14:50:09) \n[GCC 4.8.1] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import requests\n>>> requests.__version__\n'2.1.0'\n>>> s = requests.Session()\n>>> r = s.get('http://dubbelboer.com/302cookie.php')\n>>> r.history\n(<Response [302]>,)\n>>> r.text\n'Cookie was: SET<br><a href=\"?a=1388928510.5557\">try again</a>'\n```\n", "Wait, did you say you installed `requests` from pacman, not `pip`?\n", "yep, does it matter ?\n", "Arch almost certainly apply a lot of downstream patches. Any number of them could break this function. Try using a version obtained from `pip`.\n", "according to this https://projects.archlinux.org/svntogit/community.git/tree/trunk/PKGBUILD?h=packages/python-requests the only patch applied is for certificates.\n\nI'll check with the git version here (just need to find out how to import for current dir instead of system repo, i'm a python beginner ;-))\n", "=P If that doesn't work, I'm prepared to believe that there's a patch in the Python standard library around the cookie handling that only Arch applies.\n", "OK, I've checked sys.path, tried with the latest version here and still the same issue. I've also checked default python package in Arch (https://projects.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/python) and there are no patch applied, what next ?\n", "Hmmm. Someone is going to need to confirm that this is related to Arch, and not your specific system. This means either I install arch, or you use a different OS. Not sure how quickly I can get Arch installed today though.\n", "Hmm, I have a Suse here also and\n\n``` python\nPython 3.3.2 (default, Jun 13 2013, 16:05:31) [GCC] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import requests\n>>> requests.__version__\n'2.1.0'\n>>> s = requests.Session()\nlocal version\n>>> r = s.get('http://dubbelboer.com/302cookie.php')\n>>> r.history\n(<Response [302]>,)\n>>> r.text\n'Cookie was: SET<br><a href=\"?a=1388931312.1533\">try again</a>'\n```\n\nWorking! So it seems to be Arch related :(\n", "Wow, I wonder what weird thing Arch is doing. Want to use `tcpdump` to confirm that the cookie actually isn't being sent to the server on Arch?\n", "I've played with tcpdump and:\n\n```\nSet-Cookie: test=1388932595.4576; expires=Sun, 05-Jan-2014 14:36:39 GMT\nLocation: ?show=1388932595.4576\n```\n\nso the cookie is indeed set\n\nBUT look at the `expires` value, it's set to expire right now ! I've checked with a `TZ=GMT date` at the end of my python script and the result is `Sun Jan 5 14:36:40 GMT 2014`, one second later. So in this specific case the cookie doesn't have to exist anymore and requests is doing the right thing!\n\nCould you check this time point on your system ?\n", "So, here's the flow I see from my machine, captured in Wireshark:\n\n```\nGET /302cookie.php HTTP/1.1\n---> HTTP/1.1 302 Moved Temporarily (received @ Sun, 05-Jan-2014 14:55:52.775986)\n Set-Cookie: test=1388933752.1567; expires=Sun, 05-Jan-2014 14:55:56 GMT\nGET /302cookie.php?show=1388933752.1567 HTTP/1.1 (sent @ Sun, 05-Jan-2014 14:55:52.778463)\n---> HTTP/1.1 200 OK\n```\n\nSo, the cookie is set to expire 4 seconds after I received the HTTP response, and Requests sends the redirect fast enough that the cookie doesn't expire. I suspect this is a timing problem on your end.\n", "OK, fine! I've updated my server clock (and set up ntpd BTW) and everything is fine, thanks!\n" ]
https://api.github.com/repos/psf/requests/issues/1839
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1839/labels{/name}
https://api.github.com/repos/psf/requests/issues/1839/comments
https://api.github.com/repos/psf/requests/issues/1839/events
https://github.com/psf/requests/issues/1839
25042360
MDU6SXNzdWUyNTA0MjM2MA==
1839
Unquote+Quote cycle prohibits urls required by salesforce.com
{ "avatar_url": "https://avatars.githubusercontent.com/u/493648?v=4", "events_url": "https://api.github.com/users/jsullivanlive/events{/privacy}", "followers_url": "https://api.github.com/users/jsullivanlive/followers", "following_url": "https://api.github.com/users/jsullivanlive/following{/other_user}", "gists_url": "https://api.github.com/users/jsullivanlive/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jsullivanlive", "id": 493648, "login": "jsullivanlive", "node_id": "MDQ6VXNlcjQ5MzY0OA==", "organizations_url": "https://api.github.com/users/jsullivanlive/orgs", "received_events_url": "https://api.github.com/users/jsullivanlive/received_events", "repos_url": "https://api.github.com/users/jsullivanlive/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jsullivanlive/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsullivanlive/subscriptions", "type": "User", "url": "https://api.github.com/users/jsullivanlive", "user_view_type": "public" }
[]
closed
true
null
[]
null
9
2014-01-03T23:31:28Z
2021-09-09T00:09:57Z
2014-01-29T19:25:04Z
NONE
resolved
Salesforce.com requires you to %-hex-encode period (.) and slash (/) to issue a PATCH. The unquote+quote cycle in `requote_uri(uri)` disallows this: https://github.com/kennethreitz/requests/blob/master/requests/utils.py#L398 Since it's not prohibited by the http spec (see uri comparison at http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.2.2) but some web services like salesforce.com won't allow the unquoted values for period or slash, could this be disabled? I'm glad to issue a pull request for disabling it but figured that I'd ask first for background before assuming I knew why this was here.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1839/reactions" }
https://api.github.com/repos/psf/requests/issues/1839/timeline
null
completed
null
null
false
[ "Thanks for raising this issue!\n\nFirstly, I think the slashes are a non-starter. We always urlencode them if you pass them in the `params` dict, so that's not a problem. The sole problem seems to be with periods.\n\nHappily, the answer is that you can disable it if you use the [PreparedRequest](http://docs.python-requests.org/en/latest/user/advanced/#prepared-requests) objects. Try this:\n\n``` python\nfrom requests import Request, Session\n\ns = Session()\nreq = Request('GET', url)\nprepped = s.prepare_request(req)\nprepped.url = prepped.url.replace('.', '%2E')\nresp = s.send(prepped)\n```\n", "My issue is not with params but with path. Salesforce.com requires the id in the URL has these both escaped. I only need one of my periods to be quoted in this path, not all of them. I consider this to be a bug because the code is changing my valid (albeit wonky because of Salesforce.com) URL. \n\nIs there a use case for the unquote and quote cycle?\n\nSent from my iPhone\n\n> On Jan 3, 2014, at 6:42 PM, Cory Benfield [email protected] wrote:\n> \n> Thanks for raising this issue!\n> \n> Firstly, I think the slashes are a non-starter. We always urlencode them if you pass them in the params dict, so that's not a problem. The sole problem seems to be with periods.\n> \n> Happily, the answer is that you can disable it if you use the PreparedRequest objects. Try this:\n> \n> from requests import Request, Session\n> \n> s = Session()\n> req = Request('GET', url)\n> prepped = s.prepare_request(req)\n> prepped.url = prepped.url.replace('.', '%2E')\n> resp = s.send(prepped)\n> —\n> Reply to this email directly or view it on GitHub.\n", "Ah, sorry.\n\nYes, there is. The problem is that Requests doesn't know whether it's going to get a quoted or an unquoted URI (or indeed a URI that is some combination of quoted and unquoted). In an attempt to be helpful, we do our best to normalise them: that is, we unquote everything, then requote again. That way, anything that was quoted remains quoted, and anything that wasn't becomes quoted.\n\nThis is valuable behaviour: most users don't understand the complexities of URL quoting (hell, _I_ don't understand all of them), and they don't want to have to have a class about it. They want to be able to pass a URL and have us do the right thing, and in 90% of cases the quote-unquote cycle does exactly that.\n\nIf you're interested, the relevant comment talking about this behaviour is [this one](https://github.com/kennethreitz/requests/issues/369#issuecomment-3953802), but the key point of that comment is this:\n\n> [...] percent-encoded unreserved characters will end up being unquoted (which is legal, because they are equivalent either way), illegal characters will end up being quoted (which is required for a valid URI), and reserved characters will remain exactly as they were.\n\nThe period is unreserved, so is unquoted. This is _totally correct_ behaviour. Normally I wouldn't consider that a compelling argument (we do lots of incorrect things to work around bugs introduced by doing the correct thing), but in this case if we change this behaviour we will simply introduce bugs in other web services that make the opposite mistake to Salesforce.\n\nThe question then becomes, do we add a feature to the API to disable this? The answer is no: we are not changing Requests' API at this time. The expected way to work around this set of problems is to mutate the URL provided by the `PreparedRequest` object I showed you above. This is really the only way to guarantee that we'll do the correct incorrect thing for your specific use case. I appreciate that it's not very clean, but there's not a whole lot we can do about that.\n\nIs that explanation clear? I'm very tired at the minute, so I may have been unclear: if I was, I apologise. =)\n", "What I hear is that the intended behavior is to handle quoted and unquoted (and mixed) urls. I agree with this because it makes me have to think less. \n\nIt looks like the unquote method is used to prevent double quoting. That unquote causes a problem because it modifies a valid url by unnecessarily unquoting text and sends something different than the user specified.\n\nI am proposing it work without unquote and add logic that prevents double quoting so that people can use a mix of both but requests will not unnecessarily modify/destroy urls that are valid without telling the user. \n\nIf this sounds reasonable I will open a pull. \n\nThanks!\n\nSent from my iPhone\n\n> On Jan 3, 2014, at 7:00 PM, Cory Benfield [email protected] wrote:\n> \n> Ah, sorry.\n> \n> Yes, there is. The problem is that Requests doesn't know whether it's going to get a quoted or an unquoted URI (or indeed a URI that is some combination of quoted and unquoted). In an attempt to be helpful, we do our best to normalise them: that is, we unquote everything, then requote again. That way, anything that was quoted remains quoted, and anything that wasn't becomes quoted.\n> \n> This is valuable behaviour: most users don't understand the complexities of URL quoting (hell, I don't understand all of them), and they don't want to have to have a class about it. They want to be able to pass a URL and have us do the right thing, and in 90% of cases the quote-unquote cycle does exactly that.\n> \n> If you're interested, the relevant comment talking about this behaviour is this one, but the key point of that comment is this:\n> \n> [...] percent-encoded unreserved characters will end up being unquoted (which is legal, because they are equivalent either way), illegal characters will end up being quoted (which is required for a valid URI), and reserved characters will remain exactly as they were.\n> \n> The period is unreserved, so is unquoted. This is totally correct behaviour.
Normally I wouldn't consider that a compelling argument (we do lots of incorrect things to work around bugs introduced by doing the correct thing), but in this case if we change this behaviour we will simply introduce bugs in other web services that make the opposite mistake to Salesforce.\n> \n> The question then becomes, do we add a feature to the API to disable this? The answer is no: we are not changing Requests' API at this time. The expected way to work around this set of problems is to mutate the URL provided by the PreparedRequest object I showed you above. This is really the only way to guarantee that we'll do the correct incorrect thing for your specific use case. I appreciate that it's not very clean, but there's not a whole lot we can do about that.\n> \n> Is that explanation clear? I'm very tired at the minute, so I may have been unclear: if I was, I apologise. =)\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "First, let's be clear. Yes, the unquote/quote modifies a valid URL, but it modifies the URL in a way that leaves the URL _completely synonymous_, as defined by the standard. The default position is that we have done nothing wrong here, so we need a good justification for removing this step.\n\nUnfortunately, there simply isn't one. Salesforce is _wrong_ to require that a percent-encoded period has a different meaning to a literal one. That wrongness is dramatic: you would encounter the same problem with browsers. Using TCPdump with Chrome shows that Chrome has the same 'bug' Requests has:\n\n```\nInput: http://httpbin.org/get/test.test\nWireshark path: /get/test.test\n\nInput: http://httpbin.org/get/test%2Etest\nWireshark path: /get/test.test\n```\n\nThere is no way for you to turn this off that won't result in the opposite bug for someone else: that is, someone is being saved by us turning their percent-encoded period into a literal one. We've made our decision: we'll do what browsers do with their URL bars. 
If you need to change that to work with a misbehaving server, you can do that, using the `PreparedRequest` URL.\n", "@jsullivanlive \nHas someone raised this issue with salesforce.com?\n", "In the past quirks like this encountered on salesforce have never been addressed. I encourage someone to tell them but I wouldn't encourage you to get your hopes up\n", "@sigmavirus24 and I have had the same experience, which is also the prevailing impression in #salesforce on freenode. Oracle, Salesforce, and any other big companies have more concern over their slipping ship dates than their adherence to rfc and test coverage. This is why, after 20 years of development behind me, I touch data as little as possible. :)\n", "requests-oauthlib has also had reports that Salesforce has not upgraded the version (draft) of OAuth 2.0 that they're using in spite of the fact that they were involved in its creation.\n" ]
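The quote/unquote normalization described in this thread can be sketched with the standard library. This is an illustrative approximation, not requests' actual `requote_uri` implementation; the `RESERVED` set and the `requote_path` helper are assumptions made for the example.

```python
from urllib.parse import quote, unquote

# Reserved characters that must survive requoting untouched
# (an approximation of the set requests treats as safe).
RESERVED = "!#$%&'()*+,/:;=?@[]~"

def requote_path(path):
    # Unquote everything, then requote: percent-encoded unreserved
    # characters (e.g. %2E) become literal, illegal characters get
    # quoted, and reserved characters remain exactly as they were.
    return quote(unquote(path), safe=RESERVED)

print(requote_path("/get/test%2Etest"))  # -> /get/test.test
print(requote_path("/id with space"))    # -> /id%20with%20space
```

Because the normalization is lossless only up to RFC equivalence, mutating `PreparedRequest.url` after preparation remains the workaround for servers (like Salesforce here) that assign different meanings to equivalent URLs.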
https://api.github.com/repos/psf/requests/issues/1838
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1838/labels{/name}
https://api.github.com/repos/psf/requests/issues/1838/comments
https://api.github.com/repos/psf/requests/issues/1838/events
https://github.com/psf/requests/issues/1838
25,035,380
MDU6SXNzdWUyNTAzNTM4MA==
1,838
Session Uses Wrong Method When Making Requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/1467590?v=4", "events_url": "https://api.github.com/users/creese/events{/privacy}", "followers_url": "https://api.github.com/users/creese/followers", "following_url": "https://api.github.com/users/creese/following{/other_user}", "gists_url": "https://api.github.com/users/creese/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/creese", "id": 1467590, "login": "creese", "node_id": "MDQ6VXNlcjE0Njc1OTA=", "organizations_url": "https://api.github.com/users/creese/orgs", "received_events_url": "https://api.github.com/users/creese/received_events", "repos_url": "https://api.github.com/users/creese/repos", "site_admin": false, "starred_url": "https://api.github.com/users/creese/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/creese/subscriptions", "type": "User", "url": "https://api.github.com/users/creese", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-01-03T20:57:07Z
2021-09-09T00:28:19Z
2014-01-03T21:46:18Z
NONE
resolved
When making a request with a session, such as: ``` response = session.post("some_url", form_data) ``` the value of `response.request.method` can sometimes be a `GET` with a `response.request.body` that's empty. This happens with surprisingly high frequency (approx. 1 in 10 requests).
{ "avatar_url": "https://avatars.githubusercontent.com/u/1467590?v=4", "events_url": "https://api.github.com/users/creese/events{/privacy}", "followers_url": "https://api.github.com/users/creese/followers", "following_url": "https://api.github.com/users/creese/following{/other_user}", "gists_url": "https://api.github.com/users/creese/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/creese", "id": 1467590, "login": "creese", "node_id": "MDQ6VXNlcjE0Njc1OTA=", "organizations_url": "https://api.github.com/users/creese/orgs", "received_events_url": "https://api.github.com/users/creese/received_events", "repos_url": "https://api.github.com/users/creese/repos", "site_admin": false, "starred_url": "https://api.github.com/users/creese/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/creese/subscriptions", "type": "User", "url": "https://api.github.com/users/creese", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1838/reactions" }
https://api.github.com/repos/psf/requests/issues/1838/timeline
null
completed
null
null
false
[ "You are not checking `response.history` which will undoubtedly show that the original response was a redirect. On redirect, we must change the verb and clear the body per the RFC. I'll wait for you to confirm that you see redirects in your history before closing this.\n", "I am getting a 301 in `response.history`. Thanks.\n" ]
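The redirect rule cited in the first comment — "on redirect, we must change the verb and clear the body" — can be sketched as a pure function. This is an approximation of the behavior, not requests' actual source.

```python
# Approximation of the redirect method-rewriting rule: a 303 always
# becomes GET (except HEAD), and for browser compatibility a POST
# redirected with 301 or 302 is also replayed as GET with no body.
def rebuild_method(method, status_code):
    if status_code == 303 and method != "HEAD":
        return "GET"
    if status_code in (301, 302) and method == "POST":
        return "GET"
    return method

print(rebuild_method("POST", 301))  # -> GET
print(rebuild_method("PUT", 301))   # -> PUT
```

Checking `response.history` for a 301/302, as suggested above, reveals when this rewrite has happened.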
https://api.github.com/repos/psf/requests/issues/1837
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1837/labels{/name}
https://api.github.com/repos/psf/requests/issues/1837/comments
https://api.github.com/repos/psf/requests/issues/1837/events
https://github.com/psf/requests/issues/1837
25,000,754
MDU6SXNzdWUyNTAwMDc1NA==
1,837
failed to fetch URL which can be accessed by browser
{ "avatar_url": "https://avatars.githubusercontent.com/u/1005070?v=4", "events_url": "https://api.github.com/users/hustwj/events{/privacy}", "followers_url": "https://api.github.com/users/hustwj/followers", "following_url": "https://api.github.com/users/hustwj/following{/other_user}", "gists_url": "https://api.github.com/users/hustwj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hustwj", "id": 1005070, "login": "hustwj", "node_id": "MDQ6VXNlcjEwMDUwNzA=", "organizations_url": "https://api.github.com/users/hustwj/orgs", "received_events_url": "https://api.github.com/users/hustwj/received_events", "repos_url": "https://api.github.com/users/hustwj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hustwj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hustwj/subscriptions", "type": "User", "url": "https://api.github.com/users/hustwj", "user_view_type": "public" }
[]
closed
true
null
[]
null
2
2014-01-03T03:55:47Z
2021-09-09T00:28:19Z
2014-01-03T07:30:47Z
NONE
resolved
I failed to fetch a web page from the following URL, but this URL is fine in my browser. http://hkblog.xanga.com/2009/08/17/%E3%80%8A%E6%84%9B%E6%83%85%EF%BC%8C%E6%B2%92%E9%82%A3%E9%BA%BC%E7%BE%8E%E5%A5%BD%E3%80%8B/ The error messages are as follows: Traceback (most recent call last): File "test.py", line 4, in <module> cur_r = requests.get('http://hkblog.xanga.com/2009/08/17/%E3%80%8A%E6%84%9B%E6%83%85%EF%BC%8C%E6%B2%92%E9%82%A3%E9%BA%BC%E7%BE%8E%E5%A5%BD%E3%80%8B/') File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 55, in get return request('get', url, **kwargs) File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 382, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 485, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 372, in send raise ConnectionError(e) requests.exceptions.ConnectionError: HTTPConnectionPool(host='hkblog.xanga.com', port=80): Max retries exceeded with url: /2009/08/17/%E3%80%8A%E6%84%9B%E6%83%85%EF%BC%8C%E6%B2%92%E9%82%A3%E9%BA%BC%E7%BE%8E%E5%A5%BD%E3%80%8B/ (Caused by <class 'httplib.BadStatusLine'>: '')
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1837/reactions" }
https://api.github.com/repos/psf/requests/issues/1837/timeline
null
completed
null
null
false
[ "I think the host 'http://hkblog.xanga.com' does not accept 'User-Agent' containing 'CPython'.\nWhen i send same request with `header={'User-Agent' : 'python-requests/2.1.0 CPython/2.7.5 Darwin/13.0.0'}` (exactly same user-agent with `requests`), that error also occurs in urllib2.\nBut when i remove 'CPython/2.7.5' in user-agent, it works well both requests and urllib2.\nSo, it's not a bug of `requests`.\n", "Looks like @daftshady nailed this one down\n" ]
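The workaround daftshady describes — sending a User-Agent without the interpreter token the server chokes on — can be sketched like this. The `strip_interpreter` helper and the literal User-Agent string are assumptions for illustration only.

```python
# Hypothetical helper: rebuild the default User-Agent without the
# CPython/x.y.z token that this particular server rejects.
DEFAULT_UA = "python-requests/2.1.0 CPython/2.7.5 Darwin/13.0.0"

def strip_interpreter(ua):
    # Keep every whitespace-separated token except the CPython one.
    return " ".join(t for t in ua.split() if not t.startswith("CPython/"))

headers = {"User-Agent": strip_interpreter(DEFAULT_UA)}
# The headers dict can then be passed as requests.get(url, headers=headers)
```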
https://api.github.com/repos/psf/requests/issues/1836
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1836/labels{/name}
https://api.github.com/repos/psf/requests/issues/1836/comments
https://api.github.com/repos/psf/requests/issues/1836/events
https://github.com/psf/requests/pull/1836
24,990,101
MDExOlB1bGxSZXF1ZXN0MTExOTM3Mzg=
1,836
Bump the year ftw
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
[ { "color": "009800", "default": false, "description": null, "id": 44501218, "name": "Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTIxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge" }, { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
8
2014-01-02T22:01:41Z
2021-09-08T23:06:04Z
2014-01-08T18:44:21Z
CONTRIBUTOR
resolved
s/2013/2014
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1836/reactions" }
https://api.github.com/repos/psf/requests/issues/1836/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1836.diff", "html_url": "https://github.com/psf/requests/pull/1836", "merged_at": "2014-01-08T18:44:21Z", "patch_url": "https://github.com/psf/requests/pull/1836.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1836" }
true
[ "May as well update LICENSE. =D\n", "Hurrah! =D :+1:\n", "Done. And to quote my commit message:\n\n> No one checks docs or a license\n\n:wink: \n", "Ooh, ooh, I do!\n", "You would\n", "> Bump the year ftw\n\nThe usual practice is to update the list of years each time the file is modified (but never remove a year). \nhttp://stackoverflow.com/questions/3487007/when-to-update-the-year-in-open-source-copyright-notice\n", "@zedxxx I'm following the convention of the project. requests has been around (much) longer than a year, yes, but for as long as I've been around it has never had the year range in the license.\n", "> has never had the year range in the license\n\nPerhaps, it is time to correct this error.\n" ]
https://api.github.com/repos/psf/requests/issues/1835
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1835/labels{/name}
https://api.github.com/repos/psf/requests/issues/1835/comments
https://api.github.com/repos/psf/requests/issues/1835/events
https://github.com/psf/requests/issues/1835
24,918,485
MDU6SXNzdWUyNDkxODQ4NQ==
1,835
Set trust_env as an option in request method
{ "avatar_url": "https://avatars.githubusercontent.com/u/922419?v=4", "events_url": "https://api.github.com/users/V-E-O/events{/privacy}", "followers_url": "https://api.github.com/users/V-E-O/followers", "following_url": "https://api.github.com/users/V-E-O/following{/other_user}", "gists_url": "https://api.github.com/users/V-E-O/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/V-E-O", "id": 922419, "login": "V-E-O", "node_id": "MDQ6VXNlcjkyMjQxOQ==", "organizations_url": "https://api.github.com/users/V-E-O/orgs", "received_events_url": "https://api.github.com/users/V-E-O/received_events", "repos_url": "https://api.github.com/users/V-E-O/repos", "site_admin": false, "starred_url": "https://api.github.com/users/V-E-O/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/V-E-O/subscriptions", "type": "User", "url": "https://api.github.com/users/V-E-O", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2013-12-31T11:52:38Z
2021-09-09T00:28:19Z
2013-12-31T11:55:39Z
NONE
resolved
The `trust_env` setting on `Session`, which controls whether the `http_proxy` environment variable is honoured, can currently only be set on the session itself. Could it also be exposed as an option on `Session.request` (or the top-level request API), so that an individual request can decide whether to trust the `http_proxy` environment?
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1835/reactions" }
https://api.github.com/repos/psf/requests/issues/1835/timeline
null
completed
null
null
false
[ "There's nothing hidden about `trust_env`, it's [documented](http://docs.python-requests.org/en/latest/api/#requests.Session.trust_env). We're not going to change where `trust_env` lives, we're happy with the API as it stands.\n\nThanks for asking though!\n", "My mistake about unclear expression.\nSometimes we'd like to just use requests.get() as simple, not to declare Session() then get().\nNoticed that 'stream', 'verify', 'cert' all can be set either by requests.get() or change it in Session(), why not just let it append to the parameter list. http://docs.python-requests.org/en/latest/api/#requests.Session.request\n\nThanks for reply.\n", "@V-E-O the parameters passed to `get`, `post`, `put`, and other VERB methods are parameters that pertain to the request itself, not to the environment and not to the session. `trust_env` pertains to the environment and how the session considers the environment when performing requests. In short, no we will not add `trust_env` to the parameter list of any of those methods.\n", "@sigmavirus24 I understand your point, also fine with disabling proxy by setting `trust_env`. Just feeling odd first time I tried to use requests.get(..., proxies=None/{}) to bypass env proxies.\nAnyway, it's OK with us, thank you all for reply!\n" ]
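What `trust_env=True` amounts to for proxies can be illustrated with the standard library's environment lookup, which requests consults when the flag is on; with `trust_env=False` the session skips this lookup entirely. The proxy URL below is a made-up example.

```python
import os
import urllib.request

# Simulate an environment-configured proxy (example value).
os.environ["http_proxy"] = "http://proxy.example.com:3128"

# urllib.request.getproxies() reads proxy settings from the
# environment -- the same source a trusting Session falls back to.
env_proxies = urllib.request.getproxies()
print(env_proxies.get("http"))  # -> http://proxy.example.com:3128
```

Passing an explicit `proxies={}` per request, as the reporter tried, is the per-call way to bypass these environment proxies.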
https://api.github.com/repos/psf/requests/issues/1834
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1834/labels{/name}
https://api.github.com/repos/psf/requests/issues/1834/comments
https://api.github.com/repos/psf/requests/issues/1834/events
https://github.com/psf/requests/issues/1834
24,892,527
MDU6SXNzdWUyNDg5MjUyNw==
1,834
Provide classifiers for PyPI that specify which Python versions are supported
{ "avatar_url": "https://avatars.githubusercontent.com/u/56778?v=4", "events_url": "https://api.github.com/users/cool-RR/events{/privacy}", "followers_url": "https://api.github.com/users/cool-RR/followers", "following_url": "https://api.github.com/users/cool-RR/following{/other_user}", "gists_url": "https://api.github.com/users/cool-RR/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cool-RR", "id": 56778, "login": "cool-RR", "node_id": "MDQ6VXNlcjU2Nzc4", "organizations_url": "https://api.github.com/users/cool-RR/orgs", "received_events_url": "https://api.github.com/users/cool-RR/received_events", "repos_url": "https://api.github.com/users/cool-RR/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cool-RR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cool-RR/subscriptions", "type": "User", "url": "https://api.github.com/users/cool-RR", "user_view_type": "public" }
[]
closed
true
null
[]
null
3
2013-12-30T17:29:34Z
2021-09-09T00:28:20Z
2013-12-30T17:31:54Z
NONE
resolved
{ "avatar_url": "https://avatars.githubusercontent.com/u/240830?v=4", "events_url": "https://api.github.com/users/sigmavirus24/events{/privacy}", "followers_url": "https://api.github.com/users/sigmavirus24/followers", "following_url": "https://api.github.com/users/sigmavirus24/following{/other_user}", "gists_url": "https://api.github.com/users/sigmavirus24/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sigmavirus24", "id": 240830, "login": "sigmavirus24", "node_id": "MDQ6VXNlcjI0MDgzMA==", "organizations_url": "https://api.github.com/users/sigmavirus24/orgs", "received_events_url": "https://api.github.com/users/sigmavirus24/received_events", "repos_url": "https://api.github.com/users/sigmavirus24/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sigmavirus24/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sigmavirus24/subscriptions", "type": "User", "url": "https://api.github.com/users/sigmavirus24", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1834/reactions" }
https://api.github.com/repos/psf/requests/issues/1834/timeline
null
completed
null
null
false
[ "We do: https://github.com/kennethreitz/requests/blob/master/setup.py#L56..L60\n", "Then why do these don't appear on the PyPI page?\n\nhttps://pypi.python.org/pypi/requests\n\nOn Mon, Dec 30, 2013 at 7:32 PM, Ian Cordasco [email protected]:\n\n> We do:\n> https://github.com/kennethreitz/requests/blob/master/setup.py#L56..L60\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/kennethreitz/requests/issues/1834#issuecomment-31357160\n> .\n", "PyPI is a strange beast that is luckily being slowly replaced by [pypa/warehouse](https://github.com/pypa/warehouse). In the past I had issues where I created a package name manually and then metadata from `setup.py` didn't properly appear. I suspect this might be something similar. Regardless only @kennethreitz can change this and it isn't an actual bug in requests.\n" ]
https://api.github.com/repos/psf/requests/issues/1833
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1833/labels{/name}
https://api.github.com/repos/psf/requests/issues/1833/comments
https://api.github.com/repos/psf/requests/issues/1833/events
https://github.com/psf/requests/issues/1833
24,845,131
MDU6SXNzdWUyNDg0NTEzMQ==
1,833
UnsupportedContentType
{ "avatar_url": "https://avatars.githubusercontent.com/u/1450977?v=4", "events_url": "https://api.github.com/users/xjsender/events{/privacy}", "followers_url": "https://api.github.com/users/xjsender/followers", "following_url": "https://api.github.com/users/xjsender/following{/other_user}", "gists_url": "https://api.github.com/users/xjsender/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xjsender", "id": 1450977, "login": "xjsender", "node_id": "MDQ6VXNlcjE0NTA5Nzc=", "organizations_url": "https://api.github.com/users/xjsender/orgs", "received_events_url": "https://api.github.com/users/xjsender/received_events", "repos_url": "https://api.github.com/users/xjsender/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xjsender/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xjsender/subscriptions", "type": "User", "url": "https://api.github.com/users/xjsender", "user_view_type": "public" }
[]
closed
true
null
[]
null
4
2013-12-28T12:15:45Z
2021-09-09T00:28:21Z
2013-12-28T12:20:59Z
NONE
resolved
I use below code to post a data from file to salesforce, sometimes, it worked. ``` python def create_batch(self, job_id): url = self.base_url + "/job/%s/batch" % job_id headers = self.headers headers["Content-Type"] = "text/csv;charset=UTF-8" if self.operation == "query" and self.records == None: api = SalesforceApi(self.settings) self.records = api.combine_soql(self.sobject) response = requests.post(url, self.records, verify=False, headers=headers) print (response.content) if response.status_code == 400: return self.parse_response(response, url) batch_id = getUniqueElementValueFromXmlString(response.content, "id") return batch_id ``` However, sometimes, it may give me below error ``` b'<?xml version="1.0" encoding="UTF-8"?> <error\n xmlns="http://www.force.com/2009/06/asyncapi/dataload">\n <exceptionCode>UnsupportedContentType</exceptionCode>\n <exceptionMessage> Unsupported content type: application/x-www-form-urlencoded </exceptionMessage>\n </error>' ``` Actually, I am very sure my headers content type is "text/csv;charset=UTF-8", however, after requests post is done, my request content type is changed to "application/x-www-form-urlencoded". So I check the requests source code and I add a print statement in `models.py`, ``` python else: # Multi-part file uploads. if files: (body, content_type) = self._encode_files(files, data) else: if data: body = self._encode_params(data) if isinstance(data, str) or isinstance(data, builtin_str) or hasattr(data, 'read'): content_type = None else: content_type = 'application/x-www-form-urlencoded' self.prepare_content_length(body) # Add content-type if it wasn't explicitly provided. if (content_type) and (not 'content-type' in self.headers): self.headers['Content-Type'] = content_type print (self.headers) self.body = body ``` The print result contains two `Content-Type`, can you help me on this problem? 
``` CaseInsensitiveDict({ 'Content-Length' : '22', b'Content-Type' : 'text/csv;charset=UTF-8', b'Accept' : '*/*', b'Accept-Encoding' : 'gzip, deflate, compress', 'Content-Type' : 'application/x-www-form-urlencoded', b'X-SFDC-Session' : '00D90000000Y9ll!AQEAQJXx87QsHqdUeBbbw1qCbC0K0a7063VDxwZeYv8ZRgDpnmTcQjdxQL5R3tgumYtqhYz3WYK74QH3_ZJgngFDvdzBHB_G', b'User-Agent' : 'python-requests/1.2.3 CPython/3.3.0 Windows/7' }) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/1382556?v=4", "events_url": "https://api.github.com/users/Lukasa/events{/privacy}", "followers_url": "https://api.github.com/users/Lukasa/followers", "following_url": "https://api.github.com/users/Lukasa/following{/other_user}", "gists_url": "https://api.github.com/users/Lukasa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lukasa", "id": 1382556, "login": "Lukasa", "node_id": "MDQ6VXNlcjEzODI1NTY=", "organizations_url": "https://api.github.com/users/Lukasa/orgs", "received_events_url": "https://api.github.com/users/Lukasa/received_events", "repos_url": "https://api.github.com/users/Lukasa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lukasa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lukasa/subscriptions", "type": "User", "url": "https://api.github.com/users/Lukasa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1833/reactions" }
https://api.github.com/repos/psf/requests/issues/1833/timeline
null
completed
null
null
false
[ "What version of Requests are you using?\n", "```\nCaseInsensitiveDict({\n 'Content-Length' : '22',\n b'Content-Type' : 'text/csv;charset=UTF-8',\n b'Accept' : '*/*',\n b'Accept-Encoding' : 'gzip, deflate, compress',\n 'Content-Type' : 'application/x-www-form-urlencoded',\n b'X-SFDC-Session' : '00D90000000Y9ll!AQEAQJXx87QsHqdUeBbbw1qCbC0K0a7063VDxwZeYv8ZRgDpnmTcQjdxQL5R3tgumYtqhYz3WYK74QH3_ZJgngFDvdzBHB_G',\n b'User-Agent' : 'python-requests/1.2.3 CPython/3.3.0 Windows/7'\n})\n```\n", "Heh, yes, good point. Missed that.\n\nThis is a known bug that we fixed some time ago, as you can see in the [2.0.0 release notes](https://github.com/kennethreitz/requests/blob/master/HISTORY.rst#200-2013-09-24). You'll need to update Requests. =)\n", "@Lukasa, it worked, thanks for your help.\n" ]
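A minimal illustration of why two `Content-Type` entries could coexist in the header dict printed above: on Python 3, byte-string and native-string keys never compare equal, so the caller's byte-keyed header did not shadow the str-keyed one added during preparation (this was fixed in requests 2.0.0). The plain `dict` below stands in for the real `CaseInsensitiveDict`.

```python
# On Python 3, b"Content-Type" != "Content-Type" as dict keys, so
# setting the header with a bytes key and then with a str key
# yields two distinct entries -- the duplicate seen in the report.
headers = {b"Content-Type": "text/csv;charset=UTF-8"}
headers["Content-Type"] = "application/x-www-form-urlencoded"
print(len(headers))  # -> 2
```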
https://api.github.com/repos/psf/requests/issues/1832
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1832/labels{/name}
https://api.github.com/repos/psf/requests/issues/1832/comments
https://api.github.com/repos/psf/requests/issues/1832/events
https://github.com/psf/requests/pull/1832
24,842,972
MDExOlB1bGxSZXF1ZXN0MTExMjM4NTg=
1,832
Fix warnings when building the docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[]
closed
true
null
[]
null
1
2013-12-28T08:11:57Z
2021-09-08T11:00:51Z
2014-01-08T18:45:33Z
CONTRIBUTOR
resolved
It may be nice to make builds fail if new documentation generates warnings, to avoid these sorts of problems slipping in in the future.
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1832/reactions" }
https://api.github.com/repos/psf/requests/issues/1832/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1832.diff", "html_url": "https://github.com/psf/requests/pull/1832", "merged_at": "2014-01-08T18:45:33Z", "patch_url": "https://github.com/psf/requests/pull/1832.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1832" }
true
[ "One note, otherwise I'm happy with this. =)\n" ]
https://api.github.com/repos/psf/requests/issues/1831
https://api.github.com/repos/psf/requests
https://api.github.com/repos/psf/requests/issues/1831/labels{/name}
https://api.github.com/repos/psf/requests/issues/1831/comments
https://api.github.com/repos/psf/requests/issues/1831/events
https://github.com/psf/requests/pull/1831
24,842,750
MDExOlB1bGxSZXF1ZXN0MTExMjM3NzA=
1,831
Add more interlinks between the documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/234019?v=4", "events_url": "https://api.github.com/users/kevinburke/events{/privacy}", "followers_url": "https://api.github.com/users/kevinburke/followers", "following_url": "https://api.github.com/users/kevinburke/following{/other_user}", "gists_url": "https://api.github.com/users/kevinburke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kevinburke", "id": 234019, "login": "kevinburke", "node_id": "MDQ6VXNlcjIzNDAxOQ==", "organizations_url": "https://api.github.com/users/kevinburke/orgs", "received_events_url": "https://api.github.com/users/kevinburke/received_events", "repos_url": "https://api.github.com/users/kevinburke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kevinburke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinburke/subscriptions", "type": "User", "url": "https://api.github.com/users/kevinburke", "user_view_type": "public" }
[ { "color": "009800", "default": false, "description": null, "id": 44501218, "name": "Ready To Merge", "node_id": "MDU6TGFiZWw0NDUwMTIxOA==", "url": "https://api.github.com/repos/psf/requests/labels/Ready%20To%20Merge" }, { "color": "207de5", "default": false, "description": null, "id": 60620163, "name": "Minion Seal of Approval", "node_id": "MDU6TGFiZWw2MDYyMDE2Mw==", "url": "https://api.github.com/repos/psf/requests/labels/Minion%20Seal%20of%20Approval" } ]
closed
true
null
[]
null
3
2013-12-28T07:47:48Z
2021-09-08T11:00:51Z
2014-01-08T18:48:25Z
CONTRIBUTOR
resolved
Also fixes up tense in a few cases, and adds the `intersphinx` extension so we can link to the urllib3 documentation when it is called out. I should probably write documentation for how to do this somewhere as well...
{ "avatar_url": "https://avatars.githubusercontent.com/u/119893?v=4", "events_url": "https://api.github.com/users/kennethreitz/events{/privacy}", "followers_url": "https://api.github.com/users/kennethreitz/followers", "following_url": "https://api.github.com/users/kennethreitz/following{/other_user}", "gists_url": "https://api.github.com/users/kennethreitz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kennethreitz", "id": 119893, "login": "kennethreitz", "node_id": "MDQ6VXNlcjExOTg5Mw==", "organizations_url": "https://api.github.com/users/kennethreitz/orgs", "received_events_url": "https://api.github.com/users/kennethreitz/received_events", "repos_url": "https://api.github.com/users/kennethreitz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kennethreitz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kennethreitz/subscriptions", "type": "User", "url": "https://api.github.com/users/kennethreitz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/psf/requests/issues/1831/reactions" }
https://api.github.com/repos/psf/requests/issues/1831/timeline
null
null
false
{ "diff_url": "https://github.com/psf/requests/pull/1831.diff", "html_url": "https://github.com/psf/requests/pull/1831", "merged_at": "2014-01-08T18:48:25Z", "patch_url": "https://github.com/psf/requests/pull/1831.patch", "url": "https://api.github.com/repos/psf/requests/pulls/1831" }
true
[ "This in principle looks fine, though I don't know enough about intersphinx to be sure. =) Let's see what @sigmavirus24 thinks.\n", "FWIW I wrote up docs on how to do this here:\n\nhttp://kev.inburke.com/kevin/sphinx-interlinks/\n", "Nice dude! Hook up all the intersphinx you want!\n\nOne thing to note — it should stay out of the Quickstart. Let's keep it to the Advanced, API Docs, Developer Docs, etc. :)\n" ]