url
stringlengths
62
66
repository_url
stringclasses
1 value
labels_url
stringlengths
76
80
comments_url
stringlengths
71
75
events_url
stringlengths
69
73
html_url
stringlengths
50
56
id
int64
377M
2.15B
node_id
stringlengths
18
32
number
int64
1
29.2k
title
stringlengths
1
487
user
dict
labels
list
state
stringclasses
2 values
locked
bool
2 classes
assignee
dict
assignees
list
comments
list
created_at
int64
1.54k
1.71k
updated_at
int64
1.54k
1.71k
closed_at
int64
1.54k
1.71k
βŒ€
author_association
stringclasses
4 values
active_lock_reason
stringclasses
2 values
body
stringlengths
0
234k
βŒ€
reactions
dict
timeline_url
stringlengths
71
75
state_reason
stringclasses
3 values
draft
bool
2 classes
pull_request
dict
https://api.github.com/repos/huggingface/transformers/issues/13247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13247/comments
https://api.github.com/repos/huggingface/transformers/issues/13247/events
https://github.com/huggingface/transformers/issues/13247
978,585,129
MDU6SXNzdWU5Nzg1ODUxMjk=
13,247
Why do we need to use `Loss.repeat(eval_batch_size)` in accelerator gather loop?
{ "login": "thakursc1", "id": 63082305, "node_id": "MDQ6VXNlcjYzMDgyMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/63082305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thakursc1", "html_url": "https://github.com/thakursc1", "followers_url": "https://api.github.com/users/thakursc1/followers", "following_url": "https://api.github.com/users/thakursc1/following{/other_user}", "gists_url": "https://api.github.com/users/thakursc1/gists{/gist_id}", "starred_url": "https://api.github.com/users/thakursc1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thakursc1/subscriptions", "organizations_url": "https://api.github.com/users/thakursc1/orgs", "repos_url": "https://api.github.com/users/thakursc1/repos", "events_url": "https://api.github.com/users/thakursc1/events{/privacy}", "received_events_url": "https://api.github.com/users/thakursc1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger Can you please help ?", "Hi @thakursc1, Sylvain is currently off until next week - he'll answer your query when he's back from his break. Thanks for your understanding.", "I'll look at why it does not work for a 0d tensor when I have a bit of bandwith (lots to do when coming back from vacation!)\r\n\r\nThe main reason we do it this way is to be able to compute the true average of the loss across all the processes at the end (otherwise we won't get the exact value of the loss). That being said, there is no reason why `accelerator.gather(loss)` in your code should be stuck. ", "https://github.com/huggingface/accelerate/pull/152 should fix the problem on the Accelerate side.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/examples/pytorch/language-modeling/run_mlm_no_trainer.py#L510 If I do not use this, and simple do acclerator.gather(loss) my code is stuck at this point. But if I repeat the loss it seems to work. Can you explain why is this the case ? Why do we also later use `losses = losses[: len(eval_dataset)]` ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13247/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13246/comments
https://api.github.com/repos/huggingface/transformers/issues/13246/events
https://github.com/huggingface/transformers/issues/13246
978,538,067
MDU6SXNzdWU5Nzg1MzgwNjc=
13,246
[model loading] framework-agnostic dtype parameter
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "Would like to ping @Rocketknight1 regarding the TensorFlow management of types, and @patil-suraj for flax", "This should work in Tensorflow too - you can use `tf.dtypes.as_dtype(dtype_string)` to turn strings into TF dtype objects.", "@Rocketknight1 Sorry, but can you please elaborate on how to load the model in Tensorflow or point me in the right direction? I am new to hugging face and I have been looking all over for instructions on how to do it. Thank you." ]
1,629
1,632
null
CONTRIBUTOR
null
This is a split off from one of the discussions at https://github.com/huggingface/transformers/pull/13209: 1. It all started with trying to load torch models under either the desired dtype or the the dtype of the pretrained model - and thus avoid 2x memory usage needs e.g. if the model needs to be just fp16. So we added `torch_dtype` to `from_pretrained` and `from_config`. 2. Then we started storing `torch_dtype` in the config file for future possibly automatic loading model in the optimal "regime". 3. This resulted in a discrepancy where the same symbol sometimes means `torch.dtype` at other times a string like "float32" as we can't store `torch.dtype` in json. 4. then in https://github.com/huggingface/transformers/pull/13209#discussion_r693292542 we started discussing how `dtype` is really the same across pt/tf/flux and perhaps we should just use `dtype` in the config and variables and have it consistently to be a string ("float32") and convert it to the right dtype object of the desired framework at the point of use, e.g. `getattr(torch, "float32")` A possible solution is to deprecate `torch_dtype` and replace it with `dtype` string both in config and in the function argument. Possible conflicts with the naming: 1. we already have the `dtype` attribute in modeling_utils, which returns `torch.dtype` based on the first param's dtype. https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L205 The context is different, but still this is something to consider to avoid ambiguity. I may have missed some other areas. So please share if something else needs to be added. Additional notes: - wrt flux: https://github.com/huggingface/transformers/pull/13209#discussion_r694511759 > #13098 - the idea of the PR is exactly to disentangle parameter dtype from matmul/computation dtype. In Flax, it's common practice that the dtype parameter defines the matmul/computation dtype, see: https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.Dense.html#flax.linen.Dense.dtype instead of the parameter dtype and not the parameter dtype. > So for Flax, I don't really think it would make sense to use a config.dtype to define weights dtype as it would be quite confusing with Flax's computation dtype parameter. @LysandreJik, @sgugger, @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13246/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/13245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13245/comments
https://api.github.com/repos/huggingface/transformers/issues/13245/events
https://github.com/huggingface/transformers/issues/13245
978,529,865
MDU6SXNzdWU5Nzg1Mjk4NjU=
13,245
Add ability to include additional model card info within Trainer
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }, { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Yes the `ModelCard` is deprecated as it does not make any sense to deal with this programmatically (the class was there for more than a year and I haven't seen it used once).\r\n\r\nThe Trainer drafts the model card, it's then the responsibility of the user to edit it in a followup commit, but the easiest way to do this, is through the user's preferred text editor IMO.", "\r\n> but the easiest way to do this, is through the user's preferred text editor IMO\r\n\r\nor directly on the hub website πŸ‘ \r\n\r\nwe should remove the current `ModelCard` implementation to make room for new helpers IMO (which will be in `huggingface_hub`)\r\n", "It will disappear in v5 as it's a breaking change.", "closing and will address in `huggingface_hub`" ]
1,629
1,630
1,630
CONTRIBUTOR
null
# πŸš€ Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Allow users to optionally provide model description, intended use, ethical considerations, caveats and recommendations, etc. when calling `trainer.push_to_hub` and/or `trainer.create_model_card`. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Right now, when you do `trainer.push_to_hub`, it runs `trainer.create_model_card`, which calls `transformers.modelcard.TrainingSummary.to_model_card` behind the scenes to put together the model card's text. It does not allow you to pass the sections mentioned above. https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/trainer.py#L2479-L2502 Thus, whenever someone pushes to the hub with `Trainer`, there are a bunch of sections that say "More information needed". --- I see that `transformers.modelcard.ModelCard` has these options, but there's a note about it being deprecated. https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/modelcard.py#L74-L100 ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13245/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13244/comments
https://api.github.com/repos/huggingface/transformers/issues/13244/events
https://github.com/huggingface/transformers/issues/13244
978,451,864
MDU6SXNzdWU5Nzg0NTE4NjQ=
13,244
Tapas tokenization Different from Tensorflow Code
{ "login": "Doreenruirui", "id": 8978500, "node_id": "MDQ6VXNlcjg5Nzg1MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8978500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Doreenruirui", "html_url": "https://github.com/Doreenruirui", "followers_url": "https://api.github.com/users/Doreenruirui/followers", "following_url": "https://api.github.com/users/Doreenruirui/following{/other_user}", "gists_url": "https://api.github.com/users/Doreenruirui/gists{/gist_id}", "starred_url": "https://api.github.com/users/Doreenruirui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Doreenruirui/subscriptions", "organizations_url": "https://api.github.com/users/Doreenruirui/orgs", "repos_url": "https://api.github.com/users/Doreenruirui/repos", "events_url": "https://api.github.com/users/Doreenruirui/events{/privacy}", "received_events_url": "https://api.github.com/users/Doreenruirui/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi,\r\n\r\nThanks for your interest in TAPAS. However, I do think the `_tokenize()` method is effectively used by TapasTokenizer. This is because `TapasTokenizer` itself inherits from `PreTrainedTokenizer`, which defines the `tokenize()` method [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L249). This method will in turn call `_tokenize()` as can be seen [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L339).\r\n\r\nYou can also verify this using a simple example:\r\n\r\n```\r\nimport pandas as pd\r\nfrom transformers import TapasTokenizer\r\n\r\ntokenizer = TapasTokenizer.from_pretrained(\"google/tapas-base\")\r\n\r\ndata = {'Actors': [\"Brad Pitt\", \"Leonardo Di Caprio\", \"n/a\"], 'Number of movies': [\"?\", \"53\", \"69\"]}\r\nqueries = [\"What is the name of the first actor?\", \"How many movies has George Clooney played in?\", \"What is the total number of movies?\"]\r\ntable = pd.DataFrame.from_dict(data)\r\ninputs = tokenizer(table=table, queries=queries)\r\nprint(tokenizer.decode(inputs.input_ids[0]))\r\n```\r\nAs you can see, I've replaced two cell values by n/a and ?, i.e. there are some empty cells in the table. This returns:\r\n\r\n`[CLS] what is the name of the first actor? [SEP] actors number of movies brad pitt [EMPTY] leondardi di caprio 53 [EMPTY] 69`\r\n\r\nThe empty cells are correctly replaced by the [EMPTY] token.", "> Hi,\r\n> \r\n> Thanks for your interest in TAPAS. However, I do think the `_tokenize()` method is effectively used by TapasTokenizer. This is because `TapasTokenizer` itself inherits from `PreTrainedTokenizer`, which defines the `tokenize()` method [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L249). This method will in turn call `_tokenize()` as can be seen [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L339).\r\n> \r\n> You can also verify this using a simple example:\r\n> \r\n> ```\r\n> import pandas as pd\r\n> from transformers import TapasTokenizer\r\n> \r\n> tokenizer = TapasTokenizer.from_pretrained(\"google/tapas-base\")\r\n> \r\n> data = {'Actors': [\"Brad Pitt\", \"Leonardo Di Caprio\", \"n/a\"], 'Number of movies': [\"?\", \"53\", \"69\"]}\r\n> queries = [\"What is the name of the first actor?\", \"How many movies has George Clooney played in?\", \"What is the total number of movies?\"]\r\n> table = pd.DataFrame.from_dict(data)\r\n> inputs = tokenizer(table=table, queries=queries)\r\n> print(tokenizer.decode(inputs.input_ids[0]))\r\n> ```\r\n> \r\n> As you can see, I've replaced two cell values by n/a and ?, i.e. there are some empty cells in the table. This returns:\r\n> \r\n> `[CLS] what is the name of the first actor? [SEP] actors number of movies brad pitt [EMPTY] leondardi di caprio 53 [EMPTY] 69`\r\n> \r\n> The empty cells are correctly replaced by the [EMPTY] token.\r\n\r\nThank you very much for your reply!\r\n\r\nIt seems that \"n/a\" and \"?\" are tokenized into [EMPTY] token, but if the cell is an empty string, then it is ignored by the tokenizer. \r\nFor this example,\r\n`data = {'Actors': [\"Brad Pitt\", \"Leonardo Di Caprio\", \"n/a\"], 'Number of movies': [\"\", \"53\", \"69\"]}`\r\nthe tokenization result is \r\n`[CLS] what is the name of the first actor? [SEP] actors number of movies brad pitt leonardo di caprio 53 [EMPTY] 69`.\r\nIf I directly call `self._tokenize`, it is tokenized into\r\n`[CLS] what is the name of the first actor? [SEP] actors number of movies brad pitt [EMPTY] leonardo di caprio 53 [EMPTY] 69`\r\nI guess it is because the [tokenize function](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L336) returns empty list when the token is an empty string before it is passed to the `_tokenize` function, which is different from that of the Tapas tensorflow implementation.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "That's interesting @Doreenruirui, are you interested in making a PR to fix this?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "unstale", "Hi @Doreenruirui,\r\n\r\n> After fixing this, I could use the released table retrieval model to replicate their results on NQ dataset with Huggingface Tapas.\r\n\r\nThis is very interesting, thanks for letting me know. Are you interested in opening a PR that includes the fix?\r\n\r\nWe could perhaps also add the table retrieval models to the hub.\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hi @NielsRogge\r\n\r\n> This is very interesting, thanks for letting me know. Are you interested in opening a PR that includes the fix?\r\n\r\nI would like to work on this, i can start if nobody else is working on this.\r\n\r\nThanks", "@NielsRogge @Doreenruirui This issue seems to fixed. We can close this issue." ]
1,629
1,658
null
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.1 ### Who can help @LysandreJik @sgugger @NielsRogge ## Information Model I am using (Bert, XLNet ...): Tapas When I am trying to replicate the TAPAS table retrieval results using Huggingface Tapas implementation, I find that [Tapas tokenization in Huggingface](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L1314) is different from the original [Tensorflow code ](https://github.com/google-research/tapas/blob/master/tapas/utils/tf_example_utils.py#L391). The original code first checks whether the table cell is "n/a", "?" or empty. If so, it would return "[EMPTY]" token. The Huggingface code has implemented [the same tokenization](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L370) with the tensorflow code, but it is not used to tokenize the tables. It could be easily fixed by changing all the calls of function `self.tokenize` to `self._tokenize` in the `_tokenize_table` function. After fixing this, I could use the released table retrieval model to replicate their results on NQ dataset with Huggingface Tapas.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13244/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/13243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13243/comments
https://api.github.com/repos/huggingface/transformers/issues/13243/events
https://github.com/huggingface/transformers/pull/13243
978,443,305
MDExOlB1bGxSZXF1ZXN0NzE5MDI5MzA4
13,243
Validate onnx model
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unstale", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,651
1,635
MEMBER
null
Test ONNX model export with different input shapes when exporting with dynamic axis.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13243/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13243/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13243", "html_url": "https://github.com/huggingface/transformers/pull/13243", "diff_url": "https://github.com/huggingface/transformers/pull/13243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13243.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13242/comments
https://api.github.com/repos/huggingface/transformers/issues/13242/events
https://github.com/huggingface/transformers/pull/13242
978,415,967
MDExOlB1bGxSZXF1ZXN0NzE5MDA2Nzc2
13,242
[Tentative] Adds support for exporting TransformerXL-based models to ONNX
{ "login": "gugarosa", "id": 4120639, "node_id": "MDQ6VXNlcjQxMjA2Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gugarosa", "html_url": "https://github.com/gugarosa", "followers_url": "https://api.github.com/users/gugarosa/followers", "following_url": "https://api.github.com/users/gugarosa/following{/other_user}", "gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions", "organizations_url": "https://api.github.com/users/gugarosa/orgs", "repos_url": "https://api.github.com/users/gugarosa/repos", "events_url": "https://api.github.com/users/gugarosa/events{/privacy}", "received_events_url": "https://api.github.com/users/gugarosa/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "> It seems I'm facing the issue you mention about `tril` when trying to export a Transfo-XL model using your PR:\r\n> \r\n> ```\r\n> RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.\r\n> ```\r\n> \r\n> Is this due to a mismatch in `onnxruntime` version? cc @mfuntowicz\r\n\r\nExactly! The main problem is that onnx/onnxruntime added the `triu/tril` operator on opset version 13 (if I am not mistaken), but PyTorch has not released a version which supports such opset yet (current PyTorch supports up to opset version 12).\r\n\r\nIn my humble opinion, I guess this PR should be staled until PyTorch releases their new version, hopefully by the end of the month. With that in mind, we do not need to leverage the `triu/tril` operator or even implement a hacked-version that allows to be exported to ONNX.\r\n\r\nWhat do you think?", "I see - thank you for the explanation. I'll add the `WIP` label to the PR so that the stalebot does not close it, and let's revisit once pytorch 1.10 comes out.", "Torch nightly now has opset 14 with support for both triu/tril, however my testing had these issues:\r\n**1) both triu and tril want a constant for the diagonal parameter**\r\nA work-around hack is to:\r\nuse model config:\r\n same_length=False\r\nand this code change in modeling_transfo_xl.py (line 908 and 945):\r\n\r\n> if mems is None and False:\r\n> mems = self.init_mems(bsz)\r\n> ...\r\n> if mems is not None:\r\n> print(\"WARNING: if using mems no onnx export\")\r\n> dec_attn_mask = torch.triu(word_emb.new_ones((qlen, klen), dtype=torch.uint8), diagonal=1 + mlen)[:, :, None]\r\n> else:\r\n> dec_attn_mask = torch.triu(word_emb.new_ones((qlen, klen), dtype=torch.uint8), diagonal=1)[:, :, None]\r\n\r\n**2) The export command \r\npython -m transformers.onnx --model models --feature causal-lm --opset 14 models_onnx\r\nstill gives an error:**\r\n\r\n> File \"/home/craig/envcond/transformers/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 378, in _create_inference_session\r\n> sess.initialize_session(providers, provider_options, disabled_optimizers)\r\n> onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Trilu(14) node with name 'Trilu_64'\r\n\r\n" ]
1,629
1,638
1,638
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Hello everyone! I hope everything is going well with you. This PR adds a naΓ―ve support when trying to export a TransformerXL model with ONNX. Nevertheless, this is a tentative PR as PyTorch still does not support the newest ONNX opset versions (13+), which includes `triu` and `tril` operators (required by TransformerXL). As soon as PyTorch updates its API, any reflected change will also be updated in this PR. Meanwhile, this tentative PR serves as a guide to anyone that needs to export their model. **Note:** there is a trick to leverage `triu` and `tril` operators by hand-implementing them with a compatible format to ONNX, instead of using the ones PyTorch's provides. Best regards, Gustavo. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13242/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13242", "html_url": "https://github.com/huggingface/transformers/pull/13242", "diff_url": "https://github.com/huggingface/transformers/pull/13242.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13242.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13241/comments
https://api.github.com/repos/huggingface/transformers/issues/13241/events
https://github.com/huggingface/transformers/issues/13241
978,317,515
MDU6SXNzdWU5NzgzMTc1MTU=
13,241
Can we use trainer api with custom model layer?
{ "login": "monk1337", "id": 17107749, "node_id": "MDQ6VXNlcjE3MTA3NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monk1337", "html_url": "https://github.com/monk1337", "followers_url": "https://api.github.com/users/monk1337/followers", "following_url": "https://api.github.com/users/monk1337/following{/other_user}", "gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monk1337/subscriptions", "organizations_url": "https://api.github.com/users/monk1337/orgs", "repos_url": "https://api.github.com/users/monk1337/repos", "events_url": "https://api.github.com/users/monk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/monk1337/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes you can. As mentioned in the [docs](https://huggingface.co/transformers/main_classes/trainer.html):\r\n\r\n> When using it on your own model, make sure:\r\n> your model always return tuples or subclasses of ModelOutput.\r\n> your model can compute the loss if a labels argument is provided and that loss is returned as the first element of the tuple (if your model returns tuples)\r\n> your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named \"label\"." ]
1,629
1,630
1,630
NONE
null
I am trying to use huggin face trainer API, For example, my model looks like this: ``` class BERTClass(torch.nn.Module): def __init__(self): super(BERTClass, self).__init__() self.l1 = AutoModel.from_pretrained("distilbert-base-uncased",return_dict=False) self.l2 = torch.nn.Dropout(0.2) self.l3 = torch.nn.Linear(768, 50) def forward(self, ids, mask, token_type_ids): _, output_1= self.l1(ids, attention_mask = mask, token_type_ids = token_type_ids) output_2 = self.l2(output_1) output = self.l3(output_2) return output model = BERTClass() ``` Is it possible to utilize the trainer API for custom models? such as : trainer = Trainer( model=model, # the instantiated πŸ€— Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train()
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13241/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13240/comments
https://api.github.com/repos/huggingface/transformers/issues/13240/events
https://github.com/huggingface/transformers/pull/13240
978,091,272
MDExOlB1bGxSZXF1ZXN0NzE4NzM4NDIz
13,240
Improve T5 docs
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? This PR aims to clarify and explain some of the magic that's happening when using T5 for training/inference. It includes: - explaining that the model automatically creates the `decoder_input_ids` based on the `labels` (a lot of people were still confused by this, see e.g. #11977, #13213 - added code examples, to show a basic forward pass that includes the fact that padding token ids of the labels should be replaced by -100 (at least, for PyTorch, I see that for FLAX one uses the `decoder_attention_mask` to skip padding tokens), and code examples for inference (both batched/not batched) - adding info about T5's variants, including T5v1.1, mT5 and byT5, with links to their docs. - additional tips & tricks, based on what I found on the forum (learning rate, training on TPU, etc). In addition, I've added T5v1.1 to the main README as well making it have its own documentation page, and some more info to the mT5 docs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13240/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13240", "html_url": "https://github.com/huggingface/transformers/pull/13240", "diff_url": "https://github.com/huggingface/transformers/pull/13240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13240.patch", "merged_at": 1630501540000 }
https://api.github.com/repos/huggingface/transformers/issues/13239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13239/comments
https://api.github.com/repos/huggingface/transformers/issues/13239/events
https://github.com/huggingface/transformers/issues/13239
978,081,755
MDU6SXNzdWU5NzgwODE3NTU=
13,239
BERT finetuning β€œindex out of range in self”
{ "login": "marlon19894", "id": 44276670, "node_id": "MDQ6VXNlcjQ0Mjc2Njcw", "avatar_url": "https://avatars.githubusercontent.com/u/44276670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marlon19894", "html_url": "https://github.com/marlon19894", "followers_url": "https://api.github.com/users/marlon19894/followers", "following_url": "https://api.github.com/users/marlon19894/following{/other_user}", "gists_url": "https://api.github.com/users/marlon19894/gists{/gist_id}", "starred_url": "https://api.github.com/users/marlon19894/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marlon19894/subscriptions", "organizations_url": "https://api.github.com/users/marlon19894/orgs", "repos_url": "https://api.github.com/users/marlon19894/repos", "events_url": "https://api.github.com/users/marlon19894/events{/privacy}", "received_events_url": "https://api.github.com/users/marlon19894/received_events", "type": "User", "site_admin": false }
[ { "id": 1897896961, "node_id": "MDU6TGFiZWwxODk3ODk2OTYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Migration", "name": "Migration", "color": "e99695", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hello! It seems your model and tokenizer are mismatched: the tokenizer generated an ID that the model doesn't understand.\r\n\r\nI don't see your tokenizer initialization in your code, do you mind showing me how you initialize it?", "Yeah you are right! Thank you", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
0 I am trying to build a Multiclass Classifier with a pretrained BERT model. I am completely new to the topic. I have 8 classes and use Huggingface’s Dataset infrastructure to finetune a pretrained model for the german language: ``` from transformers import AutoModelForSequenceClassification from transformers import Trainer, TrainingArguments from sklearn.metrics import accuracy_score, f1_score num_labels_cla = 8 model_name_cla = "bert-base-german-dbmdz-uncased" batch_size_cla = 8 model = AutoModelForSequenceClassification.from_pretrained(model_name_cla, num_labels=num_labels_cla) def tokenize(batch): return tokenizer(batch['text'], padding=True, truncation=True,max_length=260) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) f1 = f1_score(labels, preds, average="weighted") acc = accuracy_score(labels,preds) return {"accuracy":acc, "f1":f1} ``` My model shouldn’t be a sentiment classifier but a multilabel classifier which classifies customer reviews based on different label (e.g customer support etc.). When I train/finetune my model with the Huggingface Trainer() instance: ``` #Encoding the data data_encoded = data_dict.map(tokenize, batched=True, batch_size=None) data_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"]) #Specify training arguments logging_steps=len(data_encoded["train"]) training_args = TrainingArguments(output_dir='./results', num_train_epochs=3, learning_rate=2e-5, per_device_train_batch_size=batch_size_cla, per_device_eval_batch_size=batch_size_cla, load_best_model_at_end=True, metric_for_best_model="f1", weight_decay=0.01, evaluation_strategy="steps", eval_steps = 2, disable_tqdm=False, logging_steps=logging_steps) #Specify trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=data_encoded['train'], eval_dataset=data_encoded['test'] ) #Train trainer.train() ``` After 6 steps I get the following error: ``` ~/miniconda3/envs/textmallet/lib/python3.9/site-packages/torch/nn/modules/sparse.py in forward(self, input) 156 157 def forward(self, input: Tensor) -> Tensor: --> 158 return F.embedding( 159 input, self.weight, self.padding_idx, self.max_norm, 160 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/miniconda3/envs/textmallet/lib/python3.9/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2041 # remove once script supports set_grad_enabled 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 2045 IndexError: index out of range in self ``` Does anyone have any idea what I could change in my code? Cheers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13239/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13238/comments
https://api.github.com/repos/huggingface/transformers/issues/13238/events
https://github.com/huggingface/transformers/issues/13238
978,070,240
MDU6SXNzdWU5NzgwNzAyNDA=
13,238
Correct way to use pre-trained models - Any document on this?
{ "login": "pratikchhapolika", "id": 11159549, "node_id": "MDQ6VXNlcjExMTU5NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pratikchhapolika", "html_url": "https://github.com/pratikchhapolika", "followers_url": "https://api.github.com/users/pratikchhapolika/followers", "following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}", "gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}", "starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions", "organizations_url": "https://api.github.com/users/pratikchhapolika/orgs", "repos_url": "https://api.github.com/users/pratikchhapolika/repos", "events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}", "received_events_url": "https://api.github.com/users/pratikchhapolika/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! We have several documents that can help you get started! First of all, the [quicktour](https://huggingface.co/transformers/quicktour.html), and the [free course](https://huggingface.co/course/chapter1) of the HF ecosystem may help you out.", "> Hello! We have several documents that can help you get started! First of all, the [quicktour](https://huggingface.co/transformers/quicktour.html), and the [free course](https://huggingface.co/course/chapter1) of the HF ecosystem may help you out.\r\n\r\nWhat about my code above. Is this correct way of doing things?", "@pratikchhapolika \r\nHi Pratik, yes you can use most models for sequence classification. You can do the following\r\n\r\n\r\n```\r\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"name_of_base_model\")\r\nmodel = AutoModelForSequenceClassification(\"name_of_base_model\")\r\n\r\n//name_of_base_model can be bert-base-cased, albert-base-v2, roberta-large etc. \r\n```\r\nThe full list is [here](https://huggingface.co/transformers/pretrained_models.html)\r\nYou can then use the model & finetune it on the desired classification task (e.g. GLUE / SUPERGLUE)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
I want to do `Multiclass-Multilabel ( MLMC)` **classification** problem using `Conv-BERT` model. **Steps that I have taken is:** I downloaded the Conv-Bert model from this link: https://huggingface.co/YituTech/conv-bert-base << **YituTech/conv-bert-base**>> ``` from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification, BertAdam tokenizer = **BertTokenizer.from_pretrained**("path_to_Conv-Bert_model", do_lower_case = True) model = **BertForSequenceClassification.from_pretrained**("path_to_Conv-Bert_model", num_labels = 240) model.cuda() ``` I want to understand can we call any classification module from Hugging face and pass any pre-trained models to it like `Roberta, Conv-BERT.. so on`. ? << As in above example>> Is it mandatory to use Conv-Bert classification pre-trained model ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13238/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13237/comments
https://api.github.com/repos/huggingface/transformers/issues/13237/events
https://github.com/huggingface/transformers/pull/13237
977,995,183
MDExOlB1bGxSZXF1ZXN0NzE4NjU3NDA4
13,237
Fix broken links in Splinter documentation
{ "login": "oriram", "id": 26966674, "node_id": "MDQ6VXNlcjI2OTY2Njc0", "avatar_url": "https://avatars.githubusercontent.com/u/26966674?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oriram", "html_url": "https://github.com/oriram", "followers_url": "https://api.github.com/users/oriram/followers", "following_url": "https://api.github.com/users/oriram/following{/other_user}", "gists_url": "https://api.github.com/users/oriram/gists{/gist_id}", "starred_url": "https://api.github.com/users/oriram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oriram/subscriptions", "organizations_url": "https://api.github.com/users/oriram/orgs", "repos_url": "https://api.github.com/users/oriram/repos", "events_url": "https://api.github.com/users/oriram/events{/privacy}", "received_events_url": "https://api.github.com/users/oriram/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? Fixed two broken links in Splinter documentation: Before fix: `https://huggingface.co/transformers/model_doc/master/splinter.html` After fix: `https://huggingface.co/transformers/master/model_doc/splinter.html`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13237/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13237", "html_url": "https://github.com/huggingface/transformers/pull/13237", "diff_url": "https://github.com/huggingface/transformers/pull/13237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13237.patch", "merged_at": 1629806122000 }
https://api.github.com/repos/huggingface/transformers/issues/13236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13236/comments
https://api.github.com/repos/huggingface/transformers/issues/13236/events
https://github.com/huggingface/transformers/issues/13236
977,994,292
MDU6SXNzdWU5Nzc5OTQyOTI=
13,236
Upgrade `os.path` to use `pathlib.Path` API for `from_pretrained` internals
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This requires more code than I anticipated, and the caching isn't as fast as local. I think I will just download to local and call `from_pretrained` with local path.", "Thanks for exploring\r\n\r\n> I think I will just download to local and call from_pretrained with local path.\r\n\r\nYes it's the way we've been recommending πŸ‘ " ]
1,629
1,630
1,630
CONTRIBUTOR
null
# πŸš€ Feature request Use `pathlib.Path` instead of `os.path` in places like `src/transformers/file_utils.py` and `src/transformers/configuration_utils.py` ## Motivation I am using [cloudpathlib](https://github.com/drivendataorg/cloudpathlib), a library that wraps remote path as `pathlib.Path` like object. But I cannot use a cloudpathlib's remote directory for `from_pretrained` because the existing code uses `os.path`. Using `pathlib`'s API would solve this. ## Your contribution I can submit a PR for this. I see pathlib used elsewhere inside `src/transformers`, so I think it's ok.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13236/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13235/comments
https://api.github.com/repos/huggingface/transformers/issues/13235/events
https://github.com/huggingface/transformers/issues/13235
977,968,107
MDU6SXNzdWU5Nzc5NjgxMDc=
13,235
BeitForMaskedImageModeling forward not using bool_masked_pos
{ "login": "elliottzheng", "id": 22427645, "node_id": "MDQ6VXNlcjIyNDI3NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/22427645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elliottzheng", "html_url": "https://github.com/elliottzheng", "followers_url": "https://api.github.com/users/elliottzheng/followers", "following_url": "https://api.github.com/users/elliottzheng/following{/other_user}", "gists_url": "https://api.github.com/users/elliottzheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/elliottzheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elliottzheng/subscriptions", "organizations_url": "https://api.github.com/users/elliottzheng/orgs", "repos_url": "https://api.github.com/users/elliottzheng/repos", "events_url": "https://api.github.com/users/elliottzheng/events{/privacy}", "received_events_url": "https://api.github.com/users/elliottzheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThanks for looking into this model. Looking at it now, it's indeed a mistake, not sure why I forgot to pass `bool_masked_pos` to `BeitModel`.\r\n\r\nI have a Colab notebook, in which I tried to reconstruct DALL-E's visual tokens using `BeitForMaskedImageModeling`: https://colab.research.google.com/drive/1Mjt-3jHw9HYMXECmSdDlbiG59ZAw-Z0T?usp=sharing#scrollTo=ZwTO9fbhPOxi\r\n\r\nIt was not working as expected, so probably that'll be the mistake.\r\n\r\n> Furthermore, the documentation of the BeitForMaskedImageModeling's forward lack the description of the bool_masked_pos.\r\n\r\nThis is a good point and should be added. \r\n\r\nI will fix both in a PR.\r\n\r\nThanks!" ]
1,629
1,630
1,630
NONE
null
### Who can help @NielsRogge ## Information I am reading the code of BEIT, trying to use the `BeitForMaskedImageModeling`, while I found the `bool_masked_pos` is defined but not used in `BeitForMaskedImageModeling`'s forward. In my understanding, `bool_masked_pos` is used to mask out the input image tokens, thus should be passed to the `self.beit` forward here. https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/models/beit/modeling_beit.py#L723 Furthermore, the documentation of the `BeitForMaskedImageModeling`'s `forward` lack the description of the `bool_masked_pos`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13235/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13234/comments
https://api.github.com/repos/huggingface/transformers/issues/13234/events
https://github.com/huggingface/transformers/issues/13234
977,822,253
MDU6SXNzdWU5Nzc4MjIyNTM=
13,234
Error while trying to run run_wwm_mlm.py using my saved model: TypeError: β€˜NoneType’ object is not iterable
{ "login": "jungminc88", "id": 25294395, "node_id": "MDQ6VXNlcjI1Mjk0Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/25294395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungminc88", "html_url": "https://github.com/jungminc88", "followers_url": "https://api.github.com/users/jungminc88/followers", "following_url": "https://api.github.com/users/jungminc88/following{/other_user}", "gists_url": "https://api.github.com/users/jungminc88/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungminc88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungminc88/subscriptions", "organizations_url": "https://api.github.com/users/jungminc88/orgs", "repos_url": "https://api.github.com/users/jungminc88/repos", "events_url": "https://api.github.com/users/jungminc88/events{/privacy}", "received_events_url": "https://api.github.com/users/jungminc88/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, since your case is a mlm task, you should probably use `BertForMaskedLM` instead of `BertForSequenceClassification ` to train your model first, and then feed it into `run_wwm_mlm.py` script.", "@qqaatw Thank you for your suggestion!\r\n> Hi, since your case is a mlm task, you should probably use `BertForMaskedLM` instead of `BertForSequenceClassification ` to train your model first, and then feed it into `run_wwm_mlm.py` script.\r\n\r\nMy objective is to see the effect of training BERT on different tasks. I am wondering if training on MLM task after training on classification yields better results. Is there a way to do this using the script?", "I got your point. You can use `BertForPreTraining`, which includes two prediction heads (MLM, NSP), to train a sentence classification task first, then feed the trained model into `run_wwm_mlm.py` to run MLM task. Because `BertForPreTraining` has two heads already, running mlm afterwards will no longer raise an error regarding mlm head missing.", "@qqaatw That's a neat solution! Thank you!\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.7.0 - Platform: Linux-4.18.0-25-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. I have trained a BertForSequenceClassification model, saved the model and tokenizer: ``` model.save_pretrained('output_mlm_cls') tokenizer.save_pretrained('output_mlm_cls') ``` 2. I tried to run run_mlm_wwm.py, giving the the saved model above as the input model: python run_mlm_wwm.py \ --model_name_or_path /path/to/output_mlm_cls \ --train_file /path/to/my_data.txt \ --do_train \ --output_dir /output_dir <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> I got this error message: Traceback (most recent call last): File β€œrun_mlm_wwm.py”, line 408, in main() File β€œrun_mlm_wwm.py”, line 367, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File β€œ/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/trainer.py”, line 1066, in train self._load_state_dict_in_model(state_dict) File β€œ/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/trainer.py”, line 1387, in _load_state_dict_in_model if set(load_result.missing_keys) == set(self.model._keys_to_ignore_on_save): TypeError: β€˜NoneType’ object is not iterable ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should run and train the input model on the whole word masking MLM task. When I run the same thing only changing --model_name_or_path to one of the HuggingFace provided pretrained models (cl-tohoku/bert-base-japanese-whole-word-masking), it runs without a problem, so it's not the problem with the dataset.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13234/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13233/comments
https://api.github.com/repos/huggingface/transformers/issues/13233/events
https://github.com/huggingface/transformers/issues/13233
977,787,772
MDU6SXNzdWU5Nzc3ODc3NzI=
13,233
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at 'yyy' and are newly initialized
{ "login": "BalajiAJ", "id": 19531638, "node_id": "MDQ6VXNlcjE5NTMxNjM4", "avatar_url": "https://avatars.githubusercontent.com/u/19531638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BalajiAJ", "html_url": "https://github.com/BalajiAJ", "followers_url": "https://api.github.com/users/BalajiAJ/followers", "following_url": "https://api.github.com/users/BalajiAJ/following{/other_user}", "gists_url": "https://api.github.com/users/BalajiAJ/gists{/gist_id}", "starred_url": "https://api.github.com/users/BalajiAJ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BalajiAJ/subscriptions", "organizations_url": "https://api.github.com/users/BalajiAJ/orgs", "repos_url": "https://api.github.com/users/BalajiAJ/repos", "events_url": "https://api.github.com/users/BalajiAJ/events{/privacy}", "received_events_url": "https://api.github.com/users/BalajiAJ/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I'm not very familiar with GPT Neo, but should [this](https://huggingface.co/transformers/model_doc/gpt_neo.html#transformers.GPTNeoForCausalLM) be the correct model for fine-tuned GPT Neo?", "Apologies for pasting the wrong code, below is the one. Thanks @qqaatw for pointing it out. \r\nThis is the line of code where am seeing the warning.\r\n\r\n### model = GPTNeoForCausalLM.from_pretrained(\"finetuned\").half().to(\"cuda\")", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
Hi Everyone, I am trying to use a fine tuned GPT Neo Model for my inferencing, am getting the below warning - ### **_Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at 'yyy' and are newly initialized..._** I have used deep speed to fine tune the model. Because of this warning, **_the generated text during inference is giving some random meaningless text_**. Please suggest how to load those weights. ### Inference code is below (highlighted in bold is the line of code) from transformers import GPT2Tokenizer, GPT2LMHeadModel import torch device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = GPT2Tokenizer.from_pretrained('finetuned') tokenizer.padding_side = "left" tokenizer.pad_token = tokenizer.eos_token ### model = GPTNeoForCausalLM.from_pretrained("finetuned").half().to("cuda") Thanks, Balaji
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13233/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13232/comments
https://api.github.com/repos/huggingface/transformers/issues/13232/events
https://github.com/huggingface/transformers/issues/13232
977,777,182
MDU6SXNzdWU5Nzc3NzcxODI=
13,232
run_translation_no_trainer with MBart: unsupported operand type(s) for /: 'dict' and 'int'
{ "login": "Doragd", "id": 26213546, "node_id": "MDQ6VXNlcjI2MjEzNTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/26213546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Doragd", "html_url": "https://github.com/Doragd", "followers_url": "https://api.github.com/users/Doragd/followers", "following_url": "https://api.github.com/users/Doragd/following{/other_user}", "gists_url": "https://api.github.com/users/Doragd/gists{/gist_id}", "starred_url": "https://api.github.com/users/Doragd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Doragd/subscriptions", "organizations_url": "https://api.github.com/users/Doragd/orgs", "repos_url": "https://api.github.com/users/Doragd/repos", "events_url": "https://api.github.com/users/Doragd/events{/privacy}", "received_events_url": "https://api.github.com/users/Doragd/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "I have tested `outputs = accelerator.unwrap_model(model)(**batch)`. It works well, and the `outputs.loss` is a `Tensor` as expected.", "I have checked that it is caused by `convert_to_fp32` in `accelerate/utils.py`.\r\n```python\r\ndef convert_to_fp32(tensor):\r\n if isinstance(tensor, (list, tuple)):\r\n return honor_type(tensor, (convert_to_fp32(t) for t in tensor))\r\n elif isinstance(tensor, dict):\r\n return type(tensor)({k: convert_to_fp32(v) for k, v in tensor.items()})\r\n elif not hasattr(tensor, \"dtype\") or tensor.dtype != torch.float16:\r\n return tensor\r\n return tensor.float()\r\n```\r\n\r\nwhen `return type(tensor)({k: convert_to_fp32(v) for k, v in tensor.items()})`, it's actually implemented as `Seq2SeqLMOutput({k: convert_to_fp32(v) for k, v in tensor.items()})`. \r\n\r\nThe `{k: convert_to_fp32(v) for k, v in tensor.items()}` denotes a `dict` object which has keys = `'loss', 'logits', 'past_key_values', 'encoder_last_hidden_state'`.\r\n\r\nThe result of `Seq2SeqLMOutput(...)` is that the value of `loss` attribute turn to a dict. So the `outputs.loss` is a dict.\r\n\r\n\r\nFor example:\r\n```python\r\noutput = {\r\n 'loss': torch.randn(1),\r\n 'logits':torch.randn(2,2,2),\r\n 'past_key_values': None,\r\n 'encoder_last_hidden_state' : torch.randn(2,2,2),\r\n}\r\nSeq2SeqLMOutput(output)\r\n```\r\n\r\n```python\r\nSeq2SeqLMOutput(loss={'loss': tensor([-0.8864]), 'logits': tensor([[[-0.5915, -0.9891],\r\n [ 0.5060, -1.2748]],\r\n\r\n [[ 0.8566, -0.6958],\r\n [-0.2949, -0.7065]]]), 'past_key_values': None, 'encoder_last_hidden_state': tensor([[[-0.9881, 0.3471],\r\n [-0.3888, 3.0862]],\r\n\r\n [[ 0.2813, 0.4011],\r\n [-0.1960, 1.0331]]])}, logits=None, past_key_values=None, decoder_hidden_states=None, decoder_attentions=None, cross_attentions=None, encoder_last_hidden_state=None, encoder_hidden_states=None, encoder_attentions=None)\r\n```\r\n", "To fix it,I modified `run_translation_no_trainer.py` with the following snippet in the training loop, and it works well.\r\n```python\r\n outputs = model(**batch, return_dict=False)\r\n loss = outputs[0]\r\n loss = loss / args.gradient_accumulation_steps\r\n accelerator.backward(loss)\r\n```", "For my solution, i think the `convert_to_fp32` should be corrected as follows:\r\n\r\n```python\r\ndef convert_to_fp32(tensor):\r\n if isinstance(tensor, (list, tuple)):\r\n return honor_type(tensor, (convert_to_fp32(t) for t in tensor))\r\n elif isinstance(tensor, dict):\r\n return type(tensor)(**{k: convert_to_fp32(v) for k, v in tensor.items()}) # add **\r\n elif not hasattr(tensor, \"dtype\") or tensor.dtype != torch.float16:\r\n return tensor\r\n return tensor.float()\r\n```\r\nDo you think so? @patil-suraj" ]
1,629
1,630
1,630
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Ubuntu 20.04 - Python version: 3.8 - PyTorch version (GPU?): 1.8.2 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes In addition, I add the details of `accelerate` config. ```shell In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0 Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2 How many different machines will you use (use more than 1 for multi-node training)? [1]: 1 Do you want to use DeepSpeed? [yes/NO]: NO How many processes in total will you use? [1]: 1 Do you wish to use FP16 (mixed precision)? [yes/NO]: yes ``` ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): facebook/mbart-large-cc25 The problem arises when using: * [x] the official example scripts: examples/pytorch/translation/run_translation_no_trainer.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: * [x] my own task or dataset: finetuning en-ro dataset of wmt16 translation ## To reproduce I use the following script: note that i rename the `run_translation_no_trainer.py` with `run.py` ```shell accelerate launch run.py \ --model_name_or_path facebook/mbart-large-cc25 \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir ~/tmp/tst-translation ``` ## Expected behavior ![image](https://user-images.githubusercontent.com/26213546/130571764-92200a1f-222c-4267-a237-3b97e39c5e7c.png) `outputs` is still an object of `Seq2SeqLMOutput`, however, the `outputs.loss` is a `dict` with keys `dict_keys(['loss', 'logits', 'past_key_values', 'encoder_last_hidden_state'])` ## Additional information By the way, I also test another script: `examples/pytorch/text-classification/run_glue_no_trainer.py` It met the same problem. You can view it on colab: https://colab.research.google.com/drive/1BLt5rtHFdHaRliyqj_BKmM1_uzQclQHw?usp=sharing ![image](https://user-images.githubusercontent.com/26213546/130642391-f4f34bb0-e22c-4201-9382-0239b25a28e7.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13232/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13231/comments
https://api.github.com/repos/huggingface/transformers/issues/13231/events
https://github.com/huggingface/transformers/issues/13231
977,650,435
MDU6SXNzdWU5Nzc2NTA0MzU=
13,231
codecarbon plugin issues
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the feedback @stas00!\n\nI don't think it runs automatically - it needs to be activated by setting a flag, just like other integrations. I will double check what's going on there. For other issues, I'll investigate as well.", "Investigating it some more it's like all the other plugins. So it runs automatically unless the user explicitly turns it off. Not my favorite Trainer feature.\r\n\r\nI had to add `--report_to none` to disable it and everything else. \r\n\r\nhttps://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/src/transformers/training_args.py#L316-L319\r\n\r\nThat doc is outdated.\r\n\r\nI will update the Issue to remove the incorrect part of the report, as I can turn it off explicitly. \r\n\r\nNow that it can be disabled there is absolutely no rush with the rest.", "@stas00 I 100% agree with you that the default value of `--report_to` should be `none` instead of `all`. Detecting what's installed then enabling everything is a weird and aggressive behavior.\r\ncc @sgugger ", "This is planned to be so in v5. But for back-compat reasons it remains \"all\" - I don't remember all the history, but it probably needs to be on by default for `tensorboard` and none of the other plugins (or perhaps several plugins).\r\n\r\ni.e. most likely the default shouldn't be 'all' but only the list of plugins that are required for back-compat.", "Sounds good.\r\n\r\n@stas00 For the online tracker, could you `ping get.geojs.io` so I can determine what's wrong there? Since it works fine on my side.", "```\r\n$ ping get.geojs.io\r\nPING get.geojs.io (127.0.0.1) 56(84) bytes of data.\r\n64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.036 ms\r\n64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.061 ms\r\n64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.038 ms\r\n^C\r\n--- get.geojs.io ping statistics ---\r\n3 packets transmitted, 3 received, 0% packet loss, time 2041ms\r\nrtt min/avg/max/mdev = 0.036/0.045/0.061/0.011 ms\r\n```\r\n\r\nbut it doesn't respond to 80 or 443 ports:\r\n```\r\n$ HEAD https://get.geojs.io\r\n500 Can't connect to get.geojs.io:443 (Connection refused)\r\nContent-Type: text/plain\r\nClient-Date: Tue, 24 Aug 2021 06:05:27 GMT\r\nClient-Warning: Internal response\r\n\r\n$ HEAD http://get.geojs.io\r\n500 Can't connect to get.geojs.io:80 (Connection refused)\r\nContent-Type: text/plain\r\nClient-Date: Tue, 24 Aug 2021 06:05:36 GMT\r\nClient-Warning: Internal response\r\n```\r\n\r\nAs I mentioned in OP this is not a new problem, has been like this for at least one month.", "Got it. My guess is the `geojs` service is not available in some countries. Let me see how we can add an offline option.", "It will also break on instances without internet, like JZ.", "pinging @JetRunner ", "> pinging @JetRunner\r\n\r\nwill add when I'm back from the leave", "Unfortunately it doesn't look like this is ever going to be fixed.\r\n\r\nCould we please disable this plugin from loading by default if it has no maintainer that takes care of making it work?\r\n\r\n@sgugger, @LysandreJik ", "Switching to `codecarbon.OfflineEmissionsTracker` should at least solve this particular issue.", "I'm happy to review any PR @stas00 :-)", "The problem is that this feature was added without tests. So it's far from just changing its main class.\r\n\r\nMoreover we dropped `codecarbon` from the BigScience since it has multiple issues and was causing more issues than it was solving. After investing so much time into trying to make this module work, I'm not keen on giving it any more of my energy. So my vote is to disable it for now until it becomes more mature.\r\n\r\nTo keep it, at the very least @JetRunner (who added it) or someone else needs to write a test for it.", "Hey @stas00\nAs you know I have already left HF for a full-time PhD and I don't have any bandwidth to do it. I don't want to comment on the quality of codecarbon but feel free to remove it if you think it's buggy. For BigScience, our WG will switch to another way for carbon emission estimation.", "Thank you for letting us know that you have no resources to work on this, @JetRunner ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,638
1,638
CONTRIBUTOR
null
https://github.com/huggingface/transformers/pull/12304 added the `codecarbon` plugin and there are multiple issues with it: 1. it needs to be documented in https://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/src/transformers/training_args.py#L316-L319 along all the other plugins 2. It doesn't respect user's logging level. It needs to read the set for the current rank log-level and pass it explicitly to the object instance here: https://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/src/transformers/integrations.py#L782 via the `log_level` argument (but which expects a string like "warning" and not the real `logging.WARNING` which is normally used, so one needs to remap from real `logging` level to the string CC expects. 3. same logs are logged more than once in different formats: ``` [codecarbon INFO @ 19:33:14] Tracking Nvidia GPU via pynvml [codecarbon INFO @ 19:33:14] Tracking Intel CPU via RAPL interface 08/23/2021 19:33:14 - INFO - codecarbon - Tracking Nvidia GPU via pynvml 08/23/2021 19:33:14 - INFO - codecarbon - Tracking Intel CPU via RAPL interface ``` 4. it breaks the training as it can't find some server. ``` Traceback (most recent call last): File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/adapters.py", line 439, in send resp = conn.urlopen( File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/urllib3/connectionpool.py", line 755, in urlopen retries = retries.increment( File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/urllib3/util/retry.py", line 574, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='get.geojs.io', port=443): Max retries exceeded with url: /v1/ip/geo.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fcd95312700>: Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/core/util.py", line 10, in suppress yield File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/contextlib.py", line 75, in inner return func(*args, **kwds) File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/emissions_tracker.py", line 348, in stop emissions_data = self._prepare_emissions_data() File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/emissions_tracker.py", line 367, in _prepare_emissions_data geo: GeoMetadata = self._get_geo_metadata() File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/emissions_tracker.py", line 612, in _get_geo_metadata return GeoMetadata.from_geo_js(self._data_source.geo_js_url) File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/external/geography.py", line 83, in from_geo_js response: Dict = requests.get(url, timeout=0.5).json() File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/api.py", line 76, in get return request('get', url, params=params, **kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='get.geojs.io', port=443): Max retries exceeded with url: /v1/ip/geo.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fcd95312700>: Failed to establish a new connection: [Errno 111] Connection refused')) 08/23/2021 19:33:20 - WARNING - codecarbon - stopping. ``` This part of CC never worked for me. It always fails here for me and not just in this integration. Only Offline version of the tracker works w/o this failure. Could we use the offline tracker by default instead? ---------------------- To reproduce - I run: ``` python examples/pytorch/translation/run_translation.py --model_name_or_path google/mt5-small --do_train --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir output_dir --per_device_train_batch_size=4 --logging_step 2 --save_steps 0 --fp16 --max_train_samples 10 --save_total_limit 0 --overwrite_output_dir --save_strategy no ``` Thank you. @JetRunner, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13231/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13230/comments
https://api.github.com/repos/huggingface/transformers/issues/13230/events
https://github.com/huggingface/transformers/issues/13230
977,597,539
MDU6SXNzdWU5Nzc1OTc1Mzk=
13,230
Bert Loses Patience - Batch Inference Doubt
{ "login": "tanmaylaud", "id": 31733620, "node_id": "MDQ6VXNlcjMxNzMzNjIw", "avatar_url": "https://avatars.githubusercontent.com/u/31733620?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmaylaud", "html_url": "https://github.com/tanmaylaud", "followers_url": "https://api.github.com/users/tanmaylaud/followers", "following_url": "https://api.github.com/users/tanmaylaud/following{/other_user}", "gists_url": "https://api.github.com/users/tanmaylaud/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmaylaud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmaylaud/subscriptions", "organizations_url": "https://api.github.com/users/tanmaylaud/orgs", "repos_url": "https://api.github.com/users/tanmaylaud/repos", "events_url": "https://api.github.com/users/tanmaylaud/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmaylaud/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your interest in our work Tanmay!\r\n\r\nThe code only considers the scenario when `batch_size=1`. Theoretically, we can do batch inference but as you noted we have to wait for the longest patience outcome anyway so it's not very useful. ", "@JetRunner Thanks for the prompt response! \nIs there any better way to handle batch size greater than 1? ", "I don't think so. If you really want to fully use the GPU, you can try multi-threads but I don't really recommend that. `bs=1` is fast enough to me.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
CONTRIBUTOR
null
In PABEE, with evaluate every layer output for a given patience threshold. This can be done with if else when batch size is 1. How does this code work when batch_size > 1: https://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/examples/research_projects/bert-loses-patience/pabee/modeling_pabee_bert.py#L224 When batch size is greater than 1, I believe every input in the batch will have its own patience outcome. But torch.all would wait for the longest patience outcome. @JetRunner Won't the patience counter dimension be equal to batch size ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13230/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13229/comments
https://api.github.com/repos/huggingface/transformers/issues/13229/events
https://github.com/huggingface/transformers/issues/13229
977,575,562
MDU6SXNzdWU5Nzc1NzU1NjI=
13,229
Can't install transformers in Conda environment with python 3.9
{ "login": "mikemykhaylov", "id": 32168861, "node_id": "MDQ6VXNlcjMyMTY4ODYx", "avatar_url": "https://avatars.githubusercontent.com/u/32168861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mikemykhaylov", "html_url": "https://github.com/mikemykhaylov", "followers_url": "https://api.github.com/users/mikemykhaylov/followers", "following_url": "https://api.github.com/users/mikemykhaylov/following{/other_user}", "gists_url": "https://api.github.com/users/mikemykhaylov/gists{/gist_id}", "starred_url": "https://api.github.com/users/mikemykhaylov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mikemykhaylov/subscriptions", "organizations_url": "https://api.github.com/users/mikemykhaylov/orgs", "repos_url": "https://api.github.com/users/mikemykhaylov/repos", "events_url": "https://api.github.com/users/mikemykhaylov/events{/privacy}", "received_events_url": "https://api.github.com/users/mikemykhaylov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems that its works with python 3.8. Still, I don't get why it wouldn't with 3.9", "Hello! Could you add the unsatisfiable hints with `conda config --set unsatisfiable_hints True` so that we may see what packages are incompatible in python 3.9? Thank you!", "Set the config, nothing has changed", "No this won't change anything, this is so that we may understand what's happening. Please set the config and copy the logs here so that we may help, thank you.", "I know, I meant that the logs stayed absolutely the same. Weird 🀨", "I am having the same issue\r\n\r\n```bash\r\n$ conda install -c huggingface transformers\r\nSolving environment: failed\r\n\r\nUnsatisfiableError: The following specifications were found to be in conflict:\r\n - python=3.9\r\n - transformers\r\nUse \"conda info <package>\" to see the dependencies for each package.\r\n```\r\n\r\nThis is a fresh anaconda environment. I did install PyTorch 1.9 with conda, which installs python 3.9.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unstale, this is an actual issue", "I'm able to install latest transformers from the huggingface channel with Python 3.9.7.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,641
1,641
NONE
null
## Environment info - `transformers` version: 4.8.1 - Platform: macOS 11.5.2 - Python version: 3.9.6 - Conda Version: 4.10.3 ## Information I'm trying to install transformers in a condo environment and get and error "conda.exceptions.UnsatisfiableError: The following specifications were found to be incompatible with each other:" with blank lines beneath ## To reproduce Steps to reproduce the behavior: ``` shell conda create -n py39 python=3.9 conda activate py39 conda install -c huggingface transformers ``` Verbose conda output of last command: ``` Collecting package metadata (current_repodata.json): ...working... Unable to retrieve repodata (response: 404) for https://conda.anaconda.org/HuggingFace/osx-64/current_repodata.json done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Solving environment: ...working... Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed Traceback (most recent call last): File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 261, in install unlink_link_transaction = solver.solve_for_transaction( File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 114, in solve_for_transaction unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 157, in solve_for_diff final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 275, in solve_final_state ssc = self._add_specs(ssc) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 706, in _add_specs raise UnsatisfiableError({}) conda.exceptions.UnsatisfiableError: Did not find conflicting dependencies. If you would like to know which packages conflict ensure that you have enabled unsatisfiable hints. conda config --set unsatisfiable_hints True During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/exceptions.py", line 1079, in __call__ return func(*args, **kwargs) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/main.py", line 84, in _main exit_code = do_call(args, p) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/conda_argparse.py", line 83, in do_call return getattr(module, func_name)(args, parser) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/main_install.py", line 20, in execute install(args, parser, 'install') File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 308, in install raise e File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 295, in install unlink_link_transaction = solver.solve_for_transaction( File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 114, in solve_for_transaction unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 157, in solve_for_diff final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 275, in solve_final_state ssc = self._add_specs(ssc) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 704, in _add_specs ssc.r.find_conflicts(spec_set) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/resolve.py", line 352, in find_conflicts raise UnsatisfiableError(bad_deps, strict=strict_channel_priority) conda.exceptions.UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions ``` ## Expected behavior Installs without error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13229/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/13229/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13228/comments
https://api.github.com/repos/huggingface/transformers/issues/13228/events
https://github.com/huggingface/transformers/pull/13228
977,562,124
MDExOlB1bGxSZXF1ZXN0NzE4MjkwNjAw
13,228
Improve documentation of pooler_output in ModelOutput
{ "login": "navjotts", "id": 8072161, "node_id": "MDQ6VXNlcjgwNzIxNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/8072161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/navjotts", "html_url": "https://github.com/navjotts", "followers_url": "https://api.github.com/users/navjotts/followers", "following_url": "https://api.github.com/users/navjotts/following{/other_user}", "gists_url": "https://api.github.com/users/navjotts/gists{/gist_id}", "starred_url": "https://api.github.com/users/navjotts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/navjotts/subscriptions", "organizations_url": "https://api.github.com/users/navjotts/orgs", "repos_url": "https://api.github.com/users/navjotts/repos", "events_url": "https://api.github.com/users/navjotts/events{/privacy}", "received_events_url": "https://api.github.com/users/navjotts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger noting: this PR is ready (tests have passed)", "Thanks a lot!" ]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? Improves the doc string for pooler_output in modeling_outputs.py – making it more clear, and opening its availability to a more generic use-case than just BERT-family of models. **Motivation**: I was writing a `cls_pooler` for a sentence embeddings usage, and initially thought this is the CLS token output from the last layer – which is not the case, that would just be `last_hidden_state[0]` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13228/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13228", "html_url": "https://github.com/huggingface/transformers/pull/13228", "diff_url": "https://github.com/huggingface/transformers/pull/13228.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13228.patch", "merged_at": 1630325296000 }
https://api.github.com/repos/huggingface/transformers/issues/13227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13227/comments
https://api.github.com/repos/huggingface/transformers/issues/13227/events
https://github.com/huggingface/transformers/issues/13227
977,454,653
MDU6SXNzdWU5Nzc0NTQ2NTM=
13,227
make test failing
{ "login": "merleyc", "id": 10016650, "node_id": "MDQ6VXNlcjEwMDE2NjUw", "avatar_url": "https://avatars.githubusercontent.com/u/10016650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merleyc", "html_url": "https://github.com/merleyc", "followers_url": "https://api.github.com/users/merleyc/followers", "following_url": "https://api.github.com/users/merleyc/following{/other_user}", "gists_url": "https://api.github.com/users/merleyc/gists{/gist_id}", "starred_url": "https://api.github.com/users/merleyc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merleyc/subscriptions", "organizations_url": "https://api.github.com/users/merleyc/orgs", "repos_url": "https://api.github.com/users/merleyc/repos", "events_url": "https://api.github.com/users/merleyc/events{/privacy}", "received_events_url": "https://api.github.com/users/merleyc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just to complement my issue above, I got some errors and warnings when calling _transformers-cli env_\r\nI believe it is not related with the issue since it is cuda related messages but I am sharing it anyway.\r\n\r\n```\r\n$ transformers-cli env\r\n2021-08-24 08:21:49.993817: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2021-08-24 08:21:49.993881: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nWARNING:tensorflow:From /home/yuzhou/miniconda3/envs/hf_py380/lib/python3.8/site-packages/transformers/commands/env.py:50: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nUse `tf.config.list_physical_devices('GPU')` instead.\r\n2021-08-24 08:21:52.426412: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2021-08-24 08:21:52.437177: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory\r\n2021-08-24 08:21:52.437216: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)\r\n2021-08-24 08:21:52.437251: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (sr507): /proc/driver/nvidia/version does not exist\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.9.2\r\n- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.8.0\r\n- PyTorch version (GPU?): 1.9.0+cu102 (False)\r\n- Tensorflow version (GPU?): 2.6.0 (False)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\n```", "Hello @merleyc! I can't see the full stack trace of your error logs, but it seems that you're having connection issues? It seems that nearly all errors are `requests.exceptions.ProxyError`", "Thanks for observing that, @LysandreJik !\r\nI was able to successfully use the commands below without setting proxies:\r\n```\r\ngit remote add upstream https://github.com/huggingface/transformers.git\r\ngit pull upstream master\r\n```\r\nBut I set the http_proxy, https_proxy and ftp_proxy and I am running again the tests. They will take >2h to complete. At least until now I don't see proxy related errors but already see FAILING tests, like:\r\n```\r\n24 [gw1] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs\r\n82 [gw1] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs\r\n83 tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain\r\n84 [gw1] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain\r\n```\r\nAny idea why are these tests failing?\r\nI will paste the entire log here once the tests are finished.\r\n\r\nThanks!", "Hi,\r\nI got 49 failing tests when running the test after setting the proxies. Below are the lines that contains FAILED on it. Please see the entire log file attached.\r\n\r\n`\r\n34:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs\r\n82:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs\r\n84:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain\r\n148:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures\r\n264:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs\r\n266:[gw0] FAILED tests/test_file_utils.py::GetFromCacheTests::test_bogus_url\r\n268:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript\r\n272:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs\r\n274:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_eager\r\n282:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_graph\r\n286:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs\r\n290:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_only_pretrain\r\n294:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs\r\n298:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_eager\r\n366:[gw2] FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs\r\n454:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_graph\r\n506:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs\r\n528:[gw1] FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs\r\n14436:[gw2] FAILED tests/test_tokenization_distilbert.py::BertTokenizationTest::test_padding\r\n14438:[gw0] FAILED tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_tokenizer_mismatch_warning\r\n14450:[gw1] FAILED tests/test_tokenization_dpr.py::BertTokenizationTest::test_padding\r\n18032:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_initialization\r\n18036:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_add_special_tokens\r\n18038:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_map_equal\r\n18054:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenization_python_rust_equals\r\n18060:[gw2] FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenizer_mismatch_warning\r\n18222:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_num_special_tokens_to_add_equal\r\n18258:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_prepare_for_model\r\n18262:[gw2] FAILED tests/test_tokenization_small_blenderbot.py::BlenderbotSmallTokenizerTest::test_empty_word_small_tok\r\n18330:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_pretokenized_inputs\r\n18518:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding\r\n18538:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding_different_model_input_name\r\n18566:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_create_token_type_ids\r\n18568:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_alignement_methods\r\n18586:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_embeded_special_tokens\r\n18602:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_save_pretrained\r\n18614:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_equivalence_to_orig_tokenizer\r\n18616:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_special_tokens_initialization\r\n18626:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_is_fast\r\n18646:[gw1] FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_tokenization_python_rust_equals\r\n18650:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_max_length_equal\r\n18654:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_build_inputs_with_special_tokens\r\n18668:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_add_special_tokens\r\n18674:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_num_special_tokens_to_add_equal\r\n18688:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_prepare_for_model\r\n18692:[gw1] FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_add_tokens\r\n18698:[gw2] FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_pretokenized_inputs\r\n18704:[gw0] FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_padding\r\n20268:[gw2] FAILED tests/test_trainer.py::TrainerIntegrationTest::test_mem_metrics\r\n59432:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs\r\n59433:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs - As...\r\n59434:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain\r\n59435:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures\r\n59436:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs\r\n59437:FAILED tests/test_file_utils.py::GetFromCacheTests::test_bogus_url - requests...\r\n59438:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript - A...\r\n59439:FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs - ...\r\n59440:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_eager\r\n59441:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_graph\r\n59442:FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs\r\n59443:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_only_pretrain\r\n59444:FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs - Assert...\r\n59445:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_eager\r\n59446:FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs - Asse...\r\n59447:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_graph\r\n59448:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs - A...\r\n59449:FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs\r\n59450:FAILED tests/test_tokenization_distilbert.py::BertTokenizationTest::test_padding\r\n59451:FAILED tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_tokenizer_mismatch_warning\r\n59452:FAILED tests/test_tokenization_dpr.py::BertTokenizationTest::test_padding - r...\r\n59453:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_initialization\r\n59454:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_add_special_tokens\r\n59455:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_special_tokens_map_equal\r\n59456:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenization_python_rust_equals\r\n59457:FAILED tests/test_tokenization_reformer.py::ReformerTokenizationTest::test_tokenizer_mismatch_warning\r\n59458:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_num_special_tokens_to_add_equal\r\n59459:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_prepare_for_model\r\n59460:FAILED tests/test_tokenization_small_blenderbot.py::BlenderbotSmallTokenizerTest::test_empty_word_small_tok\r\n59461:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_compare_pretokenized_inputs\r\n59462:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding\r\n59463:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_padding_different_model_input_name\r\n59464:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_create_token_type_ids\r\n59465:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_alignement_methods\r\n59466:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_embeded_special_tokens\r\n59467:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_save_pretrained\r\n59468:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_equivalence_to_orig_tokenizer\r\n59469:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_special_tokens_initialization\r\n59470:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_is_fast\r\n59471:FAILED tests/test_tokenization_roberta.py::RobertaTokenizationTest::test_tokenization_python_rust_equals\r\n59472:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_max_length_equal\r\n59473:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_build_inputs_with_special_tokens\r\n59474:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_add_special_tokens\r\n59475:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_num_special_tokens_to_add_equal\r\n59476:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_prepare_for_model\r\n59477:FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_add_tokens - r...\r\n59478:FAILED tests/test_tokenization_squeezebert.py::BertTokenizationTest::test_compare_pretokenized_inputs\r\n59479:FAILED tests/test_tokenization_pegasus.py::BigBirdPegasusTokenizationTest::test_padding\r\n59480:FAILED tests/test_trainer.py::TrainerIntegrationTest::test_mem_metrics - Asse...`\r\n\r\n[results_cli3_v2.txt](https://github.com/huggingface/transformers/files/7042503/results_cli3_v2.txt)\r\n\r\nI'd appreciate any help! :)\r\nThanks!", "Any thought on this issue, @sgugger and @LysandreJik ? Thanks!", "It's impossible to know what went wrong without having the whole output of the tests. The log file does not contain the logs, just which test passed and which did not.", "Hi @sgugger ,\r\nApologies but how can I get the log file ?\r\nTo get the results I sent to you in the file \"results_cli3_v2.txt\" I run this command:\r\n`python -m pytest -n 3 --dist=loadfile -s -v ./tests/ >> results_cli3_v2.txt`", "Hi @sgugger ,\r\nI have tried this command, which includes `--tb=long ` :\r\n`python -m pytest --tb=long -n 8 --dist=loadfile -s -v ./tests/ > ~/resultsCLI.txt`\r\n\r\nIs this the log file that you mentioned that should contain the _whole output of the tests_? If not, please advise.\r\n\r\nAfter runnning the mentioned command, I got 3 failing tests and the error\r\n```\r\nINTERNALERROR> Traceback (most recent call last):\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/_pytest/main.py\", line 269, in wrap_session\r\nINTERNALERROR> session.exitstatus = doit(config, session) or 0\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/_pytest/main.py\", line 323, in _main\r\nINTERNALERROR> config.hook.pytest_runtestloop(session=session)\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_hooks.py\", line 265, in __call__\r\nINTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_manager.py\", line 80, in _hookexec\r\nINTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_callers.py\", line 60, in _multicall\r\nINTERNALERROR> return outcome.get_result()\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_result.py\", line 60, in get_result\r\nINTERNALERROR> raise ex[1].with_traceback(ex[2])\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/pluggy/_callers.py\", line 39, in _multicall\r\nINTERNALERROR> res = hook_impl.function(*args)\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/dsession.py\", line 112, in pytest_runtestloop\r\nINTERNALERROR> self.loop_once()\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/dsession.py\", line 135, in loop_once\r\nINTERNALERROR> call(**kwargs)\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/dsession.py\", line 256, in worker_collectionfinish\r\nINTERNALERROR> self.sched.schedule()\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/scheduler/loadscope.py\", line 341, in schedule\r\nINTERNALERROR> self._reschedule(node)\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/scheduler/loadscope.py\", line 323, in _reschedule\r\nINTERNALERROR> self._assign_work_unit(node)\r\nINTERNALERROR> File \"/home/mypath/miniconda3/envs/hf-dev-py380_v2/lib/python3.8/site-packages/xdist/scheduler/loadscope.py\", line 261, in _assign_work_unit\r\nINTERNALERROR> worker_collection = self.registered_collections[node]\r\nINTERNALERROR> KeyError: <WorkerController gw10>\r\n\r\n```\r\nas showed in this output file: [resultsCLI-pytestv625.txt](https://github.com/huggingface/transformers/files/7140353/resultsCLI-pytestv625.txt)\r\n\r\nI downgraded the pytest version from 6.2.5 to 6.2.2 as stated [here](https://stackoverflow.com/questions/66803324/how-can-i-resolve-an-error-running-pytest-in-parallel-via-xdist-in-bitbucket-pip), but didn't help it. The output file with pytest v6.2.2 is: [resultsCLI-pytestv622.txt](https://github.com/huggingface/transformers/files/7140352/resultsCLI-pytestv622.txt)\r\n\r\nPlease advise.\r\nThanks!", "We need the stack trace and the error message of the failing test to understand what is going on, this is not it.", "@sgugger \r\nCould you please tell me what is the command to get what you are looking for?\r\nI didn’t find it in the documentation .\r\nThank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,634
1,634
NONE
null
## Environment info - `transformers` version: 4.9.2 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.0 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Documentation: @sgugger ## Information Found this issue while following the [instructions](https://huggingface.co/transformers/contributing.html) on how to install transformers as [dev]. The [dev] command _pip install -e .[dev]_ gives me 58 failing tests. This happens with python=3.8.0 and python=3.8.8. I am using py=3.8.0 because of this [related issue](https://github.com/huggingface/transformers/issues/9410). The problem arises when using: * [ X ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: conda create -n hf_py380 python=3.8.0 conda activate hf_py380 git clone https://github.com/myuser/transformers.git cd transformers/ git checkout -b exploration pip uninstall transformers git clone https://github.com/huggingface/datasets cd datasets pip install -e ".[dev]" cd .. python -m pytest -n 3 --dist=loadfile -s -v ./tests/ As results, I am getting 60 failed tests. Log file below: > -- Docs: https://docs.pytest.org/en/stable/warnings.html =================================================== short test summary info =================================================== FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs - AssertionError: unexpected... FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_eager - AssertionError: unexpectedly None FAILED tests/test_file_utils.py::GetFromCacheTests::test_bogus_url - requests.exceptions.ProxyError: HTTPSConnectionPool(hos... FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_torchscript - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_inference_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_graph - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_only_pretrain - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_eager - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_with_configs_graph - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_train_no_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark.py::BenchmarkTest::test_train_with_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs - AssertionError: unexpectedly None FAILED tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs - AssertionError: unexpectedly None FAILED tests/test_generation_utils.py::GenerationIntegrationTests::test_beam_search_warning_if_max_length_is_passed - OSErro... FAILED tests/test_modeling_tf_longformer.py::TFLongformerModelIntegrationTest::test_layer_attn_probs - OSError: Can't load w... FAILED tests/test_modeling_tf_longformer.py::TFLongformerModelIntegrationTest::test_layer_global_attn - OSError: Can't load ... FAILED tests/test_tokenization_blenderbot.py::Blenderbot3BTokenizerTests::test_3B_tokenization_same_as_parlai - requests.exc... FAILED tests/test_tokenization_bart.py::TestTokenizationBart::test_tokenization_python_rust_equals - requests.exceptions.Pro... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_build_inputs_with_special_tokens - requests.except... FAILED tests/test_tokenization_bart.py::TestTokenizationBart::test_tokenizer_mismatch_warning - IndexError: list index out o... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_compare_add_special_tokens - requests.exceptions.P... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_add_tokens - requests.exceptions.ProxyError: HT... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_compare_prepare_for_model - requests.exceptions.Pr... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_alignement_methods - requests.exceptions.ProxyE... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_batch_encode_dynamic_overflowing - requests.exc... FAILED tests/test_tokenization_byt5.py::ByT5TokenizationTest::test_empty_target_text - requests.exceptions.ProxyError: HTTPS... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_compare_pretokenized_inputs - requests.exceptions.... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_create_token_type_ids - requests.exceptions.ProxyE... FAILED tests/test_tokenization_byt5.py::ByT5TokenizationTest::test_eos_treatment - requests.exceptions.ProxyError: HTTPSConn... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_embeded_special_tokens - requests.exceptions.Proxy... FAILED tests/test_tokenization_byt5.py::ByT5TokenizationTest::test_max_length_integration - requests.exceptions.ProxyError: ... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_fast_only_inputs - requests.exceptions.ProxyError:... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_is_fast - requests.exceptions.ProxyError: HTTPSCon... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_max_length_equal - requests.exceptions.ProxyError:... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_compare_prepare_for_model - requests.exceptions... FAILED tests/test_tokenization_canine.py::CanineTokenizationTest::test_encoding_keys - requests.exceptions.ProxyError: HTTPS... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_num_special_tokens_to_add_equal - requests.excepti... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_compare_pretokenized_inputs - requests.exceptio... FAILED tests/test_tokenization_big_bird.py::BigBirdTokenizationTest::test_padding - requests.exceptions.ProxyError: HTTPSCon... FAILED tests/test_tokenization_camembert.py::CamembertTokenizationTest::test_create_token_type_ids - requests.exceptions.Pro... FAILED tests/test_tokenization_squeezebert.py::SqueezeBertTokenizationTest::test_batch_encode_dynamic_overflowing - requests... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_padding - requests.exceptions.ProxyError: HTTPSConnectionPool... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_alignement_methods - requests.exceptions.Pro... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_padding_different_model_input_name - requests.exceptions.Prox... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_batch_encode_dynamic_overflowing - requests.... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_prepare_batch - requests.exceptions.ProxyError: HTTPSConnecti... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_build_inputs_with_special_tokens - requests.... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_save_pretrained - requests.exceptions.ProxyError: HTTPSConnec... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_compare_add_special_tokens - requests.except... FAILED tests/test_tokenization_squeezebert.py::SqueezeBertTokenizationTest::test_build_inputs_with_special_tokens - requests... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_special_tokens_initialization - requests.exceptions.ProxyErro... FAILED tests/test_tokenization_squeezebert.py::SqueezeBertTokenizationTest::test_compare_add_special_tokens - requests.excep... FAILED tests/test_tokenization_t5.py::T5TokenizationTest::test_special_tokens_map_equal - requests.exceptions.ProxyError: HT... FAILED tests/test_tokenization_xlm_roberta.py::XLMRobertaTokenizationTest::test_compare_prepare_for_model - requests.excepti... FAILED tests/test_trainer.py::TrainerIntegrationTest::test_mem_metrics - AssertionError: 'init_mem_cpu_alloc_delta' not foun... ERROR tests/test_modeling_bart.py ERROR tests/test_modeling_encoder_decoder.py ERROR tests/test_modeling_flax_bart.py ERROR tests/test_modeling_flax_marian.py ERROR tests/test_modeling_flax_mbart.py ERROR tests/test_modeling_fsmt.py ERROR tests/test_modeling_rag.py ERROR tests/test_skip_decorators.py ERROR tests/deepspeed/test_deepspeed.py ERROR tests/deepspeed/test_model_zoo.py ERROR tests/extended/test_trainer_ext.py ERROR tests/sagemaker/test_multi_node_data_parallel.py ERROR tests/sagemaker/test_multi_node_model_parallel.py ERROR tests/sagemaker/test_single_node_gpu.py ===================== 60 failed, 7773 passed, 2260 skipped, 653 warnings, 14 errors in 7663.28s (2:07:43) ===================== ## Expected behavior Since I am not changing the code, just cloning repo etc, I expetected having all tests as PASSED. What am I doing wrong? Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13227/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13226/comments
https://api.github.com/repos/huggingface/transformers/issues/13226/events
https://github.com/huggingface/transformers/pull/13226
977,444,142
MDExOlB1bGxSZXF1ZXN0NzE4MTg2NjY3
13,226
Bump notebook from 6.1.5 to 6.4.1 in /examples/research_projects/lxmert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
Bumps [notebook](http://jupyter.org) from 6.1.5 to 6.4.1. [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=notebook&package-manager=pip&previous-version=6.1.5&new-version=6.4.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13226/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13226", "html_url": "https://github.com/huggingface/transformers/pull/13226", "diff_url": "https://github.com/huggingface/transformers/pull/13226.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13226.patch", "merged_at": 1629813159000 }
https://api.github.com/repos/huggingface/transformers/issues/13225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13225/comments
https://api.github.com/repos/huggingface/transformers/issues/13225/events
https://github.com/huggingface/transformers/pull/13225
977,253,171
MDExOlB1bGxSZXF1ZXN0NzE4MDIxNjc3
13,225
Allow local_files_only for fast pretrained tokenizers
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
COLLABORATOR
null
# What does this PR do? There seems to have been a legacy issue where `local_files_only` did not work for fast tokenizers. I understand that priority focus is given to the environment variable `TRANSFORMERS_OFFLINE` (which did work) but I'd argue that it is best to have such file-related arguments work in the same manner across models, tokenizers, configs. This change is quite small. The argument `local_files_only` already existed in `PretrainedTokenizerBase.from_pretrained` (but was not present in the docstring, I added it now) - but it was never passed to `get_fast_tokenizer_file`. This latter function ultimately only skipped online look-up if `is_offline_mode()`. But as discussed above it might be better to include a local argument to control this behaviour in addition to an absolute (environmental) one. This PR makes sure that `local_files_only` has the same effect when loading a slow or fast tokenizer. Fixes #12571 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). ## Who can review? @n1t0, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13225/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13225", "html_url": "https://github.com/huggingface/transformers/pull/13225", "diff_url": "https://github.com/huggingface/transformers/pull/13225.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13225.patch", "merged_at": 1629788733000 }
https://api.github.com/repos/huggingface/transformers/issues/13224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13224/comments
https://api.github.com/repos/huggingface/transformers/issues/13224/events
https://github.com/huggingface/transformers/pull/13224
977,248,862
MDExOlB1bGxSZXF1ZXN0NzE4MDE4MjU3
13,224
Add RemBert to AutoTokenizer
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
MEMBER
null
The RemBert tokenizer was not added to the `AutoTokenizer` factory. This fixes it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13224/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13224", "html_url": "https://github.com/huggingface/transformers/pull/13224", "diff_url": "https://github.com/huggingface/transformers/pull/13224.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13224.patch", "merged_at": 1629739009000 }
https://api.github.com/repos/huggingface/transformers/issues/13223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13223/comments
https://api.github.com/repos/huggingface/transformers/issues/13223/events
https://github.com/huggingface/transformers/issues/13223
977,178,897
MDU6SXNzdWU5NzcxNzg4OTc=
13,223
Unable to load 'rembert' checkpoint
{ "login": "Sam131112", "id": 8017133, "node_id": "MDQ6VXNlcjgwMTcxMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8017133?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sam131112", "html_url": "https://github.com/Sam131112", "followers_url": "https://api.github.com/users/Sam131112/followers", "following_url": "https://api.github.com/users/Sam131112/following{/other_user}", "gists_url": "https://api.github.com/users/Sam131112/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sam131112/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sam131112/subscriptions", "organizations_url": "https://api.github.com/users/Sam131112/orgs", "repos_url": "https://api.github.com/users/Sam131112/repos", "events_url": "https://api.github.com/users/Sam131112/events{/privacy}", "received_events_url": "https://api.github.com/users/Sam131112/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The checkpoint is `google/rembert`: https://huggingface.co/google/rembert", "thanks :)" ]
1,629
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.4.0-54-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: Contributor Author @Iwontbecreative - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information from transformers import RemBertTokenizer,RemBertTokenizerFast,RemBertForQuestionAnswering tokenizer = RemBertTokenizer.from_pretrained('rembert') ## Output HTTPError Traceback (most recent call last) <ipython-input-23-22b8f1f94b36> in <module> 1 from transformers import RemBertTokenizer,RemBertTokenizerFast,RemBertForQuestionAnswering ----> 2 tokenizer = RemBertTokenizer.from_pretrained('rembert') ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name 1647 fast_tokenizer_file = get_fast_tokenizer_file( -> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token 1649 ) 1650 additional_files_names = { ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token) 3409 """ 3410 # Inspect all files from the repo/folder. -> 3411 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token) 3412 tokenizer_files_map = {} 3413 for file_name in all_files: ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token) 1693 token = None 1694 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info( -> 1695 path_or_repo, revision=revision, token=token 1696 ) 1697 return [f.rfilename for f in model_info.siblings] ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token) 246 ) 247 r = requests.get(path, headers=headers) --> 248 r.raise_for_status() 249 d = r.json() 250 return ModelInfo(**d) ~/anaconda3/envs/Sam1/lib/python3.6/site-packages/requests/models.py in raise_for_status(self) 939 940 if http_error_msg: --> 941 raise HTTPError(http_error_msg, response=self) 942 943 def close(self): HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/rembert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13223/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13222/comments
https://api.github.com/repos/huggingface/transformers/issues/13222/events
https://github.com/huggingface/transformers/pull/13222
976,990,131
MDExOlB1bGxSZXF1ZXN0NzE3Nzk3OTI1
13,222
Add TFEncoderDecoderModel + Add cross-attention to some TF models
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @ydshieh,\r\n\r\nIt's awesome that you already give `TFEncoderDecoder` a stab! Note that they were a lot of difficulties when adding TFRag with saving/loading and parameter scopes - see: https://github.com/huggingface/transformers/pull/9002 so it's maybe a good idea to not include to many models in the first PR and try to keep it as simple as possible :-) \r\n\r\nThe most important part here is to make sure that saving & loading works correctly depending on how the TFEncoderDecoder was constructed. *E.g.* we should have all those tests we have for TFRag also for TFEncoderDecoder: https://github.com/huggingface/transformers/blob/cf5744764821c3254773a62e4cc160dd6f09df8e/tests/test_modeling_tf_rag.py#L945 . It's very much not easy to make sure saving and loading works correctly for all models in TF so it would be important to focus on that part first I think before adding cross-attention to many other models :-) \r\n\r\nHappy to help you here whenever you're stuck, but we should be careful to keep it simple in the beginning :-)", "@patrickvonplaten Yes, I did have some troubles with saving/loading and parameter scopes. That took me quite some time, but currently I am able to solve the issues I had, but I will check the PR you mentioned, and will also try to add the equivalent tests contained in TFRag.", "Hi @patrickvonplaten , a silly question, but it would be great if you can explain to me what `Model templates runner / run_tests_templates (pull_request)` does, and why it failed here (if possible). I am out of idea about the reason", "> Hi @patrickvonplaten , a silly question, but it would be great if you can explain to me what `Model templates runner / run_tests_templates (pull_request)` does, and why it failed here (if possible). I am out of idea about the reason\r\n\r\nIt's a test that makes sure that the cookie cutter keeps working correctly: https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model\r\n\r\nIt your case it fails because you've adapted some fundamental TF models which are used in the cookie cutter as a template. In order to make the test pass you should adapt the cookie cutter template analogous so that the changes done to TFBart are also added here: https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_tf_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py and here: https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/test_modeling_tf_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py\r\n\r\nBut for the beginning I wouldn't pay too much attention to this test (it's not super important). Once your PR is ready, it's a good idea to fix the cookiecutter test in a final commit. If it doesn't work, I can help you with it :-)", "@patrickvonplaten I have added \r\n\r\nhttps://github.com/huggingface/transformers/blob/f73cab3be8f0dc2cd816ce8f5c9a50e113f8eacb/tests/test_modeling_tf_encoder_decoder.py#L742\r\n\r\nsimiliar to `TFRagModelSaveLoadTests` for `TFRag`.\r\n(There is no pretrained TFEncoderDecoder model on model hub yet, so I made some adjustment for the test)\r\n\r\nThe PR is ready for review :-)\r\n", "@patrickvonplaten , I am trying to add `TFBartForCausalLM` similar to `BartForCausalLM`. Howerver, there is one last issue:\r\n\r\nIn TF / PyTroch CausalLM models, there are shift inside their call method, like:\r\nIn TensorFlow\r\n```\r\n if inputs[\"labels\"] is not None:\r\n # shift labels to the left and cut last logit token\r\n logits = logits[:, :-1]\r\n labels = inputs[\"labels\"][:, 1:]\r\n loss = self.compute_loss(labels=labels, logits=logits)\r\n```\r\nor in PyTorch\r\n```\r\n if labels is not None:\r\n # we are doing next-token prediction; shift prediction scores and input ids by one\r\n shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()\r\n labels = labels[:, 1:].contiguous()\r\n loss_fct = CrossEntropyLoss()\r\n lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))\r\n```\r\nYou can find some of them in\r\n\r\nhttps://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/gpt2/modeling_tf_gpt2.py#L745\r\nhttps://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/gpt2/modeling_gpt2.py#L973\r\nhttps://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bert/modeling_bert.py#L1233\r\nhttps://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bert/modeling_tf_bert.py#L1244\r\n\r\nHowerver, for `BartForCausalLM` (and the new added TF version), this shift is not done inside the call\r\n\r\nhttps://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bart/modeling_bart.py#L1780\r\n\r\nI think for Bart, it expected the `(decoder's) input_ids` and `labels` being preprocessed outside the `call`.\r\n\r\nHowever, this difference will cause a problem in TF test, because for TF causal LM models (Bert/GPT2/...), it returns the truncated `logits`\r\nhttps://github.com/huggingface/transformers/blob/0ebda5382b6456cba2d92a3670383f9adf61533a/src/transformers/models/bert/modeling_tf_bert.py#L1246\r\n\r\nBTW, In PyTorch causal LM models, they return the complete logits\r\nhttps://github.com/huggingface/transformers/blob/662b143b71eb5ef775e27a8f79798bb28b3283bd/src/transformers/models/bert/modeling_bert.py#L1235\r\n\r\nThe test for `TFEncoderDecoderModel.check_encoder_decoder_model_labels` therefore expects the logits has `seq_len - 1`.\r\nhttps://github.com/huggingface/transformers/blob/f73cab3be8f0dc2cd816ce8f5c9a50e113f8eacb/tests/test_modeling_tf_encoder_decoder.py#L279\r\n\r\nThis works all fine until I introduce `TFBartForCausalLM`, as currently it will retrun logits of `seq_len`.\r\n\r\nDo you have some opinions on how should I deal with this situation?", "Awesome work so far @ydshieh! Mostly left nits, but the following things should be checked before merging:\r\n\r\n1. - `EncoderDecoderModel` and `TFEncoderDecoder` model should be exactly the same. We should write a test for this similar to https://github.com/huggingface/transformers/blob/ba1b3db70907b975b5ca52b9957c5ed7a186a0fa/tests/test_modeling_tf_common.py#L431 . In this test IMO we can use two small BERT models. We also should have added a test for Flax, but I've forgotten to mention it. For Flax we can do this in another PR, for TF we should do it in this PR ideally :-) \r\n\r\n2. - `TFEncoderDecoder` has to load and save weights correctly in multiple scenarios. Essentially we need all those tests that we have for TFRag passing for TFEncoderDecoder as well: https://github.com/huggingface/transformers/blob/ba1b3db70907b975b5ca52b9957c5ed7a186a0fa/tests/test_modeling_tf_rag.py#L945 . Here I can help you, so maybe you can add some tests and let them fail for the moment and then I can go in and fix them :-)\r\n\r\n3. - We also have to adapt the TF templates here: https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_tf_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py since we are doing core changes to TFBert. Feel free to give it a try - otherwise I'm happy to take over this part as well if it becomes time-consuming #13288 \r\n\r\n4. - Finally we need to run all BERT & RoBERTa slow tests to make sure nothing is broken. I can do this before merging\r\n\r\n=> If ok maybe you can look at the above suggestions and write some tests for 1), 2) and then I can help you make the tests for 2 pass? :-) \r\n\r\nReally great work so far - this is one of the most complex architectures in the repo!", "@patrickvonplaten Thanks for the feedbacks. I will make the changes. I will try to write `test_pt_tf_model_equivalence(self): `. About `class TFRagModelSaveLoadTests(unittest.TestCase): `, the last time I checked, it always passed, but I will verify again (since I reverted some changes done in the core tf weights loading/saving). (All the slow tests have passed when I run them locally, but again, I will verify)", "Also @ydshieh rebasing onto master would likely help resolve some of the currently failing tests, they don't seem related to this PR at all.", "> \r\n> \r\n> Also @ydshieh rebasing onto master would likely help resolve some of the currently failing tests, they don't seem related to this PR at all.\r\n\r\nYes. There is `run_tests_tf` failed which is related to this PR. Once this is resolved, and having a note on the big hack for PT <-> TF, I think this PR will be ready :) - Let's see what Patrick say.", "I agree with both of you! Once we fix the https://app.circleci.com/pipelines/github/huggingface/transformers/28285/workflows/d3d182a4-44b3-4bc8-b61e-dafcb341c2eb/jobs/278321?invite=true#step-108-4434 test (which is caused by this PR), we should add a note to `TFEncoderDecoder.from_pretrained(...)` and then we can merge this PR :tada: - very good work @ydshieh :-)", "> \r\n> \r\n> I agree with both of you! Once we fix the https://app.circleci.com/pipelines/github/huggingface/transformers/28285/workflows/d3d182a4-44b3-4bc8-b61e-dafcb341c2eb/jobs/278321?invite=true#step-108-4434 test (which is caused by this PR), we should add a note to `TFEncoderDecoder.from_pretrained(...)` and then we can merge this PR πŸŽ‰ - very good work @ydshieh :-)\r\n\r\nI already know a way to fix the issue, but want to know which way might be better in your opinion. (The question I posted on Slack). Let me know what you think once you check it :) - In short, the question is about if setting `GPT2Config.is_decoder=True` makes sense.", "@ydshieh - the PR looks to be in a very good state to me! In a final step, could you maybe adapt the test:\r\n`tests/test_modeling_tf_encoder_decoder.py::TFEncoderDecoderModelSaveLoadTests::test_encoder_decoder_save_load_from_encoder_decoder_from_pt` to showcase how the load a checkpoint from pytorch using the encoder and decoder seperately? \r\n\r\nAfter that I think we are good to merge :-)", "> \r\n> \r\n> @ydshieh - the PR looks to be in a very good state to me! In a final step, could you maybe adapt the test: `tests/test_modeling_tf_encoder_decoder.py::TFEncoderDecoderModelSaveLoadTests::test_encoder_decoder_save_load_from_encoder_decoder_from_pt` to showcase how the load a checkpoint from pytorch using the encoder and decoder seperately?\r\n> \r\n> After that I think we are good to merge :-)\r\n\r\nHey, @patrickvonplaten Sure. Let me make sure: you are saying to change the hack in `test_encoder_decoder_save_load_from_encoder_decoder_from_pt` to use encoder and decoder separately (and load their pytorch weights ), right? ", "I made the change to \r\n\r\n```\r\ntest_encoder_decoder_save_load_from_encoder_decoder_from_pt\r\n```\r\n\r\nHere is the change I made\r\n\r\nhttps://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/tests/test_modeling_tf_encoder_decoder.py#L684\r\n```\r\n # PyTorch => TensorFlow\r\n with tempfile.TemporaryDirectory() as tmp_dirname_1, tempfile.TemporaryDirectory() as tmp_dirname_2:\r\n encoder_decoder_pt.encoder.save_pretrained(tmp_dirname_1)\r\n encoder_decoder_pt.decoder.save_pretrained(tmp_dirname_2)\r\n encoder_decoder_tf = TFEncoderDecoderModel.from_encoder_decoder_pretrained(\r\n tmp_dirname_1, tmp_dirname_2, encoder_from_pt=True, decoder_from_pt=True\r\n )\r\n```\r\n\r\nWe also have a note in the doc of `TFEncoderDecoderModel.from_pretrained` (which also explains how to deal with a pytorch checkpoint)\r\n\r\nhttps://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py#L243\r\n\r\n\r\n@patrickvonplaten , @Rocketknight1 Thank you for your review! I am glad we have a TensorFlow Encoder Decoder now :)", "Hi @patrickvonplaten & @Rocketknight1, I made the change to \r\n\r\n```\r\ntest_encoder_decoder_save_load_from_encoder_decoder_from_pt\r\n```\r\n\r\nHere is the change I made\r\n\r\nhttps://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/tests/test_modeling_tf_encoder_decoder.py#L684\r\n```\r\n # PyTorch => TensorFlow\r\n with tempfile.TemporaryDirectory() as tmp_dirname_1, tempfile.TemporaryDirectory() as tmp_dirname_2:\r\n encoder_decoder_pt.encoder.save_pretrained(tmp_dirname_1)\r\n encoder_decoder_pt.decoder.save_pretrained(tmp_dirname_2)\r\n encoder_decoder_tf = TFEncoderDecoderModel.from_encoder_decoder_pretrained(\r\n tmp_dirname_1, tmp_dirname_2, encoder_from_pt=True, decoder_from_pt=True\r\n )\r\n```\r\n\r\nWe also have a note in the doc of `TFEncoderDecoderModel.from_pretrained` (which also explains how to deal with a pytorch checkpoint)\r\n\r\nhttps://github.com/huggingface/transformers/blob/0cd88b8538b8a27b4f3df2e8974a41d7e027dd70/src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py#L243\r\n\r\nDo you have any further comments?\r\n\r\nThank you for your review! Looking forward for the merge and having a TensorFlow Encoder Decoder in HF :)", "@ydshieh At this point I'm pretty happy with it! @patrickvonplaten do you have any objections, or should we merge?", "@sgugger, \r\n\r\nAbout `assert self.is_decoder, f\"{self} should be used as a decoder model if cross attention is added\"`, I copied it from Pytorch models. Do you think it is a good idea for me to change all of them (PT/TF files) to `if not self.is_decoder: raise ValueError(xxx)` in this PR, or just the TF files involved currently (and a new PR for all other occurrences)?", "> About assert self.is_decoder, f\"{self} should be used as a decoder model if cross attention is added\", I copied it from Pytorch models. \r\n\r\nYes we have some old ones in the codebase, we are just not accepting new ones, so please adapt your PR. We can adapt the PyTorch files in a separate PR.", "@patrickvonplaten , I tried to run slow tests for the changed models, and found some issues. (Previously, I only run the tests for `TFEncoderDecoderModel`). I will let you know when I finish fixing them :)", "> Are there any caveats to be known for this implementation vs the PyTorch implementation which should be put in the docs, or should they behave identically?\r\n> \r\n\r\nThere is also one thing I pointed much earlier: For a given `TFEncoderDecoderModel`, if we do\r\n\r\n```\r\nmodel.encoder.save_pretrained(encoder_path)\r\nmodel.decoder.save_pretrained(decoder_path)\r\n```\r\n\r\nThen\r\n\r\n```\r\nnew_model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(\r\n encoder_path, decoder_path\r\n)\r\n```\r\nwon't load the TF checkpoint weights correctly.\r\n\r\nThis is somehow strange (logically), but the chance of doing so is very low -> If we already have a `TFEncoderDecoderModel`, it's more likely `save_pretrained` will be used rather than saving the 2 components separately.\r\n\r\nI can add this to the doc if necessary. (I will verify again to make sure)", "@LysandreJik I added the PT->TF information to `encoderdecoder.rst`, along with the model contributors (I hope this is fine).\r\n\r\nhttps://github.com/huggingface/transformers/blob/f021eec0c7e97334a6c2fc3b9b1a1b43ec06fce3/docs/source/model_doc/encoderdecoder.rst#L30\r\n\r\nAll the suggestions have been addressed.\r\n\r\n@sgugger , I left the following unchanged in this PR (we can clean things up in another PR)\r\n\r\n```\r\n# T5 has a mask that can compare sequence ids,\r\n```\r\n\r\n@patrickvonplaten I ran the slow tests locally with all the models changed in this PR, except `TFRemBert` (my poor laptop just can't ran it). It's ready for you to do a final verification, thank you! (The `get_tf_activation(\"gelu\")` issue is fixed )", "This looks good to me, thank you @ydshieh!", "Awesome - looked through the PR again and it looks good to me! Thanks a lot for all your amazing work on this :-)", "@patrickvonplaten , Thank you!\r\n\r\nDo you want to upload a converted TF checkpoint to `\"patrickvonplaten/bert2bert-cnn_dailymail-fp16\"` (so we can change the examples, and adding 1 or 2 more tests).\r\n\r\nOtherwise, would it be a good idea for me to upload to `\"ydshieh/bert2bert-cnn_dailymail-fp16\"`? I assume that the checkpoints used officially for the tests/examples should be under the name of Hugging Face or its staffs.\r\n\r\nKindly tag @LysandreJik for this. " ]
1,629
1,651
1,634
COLLABORATOR
null
# What does this PR do? - Add TFEncoderDecoderModel + Add cross-attention to some TF models - Add cross attention & cache mechanism (`use_cache` & `past_key_values`) to some TF models - Add `test_modeling_tf_encoder_decoder.py` ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @LysandreJik Closes https://github.com/huggingface/transformers/issues/9863
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13222/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13222", "html_url": "https://github.com/huggingface/transformers/pull/13222", "diff_url": "https://github.com/huggingface/transformers/pull/13222.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13222.patch", "merged_at": 1634076634000 }
https://api.github.com/repos/huggingface/transformers/issues/13221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13221/comments
https://api.github.com/repos/huggingface/transformers/issues/13221/events
https://github.com/huggingface/transformers/issues/13221
976,963,207
MDU6SXNzdWU5NzY5NjMyMDc=
13,221
Typo in M2M100 1.2B model card page, strange translation results and new M2M100 615M model
{ "login": "Fikavec", "id": 83672821, "node_id": "MDQ6VXNlcjgzNjcyODIx", "avatar_url": "https://avatars.githubusercontent.com/u/83672821?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fikavec", "html_url": "https://github.com/Fikavec", "followers_url": "https://api.github.com/users/Fikavec/followers", "following_url": "https://api.github.com/users/Fikavec/following{/other_user}", "gists_url": "https://api.github.com/users/Fikavec/gists{/gist_id}", "starred_url": "https://api.github.com/users/Fikavec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fikavec/subscriptions", "organizations_url": "https://api.github.com/users/Fikavec/orgs", "repos_url": "https://api.github.com/users/Fikavec/repos", "events_url": "https://api.github.com/users/Fikavec/events{/privacy}", "received_events_url": "https://api.github.com/users/Fikavec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Fikavec \r\n\r\nThank you for reporting this, the typo is fixed now!\r\n\r\n> It is possible to add a new M2M100 615M model?\r\nYes, I will take a look.\r\n\r\n> Model m2m100_1.2B sometimes gives a strange translation results on news titles - incorrectly translates the names of countries and cities in sentences, but model m2m100_418M translates correctly (i'm saw this in many languages pairs) - it is normal, or maybe there error in uploaded \"facebook/m2m100_1.2B\" tokenizer/model or function code M2M100Tokenizer.from_pretrained(\"facebook/m2m100_1.2B\")?\r\n\r\nI don't think this is an error but I will take a look. But I have observed this behavior with multi-linguial models, the translations sometimes could be wrong especially for low-resource languages.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
@patil-suraj thank you so much for your great work, seems like there's a typo in the [M2M100 1.2B page:](https://huggingface.co/facebook/m2m100_1.2B) >model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") >tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") It should be "m2m100_1.2B" instead of "m2m100_418M". Model m2m100_1.2B sometimes gives a strange translation results on news titles - incorrectly translates the names of countries and cities in sentences, but model m2m100_418M translates correctly (i'm saw this in many languages pairs) - it is normal, or maybe there error in uploaded "facebook/m2m100_1.2B" tokenizer/model or function code M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")? For example: > from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") sentence = "في Ω…ΩŠΨ³Ψ§Ω†" tokenizer.src_lang = "ar" encoded_zh = tokenizer(sentence, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)) model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B") tokenizer.src_lang = "ar" encoded_zh = tokenizer(sentence, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)) gives ['in Messan.'] and ['in Messengers'] Try also sentence = "Ω…ΨͺΩ‡Ω…ΩŠΩ† في Ω…ΩŠΨ³Ψ§Ω†" - gives ['Accused in Messiah.'] ['Prosecutors in Missouri'] - why [Ω…ΩŠΨ³Ψ§Ω†](https://en.wikipedia.org/wiki/Maysan_Governorate) in news titles translates by m2m100_1.2B as Messengers, Missouri, Mexico, Munich? It is possible to add [new M2M100 615M model?](https://github.com/huggingface/transformers/issues/12775#issuecomment-889437365)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13221/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13220
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13220/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13220/comments
https://api.github.com/repos/huggingface/transformers/issues/13220/events
https://github.com/huggingface/transformers/pull/13220
976,846,122
MDExOlB1bGxSZXF1ZXN0NzE3Njc2NDk1
13,220
[Tentative] Moving slow tokenizer to the Trie world.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten It speeds up all the time, but it's only that impressive with adding tons of `added_tokens`. The reason is because of the change in complexity. Even for simple `ByT5` that uses 125 extra `added_tokens` by default, that's already 100x speed (so not really that exotic). \r\n\r\nFor \"regular\" slow tokenizers, with <5 added_tokens the speedup exists but is rather negligible.\r\n\r\nAlso I've seen 2/3 different issues regarding this. The one reported (https://github.com/huggingface/tokenizers/issues/615) has 8 participants since February, so while not super urgent, there definitely seems to be more than a couple of people doing that.\r\n\r\nAnd always fine adding more documentation, but fwiw it's a pretty standard data structure. ", "Hi @LysandreJik Do we have benchmarks tests anywhere ?\r\n\r\nThis doesn't fix anything that was broken before, it just make things faster (some usage of the lib was so slow that it was unusable, but it was definitely working).\r\n\r\nI could add a test that some tokenization takes too long, but it's always a tricky business to add tests related to timings because it might depend on the hardware running the tests, so it would definitely NOT be a unit test.", "@LysandreJik Ok, I added 1 `common` test which fails only on Canine (`extra_id_1` is not valid over there).\r\n\r\nAlso added Trie specific tests (the one in the doc basically)", "Thanks @SaulLu , the matching was incorrect in that edge case where some token is rigourously included in another, then we would match the inner token instead of the first match.\r\n\r\nThe lookahead part got more complex but will now work in that edge case (which is important to at least follow the documentation)\r\n\r\nRegarding the idea of `added_tokens` following order, recaping some offline conversation:\r\n- It's doable, but would make code even more complex. We would need to keep track of ranks in the Trie, whenever we have a full match, resolve all partial matches, sort by order and take the highest rank.\r\n- Seems overly complex for what seems to be pathological cases at best, so out of scope of this one. ", "Will merge this later today unless there are still some comments (But I feel it's ok in current state)" ]
1,629
1,631
1,631
CONTRIBUTOR
null
# What does this PR do? This PR attempts to solve the slow tokenizer `added_tokens` source of slowness. Currently the splitting is done in O(n) manner, with very non obvious algorithm to "pre-tokenize" (`tokenize` function). This will yield extremely slow tokenization even by slow tokenization standards. It also affects slow-only tokenizers like ByT5. The proposed fix simply moves the splitting into a O(1) algorithm (relative to `added_tokens`). It does that by manually implementing a real Trie (more information why Python regexp can't be trusted on this: https://stackoverflow.com/questions/42742810/speed-up-millions-of-regex-replacements-in-python-3). There is at least one know breaking change here, it's that users could rely on token ORDER to force splitting on some `added_tokens` before others (https://github.com/huggingface/tokenizers/issues/615). This won't be the case anymore with this code, as the splitting will happen on ~~first~~ longest encounter of `added_tokens` regardless. This is a pretty standard practice. ~~We could instead split on longest match first, but it's also a breaking change (although most likely less breaking).It does mean adding backtracking so the algorithm will be more complex and more state management~~ Edit: Implemented Benchmarking code: ```python import datetime from transformers import GPT2Tokenizer # They used to have to be sorted in reverse by length, otherwise the tokens arent newtokens = range(0, 20000) newtokens = list(newtokens) newtokens.sort(reverse=True) newtokens = [f"new_{x}" for x in newtokens] slow = GPT2Tokenizer.from_pretrained("gpt2") # Add new vocab slow_custom = GPT2Tokenizer.from_pretrained("gpt2") slow_custom.add_tokens(newtokens) # Differences when tokenising the text... text = "this is a sentence containing new_200" for tokenizer in [slow, slow_custom]: start = datetime.datetime.now() print(tokenizer.tokenize(text)) print(datetime.datetime.now() - start) ``` This goes from 4~7s on the `slow_custom` to 1ms (and ~0.3ms without `added_tokens`) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/tokenizers/issues/615 (unrelated, because users seem to still be using slow tokenizers there. @LysandreJik @patrickvonplaten @n1t0 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ```bash RUN_SLOW=1 pytest -sv tests/test_tokenization_* Results (1250.63s): 3957 passed 353 skipped ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13220/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13220/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13220", "html_url": "https://github.com/huggingface/transformers/pull/13220", "diff_url": "https://github.com/huggingface/transformers/pull/13220.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13220.patch", "merged_at": 1631201176000 }
https://api.github.com/repos/huggingface/transformers/issues/13219
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13219/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13219/comments
https://api.github.com/repos/huggingface/transformers/issues/13219/events
https://github.com/huggingface/transformers/issues/13219
976,400,085
MDU6SXNzdWU5NzY0MDAwODU=
13,219
"Resource exhausted" when loading Flax GPT-Neo 2.7B
{ "login": "rolandgvc", "id": 26813782, "node_id": "MDQ6VXNlcjI2ODEzNzgy", "avatar_url": "https://avatars.githubusercontent.com/u/26813782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rolandgvc", "html_url": "https://github.com/rolandgvc", "followers_url": "https://api.github.com/users/rolandgvc/followers", "following_url": "https://api.github.com/users/rolandgvc/following{/other_user}", "gists_url": "https://api.github.com/users/rolandgvc/gists{/gist_id}", "starred_url": "https://api.github.com/users/rolandgvc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rolandgvc/subscriptions", "organizations_url": "https://api.github.com/users/rolandgvc/orgs", "repos_url": "https://api.github.com/users/rolandgvc/repos", "events_url": "https://api.github.com/users/rolandgvc/events{/privacy}", "received_events_url": "https://api.github.com/users/rolandgvc/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hi, any updates on this?", "Thanks for reporting this, I'm looking into it.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Unsale", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Running into this same issue when trying to load t5-3b as a Flax model from the Pytorch version on TPU3.8", "Working on a feature that should fix this issue. This is probably because the model is initialized randomly and the weights are on the device, and then the pre-trained weights are also loaded directly on the device. So working on a feature that allows initializing the model only abstractly to consume less memory. Should be available in a couple of weeks :) ", "Any update on this @patil-suraj \r\nWe are experiencing this when trying to load RoBERTa using Flax with TPU v3-8" ]
1,629
1,638
null
NONE
null
## Environment info - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.19 - JaxLib version: 0.1.70 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @patrickvonplaten @patil-suraj @LysandreJik ## Information I am not able to load the Flax GPT-Neo 2.7B model in my TPU VM v3-8 instance. ```python tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B", pad_token="</s>", padding_side="left") model = FlaxAutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B", pad_token_id=tokenizer.eos_token_id) ``` The model will download but will fail to load with ``` RuntimeError: Resource exhausted: Failed to allocate request for 100.00MiB (104857600B) on device ordinal 0 ``` However, the pytorch version will load and run just fine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13219/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/13218
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13218/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13218/comments
https://api.github.com/repos/huggingface/transformers/issues/13218/events
https://github.com/huggingface/transformers/issues/13218
976,371,256
MDU6SXNzdWU5NzYzNzEyNTY=
13,218
How to run GLUE tasks on my model?
{ "login": "orenpapers", "id": 28626773, "node_id": "MDQ6VXNlcjI4NjI2Nzcz", "avatar_url": "https://avatars.githubusercontent.com/u/28626773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orenpapers", "html_url": "https://github.com/orenpapers", "followers_url": "https://api.github.com/users/orenpapers/followers", "following_url": "https://api.github.com/users/orenpapers/following{/other_user}", "gists_url": "https://api.github.com/users/orenpapers/gists{/gist_id}", "starred_url": "https://api.github.com/users/orenpapers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orenpapers/subscriptions", "organizations_url": "https://api.github.com/users/orenpapers/orgs", "repos_url": "https://api.github.com/users/orenpapers/repos", "events_url": "https://api.github.com/users/orenpapers/events{/privacy}", "received_events_url": "https://api.github.com/users/orenpapers/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can run the `run_glue.py` script, only specifying `--do_eval` (and not `--do_train`).\r\n\r\nIt's located here: https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
I trained a BERT model on my dataset. Now , I want to run it on GLUE tasks, just to get eval score (no finetuning on GLUE). Is this possible? I found this proposed example: https://pypi.org/project/pytorch-transformers/#quick-tour-of-the-fine-tuningusage-scripts but it doesn't explain where I can find the `run_glue.py `script. I found this link: https://github.com/huggingface/transformers/blob/master/examples/run_glue.py But it is broken
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13218/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13217
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13217/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13217/comments
https://api.github.com/repos/huggingface/transformers/issues/13217/events
https://github.com/huggingface/transformers/pull/13217
976,238,724
MDExOlB1bGxSZXF1ZXN0NzE3MjE5ODMx
13,217
Update clip loss calculation
{ "login": "sachinruk", "id": 1410927, "node_id": "MDQ6VXNlcjE0MTA5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sachinruk", "html_url": "https://github.com/sachinruk", "followers_url": "https://api.github.com/users/sachinruk/followers", "following_url": "https://api.github.com/users/sachinruk/following{/other_user}", "gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}", "starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions", "organizations_url": "https://api.github.com/users/sachinruk/orgs", "repos_url": "https://api.github.com/users/sachinruk/repos", "events_url": "https://api.github.com/users/sachinruk/events{/privacy}", "received_events_url": "https://api.github.com/users/sachinruk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
Hello, I'm the author of the blog you took the snippet from. I think this way of calculating is possibly slightly more accurate for calculation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13217/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13217", "html_url": "https://github.com/huggingface/transformers/pull/13217", "diff_url": "https://github.com/huggingface/transformers/pull/13217.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13217.patch", "merged_at": 1630565157000 }
https://api.github.com/repos/huggingface/transformers/issues/13216
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13216/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13216/comments
https://api.github.com/repos/huggingface/transformers/issues/13216/events
https://github.com/huggingface/transformers/pull/13216
976,203,953
MDExOlB1bGxSZXF1ZXN0NzE3MTk3MTE0
13,216
Use DS callable API to allow hf_scheduler + ds_optimizer
{ "login": "tjruwase", "id": 4271600, "node_id": "MDQ6VXNlcjQyNzE2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/4271600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tjruwase", "html_url": "https://github.com/tjruwase", "followers_url": "https://api.github.com/users/tjruwase/followers", "following_url": "https://api.github.com/users/tjruwase/following{/other_user}", "gists_url": "https://api.github.com/users/tjruwase/gists{/gist_id}", "starred_url": "https://api.github.com/users/tjruwase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tjruwase/subscriptions", "organizations_url": "https://api.github.com/users/tjruwase/orgs", "repos_url": "https://api.github.com/users/tjruwase/repos", "events_url": "https://api.github.com/users/tjruwase/events{/privacy}", "received_events_url": "https://api.github.com/users/tjruwase/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "* [x] https://github.com/microsoft/DeepSpeed/pull/1316 is merged\r\n* [x] v0.5.1 released to PyPI: https://pypi.org/project/deepspeed/0.5.1/" ]
1,629
1,630
1,630
CONTRIBUTOR
null
This PR: - Used the (new) Callable api of deepspeed.initialize() to enable combining hf schedulers with deepspeed optimizers. - `create_scheduler` now has an optional `optimizer` arg - Updates relevant unit test. Blocking events: All unblocked now. - [x] depends on deepspeed PR [1316](https://github.com/microsoft/DeepSpeed/pull/1316). - [x] needs new deepspeed version after PR is merged and need to update the dependencies when that happens. deepspeed: @stas00.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13216/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13216/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13216", "html_url": "https://github.com/huggingface/transformers/pull/13216", "diff_url": "https://github.com/huggingface/transformers/pull/13216.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13216.patch", "merged_at": 1630342866000 }
https://api.github.com/repos/huggingface/transformers/issues/13215
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13215/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13215/comments
https://api.github.com/repos/huggingface/transformers/issues/13215/events
https://github.com/huggingface/transformers/issues/13215
976,203,298
MDU6SXNzdWU5NzYyMDMyOTg=
13,215
Input to a Tensorflow model where a dictionary cannot be used
{ "login": "old-school-kid", "id": 56781123, "node_id": "MDQ6VXNlcjU2NzgxMTIz", "avatar_url": "https://avatars.githubusercontent.com/u/56781123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/old-school-kid", "html_url": "https://github.com/old-school-kid", "followers_url": "https://api.github.com/users/old-school-kid/followers", "following_url": "https://api.github.com/users/old-school-kid/following{/other_user}", "gists_url": "https://api.github.com/users/old-school-kid/gists{/gist_id}", "starred_url": "https://api.github.com/users/old-school-kid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/old-school-kid/subscriptions", "organizations_url": "https://api.github.com/users/old-school-kid/orgs", "repos_url": "https://api.github.com/users/old-school-kid/repos", "events_url": "https://api.github.com/users/old-school-kid/events{/privacy}", "received_events_url": "https://api.github.com/users/old-school-kid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Maybe of interest to @Rocketknight1 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,635
1,635
NONE
null
Made a Tensorflow fuctional API model on top of TFAutoModelForSequenceClassification with 3 sentence as input. Training model directly on tokenized input raises **ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {'(<class \'list\'> containing values of types {"<class \'tensorflow.python.framework.ops.EagerTensor\'>"})'}), (<class 'list'> containing values of types {"<class 'int'>"})** If I convert it into numpy array it raises **ValueError: Data cardinality is ambiguous:** `model(X_train[0])` prduces the desired result in both cases but on training the model it raises errors. Code can be found in this [notebook](https://colab.research.google.com/drive/1wsVVHiaqBF8joIEsP_XSMF35fnDQS19D?usp=sharing)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13215/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13214
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13214/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13214/comments
https://api.github.com/repos/huggingface/transformers/issues/13214/events
https://github.com/huggingface/transformers/pull/13214
976,189,928
MDExOlB1bGxSZXF1ZXN0NzE3MTg3ODc0
13,214
✨ add citation file
{ "login": "flaxel", "id": 19373153, "node_id": "MDQ6VXNlcjE5MzczMTUz", "avatar_url": "https://avatars.githubusercontent.com/u/19373153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flaxel", "html_url": "https://github.com/flaxel", "followers_url": "https://api.github.com/users/flaxel/followers", "following_url": "https://api.github.com/users/flaxel/following{/other_user}", "gists_url": "https://api.github.com/users/flaxel/gists{/gist_id}", "starred_url": "https://api.github.com/users/flaxel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flaxel/subscriptions", "organizations_url": "https://api.github.com/users/flaxel/orgs", "repos_url": "https://api.github.com/users/flaxel/repos", "events_url": "https://api.github.com/users/flaxel/events{/privacy}", "received_events_url": "https://api.github.com/users/flaxel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> I have added a new file to make it easier to quote the software. Once again, there is more information in [this documentation](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-citation-files#citing-something-other-than-software). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13214/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13214", "html_url": "https://github.com/huggingface/transformers/pull/13214", "diff_url": "https://github.com/huggingface/transformers/pull/13214.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13214.patch", "merged_at": 1630324015000 }
https://api.github.com/repos/huggingface/transformers/issues/13213
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13213/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13213/comments
https://api.github.com/repos/huggingface/transformers/issues/13213/events
https://github.com/huggingface/transformers/issues/13213
976,158,584
MDU6SXNzdWU5NzYxNTg1ODQ=
13,213
Questions on generating using encoder-decoder models
{ "login": "devjwsong", "id": 16731987, "node_id": "MDQ6VXNlcjE2NzMxOTg3", "avatar_url": "https://avatars.githubusercontent.com/u/16731987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devjwsong", "html_url": "https://github.com/devjwsong", "followers_url": "https://api.github.com/users/devjwsong/followers", "following_url": "https://api.github.com/users/devjwsong/following{/other_user}", "gists_url": "https://api.github.com/users/devjwsong/gists{/gist_id}", "starred_url": "https://api.github.com/users/devjwsong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devjwsong/subscriptions", "organizations_url": "https://api.github.com/users/devjwsong/orgs", "repos_url": "https://api.github.com/users/devjwsong/repos", "events_url": "https://api.github.com/users/devjwsong/events{/privacy}", "received_events_url": "https://api.github.com/users/devjwsong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nencoder-decoder models like T5 and BART create the `decoder_input_ids` automatically based on the `labels` you provide. So you should only provide the encoder inputs (`input_ids`, `attention_mask`, possibly `token_type_ids`) and the decoder \r\ntargets (`labels`). As you can see [here](https://github.com/huggingface/transformers/blob/f689743e7454b93f6cab4343026de03fa530bfb9/src/transformers/models/bart/modeling_bart.py#L1287), `BartForConditionalGeneration` will automatically create the `decoder_input_ids` by shifting the `labels` one position to the right.\r\n\r\nLet's consider what happens with a small example. Suppose we want to train BART for translation, and we have:\r\n* input sequence: \"HuggingFace is a company based in New York and Paris.\"\r\n* target sequence: \"HuggingFace est une sociΓ©tΓ© basΓ©e Γ  New York et Γ  Paris.\"\r\n\r\n=> to prepare this example for `BartForConditionalGeneration`, we can use `BartTokenizer`. We can prepare the input for BART by encoding the input sequence, like so:\r\n```\r\nfrom transformers import BartTokenizer\r\n\r\ntokenizer = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\n\r\ninput_sequence = \"HuggingFace is a company based in New York and Paris.\"\r\nencoding = tokenizer(input_sequence, return_tensors=\"pt\")\r\ninput_ids, attention_mask = encoding.input_ids, encoding.attention_mask\r\n```\r\nTo create the labels, we can also use `BartTokenizer`. The labels are just the `input_ids` from the encoding of the target sequence:\r\n\r\n```\r\ntarget_sequence = \"HuggingFace est une sociΓ©tΓ© basΓ©e Γ  New York et Γ  Paris.\"\r\ntarget_encoding = tokenizer(target_sequence, return_tensors=\"pt\")\r\nlabels = target_encoding.input_ids\r\n```\r\n\r\nNow we have everything we need to do a forward pass and obtain a loss, like so:\r\n\r\n```\r\nfrom transformers import BartForConditionalGeneration\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-large\")\r\n\r\noutputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)\r\nloss = outputs.loss\r\nprint(loss.item())\r\n```\r\n\r\nWe can also check how these labels look like in text, by decoding them:\r\n\r\n```\r\nfor id in labels.squeeze().tolist():\r\n print(id, tokenizer.decode([id]))\r\n\r\n# this prints:\r\n0 <s>\r\n40710 Hug\r\n3923 ging\r\n34892 Face\r\n3304 est\r\n12515 une\r\n17380 soc\r\n118 i\r\n10221 Γ©t\r\n1140 Γ©\r\n11909 bas\r\n9703 Γ©e\r\n6534 Γ \r\n188 New\r\n469 York\r\n4400 et\r\n6534 Γ \r\n2201 Paris\r\n4 .\r\n2 </s>\r\n```\r\n\r\nWhat internally happens, is that first the encoded input sequence (i.e. the `input_ids` and `attention_mask`) are forwarded through the encoder of BART. The encoder will output a tensor of shape `(batch_size, sequence_length, hidden_size)`. In this case, we only have a single example which means that the batch size is 1, the sequence length (which is the number of tokens) is equal to `len(input_ids) = len(attention_mask)`, which in this case is 15 tokens, and the hidden size of BART-large is 1024 (BART-base would be 768). So the encoder will output a tensor of shape (1, 15, 1024). This tensor is often refered to as the \"last hidden states\", as these are the hidden representations for all tokens from the last layer of the encoder. \r\n\r\nNext, we have the decoder. The decoder needs to spit out the desired `input_ids` of the target sequence (in other words, the `labels`). The decoder of BART (and T5) is autoregressive, which is a fancy term to say \"from left to right\". So what happens is, we provide the first `decoder_input_id` to the decoder (which is the `decoder_start_token_id`, which for BART is equal to the \\</s> token). Then, the decoder outputs a probability over all possible `input_ids`, and this is compared to the first label (which will be the first input_id of the labels we created, i.e. the \\<s> token). Next, we provide the first two decoder input ids, i.e. \\</s> \\<s> to the decoder, and then it needs to spit out the first two labels, i.e. \\<s> Hug. Next, we provide the first three decoder input ids, i.e. \\</s> \\<s> Hug to the decoder, and then it needs to spit out the first three labels, i.e. \\<s> Hug ging, and so on. \r\n\r\nNOTE: this was just a single example. In practice, deep learing models are always trained in batches. As the input_ids and labels have different lengths for each example in the batch, we use padding and truncation to make sure they are all of the same length. One typically defines a `max_source_length` and `max_target_length` as hyperparameters, and then prepares all data like so:\r\n\r\n```\r\n# encode the inputs\r\nencoding = tokenizer(text, padding=\"max_length\", max_length=max_source_length, truncation=True, return_tensors=\"pt\")\r\ninput_ids, attention_mask = encoding.input_ids, encoding.attention_mask\r\n\r\n# encode the labels\r\ntarget_encoding = tokenizer(text, padding=\"max_length\", max_length=max_target_length, truncation=True, return_tensors=\"pt\")\r\nlabels = target_encoding.input_ids\r\n```\r\nAn additional thing to keep in mind is to replace padding tokens of the labels by -100, such that they are not taken into account by the loss function. For that, I use the following code (assuming the `labels` of a batch are still lists rather than PyTorch tensors):\r\n\r\n```\r\nlabels_with_ignore_index = []\r\nfor labels_example in labels:\r\n labels_example = [label if label != tokenizer.pad_token_id else -100 for label in labels_example]\r\n labels_with_ignore_index.append(labels_example)\r\n```\r\n\r\nRegarding your third question, yes, during inference one should use `model.generate` instead of `model.forward`. Check out [this blog post](https://huggingface.co/blog/how-to-generate) to know all the details about generating after training your model.", "I really appreciate with your help.\r\nAbout the last question, I think I can get the desired last decoder hidden states based on output scores.\r\n\r\nThank you so much and have a nice day.", "@NielsRogge \r\n\r\nHi Niels, \r\n\r\nI'm new to NLP and was reading this to try and further understand the BART model for seq2seq summarization. As you said above, the encoder outputs a tensor of the shape `(batch_size, sequence_length, hidden_size)` , and the decoder then goes and generate probabilities over all the `input_ids`. The decoder now outputs the softmax result, in the shape of `(batch_size, sequence_length, hidden_size)`. However, as I'm trying provide summarization, I want to convert this result into text. I understand greedy and beam searching, but am unsure of how to get to the generated text from the decoder's `last_hidden_state`. \r\n\r\nAny help would be much appreciated. Thanks in advance. \r\n ", "The decoder of `BartModel` outputs a tensor of shape `(batch_size, sequence_length, hidden_size)`, indeed (no softmax is involved). Next, the language modeling head that `BartForConditionalGeneration` places on top of the decoder will transform this into a tensor (usually called logits) of shape `(batch_size, sequence_length, vocab_size)`. \r\n\r\nTo know which tokens BART predicts, you can apply an argmax on the last dimension, i.e. `logits.argmax(dim=-1)`. This will give you a new tensor of shape `(batch_size, sequence_length)`, containing the token IDs as predicted by BART. \r\n\r\nHowever, at inference time, it's recommended to use the `generate()` method, which will autoregressively (i.e. from left to right) predict token ids. There are several decoding strategies available, such as greedy decoding, beam search, top-k sampling, etc. Let's take an example:\r\n\r\n```\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration\r\n\r\ntokenizer = BartTokenizer.from_pretrained(\"sshleifer/distilbart-cnn-12-6\")\r\nmodel = BartForConditionalGeneration.from_pretrained(\"sshleifer/distilbart-cnn-12-6\")\r\n\r\ntext = \"\"\"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.\"\"\"\r\n\r\n# prepare text for model\r\nencoding = tokenizer(text, return_tensors=\"pt\")\r\n\r\n# generate IDs autoregressively\r\npredicted_ids = model(**encoding)\r\n\r\n# decode IDs back to text\r\ngenerated_text = tokenizer.batch_decode(predicted_ids)[0]\r\nprint(generated_text)\r\n```", "@NielsRogge Yes that's what I used at the start. The problem lies in the fact that I want to convert my model to onnx, where the `generate` function is not available. I guess I will have to write my own greedy decoding method. ", "We've actually just added an [example](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/translation) of converting BART to ONNX, including beam search generation. However, the example doesn't include a README right now, it will be added soon. " ]
1,629
1,635
1,629
NONE
null
Hi, I want to conduct a Grammatical Error Correction task with BART, which takes corrupted sentences as inputs and make corrected answers as outputs. The model I'm using is `BartForConditionalGeneration`. I want to ask several things. 1. What is the difference between `decoder_input_ids` and `labels`? [The doc](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration) says, when handling seq2seq problems such as translation or summarization, `decoder_input_ids` should be given, otherwise the model just put the shifted encoder input into the decoder, which is not the desired process. However, there is another argument `labels` and I think I should give the answer sequence as `labels` to get the loss. And according to [here](https://huggingface.co/transformers/glossary.html#decoder-input-ids), I assume that BART takes the answer outputs as `labels`. Then what is `decoder_input_ids`? Is this not necessary when using `model.forward` function to train the model? 2. Should I pad the decoder inputs with `-100`? According to the doc, to make the loss function ignore the unwanted locations, it should be set to `-100`. But I want to make it ignore the pad token. Should I just set the pad token as `-100` or is there any way to make the loss function ignore the value I set? 3. Unlike the training, inference does not require the answers. However, like I mentioned above, if the model is not given `decoder_input_ids` or `labels`, then the model put the shifted inputs into the decoder. But this is not what we want. The decoder should start only with the start token at first. Then is it right to use `model.generate` not `model.forward` function without any decoder inputs given? I think I should use `model.generate` when inferencing but I want to make sure that `model.generate(input_ids=input_ids)` works as I described, which gives only the start token in the beginning. In fact, like the image below, it seems the input ids might be just copied judging by the values. So I'm worried if the decoder just took the input ids. ![image](https://user-images.githubusercontent.com/16731987/130325911-4c911ec7-6f5f-49e6-9c3c-802509163c56.png) 4. According to [this](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration), BART was pretrained to use EOS token as the start token of the decoder. I don't know why it should be, but anyway like the above image, we can see that all outputs start with both EOS and BOS token. Then may I assume that the model put both EOS and BOS token as the starting sign? 5. The last question is about beam search. I want to get the last hidden state from the decoder to conduct multi-task learning combined with LM and sentence classification. But when using the beam search, the shape of one tensor from `decoder_hidden_states` becomes `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`. Then how can we know which one is from the best result? Thank you for reading this long questions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13213/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13212
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13212/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13212/comments
https://api.github.com/repos/huggingface/transformers/issues/13212/events
https://github.com/huggingface/transformers/pull/13212
976,130,769
MDExOlB1bGxSZXF1ZXN0NzE3MTQ4MzE5
13,212
fix: typo spelling grammar
{ "login": "slowy07", "id": 40540262, "node_id": "MDQ6VXNlcjQwNTQwMjYy", "avatar_url": "https://avatars.githubusercontent.com/u/40540262?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slowy07", "html_url": "https://github.com/slowy07", "followers_url": "https://api.github.com/users/slowy07/followers", "following_url": "https://api.github.com/users/slowy07/following{/other_user}", "gists_url": "https://api.github.com/users/slowy07/gists{/gist_id}", "starred_url": "https://api.github.com/users/slowy07/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slowy07/subscriptions", "organizations_url": "https://api.github.com/users/slowy07/orgs", "repos_url": "https://api.github.com/users/slowy07/repos", "events_url": "https://api.github.com/users/slowy07/events{/privacy}", "received_events_url": "https://api.github.com/users/slowy07/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you run `make fixup` at the root of your `transformers` clone to fix the code quality issues? Thank you!", "thanks sir @LysandreJik , how can i do ??", "I guess you have clones `transformers` the following way:\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\n```\r\nYou can `cd` in the directory:\r\n```\r\ncd transformers\r\n```\r\ninstall the code quality tools:\r\n```\r\npip install -e \".[quality]\"\r\n```\r\nand run the command:\r\n```\r\nmake fixup\r\n```\r\n\r\nIf there's an error it can solve by itself, it will do so; if an error cannot be solved programmatically, it will tell you so :)\r\n\r\nAfterwards, you can commit the changes and push to your branch, the code quality issues should be fixed!", "make fixup and push on commit [c233be1](https://github.com/huggingface/transformers/pull/13212/commits/c233be17db7ecce48b54f1eb71070a1dee39d342)", "thank you sir @sgugger " ]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? fix typo spelling grammar, and replace to correct words with reference from [merriam webster](merriam-webster.com) and [wiktionary](https://www.wiktionary.org/) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> <!-- Fixes # (issue) --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13212/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13212", "html_url": "https://github.com/huggingface/transformers/pull/13212", "diff_url": "https://github.com/huggingface/transformers/pull/13212.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13212.patch", "merged_at": 1630325355000 }
https://api.github.com/repos/huggingface/transformers/issues/13211
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13211/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13211/comments
https://api.github.com/repos/huggingface/transformers/issues/13211/events
https://github.com/huggingface/transformers/pull/13211
976,035,742
MDExOlB1bGxSZXF1ZXN0NzE3MDg2NjUy
13,211
correcting group beam search function output score bug
{ "login": "sourabh112", "id": 66176305, "node_id": "MDQ6VXNlcjY2MTc2MzA1", "avatar_url": "https://avatars.githubusercontent.com/u/66176305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sourabh112", "html_url": "https://github.com/sourabh112", "followers_url": "https://api.github.com/users/sourabh112/followers", "following_url": "https://api.github.com/users/sourabh112/following{/other_user}", "gists_url": "https://api.github.com/users/sourabh112/gists{/gist_id}", "starred_url": "https://api.github.com/users/sourabh112/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sourabh112/subscriptions", "organizations_url": "https://api.github.com/users/sourabh112/orgs", "repos_url": "https://api.github.com/users/sourabh112/repos", "events_url": "https://api.github.com/users/sourabh112/events{/privacy}", "received_events_url": "https://api.github.com/users/sourabh112/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
#13177 This PR Fixes [#13177](https://github.com/huggingface/transformers/issues/13177) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13211/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13211/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13211", "html_url": "https://github.com/huggingface/transformers/pull/13211", "diff_url": "https://github.com/huggingface/transformers/pull/13211.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13211.patch", "merged_at": 1629718044000 }
https://api.github.com/repos/huggingface/transformers/issues/13210
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13210/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13210/comments
https://api.github.com/repos/huggingface/transformers/issues/13210/events
https://github.com/huggingface/transformers/pull/13210
976,020,991
MDExOlB1bGxSZXF1ZXN0NzE3MDc2NzY1
13,210
Add support for XLM-R XL and XXL models
{ "login": "Soonhwan-Kwon", "id": 7395166, "node_id": "MDQ6VXNlcjczOTUxNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/7395166?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Soonhwan-Kwon", "html_url": "https://github.com/Soonhwan-Kwon", "followers_url": "https://api.github.com/users/Soonhwan-Kwon/followers", "following_url": "https://api.github.com/users/Soonhwan-Kwon/following{/other_user}", "gists_url": "https://api.github.com/users/Soonhwan-Kwon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Soonhwan-Kwon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Soonhwan-Kwon/subscriptions", "organizations_url": "https://api.github.com/users/Soonhwan-Kwon/orgs", "repos_url": "https://api.github.com/users/Soonhwan-Kwon/repos", "events_url": "https://api.github.com/users/Soonhwan-Kwon/events{/privacy}", "received_events_url": "https://api.github.com/users/Soonhwan-Kwon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Soonhwan-Kwon ,\r\n\r\nsorry for the late reply! I discussed this topic with @patrickvonplaten a while ago and we came to the conclusion that it would be better to have a new model/class name for it, such as `XLMRobertaExtraLarge` to avoid these `if self.normalize_before` switches.\r\n\r\nI've also tested the model implementation on a GLUE task, but the result was not very good. The model is so large, that it was impossible for me to test it on a GPU - even with batch size 1. Then I did some DeepSpeed tests, but on my V100 I would have to wait more than 3 days for the smallest GLUE task - and the final result was not performing well :thinking: ", "@stefan-it thank you for the reply, and I have A100 80gb machine if you need any cross check.", "@Soonhwan-Kwon @stefan-it Can you share your Deepspeed configuration for loading the XLMR-xl? I'm getting Nan as the loss from deepspeed after using your code changes for the conversion. @Soonhwan-Kwon Do you have a plan to create a standalone file for XLMRobertaExtraLarge? The reason is that you current file change breaks the conversion for the large and base model.", "> @Soonhwan-Kwon @stefan-it Can you share your Deepspeed configuration for loading the XLMR-xl? I'm getting Nan as the loss from deepspeed after using your code changes for the conversion. @Soonhwan-Kwon Do you have a plan to create a standalone file for XLMRobertaExtraLarge? The reason is that you current file change breaks the conversion for the large and base model.\r\n\r\nMaybe I could paste my fine-tuning script by loading the XLM-Roberta-XLarge model, which is converted from @Soonhwan-Kwon 's script. You could run the script and have a double check with it. \r\n\r\n```bash\r\ndeepspeed --num_gpus=8 run_xnli.py --model_name_or_path /mnt/xlm-roberta-xlarge \\\r\n --deepspeed ds_config_zero3.json \\\r\n --language zh \\\r\n --train_language en \\\r\n --do_predict \\\r\n --max_seq_length 128 \\\r\n --per_device_train_batch_size 4 \\\r\n --learning_rate 2e-6 \\\r\n --logging_steps 100 \\\r\n --eval_steps 100 \\\r\n --save_steps 5000 \\\r\n --num_train_epochs 5 \\\r\n --output_dir /mnt/output_xlmr \\\r\n --cache_dir /mnt/cache \\\r\n --fp16 \\\r\n --overwrite_output_dir \\\r\n --evaluation_strategy \"steps\" \\\r\n --dataloader_num_workers 8 \\\r\n --use_fast_tokenizer False \r\n```" ]
1,629
1,632
1,632
CONTRIBUTOR
null
This PR adds support for the newly released XL and XXL models for XLM-R. These models are described in the "Larger-Scale Transformers for Multilingual Masked Language Modeling" paper. I compared fairseq and transformers side by side, and managed output same. torch.Size([1, 10, 250880]) torch.Size([1, 10, 250880]) max_absolute_diff = 0.00022125244140614 Do both models outut the same tensors? πŸ”₯ Since fairseq roberta to transformers conversion was made a long time ago, transformers architecture differs far from fairseq which originally started from, and it makes quite confusion to write right code. I synced transformers code to allow fairseq model structure. And the original PR https://github.com/huggingface/transformers/pull/12082#issue-665786049 was closed by its author @stefan-it and the PR(https://github.com/stefan-it/transformers/pull/1) I pushed for his repo about 40 days ago but got no response, so I opened the new PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13210/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13210/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13210", "html_url": "https://github.com/huggingface/transformers/pull/13210", "diff_url": "https://github.com/huggingface/transformers/pull/13210.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13210.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13209
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13209/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13209/comments
https://api.github.com/repos/huggingface/transformers/issues/13209/events
https://github.com/huggingface/transformers/pull/13209
976,010,691
MDExOlB1bGxSZXF1ZXN0NzE3MDcwMDM4
13,209
fix `AutoModel.from_pretrained(..., torch_dtype=...)`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note, I first tried a simple monkeypatching method, but it doesn't work with C extensions, which `torch.dtype` is:\r\n\r\n```\r\n if config.torch_dtype is not None:\r\n # in v5 convert str to torch.dtype\r\n import torch\r\n if not hasattr(torch.dtype, \"to_json_string\"):\r\n import builtins\r\n #torch.dtype.to_json_string = builtins.str\r\n setattr(torch.dtype, \"to_json_string\", builtins.str)\r\n```\r\ngot:\r\n```\r\nsetattr(torch.dtype, \"to_json_string\", builtins.str)\r\nTypeError: can't set attributes of built-in/extension type 'torch.dtype'\r\n```\r\n" ]
1,629
1,629
1,629
CONTRIBUTOR
null
This PR fixes one of the 2 issues reported in https://github.com/huggingface/transformers/issues/13076 ``` python -c "import torch; from transformers import AutoModel; AutoModel.from_pretrained('sshleifer/tiny-gpt2', torch_dtype=torch.float16)" 2021-08-20 18:45:07.802651: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 382, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/configuration_auto.py", line 511, in from_pretrained return config_class.from_dict(config_dict, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 581, in from_dict logger.info(f"Model config {config}") File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 613, in __repr__ return f"{self.__class__.__name__} {self.to_json_string()}" File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/configuration_utils.py", line 677, in to_json_string return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/__init__.py", line 234, in dumps return cls( File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 201, in encode chunks = list(chunks) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 438, in _iterencode o = _default(o) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type dtype is not JSON serializable ``` Additionally, it corrects the config object to convert the short "float32" string into `torch.float32` at object creation time. Note, I had a to change `from_dict` a bit to preserve `torch_dtype` arg in `AutoModel.from_pretrained(..., torch_dtype=...), as without this change `from_pretrained` was ignoring this argument. To remind, the issue is that we decided to store `torch_dtype` in the config object, but ignore it for now at load time. Which this PR also documents. Of course, tests added. Thank you. Fixes: https://github.com/huggingface/transformers/issues/13076 (note: 2 separate issues were reported there but it looks like only this is the real issue, so linking to close it with this PR) @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13209", "html_url": "https://github.com/huggingface/transformers/pull/13209", "diff_url": "https://github.com/huggingface/transformers/pull/13209.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13209.patch", "merged_at": 1629798222000 }
https://api.github.com/repos/huggingface/transformers/issues/13208
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13208/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13208/comments
https://api.github.com/repos/huggingface/transformers/issues/13208/events
https://github.com/huggingface/transformers/issues/13208
976,006,387
MDU6SXNzdWU5NzYwMDYzODc=
13,208
Loading of a model takes much RAM, passing to CUDA doesn’t free RAM
{ "login": "Artyrm", "id": 21180686, "node_id": "MDQ6VXNlcjIxMTgwNjg2", "avatar_url": "https://avatars.githubusercontent.com/u/21180686?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Artyrm", "html_url": "https://github.com/Artyrm", "followers_url": "https://api.github.com/users/Artyrm/followers", "following_url": "https://api.github.com/users/Artyrm/following{/other_user}", "gists_url": "https://api.github.com/users/Artyrm/gists{/gist_id}", "starred_url": "https://api.github.com/users/Artyrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Artyrm/subscriptions", "organizations_url": "https://api.github.com/users/Artyrm/orgs", "repos_url": "https://api.github.com/users/Artyrm/repos", "events_url": "https://api.github.com/users/Artyrm/events{/privacy}", "received_events_url": "https://api.github.com/users/Artyrm/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey @Artyrm,\r\n\r\nIt's quite difficult for us to reproduce the error - could you post a link to a google colab where we can rerun the code? Also do you see the same behavior locally or is it just on google colab?", "Hi, @patrickvonplaten.\r\nI have provided the link in my post, at the beggining of \"reproduce\" section.\r\n\r\nAnd yes, I can see it locally too, though I have only 2GB local VRAM, so I have to use small models, and memory waste is less obvious.", "Related I think: https://github.com/huggingface/transformers/pull/12106#discussion_r649876604", "@patil-suraj - we wanted to change GPT-Neo to use a local attention mask instead of the local attention layers as it was shown to be faster and less memory intensive no? Should we tackle that again?", "Also related: https://github.com/huggingface/transformers/pull/11736", "Need to say, that I have same problem locally (in less scale, since much less VRAM available) with other models, for example `sberbank-ai/rugpt3medium_based_on_gpt2` Although it is also \"GPT-3-like\" model.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "That's a pity.\r\nI hoped someone would have some ideas about it.", "@patil-suraj - did you look into this by any chance? ", "Similar to unresolved issue https://github.com/huggingface/transformers/issues/13624", "I tried some experiments, and it seems it's related to PyTorch rather than Transformers model.\r\nIt seems that when a model is moved to GPU, all CPU RAM is not immediately freed, as you could see in this [colab](https://colab.research.google.com/drive/1FvUtyCXFfx1cMexO24IvXXrkLPTt_Ok5?usp=sharing), but you could still use the RAM to create other objects, and it'll then free the memory or you could manually call `gc.collect`.\r\n\r\nAlso note that, `py.memory_info()[0]` gives total memory used by the process and not the current memory in use. We could use the `psutil.virtual_memory().available` to get the available RAM. I've used it in the colab above so you could see the difference.\r\n\r\nAlso gently pinging @stas00 who might be able to shed some light here :) \r\n ", "It's most likely a python issue and not torch's - this is because `gc.collect()` is a scheduled event and doesn't always run when a large object is freed.\r\n\r\nYou can read more about it here: https://docs.python.org/3/library/gc.html\r\n\r\nYou can experiment with setting a lower threshold https://docs.python.org/3/library/gc.html#gc.set_threshold\r\n\r\nI don't think there is any harm in `transformers` calling `gc.collect` immediately after switching the model to gpu - it'll be run anyway sooner or later, and thus it's not like it'll be introducing a performance hit at that particular point. \r\n\r\nWrt debug/tracing memory usage in notebooks I highly recommend using https://github.com/stas00/ipyexperiments since it prints out all that memory usage automatically for you after each cell is run, so it's much easier to run. if using on colab there are some nuances to handle:\r\nhttps://github.com/stas00/ipyexperiments#google-colab\r\n\r\nHmm, but actually `ipyexperiments` calls `gc.collect()` by itself to measure things correctly, so it's going to hide this issue. So probably scratch that idea in this particular context.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,636
1,636
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyTorch version (GPU?): 1.7.0+cu110 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - benchmarks: @patrickvonplaten - pipelines: @LysandreJik ## Information Model I am using: EleutherAI/gpt-neo-1.3B The problem arises when using: * my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: [Google Colaboratory](https://colab.research.google.com/drive/1qptTsxuRvxnTq2FI39a9p8VSquH5Qafl?usp=sharing) notebook You will need β€œLarge memory” instance, since while transferring to CUDA it even overshoots 13GB RAM limit. I use Torch 1.7.0+cu110 since instance has CUDA 11.2. But with the default 1.9.0+cu102 it is more or less the same. I’m trying to finetune 1.3B model. And so I search for the way to optimize RAM usage (to be able to use cpu_offload with deep_speed). I noted that after load of a model it takes much RAM. ![image](https://user-images.githubusercontent.com/21180686/130306142-402e8aec-a7d9-48ca-bbbd-6b6189294c77.png) After model is loaded: 11.51 GB total memory used 0.0 GB used by torch objects on GPU 2 MiB total mem used on GPU And when I move it to GPU it a) takes only 5GB in VRAM (perhaps another 1.3GB is taken by Torch). b) doesn’t free any RAM, even takes some 2.5GB more. So the problem I see: a) Model occupates much more space in RAM then in VRAM. b) It doesn’t free RAM upon moving to CUDA. Python garbage collector doesn’t help also. Any thoughts on this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13208/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13207/comments
https://api.github.com/repos/huggingface/transformers/issues/13207/events
https://github.com/huggingface/transformers/pull/13207
975,944,473
MDExOlB1bGxSZXF1ZXN0NzE3MDE3MzM4
13,207
Support for Training with BF16
{ "login": "JamesDeAntonis", "id": 33379057, "node_id": "MDQ6VXNlcjMzMzc5MDU3", "avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JamesDeAntonis", "html_url": "https://github.com/JamesDeAntonis", "followers_url": "https://api.github.com/users/JamesDeAntonis/followers", "following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}", "gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions", "organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs", "repos_url": "https://api.github.com/users/JamesDeAntonis/repos", "events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}", "received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "ok, pt-nightly installed so that I could test the new functionality.\r\n\r\nso I added:\r\n```\r\n- `require_torch_bf16`\r\n- `is_torch_bf16_available`\r\n- super basic test that validates that `--bf16` doesn't fail\r\n- placeholder for the future bf16 full eval test\r\n```\r\n\r\nSo now the tests should be expanded to actually validate that bf16 is happening, with some number checking - e.g. we could check that the numbers are indeed bf16.\r\n", "I'm not observing as much of a memory improvement as I expected.\r\n\r\nMemory improvements I'm seeing are 0-15%, whereas I expected around 40% (per derrick's observation [here](https://github.com/huggingface/transformers/pull/10956#issuecomment-841431396)). Is there anywhere in master where autocast is disabled for some reason? For example, that was going to be the case [here](https://github.com/huggingface/transformers/pull/10956/files#diff-ebaff1224cad711fd3cefb771ce17d1392ae2dfc7f74dc7da9dd014d7642a344R308), but that change is not currently in master.\r\n\r\nThe two questions I just commented are part of my digging into whether there is a bug somewhere.\r\n\r\nEDIT: I found it interesting that `fp16` was giving similarly lackluster gains as `bf16`. That suggests it's not a `bf16`-specific issue", "I saw some recent comments on the torch slack that suggestion that bf16 hasn't quite been figured out performance-wise and can actually be slower depending on the hardware. One issue being cuDNN has no `bfloat16` at the moment, the other issue is that many bf16 kernels are simply not there, so it falls back to some slower functionality.\r\n\r\nMay I suggest to ask this question on https://discuss.pytorch.org/ and hopefully some knowledgeable devs with experience could give us a definitive answer. Please share the link if you do.\r\n\r\nI think on our side the main priority for providing bf16 support is to overcome the overflow issue in models pretrained in mixed bf16, performance being secondary. But of course, it'd be great to actually benefit from the new Ampere cards which have a lot of power but almost a year passed and we still can't quite harness that power.\r\n\r\nBTW, which card are you testing it with?", ">BTW, which card are you testing it with?\r\n\r\nRTX A6000. I'm pretty sure it's not related to this pr, per [this](https://discuss.pytorch.org/t/amp-autocast-not-faster-than-fp32/111757/11?u=jadeantonis)", "Thank you for posting there, @JamesDeAntonis. Let's see if Piotr has some encouraging feedback. \r\n\r\nOtherwise the whole thing is very veiled at the moment as nobody wrote any definitive answers.", "Also fyi, I updated `s/fast_dtype/dtype/` as it changed in nightly. But the current nightly has a broken `is_bf16_supported()` function, so it won't work - I reported it - hope should be fixed in a day or two.\r\n\r\nSo best don't update your nightly just yet.", "Quoting from: https://github.com/pytorch/pytorch/issues/57806#issuecomment-834697571\r\n> Concerning the lousy \"speedup\" with amp on A100, first of all I'd expect less of a relative difference because A100 should use TF32 tensor cores by default for FP32 (non-amp) runs, which is 2X less throughput than FP16 for matmuls on paper, but much faster than not using tensor cores at all. This closes the performance gap to amp, so I do expect the FP32 vs amp difference to be more modest on Ampere. It's possible that with FP32 backed by TF32 library math, ops that benefit most from tensor cores (ie matmuls, convs) have been accelerated enough that the network is mainly bound by CPU overhead or by ops Amp doesn't affect as much, so turning on Amp doesn't squeeze much more blood from the stone.\r\n\r\nSo if that is so, then you're not seeing an improvement from amp/bf16 because behind the scenes it already uses tf32.\r\n\r\nWe definitely need some more definitive guides as currently we can only collect such comments shared here and there and no proper document that covers all the grounds.", ">So if that is so, then you're not seeing an improvement from amp/bf16 because behind the scenes it already uses tf32.\r\n\r\nInteresting, but what about the below snippet from [here](https://moocaholic.medium.com/fp64-fp32-fp16-bfloat16-tf32-and-other-members-of-the-zoo-a1ca7897d407)\r\n\r\n>For comparison, A100’s peak performances are:\r\n>FP32 without tensor core: 19.5 TFLOPS\r\n>TF32 tensor core: 156 TFLOPS (so, using TF32 in place of FP32 can give you an easy speed improvement).\r\n>FP16/BF16 tensor core: 312 TFLOPS (so, thoughtfully designed switch to FP16/BF16 can give you more speed improvements, but the costs are higher).\r\n\r\nIt looks like the gains are still \"immodest\" in the presence of an fp16/bf16 tensor core. Is the point that 156 TFLOPS is already so fast that further improvements are not worth the costs of making the switch from fp32 to fp/bf16? Because otherwise, it should still be at least somewhat faster, not slower.", "I don't yet have the understanding of this domain to comment intelligently. My feeling is that the devil is in the detail and will heavily depend on each model's components. And it's best to discuss this subject matter on the pytorch side where the experts who understand this domain are.\r\n\r\nUsing my limited understanding my answer to your question would be:\r\nMost likely if you were to take a single tensor and run it through an OP that natively supports TF32 and BF16 you should see the numbers you quoted. But since there is a lot of back and forth casting happening around amp and not all ops support these native functions, the overall results with hundreds of different ops combined in the single fwd/bwd pass of a model the results are quite different.\r\n\r\nIn lieu of having an expert advice the other approach is to run your code through a native torch profiler, watch which ops get invoked on what dtypes, look them up whether they support the new tensor cores, etc.\r\n", "Updates:\r\n- pt-nightly from 09.01 can be used with this PR (dates before that had a bug)\r\n- merged the 2 bf16 util functions as suggested by Sylvain\r\n\r\n@JamesDeAntonis, you now have a green light to address the proposed changes after updating your install of pt-nightly, then when it's done we will update/complete the tests and then we can merge this.\r\n\r\nIf something is unclear please let us know. If you're not sure how to deal with deprecation, then you can just complete the new API and will add the deprecation afterwards. But you can look up at other cli args deprecations done in `training_args.py`.\r\n\r\nThank you!", "Update on this: my teammate is investigating the slowdown and doing some tests on both inference and training. He should have some results pretty soon that we can work with in a resumed discussion, including some new commits. Thanks for all your help so far!", "Awesome, thank you for the update, @JamesDeAntonis!", "Hi @stas00, what do you think of these results and justifications? All numbers from `t5-3b` on A100 cards\r\n\r\n```\r\nbf16 train:\r\nINFO - Finished in 140.42053532600403s\r\nINFO - Peak memory usage: 68.577 GB\r\nfp32 train:\r\nINFO - Finished in 131.2668480873108s\r\nINFO - Peak memory usage: 71.861 GB\r\nbf16 generate 32 tokens:\r\nINFO - Finished in 1.271615743637085s\r\nINFO - Peak memory usage: 6.927 GB\r\nfp32 generate 32 tokens:\r\nINFO - Finished in 1.2117650508880615s\r\nINFO - Peak memory usage: 13.057 GB\r\n```\r\n\r\n## Some justifications:\r\n\r\nMemory:\r\n* 32-token generation: 47% improvement because, in the fp32 case, even though computations are done in 19-bit, all the 32-bit weights are stored in memory. in the bf16 case, memory is only allocated for 16-bit weights\r\n* Training: 5% improvement because the only difference between fp32 and bf16 is that fp32 does default auto-casting to tf32/bf19 while bf16 does auto-casting to bf16. so, the gains come from the 16% bit reduction during computation\r\n\r\nTime:\r\n* 5-6% time increase both times when using bf16. I don't understand why this is happening", "@stas00 one other detail is that we're having trouble training to the same loss as regular precision (1.5 for bf16 amp vs 1.2 for full precision). Furthermore, when we generate with the 1.5-loss model, we get gibberish regardless of whether generating at full precision or half. This leads me to question whether our branch is completely correct.\r\n\r\nWith this in mind, I don't understand why we wouldn't scale when training with bf16. Rationale: if fp32 is -1000.0 to 1000.0 (precise to tenth's place), fp16 is like the integers -500 to 500 and bf16 is like the even integers -1000 to 1000. To avoid underflow, fp16 amp convention in this analogy is to scale by a factor of 10 to make fp16's most precise unit (integer) analogous to fp32's most precise unit (tenth's place). By this logic, bf16 should be scaled by 20 to have the same effect.\r\n\r\nDo you agree with that logic, or do you understand where I go wrong?", "> Hi @stas00, what do you think of these results and justifications? All numbers from `t5-3b` on A100 cards\r\n> [...]\r\n> Memory:\r\n> \r\n> * 32-token generation: 47% improvement because, in the fp32 case, even though computations are done in 19-bit, all the 32-bit weights are stored in memory. in the bf16 case, memory is only allocated for 16-bit weights\r\n> \r\n> * Training: 5% improvement because the only difference between fp32 and bf16 is that fp32 does default auto-casting to tf32/bf19 while bf16 does auto-casting to bf16. so, the gains come from the 16% bit reduction during computation\r\n\r\nShouldn't bf16 be 2x faster than tf32 according to nvidia GPU specs? At least for some ops?\r\n\r\nI don't suppose we have a way to tell pytorch to tell cuda not to cast to tf32 - so that we could compare bf16 to the actual fp32.\r\n\r\nI'd say post all these benchmarks in that pytorch thread, since that's the bf16 experts are. And ask whether what you got makes sense and if it doesn't why and how can we/they fix that.", "> With this in mind, I don't understand why we wouldn't scale when training with bf16.\r\n\r\nExcellent catch! It's because we forgot to do that! It's currently done for fp16 only:\r\n\r\nhttps://github.com/huggingface/transformers/blob/5e3b4a70d3d17f2482d50aea230f7ed42b3a8fd0/src/transformers/trainer.py#L436-L444\r\n\r\nwhile at it perhaps check if there are other `if args.fp16` checks that need to have ` or args.bf16` added.", ">Excellent catch! It's because we forgot to do that! It's currently done for fp16 only:\r\n\r\nOk, and I think the default of `2 ** 16` would work for bf16, because precision is pooled up by 16 bits (by the way, I think `2 ** 16` is overkill for fp16 because precision for fp16 is only pooled up by only 13 bits [the other three bits are saved by decreasing range, ie the source of the original issue], but it doesn't really matter)", "Let's see if your latest code leads to a better outcome in your quality and speed benchmarks.", "πŸŽ‰πŸͺ„πŸ₯‡πŸ†πŸš€", "Unfortunately, it doesn't seem like scaling fixed the issue. The loss went down to the fp32 level (actually eclipsed it), but inference still gave gibberish", "Sorry to hear it didn't help, James.\r\n\r\nHere are some ideas to potentially proceed with:\r\n\r\nIs this something we could ask for the pytorch team to reproduce? \r\n\r\ni.e. ideally writing a few lines of code that they could reproduce the issue with?\r\n\r\nDo you know if the inference works ok, if you were to train in amp/bf16 but then doing inference in fp32? and if it has to do with amp/bf16 or full bf16?\r\n\r\nPerhaps something is wrong only in the inference stage?\r\n\r\nThe other or an additional approach could be to take a normally trained public model and to try to run inference on it in (1) amp/bf16 (2) full bf16 and comparing the outcome with the fp32 mode?", "Hi @JamesDeAntonis \r\nI am trying to train mt5-xxl-13B model with 8x40GB A100.\r\nI was wondering what is the condition for this PR. Is this ok to use it for training or any red flags?", "Hi James, I'm trying to fine-tune T5-3B on a single A100 GPU (40gb memory) and I tried this PR out of desperate search. It seems like a promising direction to use `bf16` as it's natively supported by pytorch. However, while `fp16` with `amp` didn't, this version of the code seems to give `CUDA_MEMORY_ERROR` even with batch size 1", "OK, since we have 2 half-baked PRs, \r\n\r\nhttps://github.com/huggingface/transformers/pull/13207\r\nhttps://github.com/huggingface/transformers/pull/14448\r\nI'm going to try to merge the 2 to keep the credits and start a new PR.\r\n\r\nIf you have something to push now is the time.", "@manuelciosici, FYI: we will deal with deepspeed in a separate PR https://github.com/huggingface/transformers/pull/14569 - in particular since the ZeRO3 support hasn't been merged yet and we always need a new release from deepspeed to be able to update our integration side.", "@sgugger, please kindly have a look. I merged 2 PRs and cleaned things up and added a single deprecation.\r\n\r\nI also reverted the earlier attempt to use a shared `--half_precision_full_eval` since it didn't make sense - `--fp16_full_eval` and `--bf16_full_eval` are similar but 2 different modes requiring different code. If we want a shared one then we have to additionally require either `--fp16` or `--bf16` and then adjust the logic accordingly. If you prefer that let me know.\r\n\r\nSince bf16 has a much larger dynamic range most of the fp16 workarounds of that type aren't needed. So I grep'ed for `if torch.float16` checks and I didn't see anything other 2 places. I'm sure I may have missed some, but it'll surely let itself known when we start using it.\r\n\r\nNote, I've updated the OP with the up-to-date list of changes, so please refer to it for an overview.\r\n\r\nSo I think we just need a couple of tests and if everybody is happy this is good to go. (tests added)\r\n\r\nThe CI failure is unrelated.", "OK, a few tests added.\r\n\r\n@JamesDeAntonis and @manuelciosici - please have a look - let me know if anything else is needed in your opinion. Thanks.", "@sgugger, would it help to document the bf16 API as experimental and a subject to change at a moment's notice? ", "Yes please!", "Thanks a lot for the review and the suggestions, @manuelciosici - all integrated, plus added a warning that this API is experimental, so if once we start using it we find that we could improve it we can." ]
1,629
1,638
1,638
CONTRIBUTOR
null
# What does this PR do? As seen in [this pr](https://github.com/huggingface/transformers/pull/10956), there is demand for bf16 compatibility in training of transformers models. The pytorch folks just added [this feature](https://github.com/pytorch/pytorch/pull/61002) to their master branch, so we are now able to work on adding it to this repo. This pr follows from [this issue](https://github.com/huggingface/transformers/issues/13170). Fixes https://github.com/huggingface/transformers/issues/13170 ------------------ (OP edited by @stas00) Also merged here and adapted changes proposed by @manuelciosici at https://github.com/huggingface/transformers/pull/14448 This PR: - adds helper utils: `require_torch_bf16` and `is_torch_bf16_available` - modifies `invert_attention_mask` and one `forward` in t5 to include bf16 mode switches HF Trainer: - adds `--bf16` and `--bf16_full_eval` modes - same as fp16 equivalents - renames and deprecates `--fp16_backend` and replaces it with `--half_precision_backend` - since we now have more than one half precision mode Tests: - adds `--bf16` and `--bf16_full_eval` tests @sgugger, @LysandreJik, Also tagging @patrickvonplaten, @patil-suraj since once this is merged you can start sending users that have problems with bf16 pre-trained models and have Amphere hardware to use this `--bf16` mode. Deepspeed bf16 support will follow soon.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13207/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13207", "html_url": "https://github.com/huggingface/transformers/pull/13207", "diff_url": "https://github.com/huggingface/transformers/pull/13207.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13207.patch", "merged_at": 1638324047000 }
https://api.github.com/repos/huggingface/transformers/issues/13206
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13206/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13206/comments
https://api.github.com/repos/huggingface/transformers/issues/13206/events
https://github.com/huggingface/transformers/issues/13206
975,722,590
MDU6SXNzdWU5NzU3MjI1OTA=
13,206
CausalLM vs HeadModel
{ "login": "StellaAthena", "id": 15899312, "node_id": "MDQ6VXNlcjE1ODk5MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StellaAthena", "html_url": "https://github.com/StellaAthena", "followers_url": "https://api.github.com/users/StellaAthena/followers", "following_url": "https://api.github.com/users/StellaAthena/following{/other_user}", "gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}", "starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions", "organizations_url": "https://api.github.com/users/StellaAthena/orgs", "repos_url": "https://api.github.com/users/StellaAthena/repos", "events_url": "https://api.github.com/users/StellaAthena/events{/privacy}", "received_events_url": "https://api.github.com/users/StellaAthena/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "They are exactly the same `LMHeadModel` was a badly chosen name in the beginning of the library - we are trying to have all causal language models called `...ForCausalLM` now", "> They are exactly the same `LMHeadModel` was a badly chosen name in the beginning of the library - we are trying to have all causal language models called `...ForCausalLM` now\r\n\r\nI see. FYI, there are already downstream users who are basing their codebases on `LMHeadModel` such as [Google's BIG-Bench](https://github.com/google/BIG-bench/blob/main/bigbench/models/huggingface_models.py). I worry that this disconnect will build significant technical debt if it is not resolved promptly.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
CONTRIBUTOR
null
@patrickvonplaten, @LysandreJik @sgugger GPT-Neo implements the class `GPTNeoForCausalLM` and GPT-2 implements the class `GPT2LMHeadModel`. These look like they're supposed to do roughly the same thing. What is the reasoning behind having different names? Do they have any functional differences (other than using different models obviously)?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13206/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13205
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13205/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13205/comments
https://api.github.com/repos/huggingface/transformers/issues/13205/events
https://github.com/huggingface/transformers/pull/13205
975,669,794
MDExOlB1bGxSZXF1ZXN0NzE2Nzg4MjUz
13,205
Fixes #12941 where use_auth_token not been set up early enough
{ "login": "bennimmo", "id": 1629121, "node_id": "MDQ6VXNlcjE2MjkxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1629121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bennimmo", "html_url": "https://github.com/bennimmo", "followers_url": "https://api.github.com/users/bennimmo/followers", "following_url": "https://api.github.com/users/bennimmo/following{/other_user}", "gists_url": "https://api.github.com/users/bennimmo/gists{/gist_id}", "starred_url": "https://api.github.com/users/bennimmo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bennimmo/subscriptions", "organizations_url": "https://api.github.com/users/bennimmo/orgs", "repos_url": "https://api.github.com/users/bennimmo/repos", "events_url": "https://api.github.com/users/bennimmo/events{/privacy}", "received_events_url": "https://api.github.com/users/bennimmo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12941 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13205/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13205/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13205", "html_url": "https://github.com/huggingface/transformers/pull/13205", "diff_url": "https://github.com/huggingface/transformers/pull/13205.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13205.patch", "merged_at": 1630336790000 }
https://api.github.com/repos/huggingface/transformers/issues/13204
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13204/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13204/comments
https://api.github.com/repos/huggingface/transformers/issues/13204/events
https://github.com/huggingface/transformers/issues/13204
975,605,834
MDU6SXNzdWU5NzU2MDU4MzQ=
13,204
[Optimization] AdaFactor not working on TPU but works on GPU.
{ "login": "prikmm", "id": 47216475, "node_id": "MDQ6VXNlcjQ3MjE2NDc1", "avatar_url": "https://avatars.githubusercontent.com/u/47216475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prikmm", "html_url": "https://github.com/prikmm", "followers_url": "https://api.github.com/users/prikmm/followers", "following_url": "https://api.github.com/users/prikmm/following{/other_user}", "gists_url": "https://api.github.com/users/prikmm/gists{/gist_id}", "starred_url": "https://api.github.com/users/prikmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prikmm/subscriptions", "organizations_url": "https://api.github.com/users/prikmm/orgs", "repos_url": "https://api.github.com/users/prikmm/repos", "events_url": "https://api.github.com/users/prikmm/events{/privacy}", "received_events_url": "https://api.github.com/users/prikmm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Gently pinging @sgugger here", "Hey @patrickvonplaten, can I get an update on this issue? \r\n\r\nThanks!", "Adding Adafactor in the Transformers library was a mistake, Transformers is a library for models, not optimizers.\r\nI don't think this will be addressed @prikmm so you should look for another implementation of this optimizer to use.", "@sgugger It worked. I was initializing the `optimizer` and `lr_scheduler` in global scope.\r\n```python\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(....)\r\nWRAPPED_MODEL = xmp.MpModelWrapper(model)\r\noptimizer = Adafactor(model.parameters(), scale_parameter=False, \r\n relative_step=False, warmup_init=False,\r\n lr=1e-3)\r\nlr_scheduler = get_linear_schedule_with_warmup(optimizer,\r\n num_training_steps=Config.total_train_steps,\r\n num_warmup_steps =Config.warmup_steps )\r\n\r\ndef _mp_fn():\r\n .....\r\n trainer = Trainer(....., optimizers=(optimizer, lr_scheduler))\r\n trainer.train()\r\n .....\r\nxmp.spawn(_mp_fn, start_methods=\"fork\")\r\n```\r\nWhen I initialized them inside `_mp_fn()`, everything worked fine.\r\n```python\r\ndef _mp_fn():\r\n ......\r\n optimizer, lr_scheduler = get_optim_lr(model)\r\n trainer = Trainer(....., optimizers=(optimizer, lr_scheduler))\r\n trainer.train()\r\n ......\r\n\r\nxmp.spawn(_mp_fn, start_method=\"fork\")\r\n```\r\nI think in method-1, the optimizer gets linked to model weights present in host memory. And when the optimizer gets copied to each TPU device. It will still be linked to model weights present in host memory (or to nothing), and the loss will update the model weights in host memory (or it won't, I have not been able to check that), and not the model present in each TPU device. \r\n\r\nWhereas , in method-2, since, the optimizer is defined in TPU device scope, and uses the model present there. It is able to update model weights using the loss of that device. I tried `AdamW` using both the methods, and found `AdamW` too doesn't work in method-1 but works in method-2.\r\n\r\nFor GPU, I perform:\r\n```python\r\nmodel = model.to(\"cuda\")\r\n```\r\nbefore initializing the optimizer. So, here the optimizer is linked to right model weights (present in GPU). Hence, everything worked while training on single GPU.\r\n\r\nGenerally, when using GPU, if two or more variables in use are not on the same device, it will throw an error. This is not the case with TPU, it throws no error, because of which it took such a long time to solve.\r\n\r\nThis is what I have theorised. If I am wrong, please let me know?\r\n\r\nI use a single GPU majority of the time, so pardon me for my lack of TPU knowledge (It's increasing everyday) :)", "Ah yes, you should always define your optimizer after transferring your model on the TPU when working on TPUs, because moving the model to the TPU actually creates new tensors for each parameter. So in your case 1, the optimizer was completely disconnected from the model." ]
1,629
1,630
1,630
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 / 4.10.0.dev0 - Platform: Kaggle / Colab - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0a0+git1a7c23c (Kaggle) / 1.9.0+cu102 (Colab) - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes, TPU ## Information Model I am using (Bert, XLNet ...): T5 The tasks I am working on is: * [x] an official GLUE/SQUaD task: (XSum) * [ ] my own task or dataset: (give details below) I am trying to finetune t5-small on XSum using `AdaFactor` and `get_linear_schedule_with_warmup`. I am able to do this when I use GPU but when using TPU, the model doesn't converge. The train loss varies but doesn't decrease and validation loss stays constant. Linear Schedule works properly, I saw my `comet_ml` graph, and `lr` was changing the way it should. It's like the loss is not modifying the weights at all. Code for initializing optimizer and lr_scheduler: ```python optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3) lr_scheduler = get_linear_schedule_with_warmup(optimizer, num_training_steps=Config.total_train_steps, num_warmup_steps =Config.warmup_steps ) ``` ## To reproduce The below given colabs are similar but I have provided GPU and TPU notebooks so that running and waiting for results is not needed: TPU: [Colab Link](https://colab.research.google.com/drive/1MAID8RhaLSevIyhhotUmxZAgjZj9IXR_?usp=sharing) GPU: [Colab Link](https://colab.research.google.com/drive/111i6_P7PTtpuQLMU26NBCx9qam-q7WSW?usp=sharing) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Training Loss should decrease and eventually converge (like it does on GPU) #### Edit: Provided links of two notebooks, for GPU and TPU each.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13204/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13203
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13203/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13203/comments
https://api.github.com/repos/huggingface/transformers/issues/13203/events
https://github.com/huggingface/transformers/issues/13203
975,586,025
MDU6SXNzdWU5NzU1ODYwMjU=
13,203
How do i get the CLS token from the model output?
{ "login": "mosh98", "id": 48658042, "node_id": "MDQ6VXNlcjQ4NjU4MDQy", "avatar_url": "https://avatars.githubusercontent.com/u/48658042?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mosh98", "html_url": "https://github.com/mosh98", "followers_url": "https://api.github.com/users/mosh98/followers", "following_url": "https://api.github.com/users/mosh98/following{/other_user}", "gists_url": "https://api.github.com/users/mosh98/gists{/gist_id}", "starred_url": "https://api.github.com/users/mosh98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mosh98/subscriptions", "organizations_url": "https://api.github.com/users/mosh98/orgs", "repos_url": "https://api.github.com/users/mosh98/repos", "events_url": "https://api.github.com/users/mosh98/events{/privacy}", "received_events_url": "https://api.github.com/users/mosh98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can get the final hidden state of the [CLS] token as follows:\r\n\r\n`cls_token_final_hidden_state = output.last_hidden_state[:,0,:]`\r\n\r\nThis is because the last hidden states are of shape (batch_size, sequence_length, hidden_size), and the [CLS] token is the first element across the sequence (also called time) dimension.", "Ah i see that makes sense.\r\n\r\nHuge thanks again Niels!" ]
1,629
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.0+cu102 (False)- Tensorflow version (GPU?): 2.6.0 (False)- Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - funnel: @sgugger - rag: @patrickvonplaten, @lhoestq Library: - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. Examples: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): roBerta The problem arises when using: * [X ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details below) My dataset is like a bunch of sentences with labels with them ranging from 0, 1, 2 , 3 ## To reproduce Steps to reproduce the behavior: 1. Looping throw each sentence 2. getting input Id's 3. trying to get the CLS vector for each example My questions are; Am i doing it right? How do i know i am getting the CLS token? ``` for idx, row in df.iterrows(): #Looping through each Row input_ids = torch.tensor(tokenizer.encode(row.Sentence)).unsqueeze(0) #Getting the input id's of the sentence output = model(input_ids) #Passing it to model print( output.last_hidden_state) #Here is where i want to get the [CLS] token vector ``` ## Expected behavior How do i get the [CLS] token for each example of Sentence? Asking this becuase, the transformer [documentation](https://huggingface.co/transformers/main_classes/output.html) does not specify how to get the [CLS] token vector Any Help is much Appreciated
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13203/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13202
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13202/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13202/comments
https://api.github.com/repos/huggingface/transformers/issues/13202/events
https://github.com/huggingface/transformers/issues/13202
975,498,029
MDU6SXNzdWU5NzU0OTgwMjk=
13,202
-100 when calculating perplexity of a model..
{ "login": "HongyuanLuke", "id": 30339670, "node_id": "MDQ6VXNlcjMwMzM5Njcw", "avatar_url": "https://avatars.githubusercontent.com/u/30339670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HongyuanLuke", "html_url": "https://github.com/HongyuanLuke", "followers_url": "https://api.github.com/users/HongyuanLuke/followers", "following_url": "https://api.github.com/users/HongyuanLuke/following{/other_user}", "gists_url": "https://api.github.com/users/HongyuanLuke/gists{/gist_id}", "starred_url": "https://api.github.com/users/HongyuanLuke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HongyuanLuke/subscriptions", "organizations_url": "https://api.github.com/users/HongyuanLuke/orgs", "repos_url": "https://api.github.com/users/HongyuanLuke/repos", "events_url": "https://api.github.com/users/HongyuanLuke/events{/privacy}", "received_events_url": "https://api.github.com/users/HongyuanLuke/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "-100 is the `ignore_index` of PyTorch's `CrossEntropyLoss`, as explained in their [docs](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html). It means that labels that are set to -100 to not contribute to the loss.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
Hi there, I see in : https://huggingface.co/transformers/perplexity.html there is a code block saying: ``` max_length = model.config.n_positions stride = 512 lls = [] for i in tqdm(range(0, encodings.input_ids.size(1), stride)): begin_loc = max(i + stride - max_length, 0) end_loc = min(i + stride, encodings.input_ids.size(1)) trg_len = end_loc - i # may be different from stride on last loop input_ids = encodings.input_ids[:,begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:,:-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) log_likelihood = outputs[0] * trg_len lls.append(log_likelihood) ppl = torch.exp(torch.stack(lls).sum() / end_loc) ``` I am wondering why we are setting the tokens to -100. Is it a hard-coded number?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13202/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13201
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13201/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13201/comments
https://api.github.com/repos/huggingface/transformers/issues/13201/events
https://github.com/huggingface/transformers/issues/13201
975,463,556
MDU6SXNzdWU5NzU0NjM1NTY=
13,201
Train Bart model only use one cpu core, Any solutions to use more cores?
{ "login": "BeanSprouts", "id": 3087746, "node_id": "MDQ6VXNlcjMwODc3NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3087746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BeanSprouts", "html_url": "https://github.com/BeanSprouts", "followers_url": "https://api.github.com/users/BeanSprouts/followers", "following_url": "https://api.github.com/users/BeanSprouts/following{/other_user}", "gists_url": "https://api.github.com/users/BeanSprouts/gists{/gist_id}", "starred_url": "https://api.github.com/users/BeanSprouts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BeanSprouts/subscriptions", "organizations_url": "https://api.github.com/users/BeanSprouts/orgs", "repos_url": "https://api.github.com/users/BeanSprouts/repos", "events_url": "https://api.github.com/users/BeanSprouts/events{/privacy}", "received_events_url": "https://api.github.com/users/BeanSprouts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The HuggingFace Trainer does not support multi-CPU training. From the [docs](https://huggingface.co/transformers/main_classes/trainer.html): \r\n> The API supports distributed training on multiple GPUs/TPUs, mixed precision through NVIDIA Apex and Native AMP for PyTorch and tf.keras.mixed_precision for TensorFlow.\r\n\r\nYou can perhaps use [HuggingFace Accelerate](https://github.com/huggingface/accelerate) for this, as it supports multi-CPU both on a single machine as well as multiple machines. From their README:\r\n\r\n> Supported integrations\r\nCPU only\r\nmulti-CPU on one node (machine)\r\nmulti-CPU on several nodes (machines)\r\nsingle GPU\r\nmulti-GPU on one node (machine)\r\nmulti-GPU on several nodes (machines)\r\nTPU\r\nFP16 with native AMP (apex on the roadmap)\r\nDeepSpeed support (experimental)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: linux on arm - Python version: 3.7 - PyTorch version (GPU?):CPU-1.9.0 - Tensorflow version (GPU?):None - Using GPU in script?: No - Using distributed or parallel set-up in script?:No I am training a Bart Model(BartForConditionalGeneration) The problem arises when using: when I run my train script, only use one core of my cpu. I doesn't have any GPUs, but I got a cpu with 96 cores. How can I make it to use more cores? my own scripts: `from transformers import BartForConditionalGeneration,BartConfig,BartTokenizerFast,LineByLineTextDataset,DataCollatorForLanguageModeling,Trainer,TrainingArguments from tokenizers import models, normalizers, pre_tokenizers,Tokenizer from tokenizers.trainers import BpeTrainer `_special_tokens = ["<s>","</s>","<unk>","<pad>","<mask>"]` def trainBartTokenizer(files, vocab_size, tokenize_save_floder): tokenizer = Tokenizer(models.BPE(unk_token='<unk>')) tokenizer.normalizer = normalizers.Sequence( [normalizers.NFD(), normalizers.Lowercase(), normalizers.Strip()] ) tokenizer.pre_tokenizer = pre_tokenizers.CharDelimiterSplit(" ") print(tokenizer.pre_tokenizer.pre_tokenize_str("This is an example!\r\n")) trainer = BpeTrainer(vocab_size=vocab_size, show_progress=True, special_tokens=_special_tokens) tokenizer.train(files=files,trainer=trainer) tokenizer.model.save(tokenize_save_floder) print("Tokenizer Trainning Completed! Vocab size {}".format(tokenizer.get_vocab_size())) def train_bart_model(config:BartConfig,tokenizer:BartTokenizerFast,corpus_file_path,model_save_path): model = BartForConditionalGeneration(config=config) print('model size:{}'.format(model.num_parameters())) dataset = LineByLineTextDataset(tokenizer=tokenizer,file_path=corpus_file_path,block_size=128) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False ) training_args = TrainingArguments( output_dir=model_save_path, overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=32, save_steps=10000, save_total_limit=2 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset ) trainer.train() trainer.save_model(model_save_path) `data_file_path = '/Users/beansprouts/Documents/corpus/small.txt' trainBartTokenizer([data_file_path],5000,'./bart_tokenize') tokenizer = BartTokenizerFast.from_pretrained('./bart_tokenize',max_len=512) tokenizer.normalizer = normalizers.Sequence( [normalizers.NFD(), normalizers.Lowercase(), normalizers.Strip()] ) tokenizer.pre_tokenizer = pre_tokenizers.CharDelimiterSplit(" ") config = BartConfig(vocab_size=tokenizer.vocab_size, max_position_embeddings=514) train_bart_model(config,tokenizer,data_file_path,'./bart_model')` @patrickvonplaten, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13201/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13200
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13200/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13200/comments
https://api.github.com/repos/huggingface/transformers/issues/13200/events
https://github.com/huggingface/transformers/issues/13200
975,416,197
MDU6SXNzdWU5NzU0MTYxOTc=
13,200
Some tokenizers are not really picklable
{ "login": "ben-davidson-6", "id": 4704970, "node_id": "MDQ6VXNlcjQ3MDQ5NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4704970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ben-davidson-6", "html_url": "https://github.com/ben-davidson-6", "followers_url": "https://api.github.com/users/ben-davidson-6/followers", "following_url": "https://api.github.com/users/ben-davidson-6/following{/other_user}", "gists_url": "https://api.github.com/users/ben-davidson-6/gists{/gist_id}", "starred_url": "https://api.github.com/users/ben-davidson-6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ben-davidson-6/subscriptions", "organizations_url": "https://api.github.com/users/ben-davidson-6/orgs", "repos_url": "https://api.github.com/users/ben-davidson-6/repos", "events_url": "https://api.github.com/users/ben-davidson-6/events{/privacy}", "received_events_url": "https://api.github.com/users/ben-davidson-6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks you for opening this issue! Do you want to open a PR with your fix?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.9.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik ## Information The xlmr tokenizer is not really picklable, in that it depends on things on disk to be unpickled. This causes issues if you want to use tokenizers in a spark udf, which will pickle the tokenizer, and send it to other nodes to execute, as these other nodes will not have the same things on disk. The only tokenizer I know this happens with is XLMRobertaTokenizer but I imagine there may be more. ## To reproduce ```python import pickle import os import sentencepiece as spm from transformers import XLMRobertaTokenizer # location on disk of tokenizer tokenizer_directory = './xlmrBaseLocal' def unpickle_when_file_in_same_place_and_when_it_isnt(pickled_tokenizer): # this works because the vocab file hasnt moved pickle.loads(pickled_tokenizer) print('successfully unpickled when file NOT MOVED') # we move the vocab file and try to unpickle os.rename(tokenizer_directory, tokenizer_directory + 'Moved') try: pickle.loads(pickled_tokenizer) print('successfully unpickled when file MOVED') except OSError: print('failed to unpickle when file MOVED') # put tokenizer back os.rename(tokenizer_directory + 'Moved', tokenizer_directory) # load tokenizer and pickle it tokenizer = XLMRobertaTokenizer.from_pretrained(tokenizer_directory) pickled_tokenizer = pickle.dumps(tokenizer) # this prints # > successfully unpickled when file NOT MOVED # > failed to unpickle when file MOVED unpickle_when_file_in_same_place_and_when_it_isnt(pickled_tokenizer) # fix the pickling defined here # https://github.com/huggingface/transformers/blob/master/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L171 def __getstate__(self): state = self.__dict__.copy() state["sp_model"] = None state["sp_model_proto"] = self.sp_model.serialized_model_proto() return state def __setstate__(self, d): self.__dict__ = d # for backward compatibility if not hasattr(self, "sp_model_kwargs"): self.sp_model_kwargs = {} self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) self.sp_model.LoadFromSerializedProto(self.sp_model_proto) XLMRobertaTokenizer.__getstate__ = __getstate__ XLMRobertaTokenizer.__setstate__ = __setstate__ # repickle tokenizer = XLMRobertaTokenizer.from_pretrained(tokenizer_directory) pickled_tokenizer = pickle.dumps(tokenizer) # this prints # > successfully unpickled when file NOT MOVED # > successfully unpickled when file MOVED unpickle_when_file_in_same_place_and_when_it_isnt(pickled_tokenizer) ``` ## Expected behavior The expected behaviour would be that once the tokenizer is pickled and I have the prerequisite libraries, I should be able to unpickle it regardless of what is on disk and where.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13200/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13199
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13199/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13199/comments
https://api.github.com/repos/huggingface/transformers/issues/13199/events
https://github.com/huggingface/transformers/issues/13199
975,352,963
MDU6SXNzdWU5NzUzNTI5NjM=
13,199
How to use transformers for batch inference
{ "login": "wangdong1992", "id": 20061204, "node_id": "MDQ6VXNlcjIwMDYxMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/20061204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangdong1992", "html_url": "https://github.com/wangdong1992", "followers_url": "https://api.github.com/users/wangdong1992/followers", "following_url": "https://api.github.com/users/wangdong1992/following{/other_user}", "gists_url": "https://api.github.com/users/wangdong1992/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangdong1992/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangdong1992/subscriptions", "organizations_url": "https://api.github.com/users/wangdong1992/orgs", "repos_url": "https://api.github.com/users/wangdong1992/repos", "events_url": "https://api.github.com/users/wangdong1992/events{/privacy}", "received_events_url": "https://api.github.com/users/wangdong1992/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For encoder-only Transformer models (like ALBERT), refer to the docs (in this case of [`TFAlbertForSequenceClassification`](https://huggingface.co/docs/transformers/model_doc/albert#transformers.TFAlbertForSequenceClassification.call.example)):\r\n\r\n```\r\nfrom transformers import AlbertTokenizer, TFAlbertForSequenceClassification\r\nimport tensorflow as tf\r\n\r\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')\r\nmodel = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2')\r\n\r\ntexts = ['This is a sentence', 'This is another sentence']\r\ninputs = tokenizer(texts, return_tensors=\"tf\")\r\n\r\noutputs = model(inputs)\r\nlogits = outputs.logits\r\n```\r\nYou can just provide a list of strings to the tokenizer, and it will prepare them for the model.\r\n\r\nFor decoder-only Transformer models (like GPT-2, any GPT model basically), refer to [this answer](https://github.com/huggingface/transformers/issues/10704#issuecomment-798870853).", "@NielsRogge Can we also write a `loop for Pytorch batch DataLoaders ` and do inferencing. As DataLoaders are very fast?\r\n\r\n```\r\nfor batch in Batches:\r\n \r\n inp=tokenizer(batch, return_tensors=\"tf\")\r\n model(inp)\r\n```", "> @NielsRogge Can we also write a `loop for Pytorch batch DataLoaders ` and do inferencing. As DataLoaders are very fast?\r\n> \r\n> ```\r\n> for batch in Batches:\r\n> \r\n> inp=tokenizer(batch, return_tensors=\"tf\")\r\n> model(inp)\r\n> ```\r\n\r\nHi @pratikchhapolika, I am interested to know is writing loop for pytorch batch Dataloaders doable? ", "@Sun-SunQian you should probably ask this kind of question on the [forum](https://discuss.huggingface.co/)! Higher chances of getting an answer πŸ˜‰ ", "Hi! \r\nDid you find a solution for this? ", "Please see example here:\r\nhttps://huggingface.co/tiiuae/falcon-40b/discussions/50" ]
1,629
1,693
1,629
NONE
null
I use transformers to train text classification models,for a single text, it can be inferred normally. The code is as follows from transformers import BertTokenizer, TFAlbertForSequenceClassification text = 'This is a sentence' model_path ='../albert_chinese_tiny' tokenizer = BertTokenizer.from_pretrained(model_path) model = TFAlbertForSequenceClassification.from_pretrained('../model_tf/20210818') encoding = tokenizer(text, truncation=True, padding=True, max_length=30, return_tensors="tf") result = model(encoding) When I predict more than one text at a time, an error will be reported. The code is as follows texts = ['This is a sentence', 'This is another sentence'] encodings = [] model_path ='../albert_chinese_tiny' tokenizer = BertTokenizer.from_pretrained(model_path) model = TFAlbertForSequenceClassification.from_pretrained('../model_tf/20210818') for text in texts: encoding = tokenizer(text, truncation=True, padding=True, max_length=30, return_tensors="tf") encodings.append(encoding) result = model(np.array(encodings)) The error information is as follows: tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr β€˜Tindices’ of string is not in the list of allowed values: int32, int64 ; NodeDef: {{node ResourceGather}}; Op<name=ResourceGather; signature=resource:resource, indices:Tindices β†’ output:dtype; attr=batch_dims:int,default=0; attr=validate_indices:bool,default=true; attr=dtype:type; attr=Tindices:type,allowed=[DT_INT32, DT_INT64]; is_stateful=true> [Op:ResourceGather]
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13199/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13199/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13198
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13198/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13198/comments
https://api.github.com/repos/huggingface/transformers/issues/13198/events
https://github.com/huggingface/transformers/pull/13198
975,342,193
MDExOlB1bGxSZXF1ZXN0NzE2NTA2OTE2
13,198
Correct wrong function signatures on the docs website
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "After a series of investigations, here is the concluding matrix:\r\n\r\n| Env | Python version | Sphinx version | Correctness |\r\n|-------------------- |------------------|-----------------|-----------------|\r\n| Circle CI Docker Image | 3.6 | 3.2.1 | X |\r\n| Circle CI Docker Image | 3.6 | 3.5.4 | X |\r\n| Circle CI Docker Image | 3.7(3.7.11) | 3.2.1 | X |\r\n| Circle CI Docker Image | 3.7(3.7.11) | 3.5.4 | *O [Artifact](https://258083-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/trainer.html) |\r\n| Ubuntu 18.04 Anaconda | 3.6.13 | 3.2.1 | X |\r\n| Ubuntu 18.04 Anaconda | 3.7.11 | 3.2.1 | O |\r\n| Ubuntu 18.04 Anaconda | 3.8.5 | 3.2.1 | O |\r\n\r\nX: \r\n`model: torch.nn.modules.module.Module = None` (Union and PreTrainedModel missing)\r\nO:\r\n`model: Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None` (Correct)\r\n*O: \r\n`Optional[Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module]] = None`\r\n(An `Optional` type hint was added by Sphinx which wasn't defined in the code, maybe inferred from the default value `None`)\r\n\r\nAs shown in the above matrix, Sphinx (3.2.1 & 3.5.4) with python 3.6 failed to generate correct html in testing environments, should we upgrade CI/CD environments all together to python 3.7 in order to keep the consistency?\r\n\r\nBesides, I noticed that in `docs/source/conf.py`, the release version is `4.7.0`, which isn't the latest version `4.9.2` , should this also need to be updated?\r\n", "@sgugger correctly mentions I merged this without the last comment being taken into account - Sorry about that, Sylvain is pushing directly on `master` with the comment's request.", "I actually made a PR in #13337 :-) ", "This had an unintended side-effect: the search functionality doesn't seem to be working anymore on huggingface.co/transformers.\r\n\r\nI tracked the issue to Sphinx version v3.4.0. Checking out your useful table @qqaatw, switching back to v3.2.1 with Python v3.7x would be the second best choice?", "@LysandreJik I've checked the search functionality, it's not working indeed. \r\n\r\nAs you said, maybe we should switch back Sphinx's version to v3.2.1 but not with Python v3.7.11 because Sphinx v3.2.1 with Python v3.7.11 provided by CircleCI docker image seems not working either. It's weird though as I tested this combination on my machine (Ubuntu 18.04 Anaconda) and the output was correct.\r\n\r\nI think another try would be using [Next-gen language images](https://circleci.com/docs/2.0/circleci-images/#next-gen-language-images). According to what the website states, these images are faster to build and have improved reliability and stability. Perhaps switching to this one can solve this problem.", "Since we are exploring a move away from sphinx anyway, we will revert this commit for now to re-enable the search. If we end up not moving away from sphinx we can explore more which image to pick and which versions to use, but in the meantime, it's more important to have the search enabled than the sometime wrong signatures.", "Got it. Sorry for the inconvenience.", "No worries!" ]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? Trying to address #13171. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13198/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13198", "html_url": "https://github.com/huggingface/transformers/pull/13198", "diff_url": "https://github.com/huggingface/transformers/pull/13198.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13198.patch", "merged_at": 1630338025000 }
https://api.github.com/repos/huggingface/transformers/issues/13197
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13197/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13197/comments
https://api.github.com/repos/huggingface/transformers/issues/13197/events
https://github.com/huggingface/transformers/issues/13197
975,283,036
MDU6SXNzdWU5NzUyODMwMzY=
13,197
Training DetrForObjectDetection failed in a multiple-GPU environment.
{ "login": "jnishi", "id": 836541, "node_id": "MDQ6VXNlcjgzNjU0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jnishi", "html_url": "https://github.com/jnishi", "followers_url": "https://api.github.com/users/jnishi/followers", "following_url": "https://api.github.com/users/jnishi/following{/other_user}", "gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jnishi/subscriptions", "organizations_url": "https://api.github.com/users/jnishi/orgs", "repos_url": "https://api.github.com/users/jnishi/repos", "events_url": "https://api.github.com/users/jnishi/events{/privacy}", "received_events_url": "https://api.github.com/users/jnishi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nAs explained in the [docs](https://huggingface.co/transformers/model_doc/detr.html):\r\n> If you want to train the model in a distributed environment across multiple nodes, then one should update the num_boxes variable in the DetrLoss class of modeling_detr.py. When training on multiple nodes, this should be set to the average number of target boxes across all nodes, as can be seen in the original implementation here.\r\n\r\nI had to remove the distributed training-related code from the modeling file, which is perhaps a bit unfortunate, because now people need to fork the library in order for DETR to work properly in a distributed environment. cc @sgugger @LysandreJik ", "Hi, thank you for the quick reply.\r\n\r\n> When training on multiple nodes, this should be set to the average number of target boxes across all nodes, \r\n\r\nI'd like to ask you two questions.\r\n1. Do I need to insert `num_boxes / (the number of nodes)` after the following line https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2013 ? \r\nFor example, if I'm training on two GPUs, should I insert `num_boxes = num_boxes / 2` after the line?\r\n\r\n2. The error occurs at https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2004 which is before the declaration of `num_boxes`. Could you tell me more about how to solve this error?\r\n", "For now, the code has not been tested to work on multiple GPUs, so this is a good opportunity to make it work. We can perhaps write a guide on which things to take into account. \r\n\r\n> Do I need to insert num_boxes / (the number of nodes) after the following line https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2013 ?\r\nFor example, if I'm training on two GPUs, should I insert num_boxes = num_boxes / 2 after the line?\r\n\r\nThe [original implementation](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/models/detr.py#L230-L232) used the following code:\r\n```\r\nif is_dist_avail_and_initialized():\r\n torch.distributed.all_reduce(num_boxes)\r\nnum_boxes = torch.clamp(num_boxes / get_world_size(), min=1).item()\r\n```\r\nwith\r\n\r\n```\r\nimport torch.distributed as dist\r\n\r\ndef is_dist_avail_and_initialized():\r\n if not dist.is_available():\r\n return False\r\n if not dist.is_initialized():\r\n return False\r\n return True\r\n\r\ndef get_world_size():\r\n if not is_dist_avail_and_initialized():\r\n return 1\r\n return dist.get_world_size()\r\n```\r\nThe world size is 2 if you're training on a single node with 2 GPUs, so you can divide them indeed by 2.\r\n\r\n> The error occurs at https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py#L2004 which is before the declaration of num_boxes. Could you tell me more about how to solve this error?\r\n\r\nThis could have to do with the `targets` not being on the proper devices, which is the responsibility of the `Trainer`. In the original implementation, they use `DistributedSampler`. Can you perhaps print the `sizes` that are computed right before it? These should be a list containing the number of bounding boxes for every example in the batch. ", "> For now, the code has not been tested to work on multiple GPUs, so this is a good opportunity to make it work. We can perhaps write a guide on which things to take into account.\r\n\r\nIt would be great if you could support multiple GPUs.\r\n\r\n> The world size is 2 if you're training on a single node with 2 GPUs, so you can divide them indeed by 2.\r\n\r\nI found out that if you do it manually, divide by the number of GPUs.\r\n\r\n> This could have to do with the targets not being on the proper devices, which is the responsibility of the Trainer. In the original implementation, they use DistributedSampler. Can you perhaps print the sizes that are computed right before it? These should be a list containing the number of bounding boxes for every example in the batch.\r\n\r\nThe reason for the error is that DistributedSampler does not support `labels` data. \r\nThank you very much.", "1. If I want to extend this to panoptic segmentation using coco stuff classes, how should I change the class config to do it.\r\nI have only 1things categories balloon and 53 stuff categories from coco dataset\r\n2. How do I freeze the weights for training mask head for 25 epochs\r\n3. How do we edit the classifier layer of the model say by default this will have 92 class, but from the above example if I have only 2 class (balloon, 'N/A') how should I change them?\r\n", "> If I want to extend this to panoptic segmentation using coco stuff classes, how should I change the class config to do it.\r\nI have only 1things categories balloon and 53 stuff categories from coco dataset\r\n\r\nIf you want to do panoptic segmentation, you first need to load the model as follows:\r\n\r\n```\r\nfrom transformers import DetrForSegmentation\r\n\r\n# specify a custom number of classes\r\nmodel = DetrForSegmentation.from_pretrained(\"facebook/detr-resnet-50-panoptic\", num_labels=54, ignore_mismatched_sizes=True)\r\n```\r\nYou can possibly also add the `id2label` and `label2id` dictionaries as additional arguments. \r\n\r\n> How do I freeze the weights for training mask head for 25 epochs\r\n\r\n```\r\nfor name, param in model.named_parameters():\r\n if name.startswith('detr'):\r\n param.requires_grad = False\r\n```\r\n\r\n> How do we edit the classifier layer of the model say by default this will have 92 class, but from the above example if I have only 2 class (balloon, 'N/A') how should I change them?\r\n\r\nThere's a new argument called `ignore_mismatched_sizes` which you can set to `True`. If you then specify a different number of labels, no error will be thrown (only a warning), as shown above.\r\n", "> If you want to do panoptic segmentation, you first need to load the model as follows:\r\n\r\nInstead of using resnet50-panoptic how can I use my model from object detection (`DetrForObjectDetection` method) to train for panoptic segmentation", "So for panoptic segmentation, DETR works as follows:\r\n\r\n1) you first need to train a `DetrForObjectDetection` model to detect bounding boxes + classes (around both things + stuff classes). Let's say you have 10 classes in total (things + stuff), then you can initialize the model as follows:\r\n```\r\nfrom transformers import DetrForObjectDetection\r\n\r\n# replace COCO classification head by custom one \r\nobject_detection_model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50', num_labels=10, ignore_mismatched_sizes=True)\r\n# fine-tune this model on custom data\r\n```\r\n\r\nYou've probably already done this, see my tutorial notebook: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb\r\n2) next, you can initialize a `DetrForSegmentation` model with the weights you obtained in step 1. This can be done as follows:\r\n```\r\nfrom transformers import DetrConfig, DetrForSegmentation\r\n\r\nconfig = DetrConfig()\r\nmodel = DetrForSegmentation(config)\r\n# set the weights of the object detection model\r\nmodel.detr = object_detection_model\r\n```\r\n\r\nThis will give you a model that has all layers already initialized with some trained weights, except the mask head, which will be randomly initialized. \r\n3) next, you can freeze all layers except the mask head, and train for 25 epochs. Freezing can be done as follows:\r\n```\r\nfor name, param in model.named_parameters():\r\n if name.startswith('detr'):\r\n param.requires_grad = False\r\n```\r\n", "@NielsRogge Thanks for your reply.\r\n\r\nI still can't figure out on adding the **53 COCO stuff** class to my custom data in Objectdetection. I am following the above finetune notebook which you have shared.\r\n\r\nI have this doubt, should I download the COCO-17 val dataset and combine my custom data for the model to learn the stuff classes or just increase the class_emed layer from 100,4 (3+1 things class) to 100,57 (4+53). But in this case how to add this class to DetrConfig (id2class).\r\n\r\nthis is the notebook link :- [colab](https://colab.research.google.com/drive/1v1G2grxKrsnvVbJMMulY7xr4k9AwE5IF)\r\nthis custom data link:- [drive](https://drive.google.com/file/d/1ydE8KAojQk5HRfNG6GMLzkq-E_GeDOLr/view?usp=sharing)\r\n\r\nOne general question in your notebook the model is trained using `pl` rather that `torch` what is the reason for using `pl`\r\n\r\n", "If you want a neural network to learn additional classes, it's advised to add a new classification head and fine-tune the model on all classes you want. So indeed, now the class embedding layer should have 57 outputs.\r\n\r\n> One general question in your notebook the model is trained using pl rather that torch what is the reason for using pl\r\n\r\nBecause it's very easy to train PyTorch models. You can of course just train using native PyTorch or using HuggingFace Accelerate, or using HuggingFace's Trainer, etc.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <yes> ### Who can help @NielsRogge ## Information Model I am using (Bert, XLNet ...): DetrForObjectDetection The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Save this script as `run.py` It is the same as https://colab.research.google.com/drive/1oIHGwr1U0sw-6KW-MG60s-ksXA-kYyUO?usp=sharing#scrollTo=VCr7Y7zW5a2a 2. Put sample.json, sample.jpg, sample2.jpg in [detr_samples.tar.gz](https://github.com/huggingface/transformers/files/7019224/detr_samples.tar.gz) to the same directory. ```python from typing import Any, Dict, List, Union from dataclasses import dataclass import torch from torchvision.datasets import CocoDetection from transformers import ( DetrConfig, DetrFeatureExtractor, DetrForObjectDetection, HfArgumentParser, Trainer, TrainingArguments, ) class DetrTrainer(Trainer): # Overwrite _prepare_inputs method to make sure dict is also placed on device def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]: """ Prepare :obj:`inputs` before feeding them to the model, converting them to tensors if they are not already and handling potential state. """ for k, v in inputs.items(): if isinstance(v, torch.Tensor): kwargs = dict(device=self.args.device) if self.deepspeed and inputs[k].dtype != torch.int64: # NLP models inputs are int64 and those get adjusted to the right dtype of the # embedding. Other models such as wav2vec2's inputs are already float and thus # may need special handling to match the dtypes of the model kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype())) inputs[k] = v.to(**kwargs) # labels are a list of dictionaries, each dictionary being a COCO annotation if isinstance(v, list): for annotation_dict in v: for key, value in annotation_dict.items(): annotation_dict[key] = value.to(self.args.device) if self.args.past_index >= 0 and self._past is not None: inputs["mems"] = self._past return inputs def load_category(category): id2label = {} label2id = {} maxid = 0 for k, v in category.items(): id2label[int(k)] = v["name"] label2id[v["name"]] = int(k) maxid = max(maxid, int(k)) for i in range(maxid): if not (i in id2label): id2label[i] = None return id2label, label2id class DetrData(CocoDetection): def __init__(self, img_folder, annotations, feature_extractor, train=True): super(DetrData, self).__init__(img_folder, annotations) self.feature_extractor = feature_extractor def __getitem__(self, idx): # read in PIL image and target in COCO format img, target = super(DetrData, self).__getitem__(idx) # preprocess image and target (converting target to DETR format, resizing + normalization of both image and target) image_id = self.ids[idx] target = {'image_id': image_id, 'annotations': target} encoding = self.feature_extractor(images=img, annotations=target, return_tensors="pt") encoding["pixel_values"] = encoding["pixel_values"].squeeze() # remove batch dimension encoding["labels"] = encoding["labels"][0] # remove batch dimension return encoding @dataclass class DataCollatorDetr: feature_extractor: DetrFeatureExtractor def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: pixel_values = [item["pixel_values"] for item in features] encoding = self.feature_extractor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt") encoding["labels"] = [item["labels"] for item in features] return encoding def main(): training_args = TrainingArguments(output_dir=".") feature_extractor = DetrFeatureExtractor() train_dataset = DetrData(img_folder=".", annotations="sample.json", feature_extractor=feature_extractor) id2label, label2id = load_category(train_dataset.coco.cats) config = DetrConfig.from_pretrained("facebook/detr-resnet-50") config.id2label = id2label config.label2id = label2id model = DetrForObjectDetection.from_pretrained( "facebook/detr-resnet-50", config=config) # Initialize our Trainer trainer = DetrTrainer( model=model, args=training_args, train_dataset=train_dataset, tokenizer=feature_extractor, data_collator=DataCollatorDetr(feature_extractor=feature_extractor), ) train_result = trainer.train() if __name__ == "__main__": main() ``` 3. Run by `python run.py` in a multiple-GPU environment. Then `IndexError` is caused. ``` Traceback (most recent call last): File "run.py", line 112, in <module> main() File "run.py", line 109, in main train_result = trainer.train() File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/trainer.py", line 1286, in train tr_loss += self.training_step(model, inputs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/trainer.py", line 1779, in training_step loss = self.compute_loss(model, inputs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/trainer.py", line 1811, in compute_loss outputs = model(**inputs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise raise self.exc_type(msg) IndexError: Caught IndexError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 1430, in forward loss_dict = criterion(outputs_loss, labels) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2004, in forward indices = self.matcher(outputs_without_aux, targets) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2132, in forward indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))] File "/home/christopher/detr_samples/env/lib/python3.8/site-packages/transformers/models/detr/modeling_detr.py", line 2132, in <listcomp> indices = [linear_sum_assignment(c[i]) for i, c in enumerate(cost_matrix.split(sizes, -1))] IndexError: index 1 is out of bounds for dimension 0 with size 1 ``` It works fine with a single GPU. ## Expected behavior Successfully complete training. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13197/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13196
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13196/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13196/comments
https://api.github.com/repos/huggingface/transformers/issues/13196/events
https://github.com/huggingface/transformers/pull/13196
975,242,787
MDExOlB1bGxSZXF1ZXN0NzE2NDIyNzE5
13,196
check torch_dtype in config as well
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "to fix code quality issues please run: `make fixup` and push the changes", "As answered in the issue: https://github.com/huggingface/transformers/issues/13195#issuecomment-903009666\r\nnot using `config.torch_type` is by design for v4 and will likely to change in v5.", "The original problem will be fixed by https://github.com/huggingface/transformers/pull/13209 - please don't hesitate to validate that it indeed solves it. Thank you!", "> As answered in the issue: [#13195 (comment)](https://github.com/huggingface/transformers/issues/13195#issuecomment-903009666)\r\n> not using `config.torch_type` is by design for v4 and will likely to change in v5.\r\n\r\nI see. So should this PR be closed or left open for later reference? Thank you!", "We can close it for now. It will still be here for reference in either form. " ]
1,629
1,630
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #13195. A detailed problem description and reproducer is in the issue! ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @stas00 as we discussed this issue in #13076 . (As #13076 deals with a number of issues, I opened #13195 to focus on `torch_dtype` with AutoModel issue.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13196/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13196/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13196", "html_url": "https://github.com/huggingface/transformers/pull/13196", "diff_url": "https://github.com/huggingface/transformers/pull/13196.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13196.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13195
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13195/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13195/comments
https://api.github.com/repos/huggingface/transformers/issues/13195/events
https://github.com/huggingface/transformers/issues/13195
975,239,584
MDU6SXNzdWU5NzUyMzk1ODQ=
13,195
'torch_dtype' keyword not working with 'AutoModel'
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> This is because AutoModel first puts torch_dtype argument passed to AutoModel.from_pretrained method into config and\r\nPretrainedModel.from_config, which is called by AutoModel.from_pretrained, checks for torch_dtype argument in only in kwargs and not in config.\r\n\r\nBut this is intentional. \r\n\r\nMy PR was originally designed to have the dtype figuring out to be fully automated, but that wasn't accepted, so the `config.dtype` is saved, but at the moment being ignored on purpose. i.e. the user has to actively set `torch_dtype`. See this part of the discussion https://github.com/huggingface/transformers/pull/12316#discussion_r659959617\r\n\r\nPerhaps we should document somewhere that `config.torch_dtype` is saved for the future use (probably v5) but currently isn't automatically used. The user can of course do `from_pretrained(..., torch_dtype=config.torch_dtype)`.", "This has been fixed in https://github.com/huggingface/transformers/pull/13209" ]
1,629
1,632
1,632
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> transformers version: 4.9.2 Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10 Python version: 3.8.5 PyTorch version (GPU?): 1.8.0a0+52ea372 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Using distributed or parallel set-up in script?: No ### Who can help @stas00 as he is the writer of the [#12316](https://github.com/huggingface/transformers/pull/12316). <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce 1. Inspect the model weight data type. ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O checkpoint.zip unzip checkpoint.zip python -c "import torch; from pprint import pprint as print; sd=torch.load('./release/mp_rank_00/model_optim_rng.pt'); d= {d.dtype: 1 for d in sd['model']['language_model']['transformer'].values()}; print(d.keys())" # dict_keys([torch.float16]) ``` 2. Try to load it with transformers in float16, which `torch_dtype` is supposed to be responsible for. But this only works with specific model classes and AutoModel blindly loads it into float32. ```bash git clone https://github.com/huggingface/transformers.git python3 transformers/src/transformers/models/megatron_bert/convert_megatron_gpt2_checkpoint.py checkpoint.zip # load correctly with the specific model class python -c "from transformers import GPT2LMHeadModel; print(GPT2LMHeadModel.from_pretrained('.', torch_dtype='auto').dtype)" # torch.float16 # but fails to load it into float 16 with AutoModelForCausalLM python -c "from transformers import AutoModelForCausalLM; print(AutoModelForCausalLM.from_pretrained('.', torch_dtype='auto').dtype)" # torch.float32 ``` 3. This is because AutoModel [first puts `torch_dtype` argument passed to `AutoModel.from_pretrained` method into config](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/configuration_utils.py#L576) and `PretrainedModel.from_config`, which is called by `AutoModel.from_pretrained`, checks for `torch_dtype` argument in [only in `kwargs` and not in config](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/modeling_utils.py#L1297). <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Setting`torch_dtype` to `auto` works correctly as explained in [#12316](https://github.com/huggingface/transformers/pull/12316). I will open a PR to address this issue :) <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13195/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13195/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13194
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13194/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13194/comments
https://api.github.com/repos/huggingface/transformers/issues/13194/events
https://github.com/huggingface/transformers/pull/13194
975,212,032
MDExOlB1bGxSZXF1ZXN0NzE2Mzk2ODk2
13,194
use float 16 in causal mask and masked bias
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @jdemouth and @novatig ", "@novatig it looks good to me. Are you ok with the changes? \r\n\r\n@hwijeen and @LysandreJik, sorry for the delay, I was on holidays ;)", "No worries, thanks for taking a look! :)", "This is a kindly reminder for @novatig :)", "Merging since @jdemouth approved - will reverse if @novatig disagrees.", "Sorry all, I did not see the notification in my inbox and it slipped my mind.\r\n\r\nA very belated LGTM" ]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #13193 (issue). Problem description and reproducer is provided in the issue. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @NielsRogge as they reviewed [the original converting script PR](https://github.com/huggingface/transformers/pull/12007) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13194/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13194", "html_url": "https://github.com/huggingface/transformers/pull/13194", "diff_url": "https://github.com/huggingface/transformers/pull/13194.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13194.patch", "merged_at": 1630318165000 }
https://api.github.com/repos/huggingface/transformers/issues/13193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13193/comments
https://api.github.com/repos/huggingface/transformers/issues/13193/events
https://github.com/huggingface/transformers/issues/13193
975,204,802
MDU6SXNzdWU5NzUyMDQ4MDI=
13,193
Megatron conversion code converts some weights in fp16 to fp32(or uint8).
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> transformers version: 4.9.2 Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10 Python version: 3.8.5 PyTorch version (GPU?): 1.8.0a0+52ea372 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @novatig @jdemouth @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce 1. Check the data type of original megatron checkpoint. It's all in fp16. ```bash wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O checkpoint.zip unzip checkpoint.zip python -c "import torch; from pprint import pprint as print; sd=torch.load('./release/mp_rank_00/model_optim_rng.pt'); d= {d.dtype: 1 for d in sd['model']['language_model']['transformer'].values()}; print(d.keys())" # dict_keys([torch.float16]) ``` 2. But the [current conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) converts some into [float32](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L164) and [uint8](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L160). This leads to a model with data type which is not faithful to the original model, and potentially a problem as discussed in #13076 ``` python3 /hf/transformers-master/src/transformers/models/megatron_bert/convert_megatron_gpt2_checkpoint.py checkpoint.zip python -c "import torch; sd=torch.load('pytorch_model.bin'); d = {p.dtype:1 for p in sd.values() }; print(d.keys())" # dict_keys([torch.float16, torch.float32, torch.uint8]) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Converted checkpoint should have the same data type as the original one. <!-- A clear and concise description of what you would expect to happen. --> I will open a new PR to address this :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13193/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13192/comments
https://api.github.com/repos/huggingface/transformers/issues/13192/events
https://github.com/huggingface/transformers/issues/13192
975,190,711
MDU6SXNzdWU5NzUxOTA3MTE=
13,192
Incosistent behaviour between fast and slow RoBERTa tokenizers
{ "login": "ofirzaf", "id": 18296312, "node_id": "MDQ6VXNlcjE4Mjk2MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ofirzaf", "html_url": "https://github.com/ofirzaf", "followers_url": "https://api.github.com/users/ofirzaf/followers", "following_url": "https://api.github.com/users/ofirzaf/following{/other_user}", "gists_url": "https://api.github.com/users/ofirzaf/gists{/gist_id}", "starred_url": "https://api.github.com/users/ofirzaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ofirzaf/subscriptions", "organizations_url": "https://api.github.com/users/ofirzaf/orgs", "repos_url": "https://api.github.com/users/ofirzaf/repos", "events_url": "https://api.github.com/users/ofirzaf/events{/privacy}", "received_events_url": "https://api.github.com/users/ofirzaf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe @SaulLu do you have an idea here? :-)", "Hi @patrickvonplaten @SaulLu, Can I help in resolving this issue?\r\nBut as I'm quite new to this so I will need some guidance in where should I start from.", "@ofirzaf, thanks for the detailed issue!\r\n\r\n**About the missing id corresponding to the unknown token for the fast tokenizer** \r\n\r\nYes, I can see why the fast tokenizer does not take into account the unknow token. \r\n\r\nSince the original RoBERTa tokenizer does not need this token (since it is byte-based and contains the exhaustive list in its vocabulary), when converting from the slow to the fast version of the tokenizer the information of the unknown token you added in the kwargs is not passed (precisely, the information is \"lost\" in [this method](https://github.com/huggingface/transformers/blob/v4.9.2/src/transformers/convert_slow_tokenizer.py#L216)). \r\n\r\n@ofirzaf , in the short term, if you need to initialize the fast tokenizer from the slow version files, you can do so: \r\n```python\r\nspecial_tokens_map = {\"unk_token\": \"<unk>\"}\r\n\r\nkwargs = {}\r\nkwargs.update(special_tokens_map)\r\nkwargs.update(do_lower_case=False)\r\nfast_tok = RobertaTokenizerFast.from_pretrained(tmpdirname, use_fast=True, **kwargs)\r\nfast_tok.backend_tokenizer.model.unk_token = special_tokens_map[\"unk_token\"]\r\nfast_tok.save_pretrained(\"local_tok\")\r\n\r\nfast_tok = RobertaTokenizerFast.from_pretrained(\"local_tok\", use_fast=True)\r\n```\r\n\r\n@sourabh112, It's very kind of you to offer your help. However, in the immediate future, I'm still not sure we want to change this behaviour without ensuring that there will be no adverse effects (the GPT2 tokenizer being reused in several places) knowing that these are initially tokenizers that should not need the unknown token.\r\n\r\n@patrickvonplaten, @LysandreJik and @sgugger do you have an opinion on this? Could we at least add a short-term warning? \r\n\r\n**About the `return_special_tokens_mask`** \r\n\r\nIt seems to me that this behavior is common to all tokenizers. Special tokens are tokens added to transform the tokenized text into a format compatible with the input expected by the model. The unknown token is different in that it is a necessary token for the tokenisation algorithm.\r\n", "@SaulLu Thanks for the reply.\r\n\r\nI agree that this issue shouldn't occure when using the tokenizer for the reasons you mentioned.\r\nI think, however, that the test should be fixed to reflect that. As I mentioned, the example I brought here is straight from the library's built in tests.\r\n\r\nRegarding the special tokens mask, if the `<unk>` token shouldn't be considered as a special token, shouldn't it be removed from the special tokens list of the tokenizer?\r\n\r\nIn the OP I mentioned another issue I wanted to fix, can you take a look at the issue and the proposed fix and tell me if this is something you think is worth fixing/contributing or the team doens't think this is an issue?\r\n\r\nThanks", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-4.15.0-122-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.19 - JaxLib version: 0.1.70 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help - tokenizers: @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information I was testing a fix for #9933 and when debugging the test for RoBERTa tokenizers I found that fast and slow return different results for the test and neither of the results are what I expected. The code snipet below is a copy of `tests.test_tokenization_roberta.RobertaTokenizationTest.test_special_tokens_mask`. As you can see from the output, slow tokenizer outputs `<unk>` ids eventhough the flag state to not return special tokens. Also, the special tokens mask returned doesn't take into account the `<unk>` tokens as if they weren't special tokens. On the other hand, the fast tokenizer doesn't output those tokens since `<unk>` is defined as a special token, as expected. However, when adding special tokens the `<unk>` tokens are not added at all. So I am wondering which behaviour is correct since niether seems to be 100%? The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: run the following code ```python """Based on test from test_tokenization_roberta.RobertaTokenizationTest.test_special_tokens_mask """ import os import json import tempfile import shutil from transformers import RobertaTokenizer, RobertaTokenizerFast from transformers.models.roberta.tokenization_roberta import VOCAB_FILES_NAMES # Setup vocab = [ "l", "o", "w", "e", "r", "s", "t", "i", "d", "n", "\u0120", "\u0120l", "\u0120n", "\u0120lo", "\u0120low", "er", "\u0120lowest", "\u0120newer", "\u0120wider", "<unk>", ] vocab_tokens = dict(zip(vocab, range(len(vocab)))) merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""] special_tokens_map = {"unk_token": "<unk>"} tmpdirname = tempfile.mkdtemp() vocab_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES["vocab_file"]) merges_file = os.path.join(tmpdirname, VOCAB_FILES_NAMES["merges_file"]) with open(vocab_file, "w", encoding="utf-8") as fp: fp.write(json.dumps(vocab_tokens) + "\n") with open(merges_file, "w", encoding="utf-8") as fp: fp.write("\n".join(merges)) kwargs = {} kwargs.update(special_tokens_map) kwargs.update(do_lower_case=False) slow_tok = RobertaTokenizer.from_pretrained(tmpdirname, use_fast=False, **kwargs) fast_tok = RobertaTokenizerFast.from_pretrained(tmpdirname, use_fast=True, **kwargs) sequence = "Encode this." print("Slow tokenizer:") print(f" Encoding: {slow_tok.encode(sequence, add_special_tokens=False)}") encoded = slow_tok.encode_plus(sequence, add_special_tokens=True, return_special_tokens_mask=True) print(f" Encoding with special: {encoded['input_ids']}") print(f" Special tokens mask: {encoded['special_tokens_mask']}") print("Fast tokenizer") print(f" Encoding: {fast_tok.encode(sequence, add_special_tokens=False)}") encoded = fast_tok.encode_plus(sequence, add_special_tokens=True, return_special_tokens_mask=True) print(f" Encoding with special: {encoded['input_ids']}") print(f" Special tokens mask: {encoded['special_tokens_mask']}") shutil.rmtree(tmpdirname) ``` The output I get from running this code: ``` file /tmp/tmpros2tpnz/config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. file /tmp/tmpros2tpnz/config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. file /tmp/tmpros2tpnz/config.json not found Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. Slow tokenizer: Encoding: [19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19] Encoding with special: [20, 19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19, 21] Special tokens mask: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] Fast tokenizer Encoding: [9, 1, 8, 3, 10, 6, 7, 5] Encoding with special: [20, 9, 1, 8, 3, 10, 6, 7, 5, 21] Special tokens mask: [1, 0, 0, 0, 0, 0, 0, 0, 0, 1] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Both tokenizers should output the same results: ``` Slow tokenizer: Encoding: [9, 1, 8, 3, 10, 6, 7, 5] Encoding with special: [20, 19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19, 21] Special tokens mask: [1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1] Fast tokenizer Encoding: [9, 1, 8, 3, 10, 6, 7, 5] Encoding with special: [20, 19, 9, 19, 1, 8, 3, 10, 6, 19, 7, 5, 19, 21] Special tokens mask: [1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1] ``` <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13192/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13192/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13191/comments
https://api.github.com/repos/huggingface/transformers/issues/13191/events
https://github.com/huggingface/transformers/issues/13191
975,174,581
MDU6SXNzdWU5NzUxNzQ1ODE=
13,191
Why repeat initializing loss modules in every forward?
{ "login": "zhiqiangdon", "id": 25371851, "node_id": "MDQ6VXNlcjI1MzcxODUx", "avatar_url": "https://avatars.githubusercontent.com/u/25371851?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhiqiangdon", "html_url": "https://github.com/zhiqiangdon", "followers_url": "https://api.github.com/users/zhiqiangdon/followers", "following_url": "https://api.github.com/users/zhiqiangdon/following{/other_user}", "gists_url": "https://api.github.com/users/zhiqiangdon/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhiqiangdon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhiqiangdon/subscriptions", "organizations_url": "https://api.github.com/users/zhiqiangdon/orgs", "repos_url": "https://api.github.com/users/zhiqiangdon/repos", "events_url": "https://api.github.com/users/zhiqiangdon/events{/privacy}", "received_events_url": "https://api.github.com/users/zhiqiangdon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The module `nn.CrossEntropyLoss` does not contain any weights so here we don't allocate any memory really when initializing the module at every forward. However if we would do this with `nn.Linear(...)` at every forward step it should be considered bad practice IMO since in this case we would allocate a big tensor (the weights of the linear layer) at every forward step.", "@patrickvonplaten , thanks for the answer. Pytorch has `functional.cross_entropy()`, which should be more suitable to use in forward. Although `nn.CrossEntropyLoss ` doesn't cause overhead, it doesn't follow Pytorch's convention that initializes a module in init and use it in forward. It confused me a little lit when reading the code. I was wondering any specific reason of using a `nn` module instead of a `functional` method in forward.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
Hello, I find that your implementations usually initialize loss modules, e.g., nn.CrossEntropyLoss, inside models' forward functions. I am curious about the reason of doing this. Generally, in Pytorch, a module should be initialized in __init__ and used in forward. Does the frequent initialization cause overhead and memory issues? Thanks,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13191/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13190/comments
https://api.github.com/repos/huggingface/transformers/issues/13190/events
https://github.com/huggingface/transformers/issues/13190
974,943,723
MDU6SXNzdWU5NzQ5NDM3MjM=
13,190
[Documentation] PLEASE HELP with very simple tasks!!!
{ "login": "asigalov61", "id": 56325539, "node_id": "MDQ6VXNlcjU2MzI1NTM5", "avatar_url": "https://avatars.githubusercontent.com/u/56325539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asigalov61", "html_url": "https://github.com/asigalov61", "followers_url": "https://api.github.com/users/asigalov61/followers", "following_url": "https://api.github.com/users/asigalov61/following{/other_user}", "gists_url": "https://api.github.com/users/asigalov61/gists{/gist_id}", "starred_url": "https://api.github.com/users/asigalov61/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asigalov61/subscriptions", "organizations_url": "https://api.github.com/users/asigalov61/orgs", "repos_url": "https://api.github.com/users/asigalov61/repos", "events_url": "https://api.github.com/users/asigalov61/events{/privacy}", "received_events_url": "https://api.github.com/users/asigalov61/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey Alex,\r\n\r\nI'm sorry that you didn't manage to find what you were looking for in the docs.\r\n\r\n1) Please not that \"bert-like\" models should not be used for sequence generation, but rather for sequence classification. The model \"`allenai/scibert_scivocab_uncased`\" is essentially a `bert-base-uncased` model you can checkout [here](https://huggingface.co/bert-base-uncased) where you can see some examples. Only seq2seq and lm-head models should make use of `generate`. This doc might help you for more explanation: https://huggingface.co/transformers/model_summary.html\r\n\r\n2) Re: Examples:\r\n- We try to have at least one example for every model architecture which you can find under the model pages in the docs, *e.g.* here for BERT: https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification\r\n- Also we have a couple of \"Quickstart\" sections on how to use our models here: https://huggingface.co/transformers/training.html which you can open as a Google Colab (there is a button on the top right)\r\n- There is also the Hugging Face course which dives a bit deeper into how everything works here: https://huggingface.co/course/chapter1", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
Hello hugginface team,. First of all, I wanted to report a bug I am getting in Google Colab. When I do: from transformers import AutoTokenizer, AutoModel ``` tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased') input_ids = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').input_ids model = AutoModel.from_pretrained('allenai/scibert_scivocab_uncased') model.eval() model.generate(input_ids) ``` I get: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-10-85c40f45fd28> in <module>() 1 input_ids = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').input_ids ----> 2 model.generate(input_ids=input_ids) 2 frames /usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 26 def decorate_context(*args, **kwargs): 27 with self.__class__(): ---> 28 return func(*args, **kwargs) 29 return cast(F, decorate_context) 30 /usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs) 996 return_dict_in_generate=return_dict_in_generate, 997 synced_gpus=synced_gpus, --> 998 **model_kwargs, 999 ) 1000 /usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs) 1301 continue # don't waste resources running the code we don't need 1302 -> 1303 next_token_logits = outputs.logits[:, -1, :] 1304 1305 # Store scores, attentions and hidden_states when required AttributeError: 'BaseModelOutputWithPoolingAndCrossAttentions' object has no attribute 'logits' ``` =========================================================================== Secondly, I am reporting to you another very serious issue IMHO that needs to be addressed ASAP!!! THERE ARE NO CLEAR AND SIMPLE EXAMPLES ON HOW TO USE HUGGINFACE models/software ANYWHERE !!!! WTF??? I do not mean to be rude but this is ridiculous and insulting. I have wasted hours going through your docs, w/o ANY success. Everything is either absolutely unclear or does not work properly. WHAT I PERSONALLY NEED: GOOGLE COLABS that show in a few lines of code how to train Huggingface models from scratch (NOT A SINGLE EXAMPLE ANYWHERE). And also most of your examples and colabs are either incomplete/not working or very specific so they can't be used elsewhere!!! ==================================================================== I would really appreciate it if you would address all of these issues ASAP because otherwise, I will not be able to use Huggingface transformers nor would I recommend it to anyone. Thank you very much for listening to my criticism. I do not mean to chastise, only to help make huggingface better! :) Alex.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13190/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13189/comments
https://api.github.com/repos/huggingface/transformers/issues/13189/events
https://github.com/huggingface/transformers/issues/13189
974,929,597
MDU6SXNzdWU5NzQ5Mjk1OTc=
13,189
Question about xla_spawn.py script and torch_xla.distributed.xla_multiprocessing
{ "login": "quantitative-technologies", "id": 29150871, "node_id": "MDQ6VXNlcjI5MTUwODcx", "avatar_url": "https://avatars.githubusercontent.com/u/29150871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/quantitative-technologies", "html_url": "https://github.com/quantitative-technologies", "followers_url": "https://api.github.com/users/quantitative-technologies/followers", "following_url": "https://api.github.com/users/quantitative-technologies/following{/other_user}", "gists_url": "https://api.github.com/users/quantitative-technologies/gists{/gist_id}", "starred_url": "https://api.github.com/users/quantitative-technologies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/quantitative-technologies/subscriptions", "organizations_url": "https://api.github.com/users/quantitative-technologies/orgs", "repos_url": "https://api.github.com/users/quantitative-technologies/repos", "events_url": "https://api.github.com/users/quantitative-technologies/events{/privacy}", "received_events_url": "https://api.github.com/users/quantitative-technologies/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm unaware of what might be causing this - maybe @sgugger has an answer for you!", "I have confirmed that the issue is not with the `start_method=\"fork\"`. \r\n\r\nYour `run_glue.py` script uses `HfArgumentParser` to set up the `training_args` parameter of the `Trainer`. If I instead set the `training_args` manually, I get the same errors on the TPUs as in the colab notebook, even though the script uses `start_method=\"spawn\"`. \r\n\r\nI haven't attempted to figure out exactly what is being set in `training_args` to allow the large BERTSs to be trained on TPUs. ", "The training arguments handle the initialization logic for the distributed setup, so they should only be initiliazed inside the `_mp_fn` you launch in parallel.\r\n\r\nTo launch your training from a colab, you should check the `notebook_launcher` from Accelerate.", "Yes, right. I found out about the training arguments after spending some time experimenting. \r\n\r\nI am using a python script, which is called from colab via the shell, and everything is working fine. \r\n\r\nI did try `accelerate` once before but could not get it working, but I create an new issue if I go back to that route and still have problems." ]
1,629
1,630
1,630
CONTRIBUTOR
null
I am able to fine-tune a large BERT model using your `examples/xla_spawn.py` script, by calling it from a colab notebook shell. However, when I try essentially the same thing in a colab notebook, putting the code in a cell and calling torch_xla.distributed.xla_multiprocessing.spawn((_mp_fn, start_method="fork") also in a colab cell, I get errors that the TPUs have run out of memory, when trying to train. Is this because the start_method `fork` is less memory efficient? Or should this also work with "native" colab code? I can give a MRE colab if it's helpful.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13189/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13188/comments
https://api.github.com/repos/huggingface/transformers/issues/13188/events
https://github.com/huggingface/transformers/pull/13188
974,895,263
MDExOlB1bGxSZXF1ZXN0NzE2MTMxNzA1
13,188
Fall back to `observed_batch_size` when the `dataloader` does not know the `batch_size`.
{ "login": "mbforbes", "id": 1170062, "node_id": "MDQ6VXNlcjExNzAwNjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mbforbes", "html_url": "https://github.com/mbforbes", "followers_url": "https://api.github.com/users/mbforbes/followers", "following_url": "https://api.github.com/users/mbforbes/following{/other_user}", "gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}", "starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions", "organizations_url": "https://api.github.com/users/mbforbes/orgs", "repos_url": "https://api.github.com/users/mbforbes/repos", "events_url": "https://api.github.com/users/mbforbes/events{/privacy}", "received_events_url": "https://api.github.com/users/mbforbes/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Apologies for the ping @sgugger, but might you be able to take a look at this? tl;dr with this 2-line change, users can now provide batch samplers without evaluation crashing πŸ₯³ ", "Thanks, Sylvain! Gaah, sorry, your inbox must have been crazy by the time you came back πŸ˜… β€” I hope you had a nice break!\r\n\r\n", "It was great, thanks for asking!" ]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? Motivated by #12995, this adds support for users to provide a `batch_sampler` to the DataLoader instead of a (single index) `sampler`. (The [pytorch docs](https://pytorch.org/docs/stable/data.html) has more info on these two sampler types.) When we provide a `batch_sampler`, a DataLoader doesn't know the batch size, so it's set to `None`. Currently, the Trainer retrieves the batch size from the data loader: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L2172 ... leading to a crash a few lines later when it tries to use it: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L2217 ```txt TypeError: repeat(): argument 'repeats' (position 1) must be tuple of ints, not NoneType ``` Fortunately, the observed batch size is calculated between those two spots, so this change simply uses it instead if the batch size wasn't found on the data loader. I added the `None` check just to ensure this does not change existing behavior, though I would imagine it would not even without the check. _Re: testing: I was not sure how much code you want surrounding this fix / added support, as I don't think Transformers includes any batch samplers itself yet, so I didn't include a test. Let me know otherwise and I can take a stab at it!_ ## Who can review? I suggest @sgugger due to issue context and Trainer :-)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13188/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13188", "html_url": "https://github.com/huggingface/transformers/pull/13188", "diff_url": "https://github.com/huggingface/transformers/pull/13188.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13188.patch", "merged_at": 1630336355000 }
https://api.github.com/repos/huggingface/transformers/issues/13187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13187/comments
https://api.github.com/repos/huggingface/transformers/issues/13187/events
https://github.com/huggingface/transformers/issues/13187
974,818,222
MDU6SXNzdWU5NzQ4MTgyMjI=
13,187
Unable to load model by ignoring size mismatch; TypeError: __init__() got an unexpected keyword argument 'ignore_mismatched_sizes'
{ "login": "swapnil3597", "id": 30098342, "node_id": "MDQ6VXNlcjMwMDk4MzQy", "avatar_url": "https://avatars.githubusercontent.com/u/30098342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swapnil3597", "html_url": "https://github.com/swapnil3597", "followers_url": "https://api.github.com/users/swapnil3597/followers", "following_url": "https://api.github.com/users/swapnil3597/following{/other_user}", "gists_url": "https://api.github.com/users/swapnil3597/gists{/gist_id}", "starred_url": "https://api.github.com/users/swapnil3597/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swapnil3597/subscriptions", "organizations_url": "https://api.github.com/users/swapnil3597/orgs", "repos_url": "https://api.github.com/users/swapnil3597/repos", "events_url": "https://api.github.com/users/swapnil3597/events{/privacy}", "received_events_url": "https://api.github.com/users/swapnil3597/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "HI, what version of transformers are you using? As `ignore_mismatched_sizes` option was newly added at v4.9.0, you should probably upgrade to v4.9.0+ in order to use it.\r\n\r\nI tested the following snippet on Colab, it worked as expected. The transformers version I used is v4.9.0:\r\n```\r\nfrom transformers import BertTokenizer, BertForSequenceClassification\r\n\r\npretrained_path = \"./test_path\"\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\nmodel.save_pretrained(pretrained_path)\r\nmodel = BertForSequenceClassification.from_pretrained( \r\n pretrained_path,\r\n num_labels = 27, \r\n ignore_mismatched_sizes=True)\r\n```\r\n\r\nOutput:\r\n\r\n```\r\nSome weights of BertForSequenceClassification were not initialized from the model checkpoint at ./test_path and are newly initialized because the shapes did not match:\r\n- classifier.weight: found shape torch.Size([2, 768]) in the checkpoint and torch.Size([27, 768]) in the model instantiated\r\n- classifier.bias: found shape torch.Size([2]) in the checkpoint and torch.Size([27]) in the model instantiated\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
I want to save the pre-trained model at a local path and later again load it using `from_pretrained` method. I'm doing this as I want to use hugging face on server with no internet. I used following script to save the model: ```python3 from transformers import BertTokenizer, BertForSequenceClassification pretrained_path = "pretrained_models/bert_base_uncased_pretrained/" model = BertForSequenceClassification.from_pretrained('bert-base-uncased') model.save_pretrained(pretrained_path) ``` So I tried 2 approaches to load model from local path, but both aren't working. ### Approach 1: Code Snippet 1: ``` model = BertForSequenceClassification.from_pretrained( pretrained_path, num_labels = 27) ``` Error 1: ```bash Traceback (most recent call last): File "<stdin>", line 5, in <module> File "/path/lib/python3.6/site-packages/transformers/models/auto/auto_factory.py", line 395, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/path/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1220, in from_pretrained model, state_dict, pretrained_model_name_or_path, _fast_init=_fast_init File "/path/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1360, in _load_state_dict_into_model raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([27, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([27]). ``` ---- ### Approach 2: Code Snippet 2: ```python3 model = BertForSequenceClassification.from_pretrained( pretrained_path, num_labels = 27, ignore_mismatched_sizes=True) ``` Error 2: ```bash Traceback (most recent call last): File "<stdin>", line 4, in <module> File "/path/python3.6/site-packages/transformers/modeling_utils.py", line 1179, in from_pretrained model = cls(config, *model_args, **model_kwargs) TypeError: __init__() got an unexpected keyword argument 'ignore_mismatched_sizes' ``` Kindly specify the way to load model with size mismatch, or any other way to save and load model from local machine every time with different number of classes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13187/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13187/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13186/comments
https://api.github.com/repos/huggingface/transformers/issues/13186/events
https://github.com/huggingface/transformers/pull/13186
974,771,111
MDExOlB1bGxSZXF1ZXN0NzE2MDI5MzM1
13,186
Add SpeechEncoderDecoder & Speech2Text2
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten I've updated the tarball with the fastBPE codes file (`bpe.10k`). Please re-download and let me know if you have questions :)", "-Hi @patrickvonplaten , \r\nI was trying to try https://huggingface.co/facebook/s2t-wav2vec2-large-en-tr however I'm getting an error when I'm trying to implement the model. \r\nthere is no error message or stack trace that is available so I can share it. \r\nI also tried to run it as a python script but it did work too.\r\n\r\n```\r\nfrom transformers import SpeechEncoderDecoderConfig\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"main.py\", line 1, in <module>\r\n from transformers import SpeechEncoderDecoderConfig\r\nImportError: cannot import name 'SpeechEncoderDecoderConfig' from 'transformers' (/venv/lib/python3.7/site-packages/transformers/__init__.py)\r\n```\r\n\r\nHowever, Pycharm gives me an error that this package is not available.\r\n![image](https://user-images.githubusercontent.com/9295206/133619519-a493069f-bec8-4554-be8c-c1e181b62e04.png)\r\n\r\n transformers `__version__ = \"4.10.2\"`\r\npython `3.7.4`\r\n" ]
1,629
1,631
1,630
MEMBER
null
This PR adds Facebook's new Speech Translation models - see [paper here](https://arxiv.org/pdf/2104.06678.pdf) that are based on a pretrained Wav2Vec2 and achieve SOTA on CoVoST-2 @kahne . Since those checkpoints are based on `Wav2Vec2`, we can use this PR to create the `SpeechEncoderDecoder` class which essentially allows one to use any pretrained speech encoder with any text decoder model. The Speech Translation models are converted to fit the format of `SpeechEncoderDecoderModel` and should be used as follows: ```python import torch from transformers import Speech2Text2Processor, SpeechEncoderDecoder from datasets import load_dataset import soundfile as sf model = SpeechEncoderDecoder.from_pretrained("facebook/s2t-wav2vec2-large-en-de") processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt") generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"]) transcription = processor.batch_decode(generated_ids) ``` Since the decoder and tokenizer is different from the previous `Speech2Text` model: https://github.com/huggingface/transformers/tree/master/src/transformers/models/speech_to_text a new model folder speech_to_text_2 is created. Currently, the tokenizer only supports decoding and not encoding (which is only needed for training) because the tokenizer merges files are not published (cc @kahne) The model can only be used in combination with `SpeechEncoderDecoderModel`. The `SpeechEncoderDecoderModel` is also fully added in this PR and tests for `Wav2Vec2Bert`, `Speech2TextBert`, `Wav2Vec2SpeechToText2` are added. The ASR pipeline is slighly adapted to make it work with `SpeechEncoderDecoder`. @LysandreJik @anton-l - it would be great if you could take a look at the general model architecture @Narsil - it would be very nice if you could check the changes to the pipeline All models are uploaded and can be accessed here: https://huggingface.co/models?other=speech2text2 ## Future TODO: - Currently the tokenizer support only decoding, not training. If the community is interested in getting tokenizer training support for `Speech2Text2` in the future, please ping @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13186/reactions", "total_count": 6, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13186/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13186", "html_url": "https://github.com/huggingface/transformers/pull/13186", "diff_url": "https://github.com/huggingface/transformers/pull/13186.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13186.patch", "merged_at": 1630496011000 }
https://api.github.com/repos/huggingface/transformers/issues/13185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13185/comments
https://api.github.com/repos/huggingface/transformers/issues/13185/events
https://github.com/huggingface/transformers/pull/13185
974,770,438
MDExOlB1bGxSZXF1ZXN0NzE2MDI4NzQw
13,185
Adding CvT Model : Convolution based Image Transformers
{ "login": "AnugunjNaman", "id": 42839570, "node_id": "MDQ6VXNlcjQyODM5NTcw", "avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnugunjNaman", "html_url": "https://github.com/AnugunjNaman", "followers_url": "https://api.github.com/users/AnugunjNaman/followers", "following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}", "gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions", "organizations_url": "https://api.github.com/users/AnugunjNaman/orgs", "repos_url": "https://api.github.com/users/AnugunjNaman/repos", "events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}", "received_events_url": "https://api.github.com/users/AnugunjNaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge. Currently I have loaded model using another script for testing. The model works fine on samples images I have tested. But I need help at few steps:\r\n1 . Adaption to base classes, especially for pretrained models\r\n2. How to upload to hugging face archive\r\n3. I also need to understand few thing in feature extractor part too.\r\n\r\nSo, yeah further guidance needed from here", "```python\r\nfrom transformers import CvTConfig, CvTModel, BeitFeatureExtractor, BeitForImageClassification\r\nimport torch\r\nfrom PIL import Image\r\nimport requests\r\n\r\n\r\nurl = 'http://images.cocodataset.org/val2017/000000039769.jpg'\r\n\r\n# tabby cat image\r\n\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nfeature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224-pt22k')\r\n\r\nmodel1 = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-224')\r\ninputs = feature_extractor(images=image, return_tensors=\"pt\")\r\nout1 = model1(**inputs)\r\nlogit1 = out1.logits\r\nprint(model1.config.id2label[logit1.argmax(-1).item()])\r\n\r\n\r\nconfig = CvTConfig()\r\nmodel2 = CvTModel(config)\r\nmodel_file = '/home/naman/CvT/models/CvT-21-384x384-IN-1k.pth'\r\n\r\nstate_dict = torch.load(model_file, map_location=\"cpu\")\r\nmodel2.load_state_dict(state_dict, strict=False)\r\n\r\nlogit2 = model2(inputs['pixel_values'])\r\npred = logit2.argmax(-1).item()\r\nprint(model1.config.id2label[pred])\r\n````\r\n\r\nYou can test it here. I have used BeITFeatureExtractor which is similar to CvT I think.", "Hi,\r\n\r\nThanks for your PR. \r\n\r\n> Adaption to base classes, especially for pretrained models\r\n\r\nI've seen that currently, the modeling file is a copy from the original repository. However, to add CvT to this library, we need the follow the same implementation as other models like ViT and BEiT (i.e. the HuggingFace API). Therefore, the `Block` class for example (which is used in the original timm-based implementation) will have to be translated to a `CvtLayer` class (similar to `ViTLayer`). I also opt to use `CvtModel` instead of `CvTModel`, as it will be more difficult for people to type ;) we should have done this for ViT too actually, and we've done it for BEiT now (`BeitModel` instead of `BEiTModel`).\r\n\r\nLooking at the modeling file, the main difference between ViT and CvT seems to happen in the attention layer. So probably, you can just copy everything from `modeling_vit.py`, rename every Vit from that file to Cvt, and then update the attention layer.\r\n\r\nThe code example looks great already! Does it predict a reasonable class (like cat or remote)?\r\n\r\nDo you have an email address? Then I set up a Slack channel to further guide you.", "@NielsRogge Yeah prediction is good. I tested it on a small set of 20 images it okay there. Yeah I will need little guidance. πŸ˜…. Yeah email is [email protected]\r\n\r\nIt's midnight here. I will work on it tomorrow. ", "@NielsRogge I have done code as hugging face api. I have problems in tests. I need help there.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@AnugunjNaman Do you have any updates? I'd like to help contribute if need be.", "Yup, sorry yeah you can help. I got busy in job search since it was my final year. We can contact and continue from there. Can you write me your email? We can set up a time to discuss it.", "Hey @AnugunjNaman, my email is [email protected]. Happy to discuss more!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,646
1,646
CONTRIBUTOR
null
# What does this PR do? Adding CvT Model : Convolution based Image Transformers A new architecture, named Convolutional vision Transformers (CvT), that improves Vision Transformers (ViT) in performance and efficiently by introducing convolutions into ViT to yield the best of both designes. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (e.g. shift, scale, and distortion invariance) while maintaining the merits of Transformers (e.g. dynamic attention, global context, and better generalization). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [NO ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [YES ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [https://github.com/huggingface/transformers/issues/13158 ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/13158 to it if that's the case. - [No ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ No] Did you write any new necessary tests? ## Who can review? @NielsRogge I have few queries and doubts and need help for further addition of pretrained models and adaption to respective base classes Models: -cvt @@NielsRogge Library:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13185/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13185/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13185", "html_url": "https://github.com/huggingface/transformers/pull/13185", "diff_url": "https://github.com/huggingface/transformers/pull/13185.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13185.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13184/comments
https://api.github.com/repos/huggingface/transformers/issues/13184/events
https://github.com/huggingface/transformers/pull/13184
974,767,681
MDExOlB1bGxSZXF1ZXN0NzE2MDI2MzY5
13,184
Custom errors and BatchSizeError
{ "login": "AmbiTyga", "id": 39136064, "node_id": "MDQ6VXNlcjM5MTM2MDY0", "avatar_url": "https://avatars.githubusercontent.com/u/39136064?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmbiTyga", "html_url": "https://github.com/AmbiTyga", "followers_url": "https://api.github.com/users/AmbiTyga/followers", "following_url": "https://api.github.com/users/AmbiTyga/following{/other_user}", "gists_url": "https://api.github.com/users/AmbiTyga/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmbiTyga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmbiTyga/subscriptions", "organizations_url": "https://api.github.com/users/AmbiTyga/orgs", "repos_url": "https://api.github.com/users/AmbiTyga/repos", "events_url": "https://api.github.com/users/AmbiTyga/events{/privacy}", "received_events_url": "https://api.github.com/users/AmbiTyga/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just adding a best practice note: You want to inherit from `Exception` and not `BaseException`.\r\n\r\nhttps://docs.python.org/3/library/exceptions.html\r\n> The built-in exception classes can be subclassed to define new exceptions; programmers are encouraged to derive new exceptions from the Exception class or one of its subclasses, and not from BaseException. More information on defining exceptions is available in the Python Tutorial under User-defined Exceptions.\r\n\r\nIMO, I'd say that this could be a [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError)\r\n\r\nBut that's just my opinion. If the core team has an opinion on the matter, listen to them :)", "> Just adding a best practice note: You want to inherit from `Exception` and not `BaseException`.\r\n> \r\n> https://docs.python.org/3/library/exceptions.html\r\n> \r\n> > The built-in exception classes can be subclassed to define new exceptions; programmers are encouraged to derive new exceptions from the Exception class or one of its subclasses, and not from BaseException. More information on defining exceptions is available in the Python Tutorial under User-defined Exceptions.\r\n> \r\n> IMO, I'd say that this could be a [ValueError](https://docs.python.org/3/library/exceptions.html#ValueError)\r\n> \r\n> But that's just my opinion. If the core team has an opinion on the matter, listen to them :)\r\n\r\nThe reason behind using Custom Exception is to help users know what's the error from their side is, BatchSizeError sounds more clear and directly addresses that the problem is with the batch size.", "You should still inherit from `Exception` and not `BaseException`, per the official Python docs\r\n\r\nhttps://docs.python.org/3/library/exceptions.html#BaseException\r\n>The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception). If str() is called on an instance of this class, the representation of the argument(s) to the instance are returned, or the empty string when there were no arguments.\r\n\r\nInheriting from BaseException can cause problems with having KeyboardInterrupt exceptions getting clobbered and having programs hang.", "> You should still inherit from `Exception` and not `BaseException`, per the official Python docs\r\n> \r\n> https://docs.python.org/3/library/exceptions.html#BaseException\r\n> \r\n> > The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception). If str() is called on an instance of this class, the representation of the argument(s) to the instance are returned, or the empty string when there were no arguments.\r\n> \r\n> Inheriting from BaseException can cause problems with having KeyboardInterrupt exceptions getting clobbered and having programs hang.\r\n\r\nYes, I have changed that. 🀝\r\n`BaseException` -> `Exception`\r\n", "Ok, I am using `ValueError` and made all other changes as well. Please go through it and let me know if there's anything else that is needed. @LysandreJik ", "Thanks, @LysandreJik. :)" ]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? The PR addresses the issue #12789. I have added a file `custom_exceptions.py` holding a class instance to be used by `modeling_gpt2.py` to replace the assert based error with suitable exception based error. We can add other errors as well to address the type of error happening. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - @willfrey - @sgugger - Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13184", "html_url": "https://github.com/huggingface/transformers/pull/13184", "diff_url": "https://github.com/huggingface/transformers/pull/13184.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13184.patch", "merged_at": 1629810061000 }
https://api.github.com/repos/huggingface/transformers/issues/13183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13183/comments
https://api.github.com/repos/huggingface/transformers/issues/13183/events
https://github.com/huggingface/transformers/pull/13183
974,651,760
MDExOlB1bGxSZXF1ZXN0NzE1OTI0OTAz
13,183
Fix LUKE tests
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? 3 tests defined in `test_tokenization_luke.py` were having a timeout because they were too slow: ``` FAILED tests/test_tokenization_luke.py::Luke::test_add_special_tokens FAILED tests/test_tokenization_luke.py::Luke::test_maximum_encoding_length_pair_input FAILED tests/test_tokenization_luke.py::Luke::test_maximum_encoding_length_single_input ``` This was caused by the `get_clean_sequence` method (used in each of those methods), which is defined in `test_tokenization_common.py` and was inherited by default. By overwriting this method with a much simpler one, the tests are significantly faster. No bottleneck anymore.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13183/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13183", "html_url": "https://github.com/huggingface/transformers/pull/13183", "diff_url": "https://github.com/huggingface/transformers/pull/13183.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13183.patch", "merged_at": 1629704496000 }
https://api.github.com/repos/huggingface/transformers/issues/13182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13182/comments
https://api.github.com/repos/huggingface/transformers/issues/13182/events
https://github.com/huggingface/transformers/issues/13182
974,604,926
MDU6SXNzdWU5NzQ2MDQ5MjY=
13,182
T5TokenizerFast not reversible when text contains special tokens
{ "login": "zorikg", "id": 37661625, "node_id": "MDQ6VXNlcjM3NjYxNjI1", "avatar_url": "https://avatars.githubusercontent.com/u/37661625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zorikg", "html_url": "https://github.com/zorikg", "followers_url": "https://api.github.com/users/zorikg/followers", "following_url": "https://api.github.com/users/zorikg/following{/other_user}", "gists_url": "https://api.github.com/users/zorikg/gists{/gist_id}", "starred_url": "https://api.github.com/users/zorikg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zorikg/subscriptions", "organizations_url": "https://api.github.com/users/zorikg/orgs", "repos_url": "https://api.github.com/users/zorikg/repos", "events_url": "https://api.github.com/users/zorikg/events{/privacy}", "received_events_url": "https://api.github.com/users/zorikg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Heey @zorikg,\r\n\r\nIs this a problem in your case? Some special tokens always strip away the space on the left so that we can assure the same expected behavior for the two use cases. *e.g.* when thinking about \"<mask>\" prediction, some users process the text in the form:\r\n\r\n\"`The capital of <mask> is Paris`\" while others use \"`The capital of<mask> is Paris`\"\r\n\r\n=> we want both cases to yield the correct <mask> token = France so that for some special tokens we think it's better to just always strip away the white space on the left (could be on the right as well)\r\n\r\n", "Hey @patrickvonplaten,\r\n\r\nIn your example both cases contain the same string \"The capital of is Paris\" (typo?) and I am not sure what is the difference between them, could you clarify?\r\n\r\nIt seems that you don't only strip away white space form the left, you also add a white space to the right.\r\n\r\nThis is indeed a problem in my use case. I work under the assumption that sentence piece tokenizer should be fully reversible, which meant that `detokenize(tokenize(x)) == x`. \r\n\r\nIn my scenario I look for answer spans inside a paragraph and checking if it contains certain subtext. I have many bugs around this issue when text contains unknown tokens. For example:\r\n```\r\ntokenizer = T5TokenizerFast.from_pretrained('t5-base')\r\ns = 'maternal grandfather Κ»Aikanaka'\r\ns_encode_decode = tokenizer.decode(tokenizer(s)['input_ids'])\r\nprint(s_encode_decode)\r\ns_encode_decode_2 = tokenizer.decode(tokenizer(s_encode_decode)['input_ids'])\r\nprint(s_encode_decode_2)\r\n```\r\nThe first print is `maternal grandfather <unk>Aikanaka</s>` and the second is `maternal grandfather<unk> Aikanaka</s></s>`. \r\n\r\nDue to the fact that I may encode & decode the paragraph and the answer many times I had many cases where I thought that string did not contain certain substring but it actually did (because spaces around the <unk> token were mismatched). \r\n\r\nI do think that the contract should be that if I encode and decode I get the same result. If there are other use cases, I would consider supporting them with explicit argument, right now I feel that the API is a bit misleading and it actually took us a lot of time to figure out the reason for our bug. WDYT?\r\n\r\nThanks!", "Sorry @zorikg, I forgot to put the text in `code format`. Now my example above should make more sense...\r\n\r\nBut looking more into it, I think this looks like a bug to me...\r\n\r\nThe following should work IMO:\r\n\r\n```python\r\nfrom transformers import T5TokenizerFast, AddedToken\r\n\r\ntokenizer = T5TokenizerFast.from_pretrained('t5-base')\r\ntokenizer.unk_token = AddedToken(\"<unk>\", lstrip=False, rstrip=False)\r\n\r\ns = \"Hello <unk>there\"\r\n\r\ntokenizer.decode(tokenizer(s)['input_ids']) == s\r\n```\r\n\r\ncc'ing @LysandreJik @SaulLu - what do you think about this?", "Thanks for the answer @patrickvonplaten. \r\n\r\nUnfortunately I ran your code and `tokenizer.decode(tokenizer(s)['input_ids']) == s` returns `False` :(\r\n\r\nAlso it seems that the type of `tokenizer.unk_token` is `str` and not `AddedToken`.\r\n\r\nDo you have other workaround? Also I need this behavior to be consistent for all special tokens, including the `eos` token and all the additional tokens with special ids.", "Thank you for the issue @zorikg , \r\n\r\nIndeed, I share your point of view, this behaviour is surprising. \r\n\r\nAs a side note, because it could cause you some problems, tokenizers have not been designed with the idea of being reversible (the normalization operation can be non-reversible).\r\n\r\nAfter some research, I don't see a solution to achieve what you want. Maybe @n1t0 pr @Narsil has an idea on the `tokenizer_backend` side ?\r\n\r\nI think we should spend some time investigating how the `rstrip` and `lstrip` attributes are taken into account as the output does not seem natural to me. For example, on this example, I don't understand why 1) there is a not `\"▁\"` between `'▁grandfather'` and `'<unk>'` and 2) the `\"▁A\"` start with a `\"▁\"`. \r\n```python\r\ntokenizer = T5TokenizerFast.from_pretrained('t5-base', unk_token=AddedToken(\"<unk>\", lstrip=False, rstrip=False))\r\ns = 'maternal grandfather <unk>Aikanaka'\r\n\r\ns_encode_decode_tokens = tokenizer.convert_ids_to_tokens(tokenizer(s)['input_ids'])\r\n```\r\nOutput:\r\n```\r\n['▁maternal', '▁grandfather', '<unk>', '▁A', 'i', 'kan', 'aka', '</s>']\r\n```\r\n[Edit]: as it is written in the documentation: \r\n> lstrip (bool, defaults to False) – Defines whether this token should strip all potential whitespaces on its left side. If True, this token will greedily match any whitespace on its left. For example if we try to match the token [MASK] with lstrip=True, in the text \"I saw a [MASK]\", we would match on \" [MASK]\". (Note the space on the left).\r\n\r\n> rstrip (bool, defaults to False) – Defines whether this token should strip all potential whitespaces on its right side. If True, this token will greedily match any whitespace on its right. It works just like lstrip but on the right.\r\n", "Hmm, looked into it a bit, it's not `transformers` that's swallowing the extra space, it's T5 specific.\r\n\r\nTo do that, I checked with the slow tokenizer to get `tokenizers` out of the equation.\r\nIf you look at that, within `src/transformers/tokenization_utils_base.py::tokenize` you can check that everything gets split properly, BUT the t5 tokenization uses for `_tokenize` : `self.sp_model.encode(text, out_type=str)`\r\n\r\nAnd if you check, \r\n```python\r\n# notice extra space\r\nself._tokenize(\"maternal grandfather \", out_type=str)\r\n# ['▁maternal', '▁grandfather']\r\n# SPACE GONE\r\n```\r\n\r\nThe fact that `tokenizers` just replicates that behavior seems ok to me.\r\nAnyway, the culprit is NOT transformers/tokenizers but really `T5`. (or `spm`)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
## Environment info - `transformers` version: 4.8.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Who can help @patrickvonplaten, @patil-suraj ## Information I am using `T5TokenizerFast` initialized with `t5-base` tokenizer. ## To reproduce ``` from transformers import T5TokenizerFast def main(): tokenizer = T5TokenizerFast.from_pretrained('t5-base') """Note that all those strings will be decoded to the same string!""" s1 = "Hello <unk>world" s2 = "Hello<unk> world" s3 = "Hello <unk> world" s4 = "Hello<unk>world" for s in [s1, s2, s3, s4]: assert tokenizer.decode(tokenizer(s)['input_ids']) == 'Hello<unk> world</s>' if __name__ == "__main__": main() ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13182/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13181/comments
https://api.github.com/repos/huggingface/transformers/issues/13181/events
https://github.com/huggingface/transformers/pull/13181
974,589,705
MDExOlB1bGxSZXF1ZXN0NzE1ODcxMTIy
13,181
SageMaker: Fix sagemaker DDP & metric logs
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot @philschmid !" ]
1,629
1,630
1,629
MEMBER
null
# What does this PR do? This PR fixes the fix introduced in #12853. Since `sm_dist.Barrier()` is not available in `smd 1.0.0 2021-01-26` release and smd `1.2.0`, which are used for the DLC with PyTorch 1.7.1 & 1.8.1 (they maintained ones). Further is the fix `sm_dist.barrier()` also working with `smd 1.0.0 2020-12-06`. cc @sgugger Additionally, does this PR update: * the SageMaker test image_uris * instances type for distributed training -> there are capacity issues with the 24xlarge * and moves the adding of the `StreamHandler(sys.stdout)` for logs to the `trainer_pt_utils.py` to also cover the `log_metrics function` more to this below. --- When running a training job on SageMaker all stdout or stderr are sent to Amazon CloudWatch Logs. With the introduction of the new `log_metrics` function, SageMaker lost its output. Therefore I moved the `StreamHandler(sys.stdout)` to the `trainer_pt_utils.py` and removed it in the `trainer`. More information can be found here #10633
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13181/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13181", "html_url": "https://github.com/huggingface/transformers/pull/13181", "diff_url": "https://github.com/huggingface/transformers/pull/13181.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13181.patch", "merged_at": 1629706687000 }
https://api.github.com/repos/huggingface/transformers/issues/13180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13180/comments
https://api.github.com/repos/huggingface/transformers/issues/13180/events
https://github.com/huggingface/transformers/issues/13180
974,466,879
MDU6SXNzdWU5NzQ0NjY4Nzk=
13,180
Conversion of Wav2vec2 model to TFWav2vec2 model
{ "login": "harveenchadha", "id": 30959215, "node_id": "MDQ6VXNlcjMwOTU5MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/30959215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harveenchadha", "html_url": "https://github.com/harveenchadha", "followers_url": "https://api.github.com/users/harveenchadha/followers", "following_url": "https://api.github.com/users/harveenchadha/following{/other_user}", "gists_url": "https://api.github.com/users/harveenchadha/gists{/gist_id}", "starred_url": "https://api.github.com/users/harveenchadha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harveenchadha/subscriptions", "organizations_url": "https://api.github.com/users/harveenchadha/orgs", "repos_url": "https://api.github.com/users/harveenchadha/repos", "events_url": "https://api.github.com/users/harveenchadha/events{/privacy}", "received_events_url": "https://api.github.com/users/harveenchadha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @harveenchadha, to convert from PT to TF, you can just do:\r\n\r\n```python\r\nfrom transformers import TFWav2Vec2Model\r\n\r\nmodel = TFWav2Vec2Model.from_pretrained(\"<path/to/hf.bin model folder>\", from_pt=True)\r\nmodel.save_pretrained(\"<path/to/save/.h5>\")\r\n```", "Also note that (`.pt`) and (`.bin`) is the same format in most cases as far as I understand: https://stackoverflow.com/questions/57245332/what-are-the-difference-between-bin-and-pt-pytorch-saved-model-types#:~:text=1%20Answer&text=There%20is%20no%20difference%20as,torch%20can%20read%20either%20." ]
1,629
1,630
1,630
NONE
null
Hi, I trained model using fairseq toolkit. I have successfully converted the model from fairseq to huggingface .bin model. How can I convert to pure pytorch (.pt) and tensorflow (.h5) format. Are there any scripts for that?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13180/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13179/comments
https://api.github.com/repos/huggingface/transformers/issues/13179/events
https://github.com/huggingface/transformers/pull/13179
974,413,601
MDExOlB1bGxSZXF1ZXN0NzE1NzIyMTg3
13,179
Correct order of overflowing_tokens for slow tokenizer
{ "login": "Apoorvgarg-creator", "id": 57873504, "node_id": "MDQ6VXNlcjU3ODczNTA0", "avatar_url": "https://avatars.githubusercontent.com/u/57873504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Apoorvgarg-creator", "html_url": "https://github.com/Apoorvgarg-creator", "followers_url": "https://api.github.com/users/Apoorvgarg-creator/followers", "following_url": "https://api.github.com/users/Apoorvgarg-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Apoorvgarg-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Apoorvgarg-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Apoorvgarg-creator/subscriptions", "organizations_url": "https://api.github.com/users/Apoorvgarg-creator/orgs", "repos_url": "https://api.github.com/users/Apoorvgarg-creator/repos", "events_url": "https://api.github.com/users/Apoorvgarg-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Apoorvgarg-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you very much for working on this PR.\r\n\r\nDid you check the test `test_maximum_encoding_length_single_input` and `test_maximum_encoding_length_pair_input` passed locally? :slightly_smiling_face: \r\n\r\nI think it would be good to have some tests to check that the tokens are in the right order now. What do you think? The first thing I see would be to complete the tests performed in methods [`test_maximum_encoding_length_single_input`](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L846) and [`test_maximum_encoding_length_pair_input` ](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L938) in file `test_tokenization_common.py`. \r\n\r\nFor example, currently, sometimes we check that the content of the overflowing tokens corresponds to what we expect for the [fast tokenizers](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L928) but not for the [slow ones](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L937) (and [here](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L1079) and [here](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L1111)). \r\n\r\nOur goal is really to check in the tests the resulting overflowing tokens for all cases , i.e. all `TruncationStrategy` and with 1 sequence or a pair of sequences. \r\n\r\nDon't hesitate to tell me if you need more help to complete these tests. :slightly_smiling_face: \r\n\r\n\r\n\r\n", "> Did you check the test `test_maximum_encoding_length_single_input` and `test_maximum_encoding_length_pair_input` passed locally?\r\n No. These two tests weren't passed locally either.\r\n\r\n\r\n> I think it would be good to have some tests to check that the tokens are in the right order now.\r\nI ran the updated code on several tokenizers to verify the result. They all passed. I will try making better test cases.\r\n\r\nAs mentioned, Firstly I will try to resolve the ` test_maximum_encoding_length_single_input` and `test_maximum_encoding_length_pair_input`.\r\n\r\nThank you @SaulLu.", "@SaulLu I would like to make a request.\r\nI want to know the correct order of overflowing tokens for the test case : \r\n\r\n```\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\nseq = [\"hello my name is Ted Mosby \", \"I am an Architect in Boston \"]\r\n\r\nencoding = tokenizer(seq[0],seq[1], padding=True, max_length=6, truncation=True, return_overflowing_tokens=True)\r\n\r\nprint(tokenizer.decode(encoding.input_ids))\r\n\r\nprint(tokenizer.decode(encoding.overflowing_tokens))\r\n```", "You make a very good point! Indeed, we have an API choice to make here. \r\n\r\nI'm going to list the possibilities I see here because I don't see an \"ideal\" solution that would fit into the current framework (i.e. a single list). I take the example you gave:\r\n```python\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nseq = [\"hello my name is Ted Mosby \", \"I am an Architect in Boston \"]\r\nencoding = tokenizer(seq[0],seq[1], padding=True, max_length=6, truncation=True, return_overflowing_tokens=True)\r\n\r\nprint(tokenizer.decode(encoding.input_ids))\r\n```\r\nOutput\r\n```\r\n[CLS] hello my [SEP] i [SEP]\r\n```\r\nthe possibilities I see for the output of `encoding.overflowing_tokens`:\r\n1. Concatenate in the same list the overflowing tokens of the sequence 1 and the sequence 2. As a result ``print(tokenizer.decode(encoding.overflowing_tokens))`` will return:\r\n```\r\n'name is Ted Mo ##sby am an Architect in Boston'\r\n```\r\nAdvantage: the output format is the same; Disadvantage: we can't distinguish between the first and the second sequence\r\n2. Create a tuple of 2 lists for the overflowing tokens of the sequence 1 and the sequence 2. As a result ``print(tokenizer.decode(encoding.overflowing_tokens[0]), tokenizer.decode(encoding.overflowing_tokens[1]))`` will return:\r\n```\r\n'name is Ted Mo ##sby' 'am an Architect in Boston'\r\n```\r\nAdvantage: we can distinguish between the first and the second sequence; Disadvantage: the output format is not the same. This can be seen as a temporary micro-change if we ever want to standardize the API of fast and slow tokenizers in a second PR.\r\n3. Raise an error because as many comments in the tests show, before it was not possible to return overflowing tokens for slow tokenizers with the longest_first strategy\r\n\r\n@LysandreJik , @sgugger, @patil-suraj or @patrickvonplaten I think your point of view can be useful here. Should we change the output format ? Should we also aim to have the same behavior for the slow and fast tokenizers ? :slightly_smiling_face: ", "> Disadvantage: we can't distinguish between the first and the second sequence\r\n\r\nFor the above Disadvantage, The following method might resolve it -\r\nThe use of special tokens might help in this problem \r\nFor instance: On the same test case mentioned above -\r\n` [CLS] name is Ted Mo ##sby [SEP] am an Architect in Boston [SEP] `\r\n \r\n@SaulLu Thank you for helping me out with the test case.", "For `test_maximum_encoding_length_single_input` earlier the order was not correct for the slow tokenizer (i.e. reverse order if stride = 0 ) but now I guess we can add this line of code \r\n\r\nhttps://github.com/huggingface/transformers/blob/143738214cb83e471f3a43652617c8881370342c/tests/test_tokenization_common.py#L928 \r\n\r\nfor Slow tokenizer as well.\r\n\r\n@SaulLu ", "@SaulLu @LysandreJik @NielsRogge @patrickvonplaten, could you please review the changes I have done in the code.", "Hello, thank you for your PR! Could you please add some tests to ensure correct behavior? You can add them in `tests/tests_tokenization_common.py` so that all tokenizers get tested. Thank you!", "> Hello, thank you for your PR! Could you please add some tests to ensure correct behavior? You can add them in `tests/tests_tokenization_common.py` so that all tokenizers get tested. Thank you!\r\n\r\n@LysandreJik Thank you for reviewing the PR. Sure, I will add the necessary test to ensure the correct behavior.", "@LysandreJik All the necessary tests have been added.", "@LysandreJik, could you please review the tests I have added to the code. \r\nThank you", "Thanks for the ping @Apoorvgarg-creator, @SaulLu will take over and review :)", "@Apoorvgarg-creator , thanks again for your work, I am trying to look at your PR quickly.\r\n\r\n> @SaulLu I would like to make a request.\r\nI want to know the correct order of overflowing tokens for the test case :\r\n\r\nIn the meantime, sorry for the delay, but after discussing it, for this case (pair of sequences and `longest_first` strategy) we think it would be better to return an error.", "> @Apoorvgarg-creator , thanks again for your work, I am trying to look at your PR quickly.\r\n> \r\n> > @SaulLu I would like to make a request.\r\n> > I want to know the correct order of overflowing tokens for the test case :\r\n> \r\n> In the meantime, sorry for the delay, but after discussing it, for this case (pair of sequences and `longest_first` strategy) we think it would be better to return an error.\r\n\r\nSo the code should raise an error message whenever we try to return overflowing tokens for a pair of sequences with the `longest_first` strategy.\r\n\r\nAnd For single_input, I have corrected the order and also added the necessary tests in `test_tokenization_common.py`. Do these require any changes?\r\n\r\n@SaulLu , Thank you for the review.", "<img width=\"1076\" alt=\"Screenshot 2021-08-27 at 9 04 24 AM\" src=\"https://user-images.githubusercontent.com/57873504/131067867-0f681b61-a82c-44ec-97f1-fa40b44fd3f3.png\">\r\n\r\n[Documentation/Preprocessing data]( https://huggingface.co/transformers/preprocessing.html ),Here they have mentioned when truncation_strategy is set to 'True' it means `only_first` instead of `longest_first`.\r\n@sgugger ", "> <img alt=\"Screenshot 2021-08-27 at 9 04 24 AM\" width=\"1076\" src=\"https://user-images.githubusercontent.com/57873504/131067867-0f681b61-a82c-44ec-97f1-fa40b44fd3f3.png\">\r\n> \r\n> [Documentation/Preprocessing data](https://huggingface.co/transformers/preprocessing.html),Here they have mentioned when truncation_strategy is set to 'True' it means `only_first` instead of `longest_first`.\r\n\r\n\r\nGreat catch. Indeed, the documentation seems to differ between the section [\"Everything you always wanted to know about padding and truncation\"](https://huggingface.co/transformers/preprocessing.html?highlight=truncation#everything-you-always-wanted-to-know-about-padding-and-truncation) and [the docstring of the _call__ methode of `PreTrainedTokenizerBase`](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__). It would be best if you opened a dedicated issue so that we can deal with the problems separately?", "> It would be best if you opened a dedicated issue so that we can deal with the problems separately?\r\n\r\nSure. I will make a separate issue for this.\r\nThank you for the reviews, @SaulLu. I will do the dedicated changes at the earliest.", "> I didn't check if that the case or, did you check it?\r\n\r\nNo, I haven't. But I will go through the things you have mentioned above. ", "@SaulLu, All the changes that I could find in the docstring have been done.", "@sgugger Thank you for reviewing the PR. I have made the changes mentioned above. Do I need to change every use case of `tokenizer.encode_plus` or only in the `test_maximum_encoding_length_pair_input`.", "@LysandreJik @SaulLu, all the dedicated changes have been resolved. Could you please review the PR?\r\nThank you ", "Thanks for fixing this, great work!", "@SaulLu @LysandreJik @sgugger @NielsRogge, Thank you for the guidance. It was an insightful experience and I hope to contribute more." ]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? When using a slow tokenizer (LayoutLM, Bert, Alberta, etc.), the `overflowing_tokens` were obtained in the wrong order. I have made the necessary changes that will produce the `overflowing_tokens` in the correct order. ## Tasks summary - - [x] making sure overflowing tokens are returned in the correct order for all `truncation_strategy` for a sequences of input ids. - [x] if a pair of sequences of input ids (or batch of pairs) is provided, an error should be raised for the `truncation_strategy=True` or `longest_first` stating _"Not possible to return overflowing tokens for pair of sequences with the `longest_first`.Please select another truncation strategy than `longest_first`, for instance `only_second` or `only_first`."_ - [x] Replaced the deprecated method `encode_plus` to regular `__call__` method in `test\test_tokenization_common.py`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes the issue [ huggingface/transformers#13148 ](https://github.com/huggingface/transformers/issues/13148 ) Fixes huggingface#13148 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),Pull Request section? Yes πŸ‘πŸ» - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes, [ huggingface/transformers#13148 ](https://github.com/huggingface/transformers/issues/13148 ) - [x] Did you write any new necessary tests?Yes πŸ‘πŸ» , Required tests are added in `tests/test_tokenization_common.py` - [x] Did you make sure to update the documentation with your changes? Anyone in the community is free to review the PR once the tests have passed. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten @NielsRogge @LysandreJik @n1t0 @SaulLu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13179/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13179", "html_url": "https://github.com/huggingface/transformers/pull/13179", "diff_url": "https://github.com/huggingface/transformers/pull/13179.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13179.patch", "merged_at": 1630576703000 }
https://api.github.com/repos/huggingface/transformers/issues/13178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13178/comments
https://api.github.com/repos/huggingface/transformers/issues/13178/events
https://github.com/huggingface/transformers/issues/13178
974,332,545
MDU6SXNzdWU5NzQzMzI1NDU=
13,178
how to finetune based huggingface: run_glue.py
{ "login": "lbda1", "id": 25147325, "node_id": "MDQ6VXNlcjI1MTQ3MzI1", "avatar_url": "https://avatars.githubusercontent.com/u/25147325?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lbda1", "html_url": "https://github.com/lbda1", "followers_url": "https://api.github.com/users/lbda1/followers", "following_url": "https://api.github.com/users/lbda1/following{/other_user}", "gists_url": "https://api.github.com/users/lbda1/gists{/gist_id}", "starred_url": "https://api.github.com/users/lbda1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lbda1/subscriptions", "organizations_url": "https://api.github.com/users/lbda1/orgs", "repos_url": "https://api.github.com/users/lbda1/repos", "events_url": "https://api.github.com/users/lbda1/events{/privacy}", "received_events_url": "https://api.github.com/users/lbda1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you provide a Colab notebook to reproduce your issue?", "what is the format of the classification data which used in run_glue.py?", "The [README](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) mentions that \"your own data in a csv or a JSON file (the script might need some tweaks in that case, refer to the comments inside for help)\". Looking at the comments, it says:\r\n\r\n # Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below)\r\n # or specify a GLUE benchmark task (the dataset will be downloaded automatically from the datasets Hub).\r\n #\r\n # For CSV/JSON files, this script will use as labels the column called 'label' and as pair of sentences the\r\n # sentences in columns called 'sentence1' and 'sentence2' if such column exists or the first two columns not named\r\n # label if at least two columns are provided.\r\n #\r\n # If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this\r\n # single column. You can easily tweak this behavior (see below)\r\n\r\nA bit further down, the dataset is read as follows:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": data_args.train_file, \"validation\": data_args.validation_file}\r\nraw_datasets = load_dataset(\"csv\", data_files=data_files, cache_dir=model_args.cache_dir)\r\n```\r\n\r\nYou can perhaps isolate the error by running the code above.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
python transformers-master/examples/pytorch/text-classification/run_glue.py \ --model_name_or_path chinese_bert-base \ --train_file=transformers-master/dataset/class/train.csv \ --validation_file=transformers-master/dataset/class/dev.csv \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir output When I use run_glue.py, and train.csv/dev.csv data format has two columns which contain β€œsentence” and β€œlabel”. But It reports an error "pandas.errors. ParserError: Error tokenizing data. C error: Expected 1 fields in line 10, saw 2". What is reason of this error? It is the data format error?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13178/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13177/comments
https://api.github.com/repos/huggingface/transformers/issues/13177/events
https://github.com/huggingface/transformers/issues/13177
974,284,022
MDU6SXNzdWU5NzQyODQwMjI=
13,177
Bug of PyTorch group_beam_search function
{ "login": "Changyu-Guo", "id": 43259972, "node_id": "MDQ6VXNlcjQzMjU5OTcy", "avatar_url": "https://avatars.githubusercontent.com/u/43259972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Changyu-Guo", "html_url": "https://github.com/Changyu-Guo", "followers_url": "https://api.github.com/users/Changyu-Guo/followers", "following_url": "https://api.github.com/users/Changyu-Guo/following{/other_user}", "gists_url": "https://api.github.com/users/Changyu-Guo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Changyu-Guo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Changyu-Guo/subscriptions", "organizations_url": "https://api.github.com/users/Changyu-Guo/orgs", "repos_url": "https://api.github.com/users/Changyu-Guo/repos", "events_url": "https://api.github.com/users/Changyu-Guo/events{/privacy}", "received_events_url": "https://api.github.com/users/Changyu-Guo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @Changyu-Guo,\r\n\r\nThanks a lot for the issue! I see what you mean & I think you're right! \r\nWe should probably move the \r\n\r\n```\r\nif output_scores:\r\n processed_score = torch.zeros_like(outputs.logits[:, -1, :])\r\n```\r\n\r\nout of the inner loop no? To be sure that beam group indices 0 - (last - 1) are not always 0...would you like to open a PR to fix it? :-)", "Hi @patrickvonplaten, Can I take on this issue( If it is not assigned to someone else )? It looks fairly simple to me. \r\nBut as I'm quite new to this so I might need some guidance.", "@patrickvonplaten Sorry for the late reply, you are right. Move the\r\n```python\r\nif output_scores:\r\n processed_score = torch.zeros_like(outputs.logits[:, -1, :])\r\n```\r\nout of the inner `for loop` will solve this problem.\r\n\r\nI think @sourabh112 can take on this issue, perhaps you should carefully read \"[How to contribute to transformers?](https://huggingface.co/transformers/contributing.html)\" first.", "I have read the [contributing guidelines](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/CONTRIBUTING.md) and made the changes. Should I make/run some test cases (Help with some examples would be appreciable) to make sure that now output scores are giving expected values or directly make a PR?", "@patrickvonplaten Should I make a PR?" ]
1,629
1,629
1,629
NONE
null
## Environment info - `transformers` version: 4.9.1 - Platform: Ubuntu 18.04 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1+cu111 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - text generation: @patrickvonplaten ## Information Code Location: `src/transformers/generation_utils.py --> lines 2411 - 2480 (in group_beam_search function)` ```python for beam_group_idx in range(num_beam_groups): group_start_idx = beam_group_idx * num_sub_beams group_end_idx = min(group_start_idx + num_sub_beams, num_beams) group_size = group_end_idx - group_start_idx # indices of beams of current group among all sentences in batch batch_group_indices = [] ########################################################### if output_scores: processed_score = torch.zeros_like(outputs.logits[:, -1, :]) ########################################################### for batch_idx in range(batch_size): batch_group_indices.extend( [batch_idx * num_beams + idx for idx in range(group_start_idx, group_end_idx)] ) group_input_ids = input_ids[batch_group_indices] # select outputs of beams of current group only next_token_logits = outputs.logits[batch_group_indices, -1, :] # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id` # cannot be generated both before and after the `nn.functional.log_softmax` operation. next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len) next_token_scores = nn.functional.log_softmax( next_token_logits, dim=-1 ) # (batch_size * group_size, vocab_size) vocab_size = next_token_scores.shape[-1] next_token_scores = logits_processor( group_input_ids, next_token_scores, current_tokens=current_tokens, beam_group_idx=beam_group_idx ) next_token_scores = next_token_scores + beam_scores[batch_group_indices].unsqueeze(-1).expand_as( next_token_scores ) ########################################################### if output_scores: processed_score[batch_group_indices] = next_token_scores ########################################################### # reshape for beam search next_token_scores = next_token_scores.view(batch_size, group_size * vocab_size) next_token_scores, next_tokens = torch.topk( next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True ) next_indices = next_tokens // vocab_size next_tokens = next_tokens % vocab_size # stateless beam_outputs = beam_scorer.process( group_input_ids, next_token_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id, ) beam_scores[batch_group_indices] = beam_outputs["next_beam_scores"] beam_next_tokens = beam_outputs["next_beam_tokens"] beam_idx = beam_outputs["next_beam_indices"] input_ids[batch_group_indices] = group_input_ids[beam_idx] group_input_ids = torch.cat([group_input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) current_tokens[batch_group_indices] = group_input_ids[:, -1] # (beam_idx // group_size) -> batch_idx # (beam_idx % group_size) -> offset of idx inside the group reordering_indices[batch_group_indices] = ( num_beams * (beam_idx // group_size) + group_start_idx + (beam_idx % group_size) ) ``` ```python if output_scores: processed_score = torch.zeros_like(outputs.logits[:, -1, :]) ``` ```python if output_scores: processed_score[batch_group_indices] = next_token_scores ``` -------------------------------------------------- When `output_scores=True` is set, the `processed_score` will be reset by `torch.zeros_like` in each `for loop`. I'm wondering if this is a bug. It will cause the `output scores` to not match expectations. (Except for the last beam group, the rest are all 0).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13177/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13176/comments
https://api.github.com/repos/huggingface/transformers/issues/13176/events
https://github.com/huggingface/transformers/issues/13176
974,079,810
MDU6SXNzdWU5NzQwNzk4MTA=
13,176
GPT2 error when we try to run torch.jit.script
{ "login": "vdantu", "id": 36211508, "node_id": "MDQ6VXNlcjM2MjExNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/36211508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vdantu", "html_url": "https://github.com/vdantu", "followers_url": "https://api.github.com/users/vdantu/followers", "following_url": "https://api.github.com/users/vdantu/following{/other_user}", "gists_url": "https://api.github.com/users/vdantu/gists{/gist_id}", "starred_url": "https://api.github.com/users/vdantu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vdantu/subscriptions", "organizations_url": "https://api.github.com/users/vdantu/orgs", "repos_url": "https://api.github.com/users/vdantu/repos", "events_url": "https://api.github.com/users/vdantu/events{/privacy}", "received_events_url": "https://api.github.com/users/vdantu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you try the following: \r\n\r\n```python\r\nfrom transformers import GPT2LMHeadModel, GPT2Config\r\nimport torch\r\nconfiguration = GPT2Config(n_embd=1600, n_layer=48, n_head=25, use_cache=False)\r\nmodel = GPT2LMHeadModel(configuration)\r\nscript_model = torch.jit.script(model.base_model)\r\n```\r\n\r\njust to first see whether disabling the cache solves the problem", "> Can you try the following:\r\n> \r\n> ```python\r\n> from transformers import GPT2LMHeadModel, GPT2Config\r\n> import torch\r\n> configuration = GPT2Config(n_embd=1600, n_layer=48, n_head=25, use_cache=False)\r\n> model = GPT2LMHeadModel(configuration)\r\n> script_model = torch.jit.script(model.base_model)\r\n> ```\r\n> \r\n> just to first see whether disabling the cache solves the problem\r\n\r\nI still see the same error.\r\n```\r\ntorch.jit.frontend.UnsupportedNodeError: GeneratorExp aren't supported:\r\n File \"/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 756\r\n # Ensure layer_past is on same device as hidden_states (might not be correct)\r\n if layer_past is not None:\r\n layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past)\r\n ~ <--- HERE\r\n # Ensure that attention_mask is always on the same device as hidden_states\r\n if attention_mask is not None:\r\n```\r\n\r\nThis seems like GPT model as it is implemented today is not supported by `torch.jit.script` because the model uses generator expr and Script doesn't support it yet. Is this accurate or am I missing anything? Also, would `torch.jit.trace`ing the GPT2 model cause any correctness issues? ", "Hello! The documentation regarding TorchScript can be found [here](https://huggingface.co/transformers/serialization.html#torchscript).\r\n\r\nYou're correct in your analysis: `torch.jit.script` isn't supported, while `torch.jit.trace` is supported!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Ubuntu 18.04 - Python version: Python3.6 - PyTorch version (GPU?): 1.9.0 GPU - Tensorflow version (GPU?): N/A - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce I was trying to run torch.jit.script and I get the following error from JIT frontend ``` File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/torch/jit/frontend.py", line 330, in __call__ raise UnsupportedNodeError(ctx, node) torch.jit.frontend.UnsupportedNodeError: GeneratorExp aren't supported: File "/home/ubuntu/anaconda3/envs/python3/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 756 # Ensure layer_past is on same device as hidden_states (might not be correct) if layer_past is not None: layer_past = tuple(past_state.to(hidden_states.device) for past_state in layer_past) ~ <--- HERE # Ensure that attention_mask is always on the same device as hidden_states if attention_mask is not None: ``` I am curious to know if this is a known issue or learn if I am doing something wrong. #### Sample Code: ``` from transformers import GPT2LMHeadModel, GPT2Config import torch configuration = GPT2Config(n_embd=1600, n_layer=48, n_head=25) model = GPT2LMHeadModel(configuration) script_model = torch.jit.script(model.base_model) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13176/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13175/comments
https://api.github.com/repos/huggingface/transformers/issues/13175/events
https://github.com/huggingface/transformers/issues/13175
974,060,745
MDU6SXNzdWU5NzQwNjA3NDU=
13,175
GPT-Neo ONNX Inference with past is broken
{ "login": "whiteRa2bit", "id": 28367451, "node_id": "MDQ6VXNlcjI4MzY3NDUx", "avatar_url": "https://avatars.githubusercontent.com/u/28367451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/whiteRa2bit", "html_url": "https://github.com/whiteRa2bit", "followers_url": "https://api.github.com/users/whiteRa2bit/followers", "following_url": "https://api.github.com/users/whiteRa2bit/following{/other_user}", "gists_url": "https://api.github.com/users/whiteRa2bit/gists{/gist_id}", "starred_url": "https://api.github.com/users/whiteRa2bit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whiteRa2bit/subscriptions", "organizations_url": "https://api.github.com/users/whiteRa2bit/orgs", "repos_url": "https://api.github.com/users/whiteRa2bit/repos", "events_url": "https://api.github.com/users/whiteRa2bit/events{/privacy}", "received_events_url": "https://api.github.com/users/whiteRa2bit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Gently pinging @mfuntowicz here", "@michaelbenayoun @mfuntowicz @sgugger @LysandreJik would you be so kind to assist in resolving this issue?", "Hello @whiteRa2bit, thanks for testing out the experimental `-with-past` feature of the ONNX export! @michaelbenayoun and @mfuntowicz are the best suited to answer, but they're off until early next week. We'll make sure to attend to this issue as soon as they're back! Thank you for your understanding.", "@LysandreJik, thanks a lot for letting me know!", "An update from my side:\r\nInference works fine with the sequence length equals 1, while for all other lengths it breaks with the error I described above:\r\n\r\nI tried to visualize the converted onnx graph using netron and found the node where the error occurs:\r\n![image](https://user-images.githubusercontent.com/28367451/131477923-36584e89-1bf9-4023-9c49-37efd6896890.png)\r\n", "Hi @whiteRa2bit,\r\nI've actually made the same observation this morning, I am working on it!", "#13491 along with #13524 solve the issue, but be careful of 2 things:\r\n\r\n- when exporting the model with past keys and values, the attention mask should have a sequence length of past_sequence_length + input_ids_sequence_length\r\n- ORT seems to not like inputs produced by np.empty (it produces NaN on my end compared to proper output when using np.zeros or np.ones for instance)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,634
1,634
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 (1fec32adc6a4840123d5ec5ff5cf419c02342b5a) - Platform: Linux - Python version: 3.8.8 - PyTorch version (GPU?): 1.9.0a0+2ecb2c7, True - Tensorflow version (GPU?): Not Installed, False - Using GPU in script?: Yes (3090) - Using distributed or parallel set-up in script?: No ### Who can help The issue is connected with a pull #12911: @michaelbenayoun @mfuntowicz @sgugger @LysandreJik ## Information Model I am using is gpt-neo 1.3B The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Model export ``` from pathlib import Path from transformers import GPTNeoForCausalLM, GPT2TokenizerFast, GPTNeoConfig from transformers.models.gpt_neo import GPTNeoOnnxConfig from transformers.onnx.convert import export MODEL_PATH = 'EleutherAI/gpt-neo-1.3B' TASK = 'causal-lm' ONNX_MODEL_PATH = Path("onnx_dir/gpt_neo_13b.onnx") ONNX_MODEL_PATH.parent.mkdir(exist_ok=True, parents=True) def main(): tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_PATH) config = GPTNeoConfig.from_pretrained(MODEL_PATH) onnx_config = GPTNeoOnnxConfig.with_past(config, task=TASK) model = GPTNeoForCausalLM(config=config).from_pretrained(MODEL_PATH) onnx_inputs, onnx_outputs = export(tokenizer=tokenizer, model=model, config=onnx_config, opset=12, output=ONNX_MODEL_PATH) print(f'Inputs: {onnx_inputs}') print(f'Outputs: {onnx_outputs}') if __name__ == '__main__': main() ``` 2. Inference code ``` import numpy as np import onnxruntime as ort from transformers import GPT2TokenizerFast, GPTNeoConfig from pathlib import Path MODEL_PATH = 'EleutherAI/gpt-neo-1.3B' ONNX_MODEL_PATH = Path("onnx_dir/gpt_neo_13b.onnx") PROMPTS = ['Hello there'] def _get_inputs(prompts, tokenizer, config): encodings_dict = tokenizer.batch_encode_plus(prompts) # Shape: [batch_size, seq_length] input_ids = np.array(encodings_dict["input_ids"], dtype=np.int64) # Shape: [batch_size, seq_length] attention_mask = np.array(encodings_dict["attention_mask"], dtype=np.float32) batch_size, seq_length = input_ids.shape past_seq_length = 0 num_attention_heads = config.num_attention_heads hidden_size = config.hidden_size even_present_state_shape = [ batch_size, num_attention_heads, past_seq_length, hidden_size // num_attention_heads ] odd_present_state_shape = [batch_size, past_seq_length, hidden_size] onnx_inputs = {} for idx in range(config.num_layers): if idx % 2 == 0: onnx_inputs[f'past_key_values.{idx}.key'] = np.empty(even_present_state_shape, dtype=np.float32) onnx_inputs[f'past_key_values.{idx}.value'] = np.empty(even_present_state_shape, dtype=np.float32) else: onnx_inputs[f'past_key_values.{idx}.key_value'] = np.empty(odd_present_state_shape, dtype=np.float32) onnx_inputs['input_ids'] = input_ids onnx_inputs['attention_mask'] = attention_mask return onnx_inputs def main(): config = GPTNeoConfig.from_pretrained(MODEL_PATH) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_PATH) ort_session = ort.InferenceSession(str(ONNX_MODEL_PATH)) onnx_inputs = _get_inputs(PROMPTS, tokenizer, config) outputs = ort_session.run(['logits'], onnx_inputs) if __name__ == '__main__': main() ``` The inference code runs into the following error: ``` Traceback (most recent call last): .... File "inference.py", line 60, in main outputs = ort_session.run(['logits'], onnx_inputs) File "/opt/conda/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 188, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_501' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,1,1, 4096}, requested shape:{1,1,1,16,128} ``` ## Expected behavior Onnx Inference for a model with past states should work. While converting without past states the inference works fine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13175/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13175/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13174/comments
https://api.github.com/repos/huggingface/transformers/issues/13174/events
https://github.com/huggingface/transformers/issues/13174
974,058,355
MDU6SXNzdWU5NzQwNTgzNTU=
13,174
[Benchmark]
{ "login": "Giahuynh1402", "id": 88047126, "node_id": "MDQ6VXNlcjg4MDQ3MTI2", "avatar_url": "https://avatars.githubusercontent.com/u/88047126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Giahuynh1402", "html_url": "https://github.com/Giahuynh1402", "followers_url": "https://api.github.com/users/Giahuynh1402/followers", "following_url": "https://api.github.com/users/Giahuynh1402/following{/other_user}", "gists_url": "https://api.github.com/users/Giahuynh1402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Giahuynh1402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Giahuynh1402/subscriptions", "organizations_url": "https://api.github.com/users/Giahuynh1402/orgs", "repos_url": "https://api.github.com/users/Giahuynh1402/repos", "events_url": "https://api.github.com/users/Giahuynh1402/events{/privacy}", "received_events_url": "https://api.github.com/users/Giahuynh1402/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing this for now as it doesn't contain any information" ]
1,629
1,629
1,629
NONE
null
# πŸ–₯ Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13174/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13173/comments
https://api.github.com/repos/huggingface/transformers/issues/13173/events
https://github.com/huggingface/transformers/issues/13173
974,040,547
MDU6SXNzdWU5NzQwNDA1NDc=
13,173
enable mixed precision for Tensorflow training benchmarks
{ "login": "harishneit", "id": 628454, "node_id": "MDQ6VXNlcjYyODQ1NA==", "avatar_url": "https://avatars.githubusercontent.com/u/628454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harishneit", "html_url": "https://github.com/harishneit", "followers_url": "https://api.github.com/users/harishneit/followers", "following_url": "https://api.github.com/users/harishneit/following{/other_user}", "gists_url": "https://api.github.com/users/harishneit/gists{/gist_id}", "starred_url": "https://api.github.com/users/harishneit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harishneit/subscriptions", "organizations_url": "https://api.github.com/users/harishneit/orgs", "repos_url": "https://api.github.com/users/harishneit/repos", "events_url": "https://api.github.com/users/harishneit/events{/privacy}", "received_events_url": "https://api.github.com/users/harishneit/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[ "Sorry for the slow reply - this is definitely something we'd be interested in seeing! Can I ask why you used `tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite` in your fork rather than `set_global_policy` or similar? It's not necessarily wrong, but I'm curious what the tradeoffs are there!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
# πŸš€ Feature request Currently [Tensorflow Benchmarks](https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_tf.py) implemented in transformers package only supports training in FP32 mode and FP16 support is [unimplemented](https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_tf.py#L173). It could be helpful for the community to be able benchmark the training for the models in FP16 mode as using mixed precision greatly improves the performance of training. ## Motivation Enabling mixed precision in training is [shown](https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540) to significantly improve the throughput of the training process. We wanted to implement the missing support for fp16 for training in [Tensorflow Benchmarks](https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_tf.py) to gauge the performance uplift we noticed for 1.5x improvement in performance for `bert-base-uncased` using batch size of `8` and sequence length of `128`. ## Your contribution The code for this is implemented in [amp_tf_training_benchmarks](https://github.com/huggingface/transformers/compare/master...harishneit:amp_tf_training_benchmarks) branch in a fork. I can submit a pull request with tests if the community is interested in this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13173/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13172/comments
https://api.github.com/repos/huggingface/transformers/issues/13172/events
https://github.com/huggingface/transformers/issues/13172
974,006,150
MDU6SXNzdWU5NzQwMDYxNTA=
13,172
No module named: Regex while importing GPT2Tokenizer
{ "login": "alierenak", "id": 48334667, "node_id": "MDQ6VXNlcjQ4MzM0NjY3", "avatar_url": "https://avatars.githubusercontent.com/u/48334667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alierenak", "html_url": "https://github.com/alierenak", "followers_url": "https://api.github.com/users/alierenak/followers", "following_url": "https://api.github.com/users/alierenak/following{/other_user}", "gists_url": "https://api.github.com/users/alierenak/gists{/gist_id}", "starred_url": "https://api.github.com/users/alierenak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alierenak/subscriptions", "organizations_url": "https://api.github.com/users/alierenak/orgs", "repos_url": "https://api.github.com/users/alierenak/repos", "events_url": "https://api.github.com/users/alierenak/events{/privacy}", "received_events_url": "https://api.github.com/users/alierenak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I can't reproduce the error. I ran the following code on Colab without any error:\r\n\r\n```\r\nfrom transformers import GPT2Tokenizer\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n```\r\n\r\nTested python version: 3.7.1, transformers version: v4.9.0 and v4.9.2.", "Thats true, it should work. Probably problem is environmental but the case that the line I added raise an error since it should be `re`. e.g: [this one](https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/research_projects/seq2seq-distillation/sentence_splitter.py#L1)\r\n\r\n", "@akalieren The `regex` package should not be the problem as it is automatically installed with transformers. The reason why it used `regex` instead of `re` can be found at the following comment in that file. \r\n\r\nhttps://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/bertweet/tokenization_bertweet.py#L461\r\n\r\nHowever, I think using `GPT2Tokenizer` should not be linked to `Bertweet` since they are not dependent to each other. Did you add additional code to use them together?", "I deleted environment and created again. It is working now as it is expected. I jumped [this issue](https://github.com/conda-forge/conda-forge.github.io/issues/1161) from the link you sent. I guess the problem is about conda environment. \r\n\r\nProbably `Bertweet` is called from [`__init__.py`](https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/__init__.py#L19)" ]
1,629
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0 - Platform: Linux - Python version: 3.6.13 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): 2.6.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): GPT2 Tokenizer The problem arises when using: * [ X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I am still confused about this error but it is very simple to reproduce: ``` from transformers import GPT2Tokenizer tokenizer = GPT2.from_pretrained('gpt2') ``` The problem is occurred due to this line: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/bertweet/tokenization_bertweet.py#L25 Is it about my python version? Normally regex imported as `re`, but can't understand why it is happened! Thanks. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Able to initialize GPT2 <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13172/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13171/comments
https://api.github.com/repos/huggingface/transformers/issues/13171/events
https://github.com/huggingface/transformers/issues/13171
973,978,825
MDU6SXNzdWU5NzM5Nzg4MjU=
13,171
[Docs] Function signatures on website not correctly reflecting current code.
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Would you like to open a PR to correct it @qqaatw ? :-)", "@patrickvonplaten I could try, but the problem seems to be related to the CI/CD since function signatures are automatically generated by Sphinx, and the problem didn't occur on the docs I built manually on my machine. (Tested ubuntu 18.04 with python 3.8 / windows 10 with python 3.8) ", "Closed as the PR has been merged." ]
1,629
1,630
1,630
CONTRIBUTOR
null
## Environment info Tested v4.9.2 and master. ### Who can help @sgugger ## Information In [Trainer docs](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer), the first argument `model` in the function signature should be `model: Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module] = None` as defined in the [code](https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L267). But it currently shows `model: torch.nn.modules.module.Module = None`, which seems to be outdated, however. I've tested building the docs on my system, and the resulting html is correct.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13171/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13170/comments
https://api.github.com/repos/huggingface/transformers/issues/13170/events
https://github.com/huggingface/transformers/issues/13170
973,953,079
MDU6SXNzdWU5NzM5NTMwNzk=
13,170
Using `bf16` instead of `fp16`
{ "login": "JamesDeAntonis", "id": 33379057, "node_id": "MDQ6VXNlcjMzMzc5MDU3", "avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JamesDeAntonis", "html_url": "https://github.com/JamesDeAntonis", "followers_url": "https://api.github.com/users/JamesDeAntonis/followers", "following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}", "gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions", "organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs", "repos_url": "https://api.github.com/users/JamesDeAntonis/repos", "events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}", "received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }, { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }, { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Very much looking forward to enable bf16 in PyTorch :-) Think we should probably wait though until the next PyTorch release is out. But it would be a good idea to have it supported as soon as the release is out then. cc @LysandreJik @sgugger ", "Indeed. I will install pt-nightly and work on this.\r\n\r\nThank you for staying on top of the pytorch development, @JamesDeAntonis ", "@stas00 I'm going to open a pr shortly for my [branch](https://github.com/JamesDeAntonis/transformers/tree/bf16). Feel free to check that out for starter" ]
1,629
1,638
1,638
CONTRIBUTOR
null
# πŸš€ Feature request As seen in [this pr](https://github.com/huggingface/transformers/pull/10956), there is demand for `bf16` compatibility in training of transformers models. The pytorch folks just [added this feature](https://github.com/pytorch/pytorch/pull/61002) to their master branch, so we are now able to work on adding it to this repo. ## Motivation Related to [this issue](https://github.com/huggingface/transformers/pull/10956) and [this pytorch pr](https://github.com/pytorch/pytorch/pull/61002). This feature would allow for proper half-precision training of google-trained models, for example any `T5` model. ## Your contribution I am currently working on a PR for this [here](https://github.com/JamesDeAntonis/transformers/tree/bf16), and would gladly field any suggestions and contributions. @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13170/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13170/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13169/comments
https://api.github.com/repos/huggingface/transformers/issues/13169/events
https://github.com/huggingface/transformers/issues/13169
973,824,194
MDU6SXNzdWU5NzM4MjQxOTQ=
13,169
RobertaTokenizerFast object has no attribute '_convert_token_to_id'
{ "login": "demongolem-biz", "id": 79917829, "node_id": "MDQ6VXNlcjc5OTE3ODI5", "avatar_url": "https://avatars.githubusercontent.com/u/79917829?v=4", "gravatar_id": "", "url": "https://api.github.com/users/demongolem-biz", "html_url": "https://github.com/demongolem-biz", "followers_url": "https://api.github.com/users/demongolem-biz/followers", "following_url": "https://api.github.com/users/demongolem-biz/following{/other_user}", "gists_url": "https://api.github.com/users/demongolem-biz/gists{/gist_id}", "starred_url": "https://api.github.com/users/demongolem-biz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/demongolem-biz/subscriptions", "organizations_url": "https://api.github.com/users/demongolem-biz/orgs", "repos_url": "https://api.github.com/users/demongolem-biz/repos", "events_url": "https://api.github.com/users/demongolem-biz/events{/privacy}", "received_events_url": "https://api.github.com/users/demongolem-biz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @demongolem-biz,\r\n\r\nCould you maybe post a short, reproducible code snippet that showcases the problem? Also note that `_convert_token_to_id` is a private method and that the corresponding public method `convert_token_to_id` is the one to be checked.", "The code is not mine, I am trying to use someone else's, but here is the `__init__` function from the beginning to the point when `_convert_token_to_id` is used. I know tokenizer is being passed in, but runtime the error message claims the tokenizer object is a RobertaTokenizerFast.\r\n\r\n```\r\n` def __init__(self, args, tokenizer, cache_dir=None, mode=\"train\", use_demo=False):\r\n self.args = args\r\n self.task_name = args.task_name\r\n self.processor = processors_mapping[args.task_name]\r\n self.tokenizer = tokenizer\r\n self.mode = mode\r\n\r\n # If not using demonstrations, use use_demo=True\r\n self.use_demo = use_demo\r\n if self.use_demo:\r\n logger.info(\"Use demonstrations\")\r\n assert mode in [\"train\", \"dev\", \"test\"]\r\n\r\n # Get label list and (for prompt) label word list\r\n self.label_list = self.processor.get_labels()\r\n self.num_labels = len(self.label_list)\r\n if args.prompt:\r\n assert args.mapping is not None\r\n self.label_to_word = eval(args.mapping)\r\n\r\n for key in self.label_to_word:\r\n # For RoBERTa/BART/T5, tokenization also considers space, so we use space+word as label words.\r\n if self.label_to_word[key][0] not in ['<', '[', '.', ',']:\r\n # Make sure space+word is in the vocabulary\r\n assert len(tokenizer.tokenize(' ' + self.label_to_word[key])) == 1\r\n self.label_to_word[key] = tokenizer._convert_token_to_id(tokenizer.tokenize(' ' + self.label_to_word[key])[0])\r\n else:\r\n self.label_to_word[key] = tokenizer._convert_token_to_id(self.label_to_word[key])\r\n logger.info(\"Label {} to word {} ({})\".format(key, tokenizer._convert_id_to_token(self.label_to_word[key]), self.label_to_word[key]))\r\n```\r\n \r\n\r\n`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9 - Platform: Linux - Python version: 3.6 - Tensorflow version (GPU?): 2.5 - Using GPU in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @patrickvonplaten, @LysandreJik Models: Roberta(Tokenizer)Fast ## Information Model I am using (Bert, XLNet ...): Roberta(Tokenizer)Fast The problem arises when using: * [ ] the official example scripts: (give details below) * [X ] my own modified scripts: (give details below) LM-BFF The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details below) Fine-tuning of existing language model for my own task ## To reproduce Steps to reproduce the behavior: 1) Create RobertaFastTokenizer 2) Try to call _convert_token_to_id ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The example given by the maintainers of LM-BFF supposedly runs successfully, at least with what they claim is version 3.4 or higher of transformers. However, upon my inspection of source code of transformers, I see that _convert_token_to_id is not associated with fast tokenizers only with standard tokenizers. For example, if I view transformers.tokenization_gpt2 (Roberta tokenizer being built on gpt2) for v 3.4.0, I see _convert_token_to_id present and implemented. However, if I go to transformers.tokenization_gpt2_fast, it is not there. Is this a bug, is this something that was removed at some point, or are we simply not able to access _convert_token_to_id when using a Fast tokenizer?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13169/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13168/comments
https://api.github.com/repos/huggingface/transformers/issues/13168/events
https://github.com/huggingface/transformers/issues/13168
973,809,208
MDU6SXNzdWU5NzM4MDkyMDg=
13,168
Issue with `Speech2TextFeatureExtractor` method `from_pretrained` and `from_dict`
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[ { "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false } ]
[ "@philschmid, I think you are missing some dependencies which is why `Speech2TextFeatureExtractor` refers to the dummy object, defined here: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/utils/dummy_speech_objects.py#L5\r\n\r\nwhich doesn't have a `from_pretrained(...)` method.\r\n\r\nCan you try install `transformers` with `pip install -e \".[speech]\"` ? That should fix the error", "Some `torchaudio` dependency is missing I think: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py#L23", "@LysandreJik @sgugger - I think those kinds of errors have shown up for often already...would it make sense to add a `.from_pretrained(....)` method to all dummy objects that give a better error message?", "<del>Maybe just add nice error messages to `.from_pretrained(....)` and `.__init__(...)`</del> `__init__` already has it - just `from_pretrained(...)` then I think would be nice", "The suggestion from @patrickvonplaten solved it, should we close it for now? ", "Just for posterity: @patrickvonplaten and I agreed that `Speech2Text` could be refactored to use `requires_backends(\"speech\")` similarly to DETR and [TAPAS](https://github.com/huggingface/transformers/blob/ab7551cd7ff84cb5b7328bc37a06e06fa19f02bb/src/transformers/models/tapas/modeling_tapas.py#L803) to provide a user-friendly error on model loading." ]
1,629
1,629
1,629
MEMBER
null
## Environment info - `transformers` version: `master` - Platform: ubuntu - Python version: Python 3.7.11 (default, Jul 3 2021, 18:01:19) - PyTorch version (GPU?): 1.9.0+cu102 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - There is an issue when trying to load the `Speech2TextFeatureExtractor` from a local path. **How to reproduce** ```python !git lfs install !git clone https://huggingface.co/facebook/s2t-small-mustc-en-fr-st from transformers import AutoFeatureExtractor extractor = AutoFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") ``` producing ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-6-c149f0c996ea> in <module>() 3 from transformers import AutoFeatureExtractor 4 ----> 5 extractor = AutoFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") /usr/local/lib/python3.7/dist-packages/transformers/models/auto/feature_extraction_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 158 159 if model_type is not None: --> 160 return FEATURE_EXTRACTOR_MAPPING[type(config)].from_dict(config_dict, **kwargs) 161 elif "feature_extractor_type" in config_dict: 162 feature_extractor_class = feature_extractor_class_from_name(config_dict["feature_extractor_type"]) AttributeError: type object 'Speech2TextFeatureExtractor' has no attribute 'from_dict' ```` also the `Speech2TextFeatureExtractor` doesn't work ```python !git lfs install !git clone https://huggingface.co/facebook/s2t-small-mustc-en-fr-st from transformers import Speech2TextFeatureExtractor extractor = Speech2TextFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") ``` producing ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-7-c87c386101dd> in <module>() 3 from transformers import Speech2TextFeatureExtractor 4 ----> 5 extractor = Speech2TextFeatureExtractor.from_pretrained("s2t-small-mustc-en-fr-st") AttributeError: type object 'Speech2TextFeatureExtractor' has no attribute 'from_pretrained' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13168/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13167/comments
https://api.github.com/repos/huggingface/transformers/issues/13167/events
https://github.com/huggingface/transformers/pull/13167
973,707,132
MDExOlB1bGxSZXF1ZXN0NzE1MTE1NDU1
13,167
Update namespaces inside torch.utils.data to the latest.
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Is this compatible with older versions of PyTorch? Given the issue history this looks good though", "@patrickvonplaten Yes, it's backward compatible with PyTorch 1.2.0+, which meets transformers' PyTorch requirements: 1.3.1+.\r\n\r\nRef:\r\nhttps://pytorch.org/docs/1.2.0/data.html\r\nhttps://github.com/pytorch/pytorch/blob/v1.2.0/torch/utils/data/__init__.py" ]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? Address #13036 . 1. Replace `torch.utils.data.dataset` with `torch.utils.data` 2. Replace `torch.utils.data.sampler` with `torch.utils.data` 3. Replace `torch.utils.data.dataloader` with `torch.utils.data` ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13167/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13167", "html_url": "https://github.com/huggingface/transformers/pull/13167", "diff_url": "https://github.com/huggingface/transformers/pull/13167.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13167.patch", "merged_at": 1629376192000 }
https://api.github.com/repos/huggingface/transformers/issues/13166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13166/comments
https://api.github.com/repos/huggingface/transformers/issues/13166/events
https://github.com/huggingface/transformers/pull/13166
973,692,526
MDExOlB1bGxSZXF1ZXN0NzE1MTAyNzcx
13,166
[AutoFeatureExtractor] Fix loading of local folders if config.json exists
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger - merging for now. Let me know if something is not well and we can change afterwards. Tests have been added so this PR should be more or less safe." ]
1,629
1,630
1,629
MEMBER
null
# What does this PR do? Currently there is a problem when loading the feature extractor locally via `AutoFeatureExtractor` as spotted by @philschmid: ```bash !git lfs install !git clone https://huggingface.co/facebook/wav2vec2-base-960h ``` and then: ```python from transformers import AutoFeatureExtractor extractor = AutoFeatureExtractor.from_pretrained("wav2vec2-base-960h") ``` This leads to an error. This PR fixes it and also improves the error message.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13166/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13166", "html_url": "https://github.com/huggingface/transformers/pull/13166", "diff_url": "https://github.com/huggingface/transformers/pull/13166.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13166.patch", "merged_at": 1629296294000 }
https://api.github.com/repos/huggingface/transformers/issues/13165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13165/comments
https://api.github.com/repos/huggingface/transformers/issues/13165/events
https://github.com/huggingface/transformers/issues/13165
973,648,477
MDU6SXNzdWU5NzM2NDg0Nzc=
13,165
Performance issues in the program
{ "login": "DLPerf", "id": 88604684, "node_id": "MDQ6VXNlcjg4NjA0Njg0", "avatar_url": "https://avatars.githubusercontent.com/u/88604684?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DLPerf", "html_url": "https://github.com/DLPerf", "followers_url": "https://api.github.com/users/DLPerf/followers", "following_url": "https://api.github.com/users/DLPerf/following{/other_user}", "gists_url": "https://api.github.com/users/DLPerf/gists{/gist_id}", "starred_url": "https://api.github.com/users/DLPerf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DLPerf/subscriptions", "organizations_url": "https://api.github.com/users/DLPerf/orgs", "repos_url": "https://api.github.com/users/DLPerf/repos", "events_url": "https://api.github.com/users/DLPerf/events{/privacy}", "received_events_url": "https://api.github.com/users/DLPerf/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[ "Hi @DLPerf, thanks for the warning! We're actually in the process of refactoring our Datasets to automatically support conversion to TF Datasets, at which point I'll be removing this part of our examples and replacing it with a call to the conversion method.\r\n\r\nHowever, if you have any insights for how we can improve the performance of our conversion methods there, that would be very helpful! You can review the code at https://github.com/huggingface/datasets/pull/2731 and leave suggestions as comments on that PR.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
Hello,I found a performance issue in the definition of `convert_dataset_for_tensorflow` , examples/tensorflow/text-classification/run_glue.py, [tf_dataset = tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder).map](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/text-classification/run_glue.py#L83) was called without **num_parallel_calls**. I think it will increase the efficiency of your program if you add this. The same issues also exist in [tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder).map](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/text-classification/run_text_classification.py#L98) , [.map(densify_ragged_batch)](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/multiple-choice/run_swag.py#L109) and [tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder).map](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/examples/tensorflow/question-answering/run_qa.py#L253) Here is [the documemtation of tensorflow](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset?hl=en#map) to support this thing. Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13165/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13164/comments
https://api.github.com/repos/huggingface/transformers/issues/13164/events
https://github.com/huggingface/transformers/issues/13164
973,469,588
MDU6SXNzdWU5NzM0Njk1ODg=
13,164
Missing weight in pretrained model `pegasus-xsum`
{ "login": "Linyxus", "id": 11204124, "node_id": "MDQ6VXNlcjExMjA0MTI0", "avatar_url": "https://avatars.githubusercontent.com/u/11204124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linyxus", "html_url": "https://github.com/Linyxus", "followers_url": "https://api.github.com/users/Linyxus/followers", "following_url": "https://api.github.com/users/Linyxus/following{/other_user}", "gists_url": "https://api.github.com/users/Linyxus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linyxus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linyxus/subscriptions", "organizations_url": "https://api.github.com/users/Linyxus/orgs", "repos_url": "https://api.github.com/users/Linyxus/repos", "events_url": "https://api.github.com/users/Linyxus/events{/privacy}", "received_events_url": "https://api.github.com/users/Linyxus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just tested this, it works fine for me, `lm_head` is included. Colab notebook here: https://colab.research.google.com/drive/1oCrC3Tb07C7V1l-0Fx6_c8xdEbyqx9Km?usp=sharing\r\n\r\nIt's best to use `.from_pretrained` instead of `torch.load`.", "A lot of thanks for your reply! :^)\r\n\r\nI just figured out that the `lm_head.weight` actually maps the internal embeddings back to word predictions, whose weights will be tied to the input word embeddings! So the lm_head's weight will definitely not be included in the pretrained weight file. I did not realize this due to my lack of knowledge about NLP models :^(. Thanks again for your time!" ]
1,629
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Linux - Python version: 3.9.6 - PyTorch version (GPU?): 1.9.0 with GPU - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten ## Information I am using the PegasusForConditionalGeneration model. I found that the pretrained weight `google/pegasus-xsum` hosted by HuggingFace does not have the weight `lm_head.weight` defined here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/pegasus/modeling_pegasus.py#L1210. ## To reproduce Steps to reproduce the behavior: 1. Download the `google/pegasus-xsum` weight from HuggingFace. 2. Load it use `torch.load`. 3. List its keys, no `lm_head.weight` is contained! <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> We should have the `lm_head` weights, because the weight file actually contains the bias `final_logits_bias` defined here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/pegasus/modeling_pegasus.py#L1209. And the pretrained model name `google/pegasus-xsum` suggests that it is finetuned on the XSum dataset (which is a ConditionalGeneration task), so the weight for `lm_head` should be contained to make the finetuned model complete!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13164/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13163/comments
https://api.github.com/repos/huggingface/transformers/issues/13163/events
https://github.com/huggingface/transformers/issues/13163
973,398,096
MDU6SXNzdWU5NzMzOTgwOTY=
13,163
is there any <SOS> or <EOS> token in reformer-enwik8?
{ "login": "xichenpan", "id": 48356412, "node_id": "MDQ6VXNlcjQ4MzU2NDEy", "avatar_url": "https://avatars.githubusercontent.com/u/48356412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xichenpan", "html_url": "https://github.com/xichenpan", "followers_url": "https://api.github.com/users/xichenpan/followers", "following_url": "https://api.github.com/users/xichenpan/following{/other_user}", "gists_url": "https://api.github.com/users/xichenpan/gists{/gist_id}", "starred_url": "https://api.github.com/users/xichenpan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xichenpan/subscriptions", "organizations_url": "https://api.github.com/users/xichenpan/orgs", "repos_url": "https://api.github.com/users/xichenpan/repos", "events_url": "https://api.github.com/users/xichenpan/events{/privacy}", "received_events_url": "https://api.github.com/users/xichenpan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think `reformer-enwik8` was not trained using a `<SOS> or <EOS>` token - it's just the model to evaluate Reformer's compression capabilities on enwik8, see paper: https://arxiv.org/abs/2001.04451", "> I think `reformer-enwik8` was not trained using a or token - it's just the model to evaluate Reformer's compression capabilities on enwik8, see paper: https://arxiv.org/abs/2001.04451\r\n\r\nThanks a lot for the quick reply.\r\nbtw, is there any other pretrained character-level language model provide by huggingface now?", "To add a related question: Is there any way of knowing which characters are part of the vocabulary of the pre-trained enwik-8 model? To my knowledge there only exists information on the `vocab_size` which is set to 258, but no information on which characters are part of the vocabulary of the pre-trained model.", "Reformer simple uses Python's `chr()` and `ord()` methods to tokenize and decode. See: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8 ", "This is true. But those methods work with the Unicode character set (i.e. up to 1,114,111) which does not correspond to a `vocab_size` of 258. This can also be seen with the shape of `outputs.scores` which is (1, 258). It seems to me that the vocab_size is just the first 258 characters of the unicode standard, i.e. Basic Latin & Latin-1 Supplement. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,635
1,635
NONE
null
## Environment info None ### Who can help @patrickvonplaten Models: reformer-enwik8 ## Information Model I am using reformer-enwik8: The problem arises when using: ``` def encode(list_of_strings, pad_token_id=0): max_length = max([len(string) for string in list_of_strings]) # create emtpy tensors attention_masks = torch.zeros((len(list_of_strings), max_length), dtype=torch.long) input_ids = torch.full((len(list_of_strings), max_length), pad_token_id, dtype=torch.long) for idx, string in enumerate(list_of_strings): # make sure string is in byte format if not isinstance(string, bytes): string = str.encode(string) input_ids[idx, :len(string)] = torch.tensor([x + 2 for x in string]) attention_masks[idx, :len(string)] = 1 return input_ids, attention_masks model = ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8") ids, masks = encode(["I COULD LABEL THIS ON THE INGREDIENTS AS MEAT".capitalize()]) logits = model(input_ids=ids, attention_mask=masks)["logits"] ``` The tasks I am working on is: try to get LM prob of one certain sequence ## Expected behavior I expect 0 and 1 represent `<SOS>` and `<EOS>` respectively, but I don't know if it is correct
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13163/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13162/comments
https://api.github.com/repos/huggingface/transformers/issues/13162/events
https://github.com/huggingface/transformers/issues/13162
973,376,404
MDU6SXNzdWU5NzMzNzY0MDQ=
13,162
Fine-tuned Robust Wav2Vec 2.0 models
{ "login": "Nithin-Holla", "id": 19574344, "node_id": "MDQ6VXNlcjE5NTc0MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/19574344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nithin-Holla", "html_url": "https://github.com/Nithin-Holla", "followers_url": "https://api.github.com/users/Nithin-Holla/followers", "following_url": "https://api.github.com/users/Nithin-Holla/following{/other_user}", "gists_url": "https://api.github.com/users/Nithin-Holla/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nithin-Holla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nithin-Holla/subscriptions", "organizations_url": "https://api.github.com/users/Nithin-Holla/orgs", "repos_url": "https://api.github.com/users/Nithin-Holla/repos", "events_url": "https://api.github.com/users/Nithin-Holla/events{/privacy}", "received_events_url": "https://api.github.com/users/Nithin-Holla/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Adding them now :-) \r\n\r\nBTW @Nithin-Holla, \r\n\r\nIt could be a cool project to fine-tune the `wav2vec2-large-robust` checkpoint on the [AMI dataset](https://groups.inf.ed.ac.uk/ami/corpus/datasets.shtml) since the model was not pretrained on AMI and AMI is a conversational dataset instead of a read-out corpus. Would be pretty interesting to see what performance can be achieved there (also compared to \"non-robust\" Wav2Vec2. \r\n\r\nIf you'd be interested in such a project, let me know I'd be more than happy to help you there :-)", "@patrickvonplaten Awesome, thanks for the speedy addition!\r\n\r\nSure, I'd be interested in working on fine-tuning on the AMI dataset :)", "Awesome, I'll send you a mail :-) \r\n\r\nBTW, there is still a problem with the just published checkpoints: https://github.com/pytorch/fairseq/issues/3799", "Hi @Nithin-Holla @patrickvonplaten Wondering if this thread went anywhere? I'm attempting to finetune Hubert-large on the AMI dataset, would be interested to see where you guys got to and share results.", "Yeah, we didn't manage to get good results yet sadly with AMI - it's mainly due AMI not being chunked by default.", "Thanks @patrickvonplaten. I think It would be cool to make a robust spoken-to-written-language engine. I'm thinking we could supplement the AMI corpus, eg. youtube, or even make our own spoken wikipedia/similar. If you have started a working group on this, let me know. Feel free to send me an email. :)\r\n\r\nFrom my experiments, even though WER isn't \"great,\" I see the finetuned model picking features of meeting conversation, which encourages me to see possibility here. I think the AMI data mimics adult spoken language better (than just reading of text, aka librispeech), and _together_ is like how humans learn language (from hearing -> reading + writing). \r\n\r\n", "ps. I had chunked AMI to 10-second chunks, using actually the scripts you already published on HG.", "Hey @i-am-neo,\r\n\r\nWe've run quite some experiments here: https://huggingface.co/ami-wav2vec2, but didn't get super good results so far compared to other research papers.\r\nI think it's mainly due the way we've chunked AMI. Think we need to align ourselves with how other people chunked the data.\r\n\r\nHere is how the data should be chunked (didn't manage to take a look into this yet - sadly):\r\n\r\n> The first question we have is whether IHM corresponds to all 4 individual headset audio files are used, *i.e.* the Individual headsets 120M four individual WAV headsets data or whether it corresponds to the single headset mix, *i.e.*, the Headset mix 30M single wav file (on https://groups.inf.ed.ac.uk/ami/download/)\r\n\r\n> The Kaldi recipe that we wrote for the paper(s) uses separate channels for each headset (see: [here](https://github.com/kaldi-asr/kaldi/blob/master/egs/ami/s5/local/ami_download.sh)), though I would be surprised if the mix-headset variant resulted in statistically different WERs, saving tons of bandwidth (of course, one never knows until you the experiment). The segmentations we use for separate individual headsets are of course compatible with the mixed channel waveform.\r\n\r\n> The second question that we have is how the long audio clips are exactly chunked. Sadly we couldn't really find an \"official\" pre-processing script and t[here](https://github.com/kaldi-asr/kaldi/blob/master/egs/ami/s5/local/ami_xml2text.sh) is very little exact information on how the data is chunked. Do you guys have an official preprocessing script that you could maybe share? \r\n\r\n> The script here starts with the original AMI annotations in XML format. The AMI data comes with manual segmentations and timings for interpunction signs, and this is how we broke the long utterances. One caveat here, for end2end models you probably do this anyways as decoding graphs are much smaller than in HMM days, thus decoding time for these will not be an issue. That script can either query textual annotations from the AMI-provided JAVA tool, or download exported version in textual form. In the Kaldi recipe we did not want to make a dependency on JAVA thus we follow from exported files by default, though you can grasp from the script how to do so if you wish.\r\n", "Hi @patrickvonplaten, thanks. I got similar loss and wer results with two flavors of hubert (3 epochs, max 10-sec chunks, single-headset).\r\nWere your experiment results run with the 20-sec max chunks? Which version of the dataset?\r\n\r\nI suspect there's more to the results than how we chunked. One can find in the single-headset version -\r\n1. two speakers speaking over each other\r\n2. and text like `i d i don't thi i don't think that it would be a a structural weakness` and `yes that's wi uh this will definitely`. \r\nExamples [here](https://colab.research.google.com/drive/1vSAabtxv_4HHi3mEdht4B6ud60djtj1z?usp=sharing).\r\nSome of the timings also seem a bit off, though I have to find time to look into it further.\r\n\r\nWhat are your thoughts?\r\n\r\nI agree with chunking by `interpunction` - it seems more \"natural\" to me, though I grouped short phrases into a minimum of 6 words if possible.", "We used the processing as described here: https://huggingface.co/datasets/ami#dataset-preprocessing\r\n\r\nThink we should apply the official Kaldi preprocessing though. Agree the target text is quite noisy ", "Hi @patrickvonplaten\r\nYes, the preprocess steps I had used for training was almost verbatim from [https://huggingface.co/datasets/ami#dataset-preprocessing](https://huggingface.co/datasets/ami#dataset-preprocessing), except for setting `MAX_LENGTH_IN_SECONDS = 10.0`. \r\n\r\nI've taken a closer look at the segment timestamps, and think they are actually inaccurate, likely due to \r\n\r\n> The AMI data comes with _manual_ segmentations and timings for interpunction signs, and this is how we broke the long utterances\r\n\r\n(italics mine). Have a look at some examples [here](https://colab.research.google.com/drive/1hUrREy7kV1HuUqJlNvzJqbr_KwZZBX2Y?usp=sharing).\r\n\r\nIt looks to me that cleanly labeled audio is freely mixed in with inaccurately-labeled ones, enough that we can't use AMI by itself without someone going in to clean the dataset manually. Still, I think it a helpful exercise to find other datasets similar to AMI to train a robust transcription model, either by supplementing AMI, or replacing it. You?", "Agree that it'd be very important to test Wav2Vec2 on more \"real-world\" / \"robust\" data. Sadly, I don't know any other dataset besides AMI that could fit that use-case. Do you have any ideas?", "I'm thinking Youtube.\r\n\r\nWe can try training first on some English Youtube public AMI-like videos. (I suspect we'd need to hook up a \"spoken\" version of a language model for decoding as well, but this is probably the easier part of the task). If the WER/whatever-metric-we-choose from training looks promising, we could take a subset of [AudioSet](https://research.google.com/audioset///index.html) (those identified as human speech) and build out a larger dataset.\r\n\r\nTo me, Hubert-large seems a more robust acoustic model than Wav2Vec2 to start with. Let me know what you think." ]
1,629
1,650
1,631
CONTRIBUTOR
null
# 🌟 New model addition ## Model description The pretrained Robust Wav2Vec 2.0 model is already available on the Hugging Face model hub (https://huggingface.co/facebook/wav2vec2-large-robust). Facebook also released two fine-tuned models -- one which is fine-tuned on Librispeech and another which is fine-tuned on Switchboard. Would be good to have these fine-tuned models on the hub as well. ## Open source status * [x] the model weights are available on fairseq: https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13162/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13161/comments
https://api.github.com/repos/huggingface/transformers/issues/13161/events
https://github.com/huggingface/transformers/issues/13161
973,371,943
MDU6SXNzdWU5NzMzNzE5NDM=
13,161
Cannot run run_mlm.py on a Japanese dataset - AttributeError: module transformers.models.mbart50 has no attribute BertJapaneseTokenizerFast
{ "login": "jungminc88", "id": 25294395, "node_id": "MDQ6VXNlcjI1Mjk0Mzk1", "avatar_url": "https://avatars.githubusercontent.com/u/25294395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungminc88", "html_url": "https://github.com/jungminc88", "followers_url": "https://api.github.com/users/jungminc88/followers", "following_url": "https://api.github.com/users/jungminc88/following{/other_user}", "gists_url": "https://api.github.com/users/jungminc88/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungminc88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungminc88/subscriptions", "organizations_url": "https://api.github.com/users/jungminc88/orgs", "repos_url": "https://api.github.com/users/jungminc88/repos", "events_url": "https://api.github.com/users/jungminc88/events{/privacy}", "received_events_url": "https://api.github.com/users/jungminc88/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nI think that you need to run the whole word masking script which can be found [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/mlm_wwm) instead of the regular `run_mlm.py` script (as you're doing whole word masking instead of just masking tokens).\r\n\r\nI've created a Colab notebook, it seems to work fine! https://colab.research.google.com/drive/1d2yGWLYy44KgSId1WbSfusX0Jp8JhKyD?usp=sharing", "It worked! Thank you so much!", "I needed to run run_mlm.py, not run_mlm_wwm.py, this time, and tried to run\r\n`python run_mlm.py --model_name_or_path cl-tohoku/bert-base-japanese --train_file /path/to/train/file.txt --do_train --output_dir output_dir/`\r\nand got the same error message:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_mlm.py\", line 550, in <module>\r\n main()\r\n File \"run_mlm.py\", line 337, in main\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs)\r\n File \"/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/bert-japanese/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py\", line 431, in from_pretrained\r\n tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)\r\n File \"/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/bert-japanese/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py\", line 226, in tokenizer_class_from_name\r\n return getattr(module, class_name)\r\n File \"/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/bert-japanese/lib/python3.7/site-packages/transformers/file_utils.py\", line 1995, in __getattr__\r\n raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\r\nAttributeError: module transformers.models.rembert has no attribute BertJapaneseTokenizerFast\r\n```\r\nI cannot figure out how to resolve this. I would greatly appreciate if you could look into it.\r\n@NielsRogge \r\n", "I found the root of your issue and the PR mentioned above should fix it.", "Thank you very much!" ]
1,629
1,630
1,630
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-4.18.0-25-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Models: - albert, bert, xlm: @LysandreJik Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) transformers/examples/pytorch/language-modeling/run_mlm.py The tasks I am working on is: * [x] my own task or dataset: (give details below) It's a Japanese corpus in .txt format. ## To reproduce Steps to reproduce the behavior: 1. I followed the instructions at https://huggingface.co/transformers/examples.html: git cloned the transformers repository, installed it, along with requirements in language-modeling. 2. I tried to run it with `python run_mlm.py --model_name_or_path cl-tohoku/bert-base-japanese-whole-word-masking --train_file /path/to/train/file.txt --do_train --output_dir output_dir/ ` Traceback (most recent call last): File "run_mlm.py", line 550, in <module> main() File "run_mlm.py", line 337, in main tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs) File "/my/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 424, in from_pretrained tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) File "/my/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 219, in tokenizer_class_from_name return getattr(module, class_name) File "/my/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/file_utils.py", line 1992, in __getattr__ raise AttributeError(f"module {self.__name__} has no attribute {name}") AttributeError: module transformers.models.mbart50 has no attribute BertJapaneseTokenizerFast ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should be done without an error. I have done this in July, and it went through without a problem.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13161/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13160/comments
https://api.github.com/repos/huggingface/transformers/issues/13160/events
https://github.com/huggingface/transformers/issues/13160
973,320,453
MDU6SXNzdWU5NzMzMjA0NTM=
13,160
Advice needed: Adding more FSMT models
{ "login": "jvamvas", "id": 5830820, "node_id": "MDQ6VXNlcjU4MzA4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jvamvas", "html_url": "https://github.com/jvamvas", "followers_url": "https://api.github.com/users/jvamvas/followers", "following_url": "https://api.github.com/users/jvamvas/following{/other_user}", "gists_url": "https://api.github.com/users/jvamvas/gists{/gist_id}", "starred_url": "https://api.github.com/users/jvamvas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jvamvas/subscriptions", "organizations_url": "https://api.github.com/users/jvamvas/orgs", "repos_url": "https://api.github.com/users/jvamvas/repos", "events_url": "https://api.github.com/users/jvamvas/events{/privacy}", "received_events_url": "https://api.github.com/users/jvamvas/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "@patil-suraj I am still very motivated to work on the pull request :) Just let me know if you need more information to answer my question.\r\n\r\nIn case you're interested, the paper describing our models is now public (https://openreview.net/forum?id=RvO9DqoWI9V). I believe the models could be of value to others in the community." ]
1,629
1,631
null
CONTRIBUTOR
null
# 🌟 New model addition ## Model description I am planning to contribute a series of FSMT models to the model hub. The models have been trained for a paper that is currently under review. Before working on a PR I wanted to ask for some advice: ### normalize_before The new models have been trained with Fairseq's option `normalize_before=True`, while the existing FSMT implementation uses `normalize_before=False`. I understand that copy-pasting model code is preferred to extending the configuration. This would mean that a near-duplicate module `fsmt_prenorm` needs to be created. Is this correct? ### Adequate base branch The FSMT module is currently being refactored (https://github.com/huggingface/transformers/pull/11218). Do you recommend that I start from the master branch or from the PR's feature branch, which is nearly completed?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13160/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/13159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13159/comments
https://api.github.com/repos/huggingface/transformers/issues/13159/events
https://github.com/huggingface/transformers/pull/13159
973,269,939
MDExOlB1bGxSZXF1ZXN0NzE0NzM5OTQz
13,159
Fix load_tf_weights alias.
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Models having `load_tf_weights` are checked, no bug found!", "This looks correct to me!" ]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? 1. Address #13154 2. I'm checking except for Albert, whether other models have the same problem or not. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13159/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13159", "html_url": "https://github.com/huggingface/transformers/pull/13159", "diff_url": "https://github.com/huggingface/transformers/pull/13159.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13159.patch", "merged_at": 1629734913000 }
https://api.github.com/repos/huggingface/transformers/issues/13158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13158/comments
https://api.github.com/repos/huggingface/transformers/issues/13158/events
https://github.com/huggingface/transformers/issues/13158
973,267,926
MDU6SXNzdWU5NzMyNjc5MjY=
13,158
CvT: Convolution based Image Transformers
{ "login": "AnugunjNaman", "id": 42839570, "node_id": "MDQ6VXNlcjQyODM5NTcw", "avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnugunjNaman", "html_url": "https://github.com/AnugunjNaman", "followers_url": "https://api.github.com/users/AnugunjNaman/followers", "following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}", "gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions", "organizations_url": "https://api.github.com/users/AnugunjNaman/orgs", "repos_url": "https://api.github.com/users/AnugunjNaman/repos", "events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}", "received_events_url": "https://api.github.com/users/AnugunjNaman/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "I would like to work on this @LysandreJik if you feel it's a nice addition.", "Great suggestion! How is this model different from Facebook AI's [ConViT](https://github.com/facebookresearch/convit)?\r\n\r\nCurrently, we have [ViT](https://huggingface.co/transformers/model_doc/vit.html), [DeiT](https://huggingface.co/transformers/model_doc/deit.html) and [BEiT](https://huggingface.co/transformers/master/model_doc/beit.html) in the library. It would be cool to have a Vision Transformer with convolutional inductive biases in the library, as it's probably better in terms of sample efficiency/FLOPS. Perhaps you can compare CvT and ConViT, and add the best of the two to the library? I can help you if you want (I've contributed the aforementioned ones πŸ˜‰ ).", "@NielsRogge yeah sure. Any help is great help. I haven't read ConvViT in depth but on skimming through it they have attempted to do something similar to convolutions. While CvT use pure convolution and here in this architecture they eliminate need for positional embedding, simplifying design for vision tasks with variable input resolution. Position Embedding is often realized by fixed-length learn-able vectors, limiting the trained model adaptation of variable-length input. This seems a good architecture even on metrics. Your thoughts? If you agree then I can move forward with your help since this my first contribution here.", "> Position Embedding is often realized by fixed-length learn-able vectors, limiting the trained model adaptation of variable-length input.\r\n\r\nYeah indeed, models like ViT and BEiT require interpolation of the pre-trained position embeddings when fine-tuning, which is a pain.\r\n\r\nDo you know how to get started to add a model? Most info can be found [here](https://huggingface.co/transformers/contributing.html) and [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model).", "@NielsRogge yeah. I have gone through it. I can try following similarly as given ViT, BEiT. I can start it now. If I get stuck I will get back to you. ", "The issue is resolved with PR #17299" ]
1,629
1,652
1,652
CONTRIBUTOR
null
# 🌟 New model addition ## Model description A new architecture, named Convolutional vision Transformers (CvT), that improves Vision Transformers (ViT) in performance and efficiently by introducing convolutions into ViT to yield the best of both designes. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (e.g. shift, scale, and distortion invariance) while maintaining the merits of Transformers (e.g. dynamic attention, global context, and better generalization). ## Open source status * [ https://github.com/microsoft/CvT] the model implementation is available: The Microsoft Model is OpenSource and would be a good addition to huggingface library * [ https://1drv.ms/u/s!AhIXJn_J-blW9RzF3rMW7SsLHa8h?e=blQ0Al] the model weights are available: The pretrained weights are present in drive * [https://github.com/leoxiaobin] is the authors: @leoxiaobin
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13158/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13157/comments
https://api.github.com/repos/huggingface/transformers/issues/13157/events
https://github.com/huggingface/transformers/issues/13157
973,245,736
MDU6SXNzdWU5NzMyNDU3MzY=
13,157
export BART model to ONNX failed with [Segmentation fault (core dumped)]
{ "login": "PanQiWei", "id": 46810637, "node_id": "MDQ6VXNlcjQ2ODEwNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/46810637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PanQiWei", "html_url": "https://github.com/PanQiWei", "followers_url": "https://api.github.com/users/PanQiWei/followers", "following_url": "https://api.github.com/users/PanQiWei/following{/other_user}", "gists_url": "https://api.github.com/users/PanQiWei/gists{/gist_id}", "starred_url": "https://api.github.com/users/PanQiWei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PanQiWei/subscriptions", "organizations_url": "https://api.github.com/users/PanQiWei/orgs", "repos_url": "https://api.github.com/users/PanQiWei/repos", "events_url": "https://api.github.com/users/PanQiWei/events{/privacy}", "received_events_url": "https://api.github.com/users/PanQiWei/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "Hello @PanQiWei!\r\n\r\nI have successfully exported the model with your command:\r\n\r\n```\r\nValidating ONNX model...\r\n -[βœ“] ONNX model outputs' name match reference model ({'last_hidden_state', 'encoder_last_hidden_state'}\r\n - Validating ONNX Model output \"last_hidden_state\":\r\n -[βœ“] (2, 8, 1024) matches (2, 8, 1024)\r\n -[βœ“] all values close (atol: 0.0001)\r\n - Validating ONNX Model output \"encoder_last_hidden_state\":\r\n -[βœ“] (2, 8, 1024) matches (2, 8, 1024)\r\n -[βœ“] all values close (atol: 0.0001)\r\nAll good, model saved at: lidiya-bart-large-xsum-samsum/model.onnx\r\n```\r\n\r\nWould you mind mentioning the versions of PyTorch and onnxruntime you have installed in your environment? Thank you!", "Hi @LysandreJik !\r\n\r\nFirst of all, thank you for your replay!\r\n\r\nI'm currently using ```pytorch==1.9.2``` with cuda version 10.2 and ```onnxruntime==1.8.1```\r\n\r\nFor πŸ€—transformers I tried both ```v1.9.2``` and ```the unrealesed version by installing from source```.\r\n\r\nI tried saveral times to export Bart model into ONNX but all got the failed information as given above.", "A segmentation fault isn't easy to debug, but I wonder if this isn't a memory error under the hood. Are you using a google colab?\r\n\r\nCan you try exporting the following model, which is much smaller, to see if this succeeds or not? `sshleifer/distilbart-cnn-12-6`", "I'm using GPU provided by my company, which is a RTX2080 GPU.\r\n\r\nI ran the command and replace model name with ```sshleifer/distilbart-cnn-12-6```, this time the error message changed, as shown below:\r\n\r\n```\r\nUsing framework PyTorch: 1.9.0+cu102\r\nOverriding 1 configuration item(s)\r\n - use_cache -> False\r\n/root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:212: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\r\n/root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:218: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:249: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\r\n/root/miniconda3/envs/gpc/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:863: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if input_shape[-1] > 1:\r\nValidating ONNX model...\r\nFloating point exception (core dumped)\r\n```", "Hmmm I'm failing at reproducing :( I have the following versions, could you try installing them to see if it changes something?\r\n```\r\nonnx 1.9.0\r\nonnxruntime 1.8.1\r\ntorch 1.9.0\r\n```\r\n\r\nI can also upload the converted model to the hub under a repository if that's helpful for you.", "I re-install the libraries with the versions as yours, but still faild. 😒 \r\n\r\nIt would be wonderful and thankful if you could upload the converted model! ❀️ \r\n\r\nAgain, thank you for your help! πŸ˜„", "Hello again! I've uploaded the converted model here: https://huggingface.co/lysandre/onnx-bart/tree/main (search for model.onnx)", "Thanks soooo much!! πŸ˜† It's my first time to try an ONNX model, can't wait to see the improvement in my tasks, thank you! ❀️ " ]
1,629
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:`v4.10.0-dev0` - Platform:Ubuntu 18.04.3 LTS - `Python` version:`v3.8.11` - `PyTorch` version (GPU?):`v1.9.0-cu102`(TRUE) - `Tensorflow` version (GPU?):`None` - `onnx` version:`v1.10.1` - `onnxruntim` version:`v1.8.1` - Using GPU in script?:`False` - Using distributed or parallel set-up in script?:`False` ### Who can help @patrickvonplaten @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using **BART**: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts following the **command line example** given in the official [Export transformers models](https://huggingface.co/transformers/serialization.html#onnx-onnxruntime) document. ## To reproduce Steps to reproduce the behavior: 1.run the following command line in console: ``` python -m transformers.onnx --model="lidiya/bart-large-xsum-samsum" --feature=default "lidiya-bart-large-xsum-samsum" ``` <details> <summary>Full log</summary> <pre> Some weights of the model checkpoint at lidiya/bart-large-xsum-samsum were not used when initializing BartModel: ['lm_head.weight', 'final_logits_bias'] - This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Using framework PyTorch: 1.9.0+cu102 Overriding 1 configuration item(s) - use_cache -> False /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:212: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:218: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:249: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /root/miniconda3/envs/speedup/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:863: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if input_shape[-1] > 1: Validating ONNX model... Traceback (most recent call last): Segmentation fault (core dumped) </pre> </details> <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Exporting **BART** model to onnx successfully and can be run on onnxruntime to generate correct results. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13157/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13156/comments
https://api.github.com/repos/huggingface/transformers/issues/13156/events
https://github.com/huggingface/transformers/pull/13156
973,245,555
MDExOlB1bGxSZXF1ZXN0NzE0NzE5MzA1
13,156
πŸ›: skip_special_tokens in tokenization_utils.py
{ "login": "zhangfanTJU", "id": 58031744, "node_id": "MDQ6VXNlcjU4MDMxNzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/58031744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangfanTJU", "html_url": "https://github.com/zhangfanTJU", "followers_url": "https://api.github.com/users/zhangfanTJU/followers", "following_url": "https://api.github.com/users/zhangfanTJU/following{/other_user}", "gists_url": "https://api.github.com/users/zhangfanTJU/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhangfanTJU/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangfanTJU/subscriptions", "organizations_url": "https://api.github.com/users/zhangfanTJU/orgs", "repos_url": "https://api.github.com/users/zhangfanTJU/repos", "events_url": "https://api.github.com/users/zhangfanTJU/events{/privacy}", "received_events_url": "https://api.github.com/users/zhangfanTJU/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think this is redundant, because the special ids have already been skipped in `filtered_tokens`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
CONTRIBUTOR
null
# What does this PR do? πŸ›: skip_special_tokens in tokenization_utils.py The skip_special_tokens in tokenization_utils.py does not work, because token never in self.all_special_ids, which shoule be fixed to self.all_special_tokens. Many thakns! @n1t0 @LysandreJik .
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13156/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13156", "html_url": "https://github.com/huggingface/transformers/pull/13156", "diff_url": "https://github.com/huggingface/transformers/pull/13156.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13156.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13155
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13155/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13155/comments
https://api.github.com/repos/huggingface/transformers/issues/13155/events
https://github.com/huggingface/transformers/pull/13155
973,021,059
MDExOlB1bGxSZXF1ZXN0NzE0NTI4MzM5
13,155
Add FSNER example in research_projects
{ "login": "sayef", "id": 9072075, "node_id": "MDQ6VXNlcjkwNzIwNzU=", "avatar_url": "https://avatars.githubusercontent.com/u/9072075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayef", "html_url": "https://github.com/sayef", "followers_url": "https://api.github.com/users/sayef/followers", "following_url": "https://api.github.com/users/sayef/following{/other_user}", "gists_url": "https://api.github.com/users/sayef/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayef/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayef/subscriptions", "organizations_url": "https://api.github.com/users/sayef/orgs", "repos_url": "https://api.github.com/users/sayef/repos", "events_url": "https://api.github.com/users/sayef/events{/privacy}", "received_events_url": "https://api.github.com/users/sayef/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks great already! I left some small comments.", "Hi @NielsRogge !\r\n\r\nWould you mind telling me what else should I do? Or it's ready to merge?\r\n\r\nThanks!", "Hi,\r\n\r\nNow others need to review this, once they're back from holiday ;)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hi @sayef, there are a few code quality issues; I tried to push to your branch but I do not have push access on your branch.\r\n\r\nCould you run the following commands at the root of your clone? It should tell you what needs fixing:\r\n```\r\npip install -U -e .[quality]\r\nmake fixup\r\n```", "Hi @LysandreJik,\r\n\r\nI followed what you suggested. Let me know if I need to do anything else. :)\r\n", "Hi @sayef - I believe you also rebased or merged the `master` branch into your PR. Unfortunately, GitHub sometimes has issues understanding what happened, for example here your PR shows 245 commits and 466 files changed. \r\n\r\nUsually just closing the PR and opening a new one from the same branch, without changing anything is enough. Would you mind doing that and pinging me so that I may merge? Thank you!", "> Usually just closing the PR and opening a new one from the same branch, without changing anything is enough. Would you mind doing that and pinging me so that I may merge? Thank you!\r\n\r\nOkay. Closing here and will ping you in other PR.\r\n" ]
1,629
1,632
1,632
CONTRIBUTOR
null
# What does this PR do? - This PR adds example code for FSNER (few-shot named entity recognition) using huggingface's `transformers` library. - Only prediction/inference code is provided, training code will be provided very soon. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/pull/13133 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13155/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13155", "html_url": "https://github.com/huggingface/transformers/pull/13155", "diff_url": "https://github.com/huggingface/transformers/pull/13155.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13155.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13154
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13154/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13154/comments
https://api.github.com/repos/huggingface/transformers/issues/13154/events
https://github.com/huggingface/transformers/issues/13154
972,970,588
MDU6SXNzdWU5NzI5NzA1ODg=
13,154
AttributeError: 'AlbertModel' object has no attribute 'bias' -Transforms 4.9.2
{ "login": "AES0007", "id": 61427339, "node_id": "MDQ6VXNlcjYxNDI3MzM5", "avatar_url": "https://avatars.githubusercontent.com/u/61427339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AES0007", "html_url": "https://github.com/AES0007", "followers_url": "https://api.github.com/users/AES0007/followers", "following_url": "https://api.github.com/users/AES0007/following{/other_user}", "gists_url": "https://api.github.com/users/AES0007/gists{/gist_id}", "starred_url": "https://api.github.com/users/AES0007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AES0007/subscriptions", "organizations_url": "https://api.github.com/users/AES0007/orgs", "repos_url": "https://api.github.com/users/AES0007/repos", "events_url": "https://api.github.com/users/AES0007/events{/privacy}", "received_events_url": "https://api.github.com/users/AES0007/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix.", "> Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix.\r\n\r\nMany Thanks. I saw you recommended that I use **AlbertForPreTraining**, and then you said **AlbertForPreTraining** has a bug. So I should I wait on the fix before proceeding correct?", "> > Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix.\r\n> \r\n> Many Thanks. I saw you recommended that I use **AlbertForPreTraining**, and then you said **AlbertForPreTraining** has a bug. So I should I wait on the fix before proceeding correct?\r\n\r\nThat's right. If you switch to `AlbertForPreTraining` now, you may encounter another error as the alias was not set at the right place.", "> > > Since your tf checkpoint has a prediction head which isn't included in `AlbertModel`, using `AlbertForPreTraining` should solve the problem. But `AlbertForPreTraining` currently has a small alias bug, I'll open a PR for the fix.\r\n> > \r\n> > \r\n> > Many Thanks. I saw you recommended that I use **AlbertForPreTraining**, and then you said **AlbertForPreTraining** has a bug. So I should I wait on the fix before proceeding correct?\r\n> \r\n> That's right. If you switch to `AlbertForPreTraining` now, you may encounter another error as the alias was not set at the right place.\r\n\r\nSounds good, I will wait. Thanks again.", "Hi, the PR has been merged. You can install transformers from source and test whether it works as expected :-)", "> Hi, the PR has been merged. You can install transformers from source and test whether it works as expected :-)\r\n\r\nWill do. Thanks", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Lunix - Python version: 3 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): 2.6.0 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.--> @LysandreJik - albert , bert, xlm: @LysandreJik ## Information I am using (AlBert Pretrained on custom corpus): The problem arises when using: * [ ] this is my own scripts: (give details below) _which uses transformers_ Wrote a simple script to extract CLS embedding for sentence from an albert model pretrained on custom vocab. I define the model using **AlbertModel.from_pretrained** and try to load my pre-trained weights using **load_tf_weights_in_albert** I run the script and get the error **AttributeError: 'AlbertModel' object has no attribute 'bias** The tasks I am working on is: * [ ] my own task or dataset: (give details below) Trying to extract CLS embedding for an input sentence from my albert model I pretrained on custom vocab. I will then feed these embedding into a custom classification layer. ## To reproduce Steps to reproduce the behavior: 1. Define my model: modelprt = AlbertModel.from_pretrained(pretrained_model_name_or_path='AOUTPR21/model.ckpt-10000', config=ptcfg, from_tf=True) --(I also tried converting checkpoint to pytorch but that gave an even worse error) 2. load weights into model via : modelprt = load_tf_weights_in_albert(modelprt, ptcfg, prot_model) ..._Note here prot_model = AOUTPR21/model.ckpt-10000_ 3. Run my script and I get the error <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> **Error seen :** File "./finetune_prot_pep.py", line 54, in <module> contexemb = getProtembedd(try_loader) File "/workspace/finetuneclassifier.py", line 52, in __init__ modelprt = AlbertModel.from_pretrained(pretrained_model_name_or_path='AOUTPR21/model.ckpt-10000', config=self.ptcfg, from_tf=True) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 1331, in from_pretrained model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index' File "/usr/local/lib/python3.6/dist-packages/transformers/models/albert/modeling_albert.py", line 169, in load_tf_weights_in_albert pointer = getattr(pointer, "bias") File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1131, in __getattr__ type(self).__name__, name)) AttributeError: 'AlbertModel' object has no attribute 'bias' ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expected that no error will come up
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13154/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13153
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13153/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13153/comments
https://api.github.com/repos/huggingface/transformers/issues/13153/events
https://github.com/huggingface/transformers/pull/13153
972,691,339
MDExOlB1bGxSZXF1ZXN0NzE0MjQ5OTky
13,153
Add Wav2Vec2 & Hubert ForSequenceClassification
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Accuracy evaluation on SUPERB tasks:\r\n\r\n- **KS** has uniform-length samples, so no padding\r\n- **ER** has non-uniform padded batches \r\n- **SID** is evaluated with batch_size=1 as in `s3prl`\r\n\r\n| Task | Model | normalize=True | normalize=False | Paper |\r\n| ---- | ------------- | -------------- | --------------- | ------ |\r\n| **KS** | Wav2Vec2-base | 0.9627 | 0.9643 | 0.9623 |\r\n| | Hubert-base | 0.9669 | 0.9672 | 0.9630 |\r\n| **ER** | Wav2Vec2-base | 0.5281 | 0.6258 | 0.6343 |\r\n| | Hubert-base | 0.5502 | 0.6359 | 0.6492 |\r\n| **SID** | Wav2Vec2-base | 0.7360 | 0.7518 | 0.7518 |\r\n| | Hubert-base | 0.8071 | 0.8071 | 0.8142 |\r\n\r\nSo far `normalize=False` is always better, as expected (`s3prl` never used normalization during eval).\r\nThere's also some slight variation with the official results, but it's of the same magnitude as `s3prl` vs `paper` results. ", "- [x] Passed integration test for all 4 tasks on both models\r\n- [x] Added `Copied from` where possible (the script just inserts a full copy of `W2V2.forward()` before `End copy`, so I didn't use it there)\r\n- [x] Added dummy examples to `forward()` docs\r\n- [x] Moved the models to `https://huggingface.co/superb`\r\n\r\n@patrickvonplaten everything should be ready to merge now :) ", "Awesome job @anton-l ! Feel free to merge the PR whenever you want" ]
1,629
1,631
1,630
MEMBER
null
# What does this PR do? This adds a Hubert extension for sequence classification. Ultimately this classification head should be compatible with s3prl `UtteranceLevel` [implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/model.py#L35) to support classification tasks from SUPERB, such as [Keyword Spotting](https://huggingface.co/datasets/superb#ks) and transfer their pretrained models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13153/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13153/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13153", "html_url": "https://github.com/huggingface/transformers/pull/13153", "diff_url": "https://github.com/huggingface/transformers/pull/13153.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13153.patch", "merged_at": 1630086771000 }
https://api.github.com/repos/huggingface/transformers/issues/13152
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13152/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13152/comments
https://api.github.com/repos/huggingface/transformers/issues/13152/events
https://github.com/huggingface/transformers/pull/13152
972,665,012
MDExOlB1bGxSZXF1ZXN0NzE0MjI2OTg2
13,152
Set missing seq_length variable when using inputs_embeds with ALBERT & Remove code duplication
{ "login": "olenmg", "id": 61135159, "node_id": "MDQ6VXNlcjYxMTM1MTU5", "avatar_url": "https://avatars.githubusercontent.com/u/61135159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olenmg", "html_url": "https://github.com/olenmg", "followers_url": "https://api.github.com/users/olenmg/followers", "following_url": "https://api.github.com/users/olenmg/following{/other_user}", "gists_url": "https://api.github.com/users/olenmg/gists{/gist_id}", "starred_url": "https://api.github.com/users/olenmg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/olenmg/subscriptions", "organizations_url": "https://api.github.com/users/olenmg/orgs", "repos_url": "https://api.github.com/users/olenmg/repos", "events_url": "https://api.github.com/users/olenmg/events{/privacy}", "received_events_url": "https://api.github.com/users/olenmg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure why I can't add the code suggestion, but it makes more sense to do this:\r\n\r\n```diff\r\nif input_ids is not None and inputs_embeds is not None:\r\n raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\r\n elif input_ids is not None:\r\n input_shape = input_ids.size()\r\n- batch_size, seq_length = input_shape\r\n elif inputs_embeds is not None:\r\n input_shape = inputs_embeds.size()[:-1]\r\n- batch_size, seq_length = input_shape\r\n else:\r\n raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\r\n\r\n+ batch_size, seq_length = input_shape \r\n device = input_ids.device if input_ids is not None else inputs_embeds.device\r\n```", "> Not sure why I can't add the code suggestion, but it makes more sense to do this:\r\n> \r\n> ```diff\r\n> if input_ids is not None and inputs_embeds is not None:\r\n> raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\r\n> elif input_ids is not None:\r\n> input_shape = input_ids.size()\r\n> - batch_size, seq_length = input_shape\r\n> elif inputs_embeds is not None:\r\n> input_shape = inputs_embeds.size()[:-1]\r\n> - batch_size, seq_length = input_shape\r\n> else:\r\n> raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\r\n> \r\n> + batch_size, seq_length = input_shape \r\n> device = input_ids.device if input_ids is not None else inputs_embeds.device\r\n> ```\r\n\r\n@NielsRogge \r\nYes, I fully agree with you. But most of the codes of other models are implemented as I wrote, and I just wanted to unify the format to prevent confusion.\r\n\r\nFor example, the code below is from src/transformers/models/bert/modeling_bert.py:\r\n```python\r\nif input_ids is not None and inputs_embeds is not None:\r\n raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\r\nelif input_ids is not None:\r\n input_shape = input_ids.size()\r\n batch_size, seq_length = input_shape\r\nelif inputs_embeds is not None:\r\n input_shape = inputs_embeds.size()[:-1]\r\n batch_size, seq_length = input_shape\r\nelse:\r\n raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\r\n\r\ndevice = input_ids.device if input_ids is not None else inputs_embeds.device\r\n```\r\n\r\nBut if needed, I could change all of the code looks like above.", "> But if needed, I could change all of the code looks like above.\r\n\r\nActually, I'm in favor of this, because it's duplicated code, and I think it's cleaner when just writing it once. ", "In that case, you can also update the [CookieCutter template](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py), which is used when adding a new model. ", "@NielsRogge \r\nOkay, I reflected it! (for at least the files I've found)\r\nPlease check again.", "LGTM! Thanks for making this cleaner." ]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> I think this bug is similar to #13128 , only difference is that this PR is for ALBERT. `AlbertModel` has the same issue that `seq_length` variable is not declared when using `inputs_embeds` I checked that other models that were implemented in the same code format as ALBERT/ELECTRA don't have this issue anymore. ++Additional Remove all of code duplications as @NielsRogge referred on the comments. ```Diff if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() - batch_size, seq_length = input_shape elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] - batch_size, seq_length = input_shape else: raise ValueError("You have to specify either input_ids or inputs_embeds") + batch_size, seq_length = input_shape device = input_ids.device if input_ids is not None else inputs_embeds.device ``` I think it is trivial, so I don't make additional PR. (If this is the problem, please inform me.) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13152/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13152", "html_url": "https://github.com/huggingface/transformers/pull/13152", "diff_url": "https://github.com/huggingface/transformers/pull/13152.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13152.patch", "merged_at": 1630407085000 }
https://api.github.com/repos/huggingface/transformers/issues/13151
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13151/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13151/comments
https://api.github.com/repos/huggingface/transformers/issues/13151/events
https://github.com/huggingface/transformers/issues/13151
972,551,799
MDU6SXNzdWU5NzI1NTE3OTk=
13,151
Unhashable type : dict for visualbert example code.
{ "login": "abhijithneilabraham", "id": 35420019, "node_id": "MDQ6VXNlcjM1NDIwMDE5", "avatar_url": "https://avatars.githubusercontent.com/u/35420019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhijithneilabraham", "html_url": "https://github.com/abhijithneilabraham", "followers_url": "https://api.github.com/users/abhijithneilabraham/followers", "following_url": "https://api.github.com/users/abhijithneilabraham/following{/other_user}", "gists_url": "https://api.github.com/users/abhijithneilabraham/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhijithneilabraham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhijithneilabraham/subscriptions", "organizations_url": "https://api.github.com/users/abhijithneilabraham/orgs", "repos_url": "https://api.github.com/users/abhijithneilabraham/repos", "events_url": "https://api.github.com/users/abhijithneilabraham/events{/privacy}", "received_events_url": "https://api.github.com/users/abhijithneilabraham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You have 2 {{ in your code, whereas it should be only one:\r\n\r\n```\r\ninputs.update({\r\n \"visual_embeds\": visual_embeds,\r\n \"visual_token_type_ids\": visual_token_type_ids,\r\n \"visual_attention_mask\": visual_attention_mask\r\n})\r\n```", "yes I guessed that, i.e, it used sets instead of dicts. But then it should be modified in the documentation as well. Also, doing as you said threw another error\r\n\r\n```\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-8716adc0686f>\", line 1, in <module>\r\n runfile('/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py', wdir='/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments')\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 705, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 102, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py\", line 224, in <module>\r\n outputs = model(**inputs, labels=labels)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py\", line 1240, in forward\r\n return_dict=return_dict,\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py\", line 784, in forward\r\n visual_input_shape = visual_embeds.size()[:-1]\r\n\r\nTypeError: 'int' object is not callable\r\n```\r\n@NielsRogge ", "Tagging @gchhablani as he's the expert on VisualBERT", "Thanks for the tag @NielsRogge.\n\n@abhijithneilabraham There was an error in the docs earlier. The dictionary update is wrong. It should not have `{{` and `}}`, but `{` and `}` instead. It was fixed recently in a PR. Sorry about that.\n\nPlease let me know if this solves your issue.", "> Please let me know if this solves your issue.\r\n\r\nApparently that doesn't solve his issue, as he shows above.", "@gchhablani I would like to help with the issue if I can. Let me know.", "@abhijithneilabraham Can you share your `get_visual_embeddings` method if possible?", "@gchhablani I used it from the [colab notebook](https://colab.research.google.com/drive/1bLGxKdldwqnMVA5x4neY7-l_8fKGWQYI?usp=sharing) that you shared in the doc. I still was unclear on the proper way of using it.", "Also @gchhablani this was the issue encountered after modifying the example code like the way you mentioned.\r\n\r\n```\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-8716adc0686f>\", line 1, in <module>\r\n runfile('/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py', wdir='/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments')\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 705, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 102, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py\", line 224, in <module>\r\n outputs = model(**inputs, labels=labels)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py\", line 1240, in forward\r\n return_dict=return_dict,\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n\r\n File \"/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/transformers/models/visual_bert/modeling_visual_bert.py\", line 784, in forward\r\n visual_input_shape = visual_embeds.size()[:-1]\r\n\r\nTypeError: 'int' object is not callable\r\n```\r\n\r\nCould this be because of the improper way of using the visual embeds? If yes I'd like to understand a proper approach to generating the visual embeds with a function", "@gchhablani This is my source code\r\n\r\n```\r\n#!/usr/bin/env python3\r\n# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Mon Aug 16 11:22:20 2021\r\n\r\n@author: abhijithneilabraham\r\n\"\"\"\r\n\r\nimport torch,torchvision\r\nimport matplotlib.pyplot as plt\r\nimport json\r\nimport cv2\r\nimport numpy as np\r\n\r\nfrom detectron2.modeling import build_model\r\nfrom detectron2.checkpoint import DetectionCheckpointer\r\nfrom detectron2.structures.image_list import ImageList\r\nfrom detectron2.data import transforms as T\r\nfrom detectron2.modeling.box_regression import Box2BoxTransform\r\nfrom detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputs\r\nfrom detectron2.structures.boxes import Boxes\r\nfrom detectron2.layers import nms\r\nfrom detectron2 import model_zoo\r\nfrom detectron2.config import get_cfg\r\n\r\nimg1 = plt.imread(f'profile_pic.jpeg')\r\n\r\n# Detectron expects BGR images\r\nimg_bgr1 = cv2.cvtColor(img1, cv2.COLOR_RGB2BGR)\r\n\r\ncfg_path = \"COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml\"\r\n\r\ndef load_config_and_model_weights(cfg_path):\r\n cfg = get_cfg()\r\n cfg.merge_from_file(model_zoo.get_config_file(cfg_path))\r\n\r\n # ROI HEADS SCORE THRESHOLD\r\n cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5\r\n\r\n # Comment the next line if you're using 'cuda'\r\n cfg['MODEL']['DEVICE']='cpu'\r\n\r\n cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(cfg_path)\r\n\r\n return cfg\r\n\r\ncfg = load_config_and_model_weights(cfg_path)\r\n\r\ndef get_model(cfg):\r\n # build model\r\n model = build_model(cfg)\r\n\r\n # load weights\r\n checkpointer = DetectionCheckpointer(model)\r\n checkpointer.load(cfg.MODEL.WEIGHTS)\r\n\r\n # eval mode\r\n model.eval()\r\n return model\r\n\r\nmodel = get_model(cfg)\r\n\r\ndef prepare_image_inputs(cfg, img_list):\r\n # Resizing the image according to the configuration\r\n transform_gen = T.ResizeShortestEdge(\r\n [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST\r\n )\r\n img_list = [transform_gen.get_transform(img).apply_image(img) for img in img_list]\r\n\r\n # Convert to C,H,W format\r\n convert_to_tensor = lambda x: torch.Tensor(x.astype(\"float32\").transpose(2, 0, 1))\r\n\r\n batched_inputs = [{\"image\":convert_to_tensor(img), \"height\": img.shape[0], \"width\": img.shape[1]} for img in img_list]\r\n\r\n # Normalizing the image\r\n num_channels = len(cfg.MODEL.PIXEL_MEAN)\r\n pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).view(num_channels, 1, 1)\r\n pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).view(num_channels, 1, 1)\r\n normalizer = lambda x: (x - pixel_mean) / pixel_std\r\n images = [normalizer(x[\"image\"]) for x in batched_inputs]\r\n\r\n # Convert to ImageList\r\n images = ImageList.from_tensors(images,model.backbone.size_divisibility)\r\n \r\n return images, batched_inputs\r\n\r\nimages, batched_inputs = prepare_image_inputs(cfg, [img_bgr1])\r\ndef get_features(model, images):\r\n features = model.backbone(images.tensor)\r\n return features\r\n\r\nfeatures = get_features(model, images)\r\n\r\n\r\n\r\ndef get_proposals(model, images, features):\r\n proposals, _ = model.proposal_generator(images, features)\r\n return proposals\r\n\r\nproposals = get_proposals(model, images, features)\r\n\r\n\r\ndef get_box_features(model, features, proposals):\r\n features_list = [features[f] for f in ['p2', 'p3', 'p4', 'p5']]\r\n box_features = model.roi_heads.box_pooler(features_list, [x.proposal_boxes for x in proposals])\r\n box_features = model.roi_heads.box_head.flatten(box_features)\r\n box_features = model.roi_heads.box_head.fc1(box_features)\r\n box_features = model.roi_heads.box_head.fc_relu1(box_features)\r\n box_features = model.roi_heads.box_head.fc2(box_features)\r\n\r\n box_features = box_features.reshape(1, 1000, 1024) # depends on your config and batch size\r\n return box_features, features_list\r\n\r\nbox_features, features_list = get_box_features(model, features, proposals)\r\n\r\n\r\n\r\ndef get_prediction_logits(model, features_list, proposals):\r\n cls_features = model.roi_heads.box_pooler(features_list, [x.proposal_boxes for x in proposals])\r\n cls_features = model.roi_heads.box_head(cls_features)\r\n pred_class_logits, pred_proposal_deltas = model.roi_heads.box_predictor(cls_features)\r\n return pred_class_logits, pred_proposal_deltas\r\n\r\npred_class_logits, pred_proposal_deltas = get_prediction_logits(model, features_list, proposals)\r\n\r\n\r\ndef get_box_scores(cfg, pred_class_logits, pred_proposal_deltas):\r\n box2box_transform = Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS)\r\n smooth_l1_beta = cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA\r\n\r\n outputs = FastRCNNOutputs(\r\n box2box_transform,\r\n pred_class_logits,\r\n pred_proposal_deltas,\r\n proposals,\r\n smooth_l1_beta,\r\n )\r\n\r\n boxes = outputs.predict_boxes()\r\n scores = outputs.predict_probs()\r\n image_shapes = outputs.image_shapes\r\n\r\n return boxes, scores, image_shapes\r\n\r\nboxes, scores, image_shapes = get_box_scores(cfg, pred_class_logits, pred_proposal_deltas)\r\n\r\n\r\n\r\n\r\ndef get_output_boxes(boxes, batched_inputs, image_size):\r\n proposal_boxes = boxes.reshape(-1, 4)\r\n scale_x, scale_y = (batched_inputs[\"width\"] / image_size[1], batched_inputs[\"height\"] / image_size[0])\r\n output_boxes = Boxes(proposal_boxes)\r\n# output_boxes.scale(scale_x, scale_y)\r\n output_boxes.clip(image_size)\r\n\r\n return output_boxes\r\n\r\noutput_boxes = [get_output_boxes(boxes[i], batched_inputs[i], proposals[i].image_size) for i in range(len(proposals))]\r\n\r\n\r\ndef select_boxes(cfg, output_boxes, scores):\r\n test_score_thresh = cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST\r\n test_nms_thresh = cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST\r\n cls_prob = scores.detach()\r\n cls_boxes = output_boxes.tensor.detach().reshape(1000,80,4)\r\n max_conf = torch.zeros((cls_boxes.shape[0]))\r\n for cls_ind in range(0, cls_prob.shape[1]-1):\r\n cls_scores = cls_prob[:, cls_ind+1]\r\n det_boxes = cls_boxes[:,cls_ind,:]\r\n keep = np.array(nms(det_boxes, cls_scores, test_nms_thresh))\r\n max_conf[keep] = torch.where(cls_scores[keep] > max_conf[keep], cls_scores[keep], max_conf[keep])\r\n keep_boxes = torch.where(max_conf >= test_score_thresh)[0]\r\n return keep_boxes, max_conf\r\n\r\n\r\ntemp = [select_boxes(cfg, output_boxes[i], scores[i]) for i in range(len(scores))]\r\nkeep_boxes, max_conf = [],[]\r\nfor keep_box, mx_conf in temp:\r\n keep_boxes.append(keep_box)\r\n max_conf.append(mx_conf)\r\n \r\n \r\n \r\nMIN_BOXES=10\r\nMAX_BOXES=100\r\ndef filter_boxes(keep_boxes, max_conf, min_boxes, max_boxes):\r\n if len(keep_boxes) < min_boxes:\r\n keep_boxes = np.argsort(max_conf).numpy()[::-1][:min_boxes]\r\n elif len(keep_boxes) > max_boxes:\r\n keep_boxes = np.argsort(max_conf).numpy()[::-1][:max_boxes]\r\n return keep_boxes\r\n\r\nkeep_boxes = [filter_boxes(keep_box, mx_conf, MIN_BOXES, MAX_BOXES) for keep_box, mx_conf in zip(keep_boxes, max_conf)]\r\n\r\n\r\ndef get_visual_embeds(box_features, keep_boxes):\r\n return box_features[keep_boxes.copy()]\r\n\r\nvisual_embeds = np.asarray([get_visual_embeds(box_feature, keep_box) for box_feature, keep_box in zip(box_features, keep_boxes)])\r\n\r\n\r\n# Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image in the batch.\r\nfrom transformers import BertTokenizer, VisualBertForQuestionAnswering\r\n\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = VisualBertForQuestionAnswering.from_pretrained('uclanlp/visualbert-vqa')\r\n\r\ntext = \"what color dress is he wearing?\"\r\ninputs = tokenizer(text, return_tensors='pt')\r\n\r\nvisual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long) #example\r\nvisual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)\r\n\r\ninputs.update({\r\n \"visual_embeds\": visual_embeds,\r\n \"visual_token_type_ids\": visual_token_type_ids,\r\n \"visual_attention_mask\": visual_attention_mask\r\n})\r\n\r\nlabels = torch.tensor([[0.0,1.0]]).unsqueeze(0) # Batch size 1, Num labels 2\r\n\r\noutputs = model(**inputs, labels=labels)\r\nloss = outputs.loss\r\nscores = outputs.logits\r\nprint(outputs)\r\n```", "@abhijithneilabraham The issue is that you are using a numpy array when `visual_embeds` expects a torch tensor:\r\n\r\n```python\r\n>>> import numpy as np\r\n>>> import torch\r\n>>> a = np.ones(10)\r\n>>> a.size\r\n10\r\n>>> b = torch.ones(10)\r\n>>> b.size\r\n<built-in method size of Tensor object at 0x7fb34109ebc0>\r\n>>> b.size()\r\ntorch.Size([10])\r\n```\r\n\r\nI believe you can check the other demo, where the LXMERT authors have provided FasterRCNN classes and the pre-trained model on Visual Genome. It'll be much easier to use that.\n\nEDIT\n------\nThe docs issue has been fixed, the docs have not yet updated, I guess.\n", "Much thanks @gchhablani ! Can you share the link to the other demo? I can then close this issue.", "@abhijithneilabraham No problem :)\n\n\nHere is the demo link : https://github.com/huggingface/transformers/tree/master/examples/research_projects/visual_bert", "Thank you! " ]
1,629
1,629
1,629
NONE
null
Hi, I am using the visualbert model as shown in [visualbert visualreasoning](https://huggingface.co/transformers/model_doc/visual_bert.html#visualbertforvisualreasoning) ``` # Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image in the batch. from transformers import BertTokenizer, VisualBertForVisualReasoning import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = VisualBertForVisualReasoning.from_pretrained('uclanlp/visualbert-nlvr2') text = "Who is eating the apple?" inputs = tokenizer(text, return_tensors='pt') visual_embeds = get_visual_embeddings(image).unsqueeze(0) visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long) #example visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float) inputs.update({{ "visual_embeds": visual_embeds, "visual_token_type_ids": visual_token_type_ids, "visual_attention_mask": visual_attention_mask }}) labels = torch.tensor(1).unsqueeze(0) # Batch size 1, Num choices 2 outputs = model(**inputs, labels=labels) loss = outputs.loss scores = outputs.logits ``` and I encountered the following error: ``` Traceback (most recent call last): File "<ipython-input-1-8716adc0686f>", line 1, in <module> runfile('/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py', wdir='/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments') File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "/Users/abhijithneilabraham/miniconda3/envs/py/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "/Users/abhijithneilabraham/Documents/GitHub/visualbert_experiments/example.py", line 219, in <module> "visual_attention_mask": visual_attention_mask TypeError: unhashable type: 'dict' ``` Is this operation supported by python or is this a bug in the code? Transformers-cli env output: ``` - `transformers` version: 4.9.2 - Platform: Darwin-16.7.0-x86_64-i386-64bit - Python version: 3.6.13 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13151/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13150
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13150/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13150/comments
https://api.github.com/repos/huggingface/transformers/issues/13150/events
https://github.com/huggingface/transformers/pull/13150
972,479,271
MDExOlB1bGxSZXF1ZXN0NzE0MDY2Mjk1
13,150
examples: add keep_linebreaks option to CLM examples
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for your PR @stefan-it! \r\n\r\nCould we maybe make `keep_linebreaks` configurable by the command line and let it default to `True`? So ideally we could add it to the `DataArguments` class", "@patrickvonplaten good idea, I'm working on it now!", "I've implemented it :) \r\n\r\nThe `examples/pytorch/language-modeling/run_clm_no_trainer.py` uses the raw argument parser, so I used the same boolean logic as used for the fast tokenizer (or here: slow tokenizer).", "Thanks a lot for your PR @stefan-it! \r\n\r\n@sgugger agreed to make this change here: https://github.com/huggingface/transformers/issues/12971 \r\n\r\nSo good to merge for me!" ]
1,629
1,630
1,630
COLLABORATOR
null
Hi, as discussed in #12971 a newline is missing when using the CLM example scripts, when no dataset name is provided (this is the case when you use "normal" text files). This PR adds the `keep_linebreaks=True` option to all CLM example scripts (when using files).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13150/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13150", "html_url": "https://github.com/huggingface/transformers/pull/13150", "diff_url": "https://github.com/huggingface/transformers/pull/13150.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13150.patch", "merged_at": 1630056945000 }
https://api.github.com/repos/huggingface/transformers/issues/13149
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13149/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13149/comments
https://api.github.com/repos/huggingface/transformers/issues/13149/events
https://github.com/huggingface/transformers/issues/13149
972,436,719
MDU6SXNzdWU5NzI0MzY3MTk=
13,149
Autoregressive differentiable decoding? (no teacher forcing nor self-reconstruction)
{ "login": "Ravoxsg", "id": 26378951, "node_id": "MDQ6VXNlcjI2Mzc4OTUx", "avatar_url": "https://avatars.githubusercontent.com/u/26378951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ravoxsg", "html_url": "https://github.com/Ravoxsg", "followers_url": "https://api.github.com/users/Ravoxsg/followers", "following_url": "https://api.github.com/users/Ravoxsg/following{/other_user}", "gists_url": "https://api.github.com/users/Ravoxsg/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ravoxsg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ravoxsg/subscriptions", "organizations_url": "https://api.github.com/users/Ravoxsg/orgs", "repos_url": "https://api.github.com/users/Ravoxsg/repos", "events_url": "https://api.github.com/users/Ravoxsg/events{/privacy}", "received_events_url": "https://api.github.com/users/Ravoxsg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As far as I know, in generation phase, the autoregressive decoding procedures are all implemented in each searching method such as greedy search, meaning that the output embeddings will certainly be sampled by `argmax` or other sampling methods. Besides, the autoregressive decodings are all running in loop statement currently.\r\n\r\nI think that implementing a method that solely performs autoregression and makes the output differentiable seems great and feasible, but if the bottleneck in your code is loop statement, this way may have no significant help in performance.", "cc @patrickvonplaten ", "Yeah currently gradient backprop is not really supported in `transformers` sadly. Think it would require some major changes to implemented this. Feel free to give it a stab! Would also be very interested in knowing how feasible this is in PyTorch!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,632
1,632
NONE
null
# πŸš€ Feature request Hi, Is there any way to perform autoregressive differentiable decoding? As far as I know, for **encoder-decoder models** (e.g., T5, BART), we have the following: - **.forward()** performs decoding using **teacher forcing**, or in the case of BART, shifts the _input_ids_ to the right if no _decoder_input_ids_ are given in order to do self-reconstruction. In that case the decoding is differentiable but not autoregressive as it uses teacher forcing (or some kind of supervision). - **.generate()** enables decoding using **greedy search**, **beam search**, or **top-k sampling**. In that case, the decoding is autoregressive but not differentiable. I would like to **_decode differentiably_** by using **plain autoregression**: at each decoding step t, we feed to the decoder the token generated at previous step t-1. By _differentiably_, I mean that we don't take the argmax() to select a single generated token. Rather, we use embeddings weighted by the logits, to get a pseudo-token that we can feed to the decoder at the next step. Such a technique of using weighted averages of embeddings as pseudo-tokens has been used in recent research: https://arxiv.org/abs/1905.05621 We can implement this manually using a for loop over decoding steps calling .forward() at each step, but it is quite slow. ## Motivation The motivation behind this feature request is to be able to generate sequences differentiably without supervision, and fast, which can be very useful for several research purposes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13149/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13148
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13148/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13148/comments
https://api.github.com/repos/huggingface/transformers/issues/13148/events
https://github.com/huggingface/transformers/issues/13148
972,428,456
MDU6SXNzdWU5NzI0Mjg0NTY=
13,148
Slow tokenizers return overflowing tokens in reversed order
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge I would like to contribute to this. Can I work on this issue?\r\n", "Sure! The goal would be to make the slow tokenizers equivalent to the fast tokenizers. So that means:\r\n\r\n- [ ] making sure overflowing tokens are returned in the correct order\r\n- [ ] add special tokens to the overflowing tokens\r\n- [ ] add a `overflow_to_sample_mapping`, similar to the fast tokenizers.\r\n\r\nThis would probably require to update the `truncate_sequences` method defined [here](https://github.com/huggingface/transformers/blob/439a43b6b403205eeda2d62645fc16c93627d30d/src/transformers/tokenization_utils_base.py#L2922).", "I see someone also already noticed this: #6697", "@Apoorvgarg-creator It is extremely kind of you to offer your help on this problem! \r\n\r\nAs I had started to look at the problem of the strange order of tokens in `overflowing_tokens` (\"making sure overflowing tokens are returned in the correct order\"), let me share with you what I had identified if it can be of any help:\r\n- There are behaviours that were not tested in the `test_maximum_encoding_length_pair_input` and `test_maximum_encoding_length_single_input` tests in the `test_tokenization_common.py` file. So we should add these tests to make sure that overflowing tokens are tested for all `TruncationStrategy` types and with a single sequence or a pair of sequences;\r\n- As said by @NielsRogge, the problem is most likely with the `truncate_sequences` method in `tokenization_utils_base.py`.\r\n\r\nI would like to take this opportunity to comment on the other 2 points (\"add special tokens to the overflowing tokens\" and\r\n\"add a `overflow_to_sample_mapping`, similar to the fast tokenizers\") raised by @NielsRogge. Indeed, the slow and fast tokenizer handle overflowing tokens quite differently. I think it would be nice to have the opinion of @LysandreJik , @sgugger and @n1t0 (and if ever someone else wants to give their opinion too, it would be a pleasure!!) on the fact of changing the API of the slow tokenizers so that it corresponds to the one of the fast tokenizers (as there is perhaps a need for backward compatibility).", "@SaulLu @NielsRogge Thank you for the guidance. I will go through the `truncate_sequences` method.", "@NielsRogge @SaulLu The reason we are getting the reverse order in the `longest_first` truncation strategy is that In other truncation strategies we are truncating the sequence in one iteration only whereas In `longest_first` we are running a loop\r\n`num_tokens_to_remove` times keeping `window_len` = 1 every time except when `overflowing_token` is empty. Hence we will be taking `1 id` at a time from the last.\r\nI have developed the code that I think will resolve the issue \r\n> making sure overflowing tokens are returned in the correct order.", "@Apoorvgarg-creator - could be error on my end, but on the current master branch I'm still witnessing reversed order with the toy example provided in the original post.", "> @Apoorvgarg-creator - could be error on my end, but on the current master branch I'm still witnessing reversed order with the toy example provided in the original post.\r\n\r\n> toy example provided in the original post\r\n\r\ncould you please share the code or link for the same ? \r\nThank you\r\n", "> could you please share the code or link for the same ?\r\n> Thank you\r\n\r\nI was just referring to the original post in this thread. If i do a fresh install of the latest master and then\r\n\r\n```python\r\nfrom transformers import BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ntext = \"hello my name is niels\"\r\nencoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True)\r\n\r\nprint(tokenizer.decode(encoding.input_ids))\r\n# prints '[CLS] hello my name is [SEP]'\r\n\r\nprint(tokenizer.decode(encoding.overflowing_tokens))\r\n# prints '##els ni'\r\n```\r\n\r\nIs this expected?", "> > could you please share the code or link for the same ?\r\n> > Thank you\r\n> \r\n> I was just referring to the original post in this thread. If i do a fresh install of the latest master and then\r\n> \r\n> ```python\r\n> from transformers import BertTokenizer\r\n> tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n> text = \"hello my name is niels\"\r\n> encoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True)\r\n> \r\n> print(tokenizer.decode(encoding.input_ids))\r\n> # prints '[CLS] hello my name is [SEP]'\r\n> \r\n> print(tokenizer.decode(encoding.overflowing_tokens))\r\n> # prints '##els ni'\r\n> ```\r\n> \r\n> Is this expected?\r\n\r\nSorry, By original post I thought you meant somewhere in the documentation.\r\n\r\nNo this is not expected. I will try reproducing the same. Thank you", "@dcyoung I ran the same code against the current master branch, I got the expected output -\r\n<img width=\"273\" alt=\"Screenshot 2021-09-08 at 11 02 44 AM\" src=\"https://user-images.githubusercontent.com/57873504/132451970-385f7171-14f8-4ce0-93a9-461657bdb7d7.png\">\r\n\r\n\r\n\r\n@dcyoung Can you provide more details about the environment in which you are running the code.", "@Apoorvgarg-creator -- i can't explain it, but a fresh environment solved the issue with the toy example above. It is now correctly printing off `niels`. However, I'm still seeing unexpected behavior with the following example:\r\n\r\nEnvironment:\r\n```bash\r\n$ conda create -n test python=3.8\r\n$ source activate test\r\n$ pip install git+https://github.com/huggingface/transformers.git\r\n...\r\n$ pip list\r\nPackage Version\r\n------------------ -------------------\r\ncertifi 2021.5.30\r\ncharset-normalizer 2.0.4\r\nclick 8.0.1\r\nfilelock 3.0.12\r\nhuggingface-hub 0.0.16\r\nidna 3.2\r\njoblib 1.0.1\r\nnumpy 1.21.2\r\npackaging 21.0\r\npip 21.0.1\r\npyparsing 2.4.7\r\nPyYAML 5.4.1\r\nregex 2021.8.28\r\nrequests 2.26.0\r\nsacremoses 0.0.45\r\nsetuptools 52.0.0.post20210125\r\nsix 1.16.0\r\ntokenizers 0.10.3\r\ntqdm 4.62.2\r\ntransformers 4.11.0.dev0\r\ntyping-extensions 3.10.0.2\r\nurllib3 1.26.6\r\nwheel 0.37.0\r\n```\r\n\r\nReproducible example: \r\n```python\r\nfrom transformers import BertTokenizer, LayoutLMv2Tokenizer\r\n\r\nmax_length = 8\r\nn_src_tok_per_sample = max_length - 2 # account for pad\r\nwords = (\r\n n_src_tok_per_sample * [\"a\"]\r\n + n_src_tok_per_sample * [\"b\"]\r\n + n_src_tok_per_sample * [\"c\"]\r\n)\r\nprint(\"Original words: \", words)\r\n\r\n\r\nprint(50 * \"=\" + \"\\nBERT\\n\" + 50 * \"=\")\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\nencoded_inputs = tokenizer(\r\n text=words,\r\n padding=\"max_length\",\r\n pad_to_multiple_of=8,\r\n truncation=True,\r\n max_length=max_length,\r\n return_overflowing_tokens=True,\r\n return_tensors=\"pt\",\r\n is_split_into_words=True,\r\n)\r\ninput_ids = encoded_inputs[\"input_ids\"]\r\nprint(\"Decoded input_ids: \", [tokenizer.decode(x) for x in input_ids])\r\n\r\noverflowing_tokens = encoded_inputs[\"overflowing_tokens\"]\r\nprint(\"Decoded overflow tokens: \", [tokenizer.decode(x) for x in overflowing_tokens])\r\n\r\nprint(50 * \"=\" + \"\\nLayout\\n\" + 50 * \"=\")\r\ntokenizer = LayoutLMv2Tokenizer.from_pretrained(\r\n \"microsoft/layoutlmv2-base-uncased\",\r\n only_label_first_subword=False,\r\n)\r\n\r\nencoded_inputs = tokenizer(\r\n text=words,\r\n boxes=len(words) * [[1, 1, 1, 1]],\r\n padding=\"max_length\",\r\n pad_to_multiple_of=8,\r\n truncation=True,\r\n max_length=max_length,\r\n return_overflowing_tokens=True,\r\n return_tensors=\"pt\",\r\n is_split_into_words=True,\r\n)\r\ninput_ids = encoded_inputs[\"input_ids\"]\r\nprint(\"Decoded input_ids: \", [tokenizer.decode(x) for x in input_ids])\r\n\r\noverflowing_tokens = encoded_inputs[\"overflowing_tokens\"]\r\nprint(\"Decoded overflow tokens: \", [tokenizer.decode(x) for x in overflowing_tokens])\r\n\r\n```\r\n\r\nOutput:\r\n```bash\r\nOriginal words: ['a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b', 'b', 'c', 'c', 'c', 'c', 'c', 'c']\r\n==================================================\r\nBERT\r\n==================================================\r\nDecoded input_ids: ['[CLS] a a a a a a [SEP]']\r\nDecoded overflow tokens: ['b b b b b b c c c c c c']\r\n==================================================\r\nLayout\r\n==================================================\r\nDecoded input_ids: ['[CLS] a a a a a a [SEP]']\r\nDecoded overflow tokens: ['c c c c c c b b b b b b']\r\n```", "Thank you very much for reporting the issue @dcyoung :blush:.\r\n\r\nI think it's due to the fact that `layoutLMv2` (which must have been merged around the same time as this fix) redefines the operation and does not use the generic method. Might be of interest to @NielsRogge :slightly_smiling_face: ", "@NielsRogge @SaulLu, LayoutLMv2 has its own `truncate_sequence` method. so that's why the problem of reverse order of overflowing tokens occurred in this tokenizer.\r\nShall I make the respective changes in the `truncate_sequence` method of LayoutLMv2 tokenizer?\r\n\r\n@dcyoung, Thank you very much for reporting the issue.", "Yes, the LayoutLMv2 PR was merged before the PR that fixed the reverse order. So feel free to update the `truncate_sequence` method of `LayoutLMv2Tokenizer`." ]
1,629
1,631
1,630
CONTRIBUTOR
null
When implementing the slow tokenizer for LayoutLMv2, I spotted some weird behaviour for slow tokenizers when specifying `return_overflowing_tokens = True`. Namely, in that case, overflowing tokens are returned in reversed order, and no padding is performed, unlike fast tokenizers. Small example: ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") text = "hello my name is niels" encoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) ``` When checking out the encoding, it looks as follows: ``` print(tokenizer.decode(encoding.input_ids)) # prints '[CLS] hello my name is [SEP]' print(tokenizer.decode(encoding.overflowing_tokens)) # prints '##els ni' ``` As you can see, the overflowing tokens are returned in reversed order, and they are not padded up to the max length of 6 tokens. In contrast, `BertTokenizerFast` does everything correctly: ``` from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") text = "hello my name is niels" encoding = tokenizer(text, padding=True, max_length=6, truncation=True, return_overflowing_tokens=True) ``` returns ``` print(tokenizer.decode(encoding.input_ids[0])) # prints '[CLS] hello my name is [SEP]' print(tokenizer.decode(encoding.input_ids[1])) # prints '[CLS] niels [SEP] [PAD] [PAD]' ``` So I guess we have some work to do for slow tokenizers to work correctly. cc @LysandreJik @SaulLu @n1t0
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13148/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13148/timeline
completed
null
null