url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k, ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k, ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/10130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10130/comments | https://api.github.com/repos/huggingface/transformers/issues/10130/events | https://github.com/huggingface/transformers/pull/10130 | 806,085,151 | MDExOlB1bGxSZXF1ZXN0NTcxNTcyNjA3 | 10,130 | [DeepSpeed in notebooks] Jupyter + Colab | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,613 | 1,637 | 1,613 | CONTRIBUTOR | null | This PR addresses issues raised in https://github.com/huggingface/transformers/issues/10011 when the user tried to use DeepSpeed in a notebook (most likely colab).
This PR:
* forces device and distributed setup init from TrainingArguments explicitly at the beginning of Trainer's `__init__`. This is needed because until now the init happened as a side effect of someone calling `device` or `n_gpus`, which doesn't happen if someone runs their own version of `Trainer` w/ deepspeed - which is the case with notebooks - so we were missing the DeepSpeed init and things weren't working. Let's do it explicitly, and not as a side effect, so everything is loud and clear.
* sets up `self.local_rank` based on the LOCAL_RANK env var under deepspeed, to save users a hassle - deepspeed requires `local_rank > -1`. The fake launcher env setup (sketched after this list) could be folded into the init as well, but then the user loses control over the port number, which they may need to edit, so for now it stays outside - I will ask deepspeed to provide a wrapper function to make this easy for the user; once the wrapper is available, it could be automated completely. Alternatively, if they make `mpi4py` a dependency, the fake launcher env setup won't be needed at all.
* documents how to run DeepSpeed in the notebook env
* adds a test that mocks a notebook environment and runs deepspeed w/o a launcher
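For reference, a minimal sketch of the "fake launcher" env setup a notebook user performs before building the `Trainer` (the values are illustrative placeholders; the port in particular may need editing):
```python
import os

# Emulate the environment a distributed launcher would normally provide,
# for a single-process, single-GPU notebook session.
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994"  # pick any free port
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
```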
I may wait to hear about the follow up to https://github.com/microsoft/DeepSpeed/issues/748 to merge this, if it looks quick, but otherwise I will revise the setup doc in a future PR. The main changes of this PR besides the doc are required anyway.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10130/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10130",
"html_url": "https://github.com/huggingface/transformers/pull/10130",
"diff_url": "https://github.com/huggingface/transformers/pull/10130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10130.patch",
"merged_at": 1613080925000
} |
https://api.github.com/repos/huggingface/transformers/issues/10129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10129/comments | https://api.github.com/repos/huggingface/transformers/issues/10129/events | https://github.com/huggingface/transformers/pull/10129 | 806,043,041 | MDExOlB1bGxSZXF1ZXN0NTcxNTM4NzMy | 10,129 | Fix v2 model loading issue | {
"login": "BigBird01",
"id": 38195654,
"node_id": "MDQ6VXNlcjM4MTk1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/38195654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigBird01",
"html_url": "https://github.com/BigBird01",
"followers_url": "https://api.github.com/users/BigBird01/followers",
"following_url": "https://api.github.com/users/BigBird01/following{/other_user}",
"gists_url": "https://api.github.com/users/BigBird01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigBird01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigBird01/subscriptions",
"organizations_url": "https://api.github.com/users/BigBird01/orgs",
"repos_url": "https://api.github.com/users/BigBird01/repos",
"events_url": "https://api.github.com/users/BigBird01/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigBird01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @BigBird01, thanks a lot for fixing these issues! I'd like to prevent as much as possible from having the `pre_load_hooks` in the code. When do you expect mismatched head dimensions?",
"This is the case that if we want to fine-tune a task based on mnli models, e.g. MRPC, SST, QNLI. If we want to avoid this method, we need to fix the error reporting when load pretrained models.\n\nGet Outlook for iOS<https://aka.ms/o0ukef>\n________________________________\nFrom: Lysandre Debut <[email protected]>\nSent: Saturday, February 13, 2021 5:23:53 AM\nTo: huggingface/transformers <[email protected]>\nCc: Pengcheng He <[email protected]>; Mention <[email protected]>\nSubject: Re: [huggingface/transformers] Fix v2 model loading issue (#10129)\n\n\nHi @BigBird01<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FBigBird01&data=04%7C01%7CPengcheng.H%40microsoft.com%7C301190f89ac34907b53408d8d0229db7%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637488194400010761%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=9%2BnuKLc3nPym5xSUhe%2Flko3Fd3jxus0WRwj3W%2Bks%2FuM%3D&reserved=0>, thanks a lot for fixing these issues! I'd like to prevent as much as possible from having the pre_load_hooks in the code. When do you expect mismatched head dimensions?\n\n—\nYou are receiving this because you were mentioned.\nReply to this email directly, view it on GitHub<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fhuggingface%2Ftransformers%2Fpull%2F10129%23issuecomment-778618852&data=04%7C01%7CPengcheng.H%40microsoft.com%7C301190f89ac34907b53408d8d0229db7%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637488194400010761%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=ka8Oo2xj%2Ba0u2X6Fobb77H27KO94ePzlbCeZ7a797ww%3D&reserved=0>, or unsubscribe<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fnotifications%2Funsubscribe-auth%2FAJDNDRT5T5MFRSCQH3MXIEDS6Z4OTANCNFSM4XOC2W6A&data=04%7C01%7CPengcheng.H%40microsoft.com%7C301190f89ac34907b53408d8d0229db7%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637488194400020711%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=kdOpjyJfnJyaSLKNbPjNIMs3p2KbiceAVHHS4WCfVEA%3D&reserved=0>.\n",
"Usually we recommend to load through the base model to lose the head:\r\n\r\n```py\r\nfrom transformers import DebertaV2Model, DebertaV2ForSequenceClassification\r\n\r\nseq_model = DebertaV2ForSequenceClassification.from_pretrained(\"xxx\", num_labels=4)\r\nseq_model.save_pretrained(directory)\r\n\r\nbase = DebertaV2Model.from_pretrained(directory) # Lose the head\r\nbase.save_pretrained(directory)\r\n\r\nseq_model = DebertaV2ForSequenceClassification.from_pretrained(directory, num_labels=8)\r\n```\r\n\r\nDoes that work in your case? I agree you're touching to something that has a bad API, and this should be handled in the `from_pretrained` method. I don't think we should handle it model-wise, however. I'll look into it soon.",
"yes. but this looks a little bit tricky. And need to modify existing text classification code to benefit from mnli fine-tuned models. How about we keep current hook method, and finish current PR? After we work out a systematic solution for such scenario, we can drop the hook method.\n\nGet Outlook for iOS<https://aka.ms/o0ukef>\n________________________________\nFrom: Lysandre Debut <[email protected]>\nSent: Saturday, February 13, 2021 6:28:34 AM\nTo: huggingface/transformers <[email protected]>\nCc: Pengcheng He <[email protected]>; Mention <[email protected]>\nSubject: Re: [huggingface/transformers] Fix v2 model loading issue (#10129)\n\n\nUsually we recommend to load through the base model to lose the head:\n\nfrom transformers import DebertaV2Model, DebertaV2ForSequenceClassification\n\nseq_model = DebertaV2ForSequenceClassification.from_pretrained(\"xxx\", num_labels=4)\nseq_model.save_pretrained(directory)\n\nbase = DebertaV2Model.from_pretrained(directory) # Lose the head\nbase.save_pretrained(directory)\n\nseq_model = DebertaV2ForSequenceClassification.from_pretrained(directory, num_labels=8)\n\nDoes that work in your case? I agree you're touching to something that has a bad API, and this should be handled in the from_pretrained method. I don't think we should handle it model-wise, however. I'll look into it soon.\n\n—\nYou are receiving this because you were mentioned.\nReply to this email directly, view it on GitHub<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fhuggingface%2Ftransformers%2Fpull%2F10129%23issuecomment-778627044&data=04%7C01%7CPengcheng.H%40microsoft.com%7C4e6ebb9592e84114db8908d8d02ba69b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637488233183179214%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=6B7Tw%2BulDyc50hCWtt40Y3Eu%2F9pDhycqsFcGGlbEWJA%3D&reserved=0>, or unsubscribe<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fnotifications%2Funsubscribe-auth%2FAJDNDRSALWZ5T6MA5DFF7VDS62EBFANCNFSM4XOC2W6A&data=04%7C01%7CPengcheng.H%40microsoft.com%7C4e6ebb9592e84114db8908d8d02ba69b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637488233183179214%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=DUoGLBFe%2Ff2Ms580JCNn5okYXrYWH9vqTIhAMve0tqE%3D&reserved=0>.\n",
"Ok, will merge like that and we'll discuss with other team members for the main PR. Thanks!"
] | 1,613 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
Fix a few issues with loading DeBERTa v2 models and the DeBERTa MNLI fine-tuned model
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10129/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10129",
"html_url": "https://github.com/huggingface/transformers/pull/10129",
"diff_url": "https://github.com/huggingface/transformers/pull/10129.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10129.patch",
"merged_at": 1613383984000
} |
https://api.github.com/repos/huggingface/transformers/issues/10128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10128/comments | https://api.github.com/repos/huggingface/transformers/issues/10128/events | https://github.com/huggingface/transformers/issues/10128 | 806,011,613 | MDU6SXNzdWU4MDYwMTE2MTM= | 10,128 | Bug in numpy_pad_and_concatenate | {
"login": "liamcli",
"id": 1198666,
"node_id": "MDQ6VXNlcjExOTg2NjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1198666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liamcli",
"html_url": "https://github.com/liamcli",
"followers_url": "https://api.github.com/users/liamcli/followers",
"following_url": "https://api.github.com/users/liamcli/following{/other_user}",
"gists_url": "https://api.github.com/users/liamcli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liamcli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liamcli/subscriptions",
"organizations_url": "https://api.github.com/users/liamcli/orgs",
"repos_url": "https://api.github.com/users/liamcli/repos",
"events_url": "https://api.github.com/users/liamcli/events{/privacy}",
"received_events_url": "https://api.github.com/users/liamcli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed. Would you like to do a PR to fix this since you spotted the bug?",
"sure thing!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,613 | 1,618 | 1,618 | NONE | null | https://github.com/huggingface/transformers/blob/77b862847b8069d57c0849ca012f48414c427d8e/src/transformers/trainer_pt_utils.py#L71
I believe this should be
`np.concatenate((array1, array2), axis=0)`
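A quick sanity check of the proposed fix (shapes are illustrative):
```python
import numpy as np

array1 = np.ones((2, 3))
array2 = np.zeros((4, 3))

# Stacking batches requires concatenating along the first dimension:
result = np.concatenate((array1, array2), axis=0)
print(result.shape)  # (6, 3)
```
| {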
"url": "https://api.github.com/repos/huggingface/transformers/issues/10128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10128/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10127/comments | https://api.github.com/repos/huggingface/transformers/issues/10127/events | https://github.com/huggingface/transformers/issues/10127 | 805,946,518 | MDU6SXNzdWU4MDU5NDY1MTg= | 10,127 | XLM-R tokenizer is none | {
"login": "aggiejiang",
"id": 40179465,
"node_id": "MDQ6VXNlcjQwMTc5NDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/40179465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aggiejiang",
"html_url": "https://github.com/aggiejiang",
"followers_url": "https://api.github.com/users/aggiejiang/followers",
"following_url": "https://api.github.com/users/aggiejiang/following{/other_user}",
"gists_url": "https://api.github.com/users/aggiejiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aggiejiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aggiejiang/subscriptions",
"organizations_url": "https://api.github.com/users/aggiejiang/orgs",
"repos_url": "https://api.github.com/users/aggiejiang/repos",
"events_url": "https://api.github.com/users/aggiejiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/aggiejiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello! This is weird, it shouldn't happen. Could you try to install `sentencepiece` and let me know if it fixes your issue? Thank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I'm still facing this issue.\r\n\r\n\r\n",
"Do you have a reproducible colab notebook? Thanks",
"Not sure how but it's working today.\r\n\r\n",
"This is probably because you hadn't restarted your kernel after installing the `sentencepiece` dependency!"
] | 1,612 | 1,620 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
### Who can help
@LysandreJik @n1t0
## Information
I am using XLM-R:
The problem arises when using:
* the official example scripts: (give details below)
The tasks I am working on is:
* my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behaviour:
```
tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
model = XLMRobertaModel.from_pretrained('xlm-roberta-base')
print(tokenizer, model)
```
## Result
The XLM-R tokenizer is `None`, but the model can be loaded.
I am a beginner with this model. Many thanks for your help.
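For reference, a quick check after applying the fix suggested in the comments (install `sentencepiece`, then restart the kernel):
```python
# Assumes: pip install sentencepiece (then restart the kernel/runtime)
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
print(tokenizer)  # should no longer print None once sentencepiece is available
```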
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10127/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10126/comments | https://api.github.com/repos/huggingface/transformers/issues/10126/events | https://github.com/huggingface/transformers/pull/10126 | 805,910,844 | MDExOlB1bGxSZXF1ZXN0NTcxNDMzMjMz | 10,126 | Add new community notebook - Blenderbot | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for your work @lordtt13 \r\n\r\nActually, there are already plenty of notebooks about how to fine-tune T5. It would be great if could add a notebook for missing/new models/task. Below are some of the new models which aren't used much\r\n\r\n- T5_v1_1, mT5\r\n- ProphetNet, xlm-prophetnet\r\n- Blenderbot\r\n- mBART, mBART-50\r\n\r\nUsing languages other than English would be even better, we now have so many languages in the `datasets` library after the sprint. So it's a good opportunity to use those datasets to fine-tune/evaluate multi-lingual models on them (mT5, mBART, xlm-prophetnet) \r\n\r\ncc @patrickvonplaten ",
"I agree with @patil-suraj that notebooks on multilingual T5 would be super useful as well! \r\n\r\nBut nevertheless, I think we can merge this notebook :-) ",
"Thank you for the suggestions, and yes maybe T5 has been trained on too much, I will change the notebook to have it train a different model and then request for merge.",
"Have trained now on BlenderBotSmall, will add multilingual model training tutorial in next PR! \r\nPlease check @patil-suraj @patrickvonplaten "
] | 1,612 | 1,613 | 1,613 | CONTRIBUTOR | null | Updated the community.md file to add a new notebook: how to fine-tune T5 for summarization using the Trainer API. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10126/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10126",
"html_url": "https://github.com/huggingface/transformers/pull/10126",
"diff_url": "https://github.com/huggingface/transformers/pull/10126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10126.patch",
"merged_at": 1613037220000
} |
https://api.github.com/repos/huggingface/transformers/issues/10125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10125/comments | https://api.github.com/repos/huggingface/transformers/issues/10125/events | https://github.com/huggingface/transformers/issues/10125 | 805,853,544 | MDU6SXNzdWU4MDU4NTM1NDQ= | 10,125 | Converted pytorch model to onnx does not work correctly | {
"login": "yzhang-github-pub",
"id": 73549252,
"node_id": "MDQ6VXNlcjczNTQ5MjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/73549252?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzhang-github-pub",
"html_url": "https://github.com/yzhang-github-pub",
"followers_url": "https://api.github.com/users/yzhang-github-pub/followers",
"following_url": "https://api.github.com/users/yzhang-github-pub/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhang-github-pub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzhang-github-pub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhang-github-pub/subscriptions",
"organizations_url": "https://api.github.com/users/yzhang-github-pub/orgs",
"repos_url": "https://api.github.com/users/yzhang-github-pub/repos",
"events_url": "https://api.github.com/users/yzhang-github-pub/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzhang-github-pub/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> I converted pretrained 'Rostlab/prot_bert_bfd' @ huggingface to onnx, then tried to convert a checkpoint from fine tuning of the pretrained model. Comparing to pretrained model, conversion of fine tuned model generated a lot of warnings. Basically the warnings said that parameters from first to last layer were not initialized. The converted model did not work correctly.\r\n> \r\n> This is how I called the conversion module:\r\n> \r\n> python3 -m transformers.convert_graph_to_onnx --model Rostlab/prot_bert_bfd --framework pt prot_bert_bfd.onnx\r\n> I did similarly for checkpoint model.\r\n> \r\n> Does the module work on checkpoint?\r\n\r\nHello @yzhang-github-pub ,I have used the same line of code as you did for a PEGASUS model on colab:\r\n`!python3 -m transformers.convert_graph_to_onnx --model jpcorb20/pegasus-large-reddit_tifu-samsum-256 --framework pt pegasus-large-reddit_tifu-samsum-256.onnx`\r\n**The following error keeps showing and the coversion fails. Can you please tell me how did you solve this problem??**\r\n**The error:**\r\n\r\n> Some weights of the model checkpoint at jpcorb20/pegasus-large-reddit_tifu-samsum-256 were not used when initializing PegasusModel: ['final_logits_bias', 'lm_head.weight']\r\n\r\n> - This IS expected if you are initializing PegasusModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n> - This IS NOT expected if you are initializing PegasusModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n> Downloading: 100% 1.50k/1.50k [00:00<00:00, 1.29MB/s]\r\n> Downloading: 100% 1.91M/1.91M [00:01<00:00, 1.12MB/s]\r\n> Downloading: 100% 1.34k/1.34k [00:00<00:00, 1.18MB/s]\r\n> Error while converting the model: Folder /content is not empty, aborting conversion",
"I wrote a script to do onnx conversion, by importing onnx and onnxruntime modules. I heard some versions of transformer have bugs in onnx conversion and model loading. \r\n"
] | 1,612 | 1,627 | 1,619 | NONE | null | I converted the pretrained 'Rostlab/prot_bert_bfd' model from Hugging Face to ONNX, then tried to convert a checkpoint from fine-tuning of the pretrained model. Compared to the pretrained model, conversion of the fine-tuned model generated a lot of warnings. Basically the warnings said that parameters from the first to the last layer were not initialized. The converted model did not work correctly.
This is how I called the conversion module:
python3 -m transformers.convert_graph_to_onnx --model Rostlab/prot_bert_bfd --framework pt prot_bert_bfd.onnx
I did the same for the checkpoint model.
Does the module work on checkpoints?
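For a locally saved checkpoint, the same converter can be invoked from Python; a minimal sketch (the checkpoint path is a placeholder, and the `convert` helper's signature is assumed to mirror the CLI options):
```python
# Assumes the fine-tuned checkpoint was saved with save_pretrained() into
# ./checkpoint-dir (config.json + pytorch_model.bin + tokenizer files).
# Note: the converter aborts if the output folder is not empty.
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

convert(framework="pt", model="./checkpoint-dir",
        output=Path("onnx/prot_bert_finetuned.onnx"), opset=11)
```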
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10125/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10124/comments | https://api.github.com/repos/huggingface/transformers/issues/10124/events | https://github.com/huggingface/transformers/pull/10124 | 805,839,721 | MDExOlB1bGxSZXF1ZXN0NTcxMzc0OTc0 | 10,124 | [Doc] Fix version control in internal pages | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,613 | 1,613 | COLLABORATOR | null | # What does this PR do?
When I added the internal submenu, I didn't think of also adding it in the test that properly generates the links for another version of the doc. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10124/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10124",
"html_url": "https://github.com/huggingface/transformers/pull/10124",
"diff_url": "https://github.com/huggingface/transformers/pull/10124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10124.patch",
"merged_at": 1613224350000
} |
https://api.github.com/repos/huggingface/transformers/issues/10123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10123/comments | https://api.github.com/repos/huggingface/transformers/issues/10123/events | https://github.com/huggingface/transformers/issues/10123 | 805,793,568 | MDU6SXNzdWU4MDU3OTM1Njg= | 10,123 | Help on training TFBERT to IntegerEncoded sequences | {
"login": "victormaricato",
"id": 11489228,
"node_id": "MDQ6VXNlcjExNDg5MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/11489228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/victormaricato",
"html_url": "https://github.com/victormaricato",
"followers_url": "https://api.github.com/users/victormaricato/followers",
"following_url": "https://api.github.com/users/victormaricato/following{/other_user}",
"gists_url": "https://api.github.com/users/victormaricato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/victormaricato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/victormaricato/subscriptions",
"organizations_url": "https://api.github.com/users/victormaricato/orgs",
"repos_url": "https://api.github.com/users/victormaricato/repos",
"events_url": "https://api.github.com/users/victormaricato/events{/privacy}",
"received_events_url": "https://api.github.com/users/victormaricato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I had to define the following parameters on `BertConfig`:\r\n\r\n`config = BertConfig(vocab_size=4+1, hidden_size=len(x), max_position_embeddings=len(x))`\r\n\r\nMy issue is now:\r\n\r\n`InvalidArgumentError: Index out of range using input dim 1; input has only 1 dims [Op:StridedSlice] name: tf_bert_model_8/bert/strided_slice/`",
"The input to BERT (`input_ids`) must be a tensor of shape `(batch_size, sequence_length)` = `(1, 1200)`.\r\n\r\nHowever, currently the shape of your x is only `(1200,`). You should add a batch dimension, like so:\r\n\r\n```\r\nimport numpy as np\r\n\r\nx = np.random.randint(1,5,1200)\r\nx = np.expand_dims(x, axis=0)\r\n```\r\n\r\nThe embedding layer of BERT will then turn this into a tensor of shape `(batch_size, sequence_length, hidden_size)`, i.e. it will turn each of the 1200 integers into a vector of size `hidden_size`. As you're setting `hidden_size` equal to `len(x)`, this means that each integer will be turned into a vector of size 1200. Is this what you want (seems quite a big vector :p)?\r\n\r\nThen the following will work:\r\n\r\n```\r\nfrom transformers import TFBertModel, BertConfig\r\n\r\nconfig = BertConfig(vocab_size=4+1, max_position_embeddings=len(x))\r\nmodel = TFBertModel(config)\r\n\r\nmodel(x)\r\n```\r\n\r\nBtw, please ask questions which are not bugs or feature requests on the [forum](https://discuss.huggingface.co/) rather than here.",
"Thank you @NielsRogge. I must apologize. Should I cut this issue and paste into the forum?\r\n\r\nI think I made a mistake... the vector size should not be that large and won't fit into memory :P\r\n\r\nI managed to complete a model forward cycle. But, if you allow me another question:\r\n\r\nShould I add an extra integer to the beginning of the sequence (of value != to the existing ones, e.g.: `5`), to act as the <CLS> token?\r\n\r\nI am asking this because the `model(inputs)` returns a pooled `(batch_size, 1,hidden_size)` token, instead of `(batch_size, seq_length, hidden_size)`, however I am not sure if I should be passing this to the `Dense(1)` layer in the next step.",
"> Should I add an extra integer to the beginning of the sequence (of value != to the existing ones, e.g.: `5`), to act as the token?\r\n\r\nEach integer acts as a token, so adding one more will increase the number of tokens by one. The `vocab_size` is the total number of tokens for which the model learns an embedding vector.\r\n\r\n> I am asking this because the `model(inputs)` returns a pooled `(batch_size, 1,hidden_size)` token, instead of `(batch_size, seq_length, hidden_size)`, however I am not sure if I should be passing this to the `Dense(1)` layer in the next step.\r\n\r\nIf you use `TFBertModel`, then by default it returns a `TFBaseModelOutputWithPooling` object, with an attribute called `last_hidden_state`. This is a tensor of shape `(batch_size, sequence_length, hidden_size)` and this is probably what you want.\r\n\r\nYou may close the issue and if you have any further questions, feel free to ask them on the forum, we're happy to help!"
] | 1,612 | 1,613 | 1,613 | NONE | null | Hi,
My inputs are Integer Encoded vectors, like:
`[1,2,3,1,2,4,1,2,3,4,2,3,4, ...]`
Where:
`len(inputs) = 1200` & `unique values = 4` (1,2,3,4).
As you can see, this is not a common NLP problem as I have only 4 tokens, instead of a huge vocabulary.
And I could not find any pretrained vocab to tokenize this.
**What I am trying to do**:
I want to fit a BERT model to this sequence data.
**What have I tried**
I am using `tensorflow-2.4.0`, and here is my model:
```
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
from transformers import TFBertModel, BertConfig

class MyModel(Model):
def __init__(self):
super().__init__()
self._bert = _create_bert_model()
self._head = Dense(1)
def call(self, inputs):
embedding = self._bert(inputs)
return self._head(embedding)
def _create_bert_model() -> TFBertModel:
config = BertConfig(vocab_size=4+1)
return TFBertModel(config)
```
As you can see, I want to create an "embedding" using BERT, and pass this to a head (regression).
## Issue
When I put a breakpoint inside this `call` method, here is what I get:
```
# Batch Size: 4 (just for debug)
>> inputs
<tf.Tensor: shape=(4, 1200), dtype=int32, numpy=
array([[2, 4, 1, ..., 4, 3, 3],
[2, 1, 4, ..., 1, 2, 4],
[4, 2, 1, ..., 3, 1, 2],
[2, 2, 4, ..., 1, 2, 1]], dtype=int32)>
>> self._bert(inputs)
*** tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [512,768] vs. [4,1200,768] [Op:BroadcastTo]
```
## Help
Can anyone provide me any guidance on solving this issue, so I can proceed with my modelling?
And is there any tutorial I may find on fitting a HuggingFace Tokenizer/BERT on a custom vocabulary?
## Reproduce
```
from transformers import TFBertModel, BertConfig
import numpy as np
x = np.random.randint(1,5,1200)
config = BertConfig(vocab_size=4+1)
model = TFBertModel(config)
model(x)
```
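For reference, a working variant based on the fix suggested in the comments (add a batch dimension and allow positions up to the sequence length):
```python
import numpy as np
from transformers import TFBertModel, BertConfig

x = np.random.randint(1, 5, 1200)
x = np.expand_dims(x, axis=0)  # (batch_size, sequence_length) = (1, 1200)

config = BertConfig(vocab_size=4 + 1, max_position_embeddings=1200)
model = TFBertModel(config)

outputs = model(x)
print(outputs.last_hidden_state.shape)  # (1, 1200, 768)
```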
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10123/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10122/comments | https://api.github.com/repos/huggingface/transformers/issues/10122/events | https://github.com/huggingface/transformers/pull/10122 | 805,669,674 | MDExOlB1bGxSZXF1ZXN0NTcxMjMyOTAx | 10,122 | Add SageMakerTrainer for model parallelism | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,613 | 1,613 | COLLABORATOR | null | # What does this PR do?
This PR adds a subclass of `Trainer` to use model parallelism in SageMaker. This new `Trainer` still retains all the functionality of the previous trainer (e.g. when not run in SageMaker it will work like the normal one) while automatically enabling model parallelism when an example script is launched via SageMaker with that option activated.
The easiest way to enable this in any example script is to replace the `Trainer` and `TrainingArguments` imports by:
```python
from transformers.sagemaker import SageMakerTrainingArguments as TrainingArguments, SageMakerTrainer as Trainer
```
Along the way, I had to refactor a few things in `Trainer` to make it easier to deal with stuff in the subclass (without having to rewrite the whole train method for instance), mainly the part that does the model wrapping. Also, there was a subtle bug coming from the fact SageMaker wrapper for the model for model parallelism changes the forward method of the model, so the `Trainer` will now store the arguments in the signature (in case that signature changes after wrapping the model). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10122/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10122",
"html_url": "https://github.com/huggingface/transformers/pull/10122",
"diff_url": "https://github.com/huggingface/transformers/pull/10122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10122.patch",
"merged_at": 1613087058000
} |
https://api.github.com/repos/huggingface/transformers/issues/10121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10121/comments | https://api.github.com/repos/huggingface/transformers/issues/10121/events | https://github.com/huggingface/transformers/issues/10121 | 805,617,166 | MDU6SXNzdWU4MDU2MTcxNjY= | 10,121 | Allow `do_lower_case=True` for any tokenizer | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Discussed offline with @n1t0: our current decision is to wait for https://github.com/huggingface/tokenizers/issues/659 to be resolved before moving on with this issue. \r\nThis is the better tradeoff as the alternative would imply duplicating a lot of logic in `transformers` that's already present but not exposed by `tokenizers`."
] | 1,612 | 1,616 | null | MEMBER | null | # 🚀 Feature request
Extract the `do_lower_case` option to make it available for any tokenizer, not just those that initially supported it, like the `BERT` tokenizers.
## Motivation
Sometimes we want to specify `do_lower_case=True` in the `tokenizer_config.json` of a custom tokenizer to activate the lowercasing. The problem is that this obviously works only for tokenizers based on one that originally used this option.
I think we should extract this feature to make it a shared one that could be used with any tokenizer.
An example of a model that would need this is described here: https://github.com/huggingface/transformers/issues/9518
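For reference, a minimal sketch of the behavior that already exists for BERT-style tokenizers and that this request would generalize:
```python
from transformers import BertTokenizer

# do_lower_case is honored here only because BertTokenizer was built with it;
# the request is to honor it for any tokenizer via tokenizer_config.json.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased", do_lower_case=True)
print(tokenizer.tokenize("Hello WORLD"))  # input is lowercased before wordpiece
```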
## Special care points
- Make sure the `convert_slow_tokenizer` script also handles this, to activate the option in the resulting fast tokenizer.
- Maybe some other options could have the same treatment?
cc @LysandreJik @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10121/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10121/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10120/comments | https://api.github.com/repos/huggingface/transformers/issues/10120/events | https://github.com/huggingface/transformers/pull/10120 | 805,523,963 | MDExOlB1bGxSZXF1ZXN0NTcxMTEwNjQ5 | 10,120 | Conversion from slow to fast for BPE spm vocabs contained an error. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
- There is only one test currently (tokenizers + slow) that used the modified path, and it's Reformer, which does not contain any id modifications, so the bug was silent until now.
- The real issue is that the vocab variable was overloaded by SentencePieceExtractor, leading to slow-specific vocab oddities being completely ignored.
- The bug was reported here: https://github.com/huggingface/transformers/issues/9518
- Ran the complete tokenization test suite with slow tokenizers without error (`RUN_SLOW=1 pytest -sv tests/test_tokenization_*`)
- We need to keep in mind that BPE + SPM is relatively rare.
- I still need to carry out a full sweep of the hub to check all possible variants.
Affected models (all repos containing `sentencepiece.bpe.model`):
- `Musixmatch/umberto-commoncrawl-cased-v1`
- `idb-ita/gilberto-uncased-from-camembert`
- `itsunoda/wolfbbsRoBERTa-large` (not fixed with current PR, seems linked to prefixed '_' in fast tokenizers)
- `itsunoda/wolfbbsRoBERTa-small` (not fixed with current PR)
- `mrm8488/umberto-wikipedia-uncased-v1-finetuned-squadv1-it`
- `EMBEDDIA/litlat-bert`
- `neuralspace-reverie/indic-transformers-bn-xlmroberta`
- `neuralspace-reverie/indic-transformers-hi-xlmroberta`
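For illustration, a quick way to compare slow vs. fast ids on one of the affected checkpoints (a sketch; the exact ids depend on the checkpoint):
```python
from transformers import AutoTokenizer

name = "Musixmatch/umberto-commoncrawl-cased-v1"
slow = AutoTokenizer.from_pretrained(name, use_fast=False)
fast = AutoTokenizer.from_pretrained(name, use_fast=True)

text = "Ciao, come stai?"
print(slow.encode(text))
print(fast.encode(text))  # before this fix, these ids could disagree
```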
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@thomwolf @LysandreJik @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10120/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10120",
"html_url": "https://github.com/huggingface/transformers/pull/10120",
"diff_url": "https://github.com/huggingface/transformers/pull/10120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10120.patch",
"merged_at": 1613222693000
} |
https://api.github.com/repos/huggingface/transformers/issues/10119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10119/comments | https://api.github.com/repos/huggingface/transformers/issues/10119/events | https://github.com/huggingface/transformers/pull/10119 | 805,298,544 | MDExOlB1bGxSZXF1ZXN0NTcwOTI3ODY2 | 10,119 | Line endings should be LF across repo and not CRLF | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you @LysandreJik, this will alleviate much of the pain Windows users had with git in the past!",
"To hack my way up the contributors list, can I change all line endings to CRLF then revert? 😂 "
] | 1,612 | 1,612 | 1,612 | MEMBER | null | Up to now we've had quite a few issues with users on Windows having their line endings be "CRLF" by default, while Linux users have "LF" line endings by default.
### Problem
This can be problematic in the following scenarios where no handling of the issue has been done on the user's side:
- When a user runs `make style`, their line endings will switch from LF to CRLF in *all* files, essentially rewriting the entire file
- When a user adds a new file to the repository, it will be in the "CRLF" format and will be committed as such.
### Resolution
The resolution is either to have the user handle that, or to handle that ourselves. Handling it ourselves is simple as it only requires adding a `.gitattributes` file at the root of the repository which will specify the line endings we're looking for, thus this is what this PR is proposing. On the other hand, we had issues handling it on the user side with the proposed `git core.autocrlf` as it seemed to have different results according to the setup.
Additionally, if users already have files in `CRLF` mode, then an additional command is required to convert these files to `LF`: `git add --renormalize .`. I believe this only impacts users that created files prior to this PR, as newly created files will already benefit from the `.gitattributes` file.
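For reference, a minimal `.gitattributes` along these lines could look as follows; the exact contents of the file added in this PR may differ, so treat this as a sketch:
```
# Hypothetical minimal version: normalize all text files and force LF line endings
* text=auto eol=lf
```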
---
This PR completely reformats two files: `examples/research_projects/bertology/run_prune_gpt.py` and `tests/test_modeling_deberta.py`. These files had CRLF line endings, and will now have LF line endings.
---
Further readings:
- [🙏 Please Add .gitattributes To Your Git Repository](https://dev.to/deadlybyte/please-add-gitattributes-to-your-git-repository-1jld)
- [Why should I use core.autocrlf=true in Git?](https://stackoverflow.com/questions/2825428/why-should-i-use-core-autocrlf-true-in-git)
- [git replacing LF with CRLF](https://stackoverflow.com/questions/1967370/git-replacing-lf-with-crlf?noredirect=1&lq=1)
cc @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10119/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10119",
"html_url": "https://github.com/huggingface/transformers/pull/10119",
"diff_url": "https://github.com/huggingface/transformers/pull/10119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10119.patch",
"merged_at": 1612972201000
} |
https://api.github.com/repos/huggingface/transformers/issues/10118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10118/comments | https://api.github.com/repos/huggingface/transformers/issues/10118/events | https://github.com/huggingface/transformers/issues/10118 | 805,259,580 | MDU6SXNzdWU4MDUyNTk1ODA= | 10,118 | Exporting transformers models in ONNX format | {
"login": "chetanambi",
"id": 37707687,
"node_id": "MDQ6VXNlcjM3NzA3Njg3",
"avatar_url": "https://avatars.githubusercontent.com/u/37707687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chetanambi",
"html_url": "https://github.com/chetanambi",
"followers_url": "https://api.github.com/users/chetanambi/followers",
"following_url": "https://api.github.com/users/chetanambi/following{/other_user}",
"gists_url": "https://api.github.com/users/chetanambi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chetanambi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chetanambi/subscriptions",
"organizations_url": "https://api.github.com/users/chetanambi/orgs",
"repos_url": "https://api.github.com/users/chetanambi/repos",
"events_url": "https://api.github.com/users/chetanambi/events{/privacy}",
"received_events_url": "https://api.github.com/users/chetanambi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi!\r\nTry this one, but do it from an empty folder:\r\n`python3 -m transformers.convert_graph_to_onnx --framework pt --model bert-base-cased bert-base-cased.onnx`",
"@Denovitz Thanks a lot. I was able to convert the BERT model to ONNX successfully. Do you have a sample code of how the converted ONNX model can be used further for inferences? I am able to use ONNX for TF, Keras, Sklearn, Xgboost and other models but stuck with transformer model. Appreciate any inputs. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | I am trying to convert a transformer model to ONNX by following the article **[here](https://huggingface.co/transformers/serialization.html)**, but I am running into the error below. Could you please guide me if this is not the correct way to do it?
Code I am using in Colab:
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers
!pip install .
%cd src/transformers
!python3 convert_graph_to_onnx.py --framework pt --model bert-base-cased bert-base-cased.onnx
```
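For reference, the working invocation suggested in the first comment (run from an empty folder, so the converter is invoked as a module rather than from inside the package; the directory name here is arbitrary) would be something like:
```
!mkdir onnx_export && cd onnx_export && python3 -m transformers.convert_graph_to_onnx --framework pt --model bert-base-cased bert-base-cased.onnx
```
Running my original command above instead produces the traceback below.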
```
Traceback (most recent call last):
  File "convert_graph_to_onnx.py", line 22, in <module>
    from .file_utils import ModelOutput, is_tf_available, is_torch_available
ModuleNotFoundError: No module named '__main__.file_utils'; '__main__' is not a package
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10118/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10117/comments | https://api.github.com/repos/huggingface/transformers/issues/10117/events | https://github.com/huggingface/transformers/pull/10117 | 805,241,287 | MDExOlB1bGxSZXF1ZXN0NTcwODgwNDE0 | 10,117 | [Wav2Vec2] Improve Tokenizer & Model for batched inference | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Go Patrick!!!!! YES! someone who cares!",
"Merging since @LysandreJik is off today and this is blocking me a bit"
] | 1,612 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR improves batched inference for Wav2Vec2 models by:
- adding an `attention_mask`
- adding zero-mean unit-variance normalization to the tokenizer (see the sketch right after this list)
- correctly deciding whether to return `attention_mask` and apply normalization depending on which architecture is used
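A hedged sketch of the zero-mean unit-variance normalization mentioned above (the epsilon value is an assumption, not necessarily what the tokenizer uses):
```python
import torch

# normalize a raw waveform to zero mean and unit variance before feeding the model
def zero_mean_unit_var_norm(x: torch.Tensor) -> torch.Tensor:
    return (x - x.mean()) / torch.sqrt(x.var() + 1e-7)
```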
## Background
Some of Fairseq's Wav2Vec2 models apply Group Normalization over the time axis in the feature extractor. This means the convolutional layers in the feature extractor cannot treat padded input 100% correctly, resulting in those models giving different results depending on whether the input is padded or not. See https://github.com/pytorch/fairseq/issues/3227 . Those models should never make use of `attention_mask`, which is enforced by setting `return_attention_mask=False` in their corresponding tokenizer configs: https://huggingface.co/facebook/wav2vec2-base-960h/blob/main/tokenizer_config.json . Also, some explicit warnings have been added to both the tokenizer and model.
For the "newer" models however that have the improved layer norm architecture in the feature extraction: https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self , normalization and correct padding via `attention_mask` gives some nice performance improvements and works correctly.
## Performance Evaluation
I've evaluated both `wav2vec2-large-960h-lv60-self` and `wav2vec2-large-960h-lv60` on the test set of librispeech and got some nice improvements:
- `wav2vec2-large-960h-lv60-self`: 2.2 WER -> 1.8 WER
- `wav2vec2-large-960h-lv60`: 3.4 WER -> 2.2 WER
The results now seem to match the paper's numbers very nicely.
Also, I checked that `wav2vec2-base-960h` should **not** use an `attention_mask`, as the performance on the LibriSpeech test set then drops heavily from ~4 WER to ~20 WER.
## TODO
Once this PR is merged, I can fully focus on adding the fine-tuning functionality and will also update the model cards with the new evaluation code & results. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10117/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10117",
"html_url": "https://github.com/huggingface/transformers/pull/10117",
"diff_url": "https://github.com/huggingface/transformers/pull/10117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10117.patch",
"merged_at": 1613047255000
} |
https://api.github.com/repos/huggingface/transformers/issues/10116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10116/comments | https://api.github.com/repos/huggingface/transformers/issues/10116/events | https://github.com/huggingface/transformers/pull/10116 | 805,211,535 | MDExOlB1bGxSZXF1ZXN0NTcwODU1MTMw | 10,116 | [scheduled github CI] add deepspeed fairscale deps | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thanks!"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | This PR adds `deepspeed` + `fairscale` to the pip install step of the multi-GPU self-hosted scheduled job - so that we can start running those tests.
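A hedged guess at the shape of the change (the actual workflow file and step wiring may differ):
```
# hypothetical excerpt of the scheduled job's install step
pip install deepspeed fairscale
```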
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10116",
"html_url": "https://github.com/huggingface/transformers/pull/10116",
"diff_url": "https://github.com/huggingface/transformers/pull/10116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10116.patch",
"merged_at": 1612944747000
} |
https://api.github.com/repos/huggingface/transformers/issues/10115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10115/comments | https://api.github.com/repos/huggingface/transformers/issues/10115/events | https://github.com/huggingface/transformers/pull/10115 | 805,207,219 | MDExOlB1bGxSZXF1ZXN0NTcwODUxMzA0 | 10,115 | [CI] build docs faster | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | I assume the CI machine should have at least 4 cores, so let's build docs faster.
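A hedged guess at the kind of change this implies (Sphinx supports parallel builds via `-j`; the paths and exact Makefile wiring here are assumptions, not the actual diff):
```
sphinx-build -j 4 -b html docs/source docs/_build/html
```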
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10115/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10115",
"html_url": "https://github.com/huggingface/transformers/pull/10115",
"diff_url": "https://github.com/huggingface/transformers/pull/10115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10115.patch",
"merged_at": 1612944160000
} |
https://api.github.com/repos/huggingface/transformers/issues/10114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10114/comments | https://api.github.com/repos/huggingface/transformers/issues/10114/events | https://github.com/huggingface/transformers/pull/10114 | 805,192,036 | MDExOlB1bGxSZXF1ZXN0NTcwODM4MDE4 | 10,114 | [DeepSpeed] restore memory for evaluation | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | I spent some time trying to see if we could gain from DeepSpeed during inference - and while there will be goodies in the future to make it useful, at the moment we don't need it, so let's keep DeepSpeed cleanly contained to `train` for now.
This PR has a few small tweaks:
- frees up all the memory used by DeepSpeed at the end of training
- adds a clean way of skipping `model.to()` - only when `--do_train` is used with DeepSpeed (so the case you were concerned about, @sgugger, of running eval before train is no longer a problem)
- adds a warning if a user tries to use `--deepspeed` without `--do_train` (see the sketch after this list)
- re-works the test suite
- applies consistent json config formatting
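A hypothetical sketch of that warning guard (the `args` namespace here is a stand-in for the parsed TrainingArguments; this is not the actual diff):
```python
import logging
from types import SimpleNamespace

logger = logging.getLogger(__name__)
args = SimpleNamespace(deepspeed="ds_config.json", do_train=False)  # stand-in for TrainingArguments

# warn when DeepSpeed is requested but no training will happen
if args.deepspeed is not None and not args.do_train:
    logger.warning(
        "--deepspeed was passed but --do_train was not; DeepSpeed is currently "
        "only used during training, so it will have no effect."
    )
```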
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10114/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10114",
"html_url": "https://github.com/huggingface/transformers/pull/10114",
"diff_url": "https://github.com/huggingface/transformers/pull/10114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10114.patch",
"merged_at": 1612976989000
} |
https://api.github.com/repos/huggingface/transformers/issues/10113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10113/comments | https://api.github.com/repos/huggingface/transformers/issues/10113/events | https://github.com/huggingface/transformers/issues/10113 | 805,141,157 | MDU6SXNzdWU4MDUxNDExNTc= | 10,113 | CUDA Out of Memory After Several Epochs | {
"login": "liyucheng09",
"id": 27999909,
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyucheng09",
"html_url": "https://github.com/liyucheng09",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm quite puzzled too, to be honest. I know that sometimes, PyTorch will trigger a CUDA OOM error even if there is enough memory in theory just because it's not able to find a contiguous chunk or has some leftovers for some reason, exactly like what your message suggests (22.53GB allocated but 23.21GB reserved by PyTorch). I don't have any suggestion apart from trying the usual strategies to lower a bit the memory footprint (slightly lower the batch size or block size).",
"@sgugger Appreciate your reply! I am wondering that can I resume the training processing if I change the batch size or block size of the training args. I have no idea whether it will fit the saved schedule or optimizer parameters.",
 @sgugger Appreciate">
"> @sgugger Appreciate your reply! I am wondering whether I can resume the training process if I change the batch size or block size in the training args. I have no idea whether it will fit the saved scheduler or optimizer parameters.\r\n\r\nHello, may I ask whether you have solved this problem?",
"@xinjicong Not yet. If you have some ideas, please shares.",
 @xinjicong Not yet.">
"> @xinjicong Not yet. If you have some ideas, please share.\r\n\r\nI tried making max_seq_length smaller but it did not work. ",
 @xinjicong Not yet.">
"> @xinjicong Not yet. If you have some ideas, please share.\r\n\r\nI checked my code and found that the problem occurred when I was using the tokenizer: the tokenizer output had one extra dimension, which then caused the error during batching.",
"I observe the same issue, if I train a model, save a checkpoint and reload from this, I get memory issues for the code which was training fine before. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Same Issue",
"+1",
"I have this issue as well. Model trains for 1 epoch and goes through validation step, then I get OOM somewhere in the second epoch. These are large models I am training and I often get OOM after it has been training for a couple of hours.",
"@dinsausti-vir Try reducing validation batch size to 1. I'm not sure how I fixed the error but batch size is usually the cause for OOM",
"@perceptiveshawty Thanks for the tip. I will give that a shot!"
] | 1,612 | 1,673 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Linux-4.14.105-1-tlinux3-0013-x86_64-with-centos-7.2-Final
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: using nn.data_parallel
### Who can help
- gpt2: @patrickvonplaten
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [- ] the official example scripts: run_clm.py
The tasks I am working on is:
* [ -] my own task or dataset: zh_wikitext
## To reproduce
The strange thing is that the script runs fine for the first 12 epochs and then fails with this error in the middle of the 12th epoch. I have checked that the trainer doesn't cache the training loss tensor, so I am quite puzzled by the error. Any help is highly appreciated.
Steps to reproduce the behavior:
1. `python run_clm.py config.json`
Several relevant config values in `config.json` are:
```
block_size: 512
check_point_name: "gpt2_result/checkpoint-100000"
per_device_train_batch_size: 12
learning_rate: 0.00005
weight_decay: 0
adam_beta1: 0.9
adam_beta2: 0.98
adam_epsilon: 1e-8
max_grad_norm: 1
num_train_epochs: 50
max_steps: -1
warmup_steps: 0
```
The model config is:
```
Model config GPT2Config {
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 512,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 512,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"use_cache": true,
"vocab_size": 21128
}
```
The tokenizer used is `BertTokenizer.from_pretrained('Bert-base-chinese')`.
The error log is as follows:
```
[INFO|trainer.py:703] 2021-02-10 11:30:39,997 >> ***** Running training *****
[INFO|trainer.py:704] 2021-02-10 11:30:39,997 >> Num examples = 744899
[INFO|trainer.py:705] 2021-02-10 11:30:39,997 >> Num Epochs = 50
[INFO|trainer.py:706] 2021-02-10 11:30:39,997 >> Instantaneous batch size per device = 12
[INFO|trainer.py:707] 2021-02-10 11:30:39,997 >> Total train batch size (w. parallel, distributed & accumulation) = 96
[INFO|trainer.py:708] 2021-02-10 11:30:39,997 >> Gradient Accumulation steps = 1
[INFO|trainer.py:709] 2021-02-10 11:30:39,997 >> Total optimization steps = 388000
[INFO|trainer.py:725] 2021-02-10 11:30:40,011 >> Continuing training from checkpoint, will skip to saved global_step
[INFO|trainer.py:726] 2021-02-10 11:30:40,011 >> Continuing training from epoch 12
[INFO|trainer.py:727] 2021-02-10 11:30:40,011 >> Continuing training from global step 100002
0%| | 0/388000 [00:00<?, ?it/s]/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
26%|███████████████████████▋ | 100003/388000 [00:17<00:50, 5746.78it/s]Traceback (most recent call last):
File "run_clm.py", line 321, in <module>
main()
File "run_clm.py", line 291, in main
trainer.train(model_path=model_path)
File "/data/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 799, in train
tr_loss += self.training_step(model, inputs)
File "/data/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1139, in training_step
loss = self.compute_loss(model, inputs)
File "/data/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1163, in compute_loss
outputs = model(**inputs)
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward
return self.gather(outputs, self.output_device)
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
return gather(outputs, output_device, dim=self.dim)
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
for k in out))
File "<string>", line 9, in __init__
File "/data/miniconda3/lib/python3.7/site-packages/transformers/file_utils.py", line 1412, in __post_init__
for element in iterator:
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
for k in out))
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/data/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 68, in forward
return comm.gather(inputs, ctx.dim, ctx.target_device)
File "/data/miniconda3/lib/python3.7/site-packages/torch/cuda/comm.py", line 165, in gather
return torch._C._gather(tensors, dim, destination)
RuntimeError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 23.88 GiB total capacity; 22.53 GiB already allocated; 86.38 MiB free; 23.21 GiB reserved in total by PyTorch)
```
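A mitigation I am considering (just a guess, tied to my question in the replies about resuming with different settings): lower the per-device batch size and resume from the saved checkpoint. The batch size value here is an assumption, and `training_args` and `trainer` refer to the objects already built in `run_clm.py`:
```python
# hypothetical: resume from the saved checkpoint with a smaller per-device batch size;
# whether the saved optimizer/scheduler state tolerates this is exactly my open question
training_args.per_device_train_batch_size = 8
trainer.train(model_path="gpt2_result/checkpoint-100000")
```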
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10113/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10113/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10112/comments | https://api.github.com/repos/huggingface/transformers/issues/10112/events | https://github.com/huggingface/transformers/issues/10112 | 805,095,973 | MDU6SXNzdWU4MDUwOTU5NzM= | 10,112 | Possible bug in RAG Tokenizer | {
"login": "krishanudb",
"id": 11831343,
"node_id": "MDQ6VXNlcjExODMxMzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/11831343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishanudb",
"html_url": "https://github.com/krishanudb",
"followers_url": "https://api.github.com/users/krishanudb/followers",
"following_url": "https://api.github.com/users/krishanudb/following{/other_user}",
"gists_url": "https://api.github.com/users/krishanudb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishanudb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishanudb/subscriptions",
"organizations_url": "https://api.github.com/users/krishanudb/orgs",
"repos_url": "https://api.github.com/users/krishanudb/repos",
"events_url": "https://api.github.com/users/krishanudb/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishanudb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"hi @krishanudb \r\n\r\nThank you for reporting this @krishanudb !",
"Is there any update on this issue? @patil-suraj ",
"It's fixed now on master!\r\nsee #10167",
"The issue persists in transformers 4.3.3\r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/rag/tokenization_rag.py in prepare_seq2seq_batch(self, src_texts, tgt_texts, max_length, max_target_length, **kwargs)\r\n 75 if max_target_length is None:\r\n 76 max_target_length = self.generator.model_max_length\r\n---> 77 return super().prepare_seq2seq_batch(\r\n 78 src_texts, tgt_texts, max_length=max_length, max_target_length=max_target_length, **kwargs\r\n 79 )\r\n\r\nAttributeError: 'super' object has no attribute 'prepare_seq2seq_batch'",
"Hi @rajasekar-venkatesan \r\n\r\nThe issue is fixed on master after the `4.3.3` release. This fix will be available in the next release."
] | 1,612 | 1,615 | 1,613 | NONE | null | On this line
```
input_dict = tokenizer.prepare_seq2seq_batch(question, return_tensors="pt")
```
the following error is being generated:
```
AttributeError: 'super' object has no attribute 'prepare_seq2seq_batch'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10112/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10111/comments | https://api.github.com/repos/huggingface/transformers/issues/10111/events | https://github.com/huggingface/transformers/issues/10111 | 805,091,319 | MDU6SXNzdWU4MDUwOTEzMTk= | 10,111 | Bug in RAG Sequence generate | {
"login": "krishanudb",
"id": 11831343,
"node_id": "MDQ6VXNlcjExODMxMzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/11831343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishanudb",
"html_url": "https://github.com/krishanudb",
"followers_url": "https://api.github.com/users/krishanudb/followers",
"following_url": "https://api.github.com/users/krishanudb/following{/other_user}",
"gists_url": "https://api.github.com/users/krishanudb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishanudb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishanudb/subscriptions",
"organizations_url": "https://api.github.com/users/krishanudb/orgs",
"repos_url": "https://api.github.com/users/krishanudb/repos",
"events_url": "https://api.github.com/users/krishanudb/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishanudb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @krishanudb \r\n\r\nCould you post a code snippet so we can reproduce the issue? Please post your env info, short code snippet, and stack trace if possible when reporting bugs. Thanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | IMHO there is a bug in the RAG Sequence model, in the generate function.
The tensor shapes mismatch every time. I looked into the code and traced the issue to the following loop.
https://github.com/huggingface/transformers/blob/85395e4901f87b880f364bcd6424fe37da94574b/src/transformers/models/rag/modeling_rag.py#L936
Kindly let me know if there is indeed a bug or if it is just a problem in my code.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10110/comments | https://api.github.com/repos/huggingface/transformers/issues/10110/events | https://github.com/huggingface/transformers/pull/10110 | 805,086,064 | MDExOlB1bGxSZXF1ZXN0NTcwNzQ4NDk2 | 10,110 | Fix tokenizers training in notebooks | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | The `train` method has been updated in `tokenizers` v0.10, and it includes a breaking change from the previous versions (reordered arguments). This modification ensures it works for all versions.
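A hedged sketch of a version-tolerant call (the exact `train` signatures are assumptions inferred from the reordered-arguments change described above; `tokenizer`, `files`, and `trainer` stand in for the notebook's objects):
```python
from packaging import version
import tokenizers

if version.parse(tokenizers.__version__) >= version.parse("0.10"):
    tokenizer.train(files, trainer)  # v0.10+ argument order
else:
    tokenizer.train(trainer, files)  # pre-0.10 argument order
```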
cc @sgugger @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10110",
"html_url": "https://github.com/huggingface/transformers/pull/10110",
"diff_url": "https://github.com/huggingface/transformers/pull/10110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10110.patch",
"merged_at": 1612925302000
} |
https://api.github.com/repos/huggingface/transformers/issues/10109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10109/comments | https://api.github.com/repos/huggingface/transformers/issues/10109/events | https://github.com/huggingface/transformers/issues/10109 | 805,011,295 | MDU6SXNzdWU4MDUwMTEyOTU= | 10,109 | Git does not find the model folder and does not commit model files in the hugging face | {
"login": "MLDovakin",
"id": 78375175,
"node_id": "MDQ6VXNlcjc4Mzc1MTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/78375175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MLDovakin",
"html_url": "https://github.com/MLDovakin",
"followers_url": "https://api.github.com/users/MLDovakin/followers",
"following_url": "https://api.github.com/users/MLDovakin/following{/other_user}",
"gists_url": "https://api.github.com/users/MLDovakin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MLDovakin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MLDovakin/subscriptions",
"organizations_url": "https://api.github.com/users/MLDovakin/orgs",
"repos_url": "https://api.github.com/users/MLDovakin/repos",
"events_url": "https://api.github.com/users/MLDovakin/events{/privacy}",
"received_events_url": "https://api.github.com/users/MLDovakin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hi @IndianMLGay \r\n\r\nYou should `cd` into `simple-small-kvantorium` directory and then do `git add/commit/push` etc",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,618 | 1,618 | NONE | null | I work in Google Colab. This is how I save my trained model files:
````
trainer.save_model("./kvantorium-small")
tokenizer.save_pretrained("/content/For_tokenize", legacy_format=False)
````
Next, I want to commit my files to the Hugging Face repository. As shown in the guide (https://huggingface.co/welcome), all these lines of code run successfully:
````
!sudo apt-get install git-lfs
!pip install huggingface_hub
!huggingface-cli login
!huggingface-cli repo create simple-small-kvantorium
!git lfs install
!git clone https://huggingface.co/Fidlobabovic/simple-small-kvantorium
````
But when I try to push the files to the repository, I get an error that there is no such repository. How do I rewrite the commands to publish the files to the repository? Could the problem be that I am working in Google Colab? Thanks a lot in advance - you are helping me a lot.
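For reference, the fix that seems most likely (per the first comment on this issue) is to `cd` into the cloned repository before running the git commands shown below; in Colab that would look something like:
````
%cd simple-small-kvantorium
!git add .
!git commit -m "commit from $Fidlobabovic/simple-small-kvantorium"
!git push
````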
#9878
````
!git add .
!git commit -m "commit from $Fidlobabovic/simple-small-kvantorium"
!git push
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
```` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10109/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10108/comments | https://api.github.com/repos/huggingface/transformers/issues/10108/events | https://github.com/huggingface/transformers/issues/10108 | 804,987,783 | MDU6SXNzdWU4MDQ5ODc3ODM= | 10,108 | Non-JSON-serializable tokenizer config with `save_pretrained` | {
"login": "vinbo8",
"id": 1384073,
"node_id": "MDQ6VXNlcjEzODQwNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1384073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinbo8",
"html_url": "https://github.com/vinbo8",
"followers_url": "https://api.github.com/users/vinbo8/followers",
"following_url": "https://api.github.com/users/vinbo8/following{/other_user}",
"gists_url": "https://api.github.com/users/vinbo8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinbo8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinbo8/subscriptions",
"organizations_url": "https://api.github.com/users/vinbo8/orgs",
"repos_url": "https://api.github.com/users/vinbo8/repos",
"events_url": "https://api.github.com/users/vinbo8/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinbo8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @vin-ivar \r\n\r\nThe `tokenizer` does not need the model config file, there is no need to pass it when initializing the tokenizer.",
"That fixes it, I was using an older script without taking that bit out."
] | 1,612 | 1,613 | 1,613 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.1
- Platform: Linux
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (GPU)
- Tensorflow version (GPU?): 2.1.2 (GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Using a minimal example with loading/saving a tokenizer.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Again, this is just a minimal example.
## To reproduce
Steps to reproduce the behavior:
1. Instantiate a `BertConfig` and a `BertTokenizer` based on the config.
2. Try and save the tokenizer with `save_pretrained`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Minimal example:
```
from transformers import BertConfig, BertTokenizer
config = BertConfig.from_pretrained("./configs/bert-small.json", cache_dir=".")
tokenizer = BertTokenizer.from_pretrained("vocab/", cache_dir=".", config=config)
tokenizer.save_pretrained('new_save')
```
Error:
```
Traceback (most recent call last):
File "test.py", line 5, in <module>
tokenizer.save_pretrained('new_save')
File "/cluster/envs/mult/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1979, in save_pretrained
f.write(json.dumps(tokenizer_config, ensure_ascii=False))
File "/cluster/envs/mult/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/cluster/envs/mult/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/cluster/envs/mult/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/cluster/envs/mult/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type BertConfig is not JSON serializable
```
## Expected behavior
The tokenizer should be saveable. I'm guessing this happens because the code that saves the tokenizer config uses the `json` library directly instead of calling `to_json_file` on the `BertConfig`, but I'm not sure.
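For what it's worth, the variant below (dropping the `config` kwarg, which the comment on this issue says is unnecessary) avoids the error:
```python
from transformers import BertTokenizer

# the tokenizer does not need the model config, so nothing non-serializable
# ends up in tokenizer_config.json
tokenizer = BertTokenizer.from_pretrained("vocab/", cache_dir=".")
tokenizer.save_pretrained("new_save")
```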
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10108/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10107/comments | https://api.github.com/repos/huggingface/transformers/issues/10107/events | https://github.com/huggingface/transformers/pull/10107 | 804,922,454 | MDExOlB1bGxSZXF1ZXN0NTcwNjExNjAy | 10,107 | Remove speed metrics from default compute objective [WIP] | {
"login": "shiva-z",
"id": 14043961,
"node_id": "MDQ6VXNlcjE0MDQzOTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/14043961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shiva-z",
"html_url": "https://github.com/shiva-z",
"followers_url": "https://api.github.com/users/shiva-z/followers",
"following_url": "https://api.github.com/users/shiva-z/following{/other_user}",
"gists_url": "https://api.github.com/users/shiva-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shiva-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shiva-z/subscriptions",
"organizations_url": "https://api.github.com/users/shiva-z/orgs",
"repos_url": "https://api.github.com/users/shiva-z/repos",
"events_url": "https://api.github.com/users/shiva-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/shiva-z/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh boy, that is rather bad! Thanks a lot for fixing this!\r\nDid you want to add the test in this PR?",
"> Oh boy, that is rather bad! Thanks a lot for fixing this!\r\n> Did you want to add the test in this PR?\r\n\r\nI can also create a follow up PR if you want to merge this asap. I can't implement the test cases right away. Maybe in ~4 days. @sgugger",
"In that case maybe a follow-up PR, this fix is needed badly so I will merge. Thanks again!"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR removes speed metrics (e.g. `eval_runtime`) from the default compute objective (`default_compute_objective`). `default_compute_objective` is used when no `compute_objective` is passed to `Trainer.hyperparameter_search`. `Trainer` adds speed metrics such as `eval_runtime` and `eval_samples_per_second` to the metrics, and `default_compute_objective` returns the sum of the metrics as the objective, so these speed metrics end up included in the objective.
I still need to add a unit test for `default_compute_objective` to avoid having such metrics in the objective in the future.
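A minimal sketch of the intended behavior (not the actual patch; the exact key handling in the real function may differ):
```python
# drop loss and speed metrics, then sum whatever metrics remain as the objective
def default_compute_objective(metrics: dict) -> float:
    metrics = metrics.copy()
    loss = metrics.pop("eval_loss", None)
    for key in list(metrics):
        if key.endswith("_runtime") or key.endswith("_samples_per_second"):
            metrics.pop(key)
    return loss if len(metrics) == 0 else sum(metrics.values())
```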
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10107",
"html_url": "https://github.com/huggingface/transformers/pull/10107",
"diff_url": "https://github.com/huggingface/transformers/pull/10107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10107.patch",
"merged_at": 1612915383000
} |
https://api.github.com/repos/huggingface/transformers/issues/10106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10106/comments | https://api.github.com/repos/huggingface/transformers/issues/10106/events | https://github.com/huggingface/transformers/pull/10106 | 804,906,658 | MDExOlB1bGxSZXF1ZXN0NTcwNTk4MzIw | 10,106 | Revert "Fix TFConvBertModelIntegrationTest::test_inference_masked_lm Test" | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,651 | 1,617 | MEMBER | null | Reverts huggingface/transformers#10104 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10106",
"html_url": "https://github.com/huggingface/transformers/pull/10106",
"diff_url": "https://github.com/huggingface/transformers/pull/10106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10106.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10105/comments | https://api.github.com/repos/huggingface/transformers/issues/10105/events | https://github.com/huggingface/transformers/issues/10105 | 804,870,511 | MDU6SXNzdWU4MDQ4NzA1MTE= | 10,105 | PruneTrain: Fast Neural Network Training by Dynamic Sparse Model Reconfiguration | {
"login": "gaceladri",
"id": 7850682,
"node_id": "MDQ6VXNlcjc4NTA2ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7850682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaceladri",
"html_url": "https://github.com/gaceladri",
"followers_url": "https://api.github.com/users/gaceladri/followers",
"following_url": "https://api.github.com/users/gaceladri/following{/other_user}",
"gists_url": "https://api.github.com/users/gaceladri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaceladri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaceladri/subscriptions",
"organizations_url": "https://api.github.com/users/gaceladri/orgs",
"repos_url": "https://api.github.com/users/gaceladri/repos",
"events_url": "https://api.github.com/users/gaceladri/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaceladri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,612 | 1,612 | null | NONE | null | # 🚀 Feature request
PruneTrain. {...} By using a structured-pruning approach and additional reconfiguration techniques we introduce, the pruned model can still be efficiently processed on a GPU accelerator. Overall, **PruneTrain achieves a reduction of 39% in the end-to-end training time of ResNet50 for ImageNet by reducing computation cost by 40% in FLOPs, memory accesses by 37% for memory bandwidth bound layers, and the inter-accelerator communication by 55%.**
## Motivation
I'm pre-training some midsize language models from scratch. If you tell me that I can pretrain a network with only a 1% drop in performance while cutting the training energy demand by up to 40% and speeding up inference at the same time, I will buy it.
## Your contribution
https://arxiv.org/abs/1901.09290. I cannot understand why the authors did not open-source the code, since it could reduce global warming, speed up experimentation, and reduce energy consumption. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10105/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/10105/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10104/comments | https://api.github.com/repos/huggingface/transformers/issues/10104/events | https://github.com/huggingface/transformers/pull/10104 | 804,836,419 | MDExOlB1bGxSZXF1ZXN0NTcwNTM4NTI3 | 10,104 | Fix TFConvBertModelIntegrationTest::test_inference_masked_lm Test | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@abhishekkrthakur - this doesn't look good to me. \r\n\r\nJust changing the hardcoded integration test to values that make the test pass does not seem like the way to go here. The PyTorch integration test: https://github.com/huggingface/transformers/blob/7c7962ba891864f9770b9e9424f87d158b839a59/tests/test_modeling_convbert.py#L430 still has the old values and passes, which to me is an indicator that the TF implementation or the PyTorch implementation is not correct.\r\n\r\nAlso, it would be great if we could not merge PRs that have no description and that neither @sgugger, @LysandreJik or I approved."
] | 1,612 | 1,612 | 1,612 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10104/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10104",
"html_url": "https://github.com/huggingface/transformers/pull/10104",
"diff_url": "https://github.com/huggingface/transformers/pull/10104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10104.patch",
"merged_at": 1612898574000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10103/comments | https://api.github.com/repos/huggingface/transformers/issues/10103/events | https://github.com/huggingface/transformers/pull/10103 | 804,800,256 | MDExOlB1bGxSZXF1ZXN0NTcwNTA3NzQ4 | 10,103 | Fix Faiss Import | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As expected the two RAG tests are failing",
"Thanks for fixing!"
] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
All RAG related tests are skipped on circle ci at the moment because `faiss-cpu` is not passing the `is_faiss_available()` function. Sadly this didn't make us realize that RAG is currently broken on master. This should be merged with https://github.com/huggingface/transformers/pull/10094 to fix RAG.
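For context, a sketch of the distinction at play (my illustration, not the exact patch): both the `faiss` and `faiss-cpu` PyPI packages install a module named `faiss`, so probing the import name succeeds for either, whereas probing the distribution name `faiss` misses `faiss-cpu`.

```python
import importlib.util

# True for either PyPI package, since both install the `faiss` module:
faiss_available = importlib.util.find_spec("faiss") is not None
```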
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10103/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10103",
"html_url": "https://github.com/huggingface/transformers/pull/10103",
"diff_url": "https://github.com/huggingface/transformers/pull/10103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10103.patch",
"merged_at": 1612896221000
} |
https://api.github.com/repos/huggingface/transformers/issues/10102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10102/comments | https://api.github.com/repos/huggingface/transformers/issues/10102/events | https://github.com/huggingface/transformers/pull/10102 | 804,788,248 | MDExOlB1bGxSZXF1ZXN0NTcwNDk3ODUw | 10,102 | Replace faiss cpu by faiss | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10102",
"html_url": "https://github.com/huggingface/transformers/pull/10102",
"diff_url": "https://github.com/huggingface/transformers/pull/10102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10102.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10101/comments | https://api.github.com/repos/huggingface/transformers/issues/10101/events | https://github.com/huggingface/transformers/pull/10101 | 804,782,697 | MDExOlB1bGxSZXF1ZXN0NTcwNDkzMjMz | 10,101 | Change dependency from faiss-cpu to faiss | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10101/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10101",
"html_url": "https://github.com/huggingface/transformers/pull/10101",
"diff_url": "https://github.com/huggingface/transformers/pull/10101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10101.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10100/comments | https://api.github.com/repos/huggingface/transformers/issues/10100/events | https://github.com/huggingface/transformers/pull/10100 | 804,637,618 | MDExOlB1bGxSZXF1ZXN0NTcwMzcxNDMw | 10,100 | Fix some edge cases in report_to and add deprecation warnings | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | COLLABORATOR | null | # What does this PR do?
This PR adds two new values for the `report_to` argument of `TrainingArguments`:
- "all" for all integrations installed
- "none" for none (necessary when using in the CLI and we can't pass an empty list)
It also starts warning the user (with an info-level log, so as not to be too spammy) about the upcoming change of default in v5. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10100/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10100",
"html_url": "https://github.com/huggingface/transformers/pull/10100",
"diff_url": "https://github.com/huggingface/transformers/pull/10100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10100.patch",
"merged_at": 1612885093000
} |
https://api.github.com/repos/huggingface/transformers/issues/10099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10099/comments | https://api.github.com/repos/huggingface/transformers/issues/10099/events | https://github.com/huggingface/transformers/issues/10099 | 804,581,681 | MDU6SXNzdWU4MDQ1ODE2ODE= | 10,099 | Issue training Longformer | {
"login": "TomUdale-debug",
"id": 78804008,
"node_id": "MDQ6VXNlcjc4ODA0MDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/78804008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomUdale-debug",
"html_url": "https://github.com/TomUdale-debug",
"followers_url": "https://api.github.com/users/TomUdale-debug/followers",
"following_url": "https://api.github.com/users/TomUdale-debug/following{/other_user}",
"gists_url": "https://api.github.com/users/TomUdale-debug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomUdale-debug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomUdale-debug/subscriptions",
"organizations_url": "https://api.github.com/users/TomUdale-debug/orgs",
"repos_url": "https://api.github.com/users/TomUdale-debug/repos",
"events_url": "https://api.github.com/users/TomUdale-debug/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomUdale-debug/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've been having the same problem",
"Maybe @patrickvonplaten can chime in here",
"Hey @TomUdale-debug - thanks for reporting your issue. It's quite difficult to debug problems with training, but I'll try my best to help you here. However, I would need full access to the training data etc...Could you please make a google colab that I can just run to reproduce your error and link it here. I think this will be the easiest way to check whether there is a problem with Longformer :-) ",
"Sure thing I will set up a Colab, thanks!",
"Many apologies for going slow on this, here is a [Colab](https://colab.research.google.com/drive/12ALD3gJS9rMpW7fvdIwj5mJGx1JGYgLb?usp=sharing) which demonstrates the issue. After one epoch of training (5k docs)the model logit outputs become constant, recall goes to 1 so the model if just predicting everything as 1 (binary classification task). I have the model checkpoint for that if it would be helpful. Any help on this would be great! Thanks, Tom",
"Hmm, at first sounds to me this sounds like the classic overfitting to one class, I'm not so sure whether this is due to using Longformer. \r\n\r\nSome tips:\r\n\r\n- Get more info about your dataset. Is the dataset balanced? Could it be that one class is much more present in the dataset then other classes, which would then be a reason why the model overfits to one class\r\n- Increase the batch_size. Batch_size of 1 is too small IMO, try 8, 16 or 32\r\n- Play around with learning_rate / weight_decay\r\n- If nothing works, try whether you are able to fine-tune BERT well on this dataset. If BERT works well and Longformer doesn't then this is a strong indication that there is a problem with Longformer. But just from looking at the colab, I can't really draw any conclusions and it doesn't really seem to me that the problem is Longformer.\r\n\r\nHope this is somewhat helpful!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | Hello, apologies if this is the wrong place to ask for help. I'm currently trying to fine-tune Longformer on a text classification task; my script is below.
When I use
```python
for param in model.longformer.encoder.parameters():
    param.requires_grad = False
```
to train only the classification head and the embeddings rather than the encoder, training works as expected. When I don't freeze the encoder layers, the model doesn't train at all, and when I try to do inference on it, it gives constant output regardless of what data I put in. I've been reading all the papers to find what I have done wrong; can anyone point me in the right direction? Thank you so much for your help! Tom
```python
import logging
import pandas as pd
from transformers import AdamW, LongformerTokenizerFast, TrainingArguments, Trainer,LongformerForSequenceClassification
import torch
from torch.utils.data import DataLoader
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
# calculate accuracy using sklearn's function
acc = accuracy_score(labels, preds)
f1 = f1_score(labels,preds)
precision = precision_score(labels,preds)
recall = recall_score(labels,preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
class SupremeDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
def main():
# Setup logging:
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logging.info("*** Data processing ***")
logging.info("importing data")
data_train = pd.read_csv("../../../shared/benchmarking/supreme_train.csv").dropna()
data_val = pd.read_csv("../../../shared/benchmarking/supreme_val.csv").dropna()
tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096')
logging.info("tokenizing data")
train_encodings = tokenizer(list(data_train.content_decode),truncation=True,padding=True,return_tensors="pt")
val_encodings = tokenizer(list(data_val.content_decode),truncation=True,padding=True,return_tensors="pt")
train_encodings['global_attention_mask'] = torch.zeros_like(train_encodings['input_ids'])
val_encodings['global_attention_mask'] = torch.zeros_like(val_encodings['input_ids'])
train_encodings['global_attention_mask'][train_encodings['input_ids']==0] = 1
val_encodings['global_attention_mask'][val_encodings['input_ids']==0] = 1
train_labels = data_train.label.tolist()
val_labels = data_val.label.tolist()
logging.info("creating datasets")
train_dataset = SupremeDataset(train_encodings, train_labels)
val_dataset = SupremeDataset(val_encodings, val_labels)
logging.info("*** Training ***")
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=1, # batch size per device during training
per_device_eval_batch_size=1, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=200,
do_eval=True,
load_best_model_at_end=True,
metric_for_best_model="accuracy",
evaluation_strategy = "steps",
)
logging.info("loading model")
model = LongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096')
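    # Note: with the two lines below the encoder is frozen and training behaves as
    # expected; removing them reproduces the constant-output collapse described above.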
for param in model.longformer.encoder.parameters():
param.requires_grad = False
logging.info("loading trainer")
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset,
compute_metrics = compute_metrics # evaluation dataset
)
logging.info("starting training")
trainer.train()
torch.save(model, 'supremecourt_fullmodel.pt')
if __name__ == "__main__":
main()
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10099/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10099/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10098/comments | https://api.github.com/repos/huggingface/transformers/issues/10098/events | https://github.com/huggingface/transformers/pull/10098 | 804,554,093 | MDExOlB1bGxSZXF1ZXN0NTcwMzAyMjEz | 10,098 | Adding support for TFEncoderDecoderModel | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @patrickvonplaten,\r\nI just realised that major step will be adding cross-attention layer to `TFDecoderLMHeadModel` for enabling this support. I will start doing that next.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
This PR will add tf2 support for EncoderDecoderModel upon completion.
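The target API would mirror the existing PyTorch class; a sketch of the intended usage (class and method names are assumed from the PyTorch `EncoderDecoderModel` side):

```python
from transformers import TFEncoderDecoderModel

# Hypothetical once this PR is complete: build a TF2 encoder-decoder from two
# pretrained checkpoints, as EncoderDecoderModel already allows in PyTorch.
model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
```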
Fixes #9863
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10098/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10098",
"html_url": "https://github.com/huggingface/transformers/pull/10098",
"diff_url": "https://github.com/huggingface/transformers/pull/10098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10098.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10097/comments | https://api.github.com/repos/huggingface/transformers/issues/10097/events | https://github.com/huggingface/transformers/issues/10097 | 804,458,453 | MDU6SXNzdWU4MDQ0NTg0NTM= | 10,097 | DeBERTa v2 throws "TypeError: stat: path should be string...", v1 not | {
"login": "205g0",
"id": 74575852,
"node_id": "MDQ6VXNlcjc0NTc1ODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/74575852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/205g0",
"html_url": "https://github.com/205g0",
"followers_url": "https://api.github.com/users/205g0/followers",
"following_url": "https://api.github.com/users/205g0/following{/other_user}",
"gists_url": "https://api.github.com/users/205g0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/205g0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/205g0/subscriptions",
"organizations_url": "https://api.github.com/users/205g0/orgs",
"repos_url": "https://api.github.com/users/205g0/repos",
"events_url": "https://api.github.com/users/205g0/events{/privacy}",
"received_events_url": "https://api.github.com/users/205g0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @205g0 \r\n\r\nThank you for reporting this! \r\n\r\n`microsoft/deberta-xlarge-v2` uses `sentencepiece` vocab and it's not implemented for deberta, which is the reason for this error. ",
"Hey Suraj, thanks for the quick response and good to know!",
"@BigBird01 do you think you could add the missing tokenizer, otherwise, I could add it. Thanks!",
"DeBERTa-v2 is not available in the library yet. We're working towards it with @BigBird01.",
"Thanks @205g0 for the interest in DeBERTa-v2. We are working on it with @LysandreJik, hopefully, it will be available soon. You can check our [PR](https://github.com/huggingface/transformers/pull/10018) for the progress.",
"Oh sorry, @BigBird01, I did not realize that this was a work in progress",
"> Oh sorry, @BigBird01, I did not realize that this was a work in progress\r\n\r\nNo worry, @patil-suraj. Thanks for your quick response. We are glad to integrate these SOTA NLU models with HF to benefit the community:) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.3.1
- Platform: Linux-5.4.0-54-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: false
- Using distributed or parallel set-up in script?: false
### Who can help
@BigBird01 @patil-suraj
## Information
Model I am using (DeBERTa v2):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create this file:
```
from transformers import AutoTokenizer, AutoModel
import torch
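# The next line raises the TypeError shown below: the v2 checkpoint ships a
# sentencepiece vocab that the (v1) DebertaTokenizer cannot load, so its
# vocab_file resolves to None (see the maintainer comments on this issue).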
tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-xlarge-v2')
model = AutoModel.from_pretrained('microsoft/deberta-xlarge-v2')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
print(outputs)
```
2. Run the file
3. You'll get:
```
(venv) root@16gb:~/deberta# python3 index.py
Traceback (most recent call last):
File "index.py", line 4, in <module>
tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-xlarge-v2')
File "/root/deberta/venv/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 398, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/root/deberta/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
return cls._from_pretrained(
File "/root/deberta/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/root/deberta/venv/lib/python3.8/site-packages/transformers/models/deberta/tokenization_deberta.py", line 542, in __init__
if not os.path.isfile(vocab_file):
File "/usr/lib/python3.8/genericpath.py", line 30, in isfile
st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
I tried this with the DeBERTa v1 models and there was no error. I see the same behavior when using `DebertaTokenizer` and `DebertaModel`.
## Expected behavior
No error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10097/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10096/comments | https://api.github.com/repos/huggingface/transformers/issues/10096/events | https://github.com/huggingface/transformers/pull/10096 | 804,434,876 | MDExOlB1bGxSZXF1ZXN0NTcwMjAxOTI1 | 10,096 | Fix example in Wav2Vec2 documentation | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | Fixes an example in Wav2Vec2 documentation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10096/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10096/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10096",
"html_url": "https://github.com/huggingface/transformers/pull/10096",
"diff_url": "https://github.com/huggingface/transformers/pull/10096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10096.patch",
"merged_at": 1612868877000
} |
https://api.github.com/repos/huggingface/transformers/issues/10095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10095/comments | https://api.github.com/repos/huggingface/transformers/issues/10095/events | https://github.com/huggingface/transformers/pull/10095 | 804,416,680 | MDExOlB1bGxSZXF1ZXN0NTcwMTg3NTky | 10,095 | Fix naming in TF MobileBERT | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a naming issue in the `TFMobileBertForMaskedLM` model.
# Fixes
#10088 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10095/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10095",
"html_url": "https://github.com/huggingface/transformers/pull/10095",
"diff_url": "https://github.com/huggingface/transformers/pull/10095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10095.patch",
"merged_at": 1612869032000
} |
https://api.github.com/repos/huggingface/transformers/issues/10094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10094/comments | https://api.github.com/repos/huggingface/transformers/issues/10094/events | https://github.com/huggingface/transformers/pull/10094 | 804,414,794 | MDExOlB1bGxSZXF1ZXN0NTcwMTg2MDU3 | 10,094 | [RAG] fix generate | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Wow this is a huge bug actually! Thanks a lot for fixing it @patil-suraj! \r\n\r\n@LysandreJik @sgugger - Sadly Circle CI is skipping all RAG tests at the moment -> therefore we should first fix the faiss import (#10103), then rebase this PR to see that everything is correctly solved, merge it and then do a patch "
] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
#9984 introduced a new `encoder_no_repeat_ngram_size` `generate` param, but it was missing from `RagTokenForGeneration.generate`, even though it is a required argument for `_get_logits_processor`, which is called inside `RagTokenForGeneration.generate`.
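A sketch of a call that exercises the fixed path (standard `facebook/rag-token-nq` checkpoints assumed; before this fix, `generate` failed because the required argument was never forwarded):

```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch(
    "who holds the record in 100m freestyle", return_tensors="pt"
)
# encoder_no_repeat_ngram_size is now accepted and passed to the logits processors:
generated = model.generate(input_ids=input_dict["input_ids"], encoder_no_repeat_ngram_size=3)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```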
This PR adds the argument to `RagTokenForGeneration.generate` and passes it to `_get_logits_processor`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10094/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10094",
"html_url": "https://github.com/huggingface/transformers/pull/10094",
"diff_url": "https://github.com/huggingface/transformers/pull/10094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10094.patch",
"merged_at": 1612897058000
} |
https://api.github.com/repos/huggingface/transformers/issues/10093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10093/comments | https://api.github.com/repos/huggingface/transformers/issues/10093/events | https://github.com/huggingface/transformers/issues/10093 | 804,409,388 | MDU6SXNzdWU4MDQ0MDkzODg= | 10,093 | Pre-Training for Question Generation | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I guess @patil-suraj is the expert in this, check out his [repo](https://github.com/patil-suraj/question_generation) explaining all the details.",
"ok..\r\n@patil-suraj , plz revert on [this](https://github.com/patil-suraj/question_generation/issues/69) issue I raised in your repo"
] | 1,612 | 1,612 | 1,612 | NONE | null | Hi,
How can I pre-train any of the language generation models (T5, BART, or GPT) for a question generation task where I have passage, question, and answer data?
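For what it's worth, the usual recipe is plain seq2seq fine-tuning on (passage, answer)-to-question pairs; a minimal sketch with T5 (the "generate question:" input layout below is an illustrative convention, not a fixed spec):

```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# One answer-aware training pair:
source = "generate question: answer: Paris  context: Paris is the capital of France."
target = "What is the capital of France?"

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
loss = model(**inputs, labels=labels).loss  # minimize with Trainer / Seq2SeqTrainer
```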
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10093/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10092/comments | https://api.github.com/repos/huggingface/transformers/issues/10092/events | https://github.com/huggingface/transformers/pull/10092 | 804,383,024 | MDExOlB1bGxSZXF1ZXN0NTcwMTYwMzkw | 10,092 | Logging propagation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | This PR enables logs propagation by default with transformers' logging system, in a similar fashion to https://github.com/huggingface/datasets/pull/1845.
Unlike `datasets` however, we will not remove the default handler in transformers' logging system: this handler is heavily used in all examples, and removing the default handler would prevent the formatting from being correctly applied to the examples.
Since this is the best practice shown in the examples, removing the default handler would be a breaking change for users who have copy/pasted that setup across their codebases.
Furthermore, any user that does not want the default handler may use the `disable_default_handler` method in order to disable that behavior. These two methods are added to the documentation in this PR.
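Concretely, opting out (and back in) looks like this:

```python
from transformers.utils import logging

logging.disable_default_handler()  # drop transformers' built-in stream handler
# ... let the application's root-logger handlers format the propagated records ...
logging.enable_default_handler()   # restore the previous behavior
```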
cc @lhoestq @sgugger @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10092/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10092",
"html_url": "https://github.com/huggingface/transformers/pull/10092",
"diff_url": "https://github.com/huggingface/transformers/pull/10092.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10092.patch",
"merged_at": 1612884470000
} |
https://api.github.com/repos/huggingface/transformers/issues/10091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10091/comments | https://api.github.com/repos/huggingface/transformers/issues/10091/events | https://github.com/huggingface/transformers/issues/10091 | 804,334,409 | MDU6SXNzdWU4MDQzMzQ0MDk= | 10,091 | How to run distributed training on multiple machines? | {
"login": "allanj",
"id": 3351187,
"node_id": "MDQ6VXNlcjMzNTExODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3351187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allanj",
"html_url": "https://github.com/allanj",
"followers_url": "https://api.github.com/users/allanj/followers",
"following_url": "https://api.github.com/users/allanj/following{/other_user}",
"gists_url": "https://api.github.com/users/allanj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allanj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allanj/subscriptions",
"organizations_url": "https://api.github.com/users/allanj/orgs",
"repos_url": "https://api.github.com/users/allanj/repos",
"events_url": "https://api.github.com/users/allanj/events{/privacy}",
"received_events_url": "https://api.github.com/users/allanj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm only aware that the PyTorch official documentation is using the RPC: https://pytorch.org/tutorials/intermediate/dist_pipeline_parallel_tutorial.html",
"This is more of a question for the PyTorch GitHub then ours since this is a question about to use `torch.distributed.launch`. That's why I'll close the issue. Still, I can share the command I run on my side:\r\n```\r\npython -m torch.distributed.launch --nproc_per_node 8 \\\r\n --nnodes 2 \\\r\n --node_rank rank_of_your_machine \\\r\n --master_addr main_machine_ip \\\r\n --master_port open_port_on_main_machine \\\r\n run_mlm.py \\\r\n --sharded_ddp \\\r\n --all_other_args_to_script\r\n```\r\nwhere `rank_of_your_machine` should be 0 for the main machine and 1 for the other one, `main_machine_ip` the IP of the machine of rank 0 and `open_port_on_main_machine` the port to use to communicate between the two machines.",
"Thanks, this is really helpful"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.0
- Platform: PyTorch
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
## Who can help:
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Roberta
## To reproduce
The script I'm working with is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
I know that we can run the distributed training on multiple GPUs in a single machine by
`python -m torch.distributed.launch --nproc_per_node=8 run_mlm.py --sharded_ddp`
But what if I have multiple machines, each with multiple GPUs? Let's say I have two machines with 8 GPUs each; what is the expected command to run on these 16 GPUs?
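(For reference, the command from the maintainer's comment above, as a sketch with placeholder values in angle brackets:)
```bash
# Run on each of the two machines; node_rank is 0 on the main machine, 1 on the other.
python -m torch.distributed.launch --nproc_per_node 8 \
    --nnodes 2 \
    --node_rank <rank_of_this_machine> \
    --master_addr <main_machine_ip> \
    --master_port <open_port_on_main_machine> \
    run_mlm.py --sharded_ddp --all_other_args_to_script
```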
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10091/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10091/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10090/comments | https://api.github.com/repos/huggingface/transformers/issues/10090/events | https://github.com/huggingface/transformers/issues/10090 | 804,332,530 | MDU6SXNzdWU4MDQzMzI1MzA= | 10,090 | [question] Are the tensorflow bert weights same as the original repo ? | {
"login": "Slyne",
"id": 6286804,
"node_id": "MDQ6VXNlcjYyODY4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6286804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Slyne",
"html_url": "https://github.com/Slyne",
"followers_url": "https://api.github.com/users/Slyne/followers",
"following_url": "https://api.github.com/users/Slyne/following{/other_user}",
"gists_url": "https://api.github.com/users/Slyne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Slyne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Slyne/subscriptions",
"organizations_url": "https://api.github.com/users/Slyne/orgs",
"repos_url": "https://api.github.com/users/Slyne/repos",
"events_url": "https://api.github.com/users/Slyne/events{/privacy}",
"received_events_url": "https://api.github.com/users/Slyne/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you provide the code you used to get the predictions with the BERT checkpoint on TF Hub? The two should be identical.",
"> Hi! Could you provide the code you used to get the predictions with the BERT checkpoint on TF Hub? The two should be identical.\r\n\r\nI copied from https://tfhub.dev/tensorflow/bert_zh_L-12_H-768_A-12/3\r\n```python\r\ntext_input = tf.keras.layers.Input(shape=(), dtype=tf.string)\r\npreprocessor = hub.KerasLayer(\r\n \"https://tfhub.dev/tensorflow/bert_zh_preprocess/3\")\r\nencoder_inputs = preprocessor(text_input)\r\nencoder = hub.KerasLayer(\r\n \"https://tfhub.dev/tensorflow/bert_zh_L-12_H-768_A-12/3\",\r\n trainable=False)\r\noutputs = encoder(encoder_inputs)\r\npooled_output = outputs[\"pooled_output\"] # [batch_size, 768].\r\nsequence_output = outputs[\"sequence_output\"] # [batch_size, seq_length, 768].\r\nembedding_model = tf.keras.Model(text_input, pooled_output)\r\nsentences = tf.constant([\"今天天气怎么样\"])\r\nprint(embedding_model(sentences))\r\n```\r\n",
"Any update ? @LysandreJik ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-133-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.8
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.5.0-dev20210204 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
Models:
- albert, bert, xlm: @LysandreJik
## Information
I'm using the pretrained bert-base-chinese [here](https://huggingface.co/bert-base-chinese). I print out the pooler_output and the result differs from the TensorFlow 2.0 saved model published on TF Hub [here](https://tfhub.dev/tensorflow/bert_zh_L-12_H-768_A-12/3).
I want to confirm whether the two checkpoints' weights are the same.
## To reproduce
Steps to reproduce the behavior:
```
import tensorflow as tf
from transformers import TFBertModel

bert = TFBertModel.from_pretrained('bert-base-chinese', output_hidden_states=True)
output = bert(input_ids=tf.convert_to_tensor([[ 101, 791, 1921, 1921, 3698, 2582, 720, 3416, 102]]),
attention_mask=tf.convert_to_tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1]]), training=False)
print(output.pooler_output)
```
```
# print the beginning weights:
array([[ 0.99749047, 0.9999622 , 0.99657625, 0.96953416, 0.8489984 ,
0.06474952,
```
The tensorflow hub output can be produced from https://tfhub.dev/tensorflow/bert_zh_L-12_H-768_A-12/3.
The input text is "今天天气怎么样" ("How is the weather today?").
```
[[ 9.97488916e-01 9.99962687e-01 9.96489942e-01 9.69992220e-01
8.49602520e-01 6.62192404e-02 ,
```
The TensorFlow Hub model card states that it uses the original BERT checkpoints from TF 1.x.
There is no `training=True/False` option on the TF Hub layer, so could the difference be due to this option? (I've set `trainable=False` on the hub layer.)
## Expected behavior
Expect the outputs to be the same.
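A rough numerical check (a sketch; `hf_pooled` and `hub_pooled` are hypothetical variables holding the two pooled outputs from the snippets above):
```python
import numpy as np

# `hf_pooled` / `hub_pooled` stand for the pooled_output tensors produced by
# the two snippets above; small float drift is expected, large gaps are not.
np.testing.assert_allclose(hf_pooled.numpy(), hub_pooled.numpy(), atol=1e-4)
```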
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10090/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10089/comments | https://api.github.com/repos/huggingface/transformers/issues/10089/events | https://github.com/huggingface/transformers/pull/10089 | 804,326,793 | MDExOlB1bGxSZXF1ZXN0NTcwMTE0Mzkz | 10,089 | Deprecate Wav2Vec2ForMaskedLM and add Wav2Vec2ForCTC | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> LGTM! Could we remove `Wav2Vec2ForMaskedLM` from the documentation?\r\n\r\nYes! It's better than adding a note saying that the model is deprecated? => yeah let's just remove it!"
] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Deprecates `Wav2Vec2ForMaskedLM` -> the name was badly chosen since the class is actually used for CTC classification, which is very different from masked language modeling. `MaskedLM` is also not a good name for pretraining, where it should rather be something like `ForMaskedSpeechModeling`, so IMO the best idea is to deprecate the whole class.
Right after this PR is merged and there is a patch, I will update all configs.
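For illustration, migrating is a one-line change (a sketch; the checkpoint name is only an example):
```python
from transformers import Wav2Vec2ForCTC

# Drop-in replacement for the deprecated `Wav2Vec2ForMaskedLM` head:
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
```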
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10089",
"html_url": "https://github.com/huggingface/transformers/pull/10089",
"diff_url": "https://github.com/huggingface/transformers/pull/10089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10089.patch",
"merged_at": 1612860542000
} |
https://api.github.com/repos/huggingface/transformers/issues/10088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10088/comments | https://api.github.com/repos/huggingface/transformers/issues/10088/events | https://github.com/huggingface/transformers/issues/10088 | 804,313,273 | MDU6SXNzdWU4MDQzMTMyNzM= | 10,088 | Language modelling head has zero weights in pretrained TFMobileBertForMaskedLM | {
"login": "mknz",
"id": 6409704,
"node_id": "MDQ6VXNlcjY0MDk3MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6409704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mknz",
"html_url": "https://github.com/mknz",
"followers_url": "https://api.github.com/users/mknz/followers",
"following_url": "https://api.github.com/users/mknz/following{/other_user}",
"gists_url": "https://api.github.com/users/mknz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mknz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mknz/subscriptions",
"organizations_url": "https://api.github.com/users/mknz/orgs",
"repos_url": "https://api.github.com/users/mknz/repos",
"events_url": "https://api.github.com/users/mknz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mknz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello!\r\n\r\nIndeed there is an issue in the naming for `TFMobileBertForMaskedLM`. This will be fixed in the next release.",
"OK, thanks for the quick response!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,618 | 1,618 | NONE | null | ## Description
The `TFMobileBertForMaskedLM` example returns all-zero logits, while the `MobileBertForMaskedLM` example works fine.
https://huggingface.co/transformers/model_doc/mobilebert.html#tfmobilebertformaskedlm
I checked the language modeling head weights for both models and found that the TF pretrained model has all-zero weights.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@jplu @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFMobileBertForMaskedLM
The problem arises when using:
- [x] the official example scripts: (give details below)
## To reproduce
```
from transformers import MobileBertForMaskedLM
from transformers import TFMobileBertForMaskedLM
# PyTorch
model = MobileBertForMaskedLM.from_pretrained('google/mobilebert-uncased')
print(list(model.cls.parameters())[0])
# Parameter containing:
# tensor([-7.2946, -7.4302, -7.5401, ..., -7.4850, -7.4503, -2.7798],
# requires_grad=True)
# TensorFlow
model = TFMobileBertForMaskedLM.from_pretrained('google/mobilebert-uncased')
print(model.layers[1].get_weights()[0])
# array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)
```
## Expected behavior
The language modeling head of `TFMobileBertForMaskedLM` should have the same weights as that of `MobileBertForMaskedLM`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10088/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10087/comments | https://api.github.com/repos/huggingface/transformers/issues/10087/events | https://github.com/huggingface/transformers/pull/10087 | 804,233,244 | MDExOlB1bGxSZXF1ZXN0NTcwMDM1NzA0 | 10,087 | remove adjust_logits_during_generation method | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"All slow tests passing in PT!",
"@patrickvonplaten (Bart, MBart, Pegasus, Marian, Blenderbot, BlenderbotSmall) slow tests are passing in TF as well.",
"Applied Sylvain's suggestions. Merging!"
] | 1,612 | 1,668 | 1,612 | MEMBER | null | # What does this PR do?
This PR is the first split of #9811.
This PR
1. introduces two new `generate`/`config` arguments and their corresponding logits processors:
- `forced_bos_token_id` and `forced_eos_token_id`, to force a specific start and end token. This is particularly useful for many-to-many and one-to-many translation models, since we can pass different language tokens as `forced_bos_token_id` to `generate`,
- `ForcedBOSTokenLogitsProcessor` and `ForcedEOSTokenLogitsProcessor`
2. removes the `adjust_logits_during_generation` method from all models (except `Marian`) and handles that use case using the newly introduced logits processors.
3. removes the `force_bos_token_to_be_generated` argument from `BartConfig`
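To illustrate the `Marian` exception noted just below, the required adjustment looks roughly like this (a sketch, not the exact implementation):
```python
import torch

def adjust_logits(logits: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Mask the pad token *before* normalization so log_softmax redistributes
    # its probability mass; a bad_words_ids filter applied after log_softmax
    # cannot achieve the same effect.
    logits[:, pad_token_id] = float("-inf")
    return torch.log_softmax(logits, dim=-1)
```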
For `Marian` we still need to keep the `adjust_logits_during_generation` method to force the model not to generate the pad token. Adding the pad token to `bad_words_ids` does not resolve this issue, because the score of `pad_token_id` needs to be set to `-inf` *before* calling `log_softmax`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10087/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10087/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10087",
"html_url": "https://github.com/huggingface/transformers/pull/10087",
"diff_url": "https://github.com/huggingface/transformers/pull/10087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10087.patch",
"merged_at": 1612976950000
} |
https://api.github.com/repos/huggingface/transformers/issues/10086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10086/comments | https://api.github.com/repos/huggingface/transformers/issues/10086/events | https://github.com/huggingface/transformers/pull/10086 | 804,221,068 | MDExOlB1bGxSZXF1ZXN0NTcwMDI1NDA5 | 10,086 | doc: update W&B related doc | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There's also `docs/source/example.md` but I understand it is built automatically from `examples/README.md`"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Updates W&B related documentation:
* remove outdated examples
* update URLs
* add config parameters (see the environment-variable sketch below)
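(For reference, a hedged sketch of the kind of configuration involved; these environment variables are commonly used with the integration, values are illustrative:)
```bash
export WANDB_PROJECT=my-project   # log runs to a specific project
export WANDB_WATCH=all            # also log gradients and parameters
export WANDB_DISABLED=true        # opt out of W&B logging entirely
```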
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10086/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10086",
"html_url": "https://github.com/huggingface/transformers/pull/10086",
"diff_url": "https://github.com/huggingface/transformers/pull/10086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10086.patch",
"merged_at": 1612900072000
} |
https://api.github.com/repos/huggingface/transformers/issues/10085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10085/comments | https://api.github.com/repos/huggingface/transformers/issues/10085/events | https://github.com/huggingface/transformers/pull/10085 | 804,186,750 | MDExOlB1bGxSZXF1ZXN0NTY5OTk2NzY1 | 10,085 | [examples/s2s] add test set predictions | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I propose that the best approach would be to finish everything that is planned and then we will run tests side by side and note any small discrepancies if any and fix them in one go? Does that work?\r\n\r\nYes, this was the last major missing piece from this script. Now I'm going to start running both scripts side by side (manually converting the old datasets to new datasets format) and note the discrepancies, I'll also wait for your tests.\r\n\r\n> I'm waiting for the datasets hub to port the datasets to be able to compare the old and the new.\r\n\r\nLet's not wait for the hub, for now, we could just manually convert the datasets for tests and later upload them to the hub once it's ready. ",
"> > I propose that the best approach would be to finish everything that is planned and then we will run tests side by side and note any small discrepancies if any and fix them in one go? Does that work?\r\n> \r\n> Yes, this was the last major missing piece from this script. Now I'm going to start running both scripts side by side (manually converting the old datasets to new datasets format) and note the discrepancies, I'll also wait for your tests.\r\n\r\nThat works.\r\n\r\n> > I'm waiting for the datasets hub to port the datasets to be able to compare the old and the new.\r\n> \r\n> Let's not wait for the hub, for now, we could just manually convert the datasets for tests and later upload them to the hub once it's ready.\r\n\r\nSure - I already wrote the code for wmt en-ro https://github.com/huggingface/transformers/issues/10044#issuecomment-774413928 need to adapt to others.",
"Changed `eval_beams` to `num_beams`. Hopefully final name change. Merging!"
] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
This PR adds the `do_predict` option to the `run_seq2seq.py` script for test set predictions.
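(A hypothetical invocation sketch; `--do_predict` is the new flag, the remaining arguments follow the script's usual conventions and are illustrative:)
```bash
python examples/seq2seq/run_seq2seq.py \
    --model_name_or_path t5-small \
    --do_predict \
    --output_dir output_dir
    # ...plus the usual task/dataset arguments
```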
Fixes #10032
cc. @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10085/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10085/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10085",
"html_url": "https://github.com/huggingface/transformers/pull/10085",
"diff_url": "https://github.com/huggingface/transformers/pull/10085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10085.patch",
"merged_at": 1612883502000
} |
https://api.github.com/repos/huggingface/transformers/issues/10084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10084/comments | https://api.github.com/repos/huggingface/transformers/issues/10084/events | https://github.com/huggingface/transformers/issues/10084 | 804,059,530 | MDU6SXNzdWU4MDQwNTk1MzA= | 10,084 | Tapas not working with tables exceeding token limit | {
"login": "bogdankostic",
"id": 48713846,
"node_id": "MDQ6VXNlcjQ4NzEzODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/48713846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bogdankostic",
"html_url": "https://github.com/bogdankostic",
"followers_url": "https://api.github.com/users/bogdankostic/followers",
"following_url": "https://api.github.com/users/bogdankostic/following{/other_user}",
"gists_url": "https://api.github.com/users/bogdankostic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bogdankostic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bogdankostic/subscriptions",
"organizations_url": "https://api.github.com/users/bogdankostic/orgs",
"repos_url": "https://api.github.com/users/bogdankostic/repos",
"events_url": "https://api.github.com/users/bogdankostic/events{/privacy}",
"received_events_url": "https://api.github.com/users/bogdankostic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nYes the column ranks may cause issues when a table is too big, as the vocab size is only 256. See also [my reply](https://github.com/huggingface/transformers/issues/9221#issuecomment-749093391) on #9221.\r\n\r\nActually, the authors of TAPAS did release a new method in a [follow-up paper](https://arxiv.org/abs/2010.00571) to prune columns that are not relevant to a question to be able to serve large tables to the BERT-like model, so this is something that maybe could be added in the future.",
"My suggestion would be to compute the column ranks on the truncated table. (Not sure if and how this is feasible.)\r\nOtherwise I would suggest returning a more informative error message.",
"Yes, good suggestion. I've investigated this a bit and it seems that the original implementation also computes the column ranks on the original table, rather than the truncated one. I've asked the original authors [here](https://github.com/google-research/tapas/issues/106#issue-804538477). Will keep you updated.",
"So the author replied:\r\n\r\n> IIRC, then we compute them before pruning the table.\r\nThat was by design so that those ranks would match the original numeric rank (pre-pruning).\r\nIt's true that the rank could thus exceed the vocab size.\r\nWe could add some trimming to prevent that.\r\n\r\nSo this is something that could be added in the future (together with the `prune_columns` option). I put it on my to-do list for now.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@NielsRogge Thanks for the explanations above. Has there been any update on this issue? I have also run into this issue when running Tapas on the WTQ dataset, and it took me a lot of efforts to get to the bottom of this and realize that this is an issue with the `column_rank` IDs from oversized tables.\r\n\r\nThe painful part is that there is currently no guard or no warning against feeding oversized tables into the tokenizer, and the issue will only come out as a \"CUDA error: device-side assert triggered\" message when the Tapas forward pass is run.\r\n\r\nI think there are several potential ways to solve this or make this less painful:\r\n\r\n1. Computing the column rank after the table truncation (as already suggested by another comment above). This makes a ton of sense because the table will only be presented to the model after truncation in the tokenizer anyway, so there is no point to maintain a non-continuous column rank for large tables (with some ranks removed due to truncation). I understand that the original TF implementation might not handle this, but can this be added as a behavior in the Huggingface implementation?\r\n\r\n2. Add an option to re-map all the large column ranks to the max rank value. This can be implemented in this tokenizer function: https://github.com/huggingface/transformers/blob/7fcee113c163a95d1b125ef35dc49a0a1aa13a50/src/transformers/models/tapas/tokenization_tapas.py#L1487\r\nThis is less ideal than 1, but can make sure that the model won't crash due to an index-out-of-range error.\r\n\r\n3. The easiest fix would be to add some warning/exception in the tokenizer that reminds users about this. Or let the tokenizer return a `None` value in the output, or return a special boolean variable such as `table_oversized`. This does not solve anything, but can make the capture of this issue much easier.\r\n\r\nLook forward to some updates on this issue.",
"Is there any way to bypass the token limit ?",
"@KML1337,\r\n\r\nI do not have sure if you could consider the following approach:\r\n\r\nFirst, you split the table into n-subtables that generate tokens under the limit tokens;\r\nThen, process each subtable with the model;\r\nFinally, aggregate all responses and select the one with the highest logit score.\r\n"
] | 1,612 | 1,687 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.0
- Platform: MacOS
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @sgugger @NielsRogge
## Information
Model I am using (Bert, XLNet ...): TaPas
## To reproduce
When executing the following code, using this [table](https://gist.github.com/bogdankostic/387d1c7a0e8ce25ea302395756df11b3), I get an `IndexError: index out of range in self`.
```python
from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wtq", drop_rows_to_fit=True)
model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
df = pd.read_csv("table.tsv", sep="\t").astype(str)
queries = ["How big is Ardeen?"]
inputs = tokenizer(table=df, queries=queries, padding="max_length", truncation=True, return_tensors="pt")
outputs = model(**inputs)
```
I am not completely sure about the cause of the error, but I suspect that the column rank vectors are not generated correctly: `torch.max(token_type_ids[:, :, 4])` returns 298 and `torch.max(token_type_ids[:, :, 5])` returns 302, while the embedding layers for column rank and inverse column rank only allow a max value of 255.
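(A crude workaround sketch rather than a fix: clamp the offending rank ids to the embedding range before the forward pass, based on the `token_type_ids` layout described above.)
```python
# token_type_ids[:, :, 4] and [:, :, 5] hold the column rank and inverse
# column rank; their embedding tables only cover ids 0..255, so clamp
# before calling model(**inputs).
inputs["token_type_ids"][:, :, 4:6] = inputs["token_type_ids"][:, :, 4:6].clamp(max=255)
```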
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10084/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10083/comments | https://api.github.com/repos/huggingface/transformers/issues/10083/events | https://github.com/huggingface/transformers/issues/10083 | 804,018,050 | MDU6SXNzdWU4MDQwMTgwNTA= | 10,083 | model.generate needs BART config update | {
"login": "swethmandava",
"id": 17828952,
"node_id": "MDQ6VXNlcjE3ODI4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/17828952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/swethmandava",
"html_url": "https://github.com/swethmandava",
"followers_url": "https://api.github.com/users/swethmandava/followers",
"following_url": "https://api.github.com/users/swethmandava/following{/other_user}",
"gists_url": "https://api.github.com/users/swethmandava/gists{/gist_id}",
"starred_url": "https://api.github.com/users/swethmandava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/swethmandava/subscriptions",
"organizations_url": "https://api.github.com/users/swethmandava/orgs",
"repos_url": "https://api.github.com/users/swethmandava/repos",
"events_url": "https://api.github.com/users/swethmandava/events{/privacy}",
"received_events_url": "https://api.github.com/users/swethmandava/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @swethmandava,\r\n\r\nSorry I don't understand the issue here - what do you mean by `model.generate` runs into errors? Your above code snippet works fine for me. Could you clarify the issue? Thank you!",
"`summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True, num_beam_groups=1, output_scores=False, return_dict_in_generate=False, encoder_no_repeat_ngram_size=0, diversity_penalty=0.0)\r\n`\r\n\r\nworks for me. I have to define the following defaults (num_beam_groups, output_scores, return_dict_in_generate, encoder_no_repeat_ngram_size, diversity_penalty) explicitly since they are not in BARTConfig and default to None",
"Hey @swethmandava \r\n\r\nYou shouldn't need to define these param. All these config params have default values defined in the `PretrainedConfig` class from which all other configs inherit. \r\n\r\nCould you try again with the newest transformers version?"
] | 1,612 | 1,613 | 1,613 | CONTRIBUTOR | null | ### Who can help
@patrickvonplaten @patil-suraj
```
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
```
`model.generate` runs into errors because `num_beam_groups`, `return_dict_in_generate` and `encoder_no_repeat_ngram_size` are not defined in the BART config and default to `None`. Should they be added? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10083/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10082/comments | https://api.github.com/repos/huggingface/transformers/issues/10082/events | https://github.com/huggingface/transformers/issues/10082 | 803,985,366 | MDU6SXNzdWU4MDM5ODUzNjY= | 10,082 | Supporting truncation from both ends of the sequence in BertTokenizerFast | {
"login": "shangw-nvidia",
"id": 66387198,
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shangw-nvidia",
"html_url": "https://github.com/shangw-nvidia",
"followers_url": "https://api.github.com/users/shangw-nvidia/followers",
"following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}",
"gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions",
"organizations_url": "https://api.github.com/users/shangw-nvidia/orgs",
"repos_url": "https://api.github.com/users/shangw-nvidia/repos",
"events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}",
"received_events_url": "https://api.github.com/users/shangw-nvidia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hi, thanks for opening an issue! We have the `padding_side` tokenizer attribute, but it doesn't work for truncation unfortunately.\r\n@n1t0, what do you think?",
"@LysandreJik Thanks a lot for your response! @n1t0 I'm wondering what your thoughts are on this feature?"
] | 1,612 | 1,613 | null | NONE | null | # 🚀 Feature request
For `BertTokenizerFast` (inherited from `PreTrainedTokenizerFast`), it seems like `__call__` only supports truncating from the end of the sequences if we set `truncation` to be `longest_first`, `only_first` or `only_second`. For example, assuming `max_length` is 6 and `truncation` is `longest_first`:
(`I have a pen`, `I have an apple`) --> truncation --> (`I have a`, `I have an`)
However, if we take a closer look at [Google's original data-preprocessing script for BERT](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L430), truncation can happen at both ends of the sequence randomly:
(`I have a pen`, `I have an apple`) --> truncation --> (`I have a`, `have an apple`) or (`have a pen`, `I have an`) or (`I have a`, `I have an`) or (`have a pen`, `have an apple`)
For `BertTokenizer`, perhaps I could reassign its `truncate_sequences` member function (https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L2887) to a new function that implements Google's truncation scheme; however, for `BertTokenizerFast`, truncation is handled completely in Rust, about which I can't do anything.
An alternative is to call `tokenize` first, then truncate the sequence using Google's scheme, ~~then call `__call__`, passing `is_split_into_words` as `True`~~. However, this approach has a significant performance impact compared to calling `__call__` on a batch of sequences directly (the average total tokenization latency doubled in our experiments).
> PS: Turned out `is_split_into_words` doesn't work this way (since when it sees a subword `##abc`, `__call__` would further tokenize it into `#` `#` `abc` even if `is_split_into_words==True`). Thus, the actual (but slow) alternative is to 1) call `tokenize` 2) implement the truncation scheme and making sure a subword starting with `##` won't be at the boundary 3) call `convert_tokens_to_string` 4) call `__call__`. Effectively, this alternative tokenizes the same sequence twice.
I'm wondering if it's possible to add official support for random truncation from both ends of the sequence?
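For reference, a Python sketch of Google's scheme, adapted from the linked `create_pretraining_data.py`:
```python
import random

def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng=random):
    # Repeatedly shorten the longer sequence, randomly dropping a token
    # from the front or the back to avoid positional bias.
    while len(tokens_a) + len(tokens_b) > max_num_tokens:
        trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
        if rng.random() < 0.5:
            del trunc_tokens[0]
        else:
            trunc_tokens.pop()
```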
## Motivation
To match Google's truncation scheme exactly and minimize artificial impacts on pretraining convergence.
## Your contribution
Unfortunately I'm not very familiar with Rust (I can read it, but I've never learned or written Rust before), thus I can't help much.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10082/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10082/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10081/comments | https://api.github.com/repos/huggingface/transformers/issues/10081/events | https://github.com/huggingface/transformers/issues/10081 | 803,964,866 | MDU6SXNzdWU4MDM5NjQ4NjY= | 10,081 | pipeline("sentiment-analysis') - index out of range in self | {
"login": "nikchha",
"id": 37020365,
"node_id": "MDQ6VXNlcjM3MDIwMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/37020365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikchha",
"html_url": "https://github.com/nikchha",
"followers_url": "https://api.github.com/users/nikchha/followers",
"following_url": "https://api.github.com/users/nikchha/following{/other_user}",
"gists_url": "https://api.github.com/users/nikchha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikchha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikchha/subscriptions",
"organizations_url": "https://api.github.com/users/nikchha/orgs",
"repos_url": "https://api.github.com/users/nikchha/repos",
"events_url": "https://api.github.com/users/nikchha/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikchha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Do you mind giving us a reproducible example, for example the sequence that makes this pipeline crash? Without such an example we won't be able to find out what's wrong. Thank you for your understanding",
"Hello! Thank you very much for your quick reply. While there are many entities in my dataset that cause the error, I just found the following entry and reproduced the error in a seperate script:\r\n\r\n> Hi Jan! Nice post and I’m jealous that you get to go to both the SAP sessions and the AppleDevCon. But I think you inadvertent discovery of the aging of the SAP developer population vs the non-enterprise developers is a telling one. SAP tools and platforms remain a niche area that are only utilised by SAP developers. They may be brilliant, indeed I think in some area SAP is well ahead of the rest of the pack. The problem is I am 1 in 10,000 in thinking this (conservative estimate I fear). Those with plenty of experience in enterprise development (hence older) appreciate the ways that SAPs tools work with an enterprise way of doing things (translatable, solid, standard, accessible, enhanceable, etc). Whereas those that are used to pushing code changes to production every few hours just don’t understand. Why would you want your app to look like it is an SAP app? (Hello UI5 I can see you from across the room, you can’t hide.) Of course if you’re using this as an enterprise-wide approach, it makes sense. Thankfully for the livelihood of all of us aging SAP developers, enterprises have architects that insist on standards and enterprise-wide approaches. In the meantime, however, our younger, and likely less well paid, colleagues in the non SAP developer space will continue to use whatever framework offers the best(fastest/easiest) result and most jobs. Since to get a job in the SAP space customers are used to asking for a minimum of multiple years of experience, it’s hard to get a gig – so it’s much more profitable to just develop in Firebase, Angular, etc and get a job. After all, having a paying job is quite often more important that working with your framework of choice. I am sure that many of us older SAP devs will hire many people and teach them the minor cross-over skills to be proficient in the SAP iOS SDK, and we’ll probably make a decent amount of money from the companies that have architects that insist on SAP UI5 looking applications. But I don’t think this will change the overall conversation. In another 3 years, the developers in SAP will have aged another 3 years (there will still be a huge demand and the pay will be too good to move on). A bunch of new talent will have been trained in the new tools and will by now have 3 years experience and will be able to find enterprise SAP jobs of their own, but we will be no closer to getting anyone to adopt SAP tools for anything other than SAP customer usage. Grim outlook – sorry. The alternative (as I see it) is that SAP gives up on building its own (even if open source and rather excellent) frameworks and just starts adding to some existing ones. All of a sudden instead of trying to convince people to use a new framework, you ask them to use a variant of one they already know. At the same time SAP invests some serious money into “public API first” development and makes everything in S4 and their other cloud products able to be accessed and updated via well documented APIs. (Thus the end of the need for ABAP developers and those who understand the black arts of the SAP APIs.) The costs per developer hour plummet and then we see a new group of developers helping customers realise their dreams. And some very happy customers. As for the SAP iOS SDK, I think it has a very niche area, even more so than standard UI5 development. 
Not only is it specific to a requirement that only a large SAP customer would have, it’s also mobile platform specific. Given that it will not translate to Android devices I fear that it will not interest the generic mobile app developer. Due to being quite SAP specific quite probably not the iOS only developer either. We’ll see SAP devs training up or being hired & trained for specific tasks, not adopting the platform just because it’s cool. Perhaps I’m just being too much of a grumpy old git (meant in the non-awesome code sharing/management/versioning way) and we will find that these open frameworks are adopted. That would be awesome. It would make a lot of SAP customers a lot happier too to be able to have some decent choice as to who to do their work. Cheers, Chris",
"Hello! There were two issues here:\r\n\r\n- The configuration for the tokenizer of `distilbert-base-uncased-finetuned-sst-2-english` was ill-configured and was lacking the `max_length`. I've manually fixed this in [huggingface#03b4d1](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/commit/03b4d196c19d0a73c7e0322684e97db1ec397613)\r\n- You should truncate your sequences by setting `truncation=True` so that your sequences don't overflow in the pipeline:\r\n\r\n```py\r\nclassifier = pipeline('sentiment-analysis')\r\nclassifier(text, truncation=True)\r\n```\r\n\r\nLet me know if this fixes your issue!",
"Hello!\r\n\r\nThank you so much! That fixed the issue. I already thought the missing `max_length` could be the issue but it did not help to pass `max_length = 512` to the _call_ function of the pipeline.\r\n\r\nI used the truncation flag before but I guess it did not work due to the missing `max_length` value.\r\n\r\nAnyway, works perfectly now! Thank you!",
"Unfortunately this was due to the ill-configured tokenizer on the hub. We're working on a more general fix to prevent this from happening in the future.\r\n\r\nHappy to help!"
] | 1,612 | 1,612 | 1,612 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Manjaro Linux (Feb 2021)
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (GPU)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.-->
Library:
- tokenizers: @n1t0, @LysandreJik
- pipelines: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): distilbert-base-uncased-finetuned-sst-2-english
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: sentiment analysis
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
My dataset consists of blog articles and comments on them. Sometimes there are non-English characters, code snippets or other weird sequences.
Error occurs when:
1. Initialize the default pipeline("sentiment-analysis") with device 0 or -1
2. Run the classifier inference with `truncation=True` on my dataset
3. After some time the classifier returns the following error:
CPU: `Index out of range in self`
GPU: ``/opt/conda/conda-bld/pytorch_1607370172916/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [56,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.``
## Expected behavior
I thought at first that my data was messing up the tokenization process or the model, because sometimes there are strange sequences in the data, e.g. code, links or stack traces.
However, if you specify the model and tokenizer explicitly during pipeline initialization, inference works fine on the same data:
`classifier = pipeline('sentiment-analysis', model='distilbert-base-uncased-finetuned-sst-2-english', tokenizer='distilbert-base-uncased', device=0)`
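(For reference, the fix from the maintainer comments above boils down to truncating explicitly; a sketch, where `text` stands for one of the long inputs:)
```python
from transformers import pipeline

classifier = pipeline('sentiment-analysis')
# truncation=True cuts overlong inputs to the model's max length instead of
# letting them overflow the position embeddings.
print(classifier(text, truncation=True))  # `text` is a hypothetical long input
```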
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10081/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10080/comments | https://api.github.com/repos/huggingface/transformers/issues/10080/events | https://github.com/huggingface/transformers/pull/10080 | 803,834,719 | MDExOlB1bGxSZXF1ZXN0NTY5Njk2NTE1 | 10,080 | [deepspeed tests] transition to new tests dir | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I no longer can use the libraries from seq2seq since I can't do a relative import from the script.\r\n\r\nWe can add things to the sys path if needed (see the [general text examples](https://github.com/huggingface/transformers/blob/master/examples/test_examples.py)).\r\n\r\nThanks for doing this, it looks good to me!",
"Indeed, that's what we have been doing, but having a library named \"`utils.py`\" and importing it from far away is too ambiguous. So will probably need to rename such libraries, or start moving their functionality into a more central area."
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | as discussed at https://github.com/huggingface/transformers/issues/10076 relocating deepspeed tests a dedicated area and out of the scripts area.
I went right ahead and created a dedicated sub-folder for deepspeed tests.
I can no longer use the libraries from `seq2seq` since I can't do a relative import from the script.
The only thing is that I will need to update all of my comments/posts to mention that `ds_config.json` has moved.
fairscale will probably be next if this looks good.
Fixes: https://github.com/huggingface/transformers/issues/10076
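For reference, a sketch of the `sys.path` approach mentioned in the comments (directory names are illustrative):

```python
import os
import sys

# make the example scripts importable from the new tests location,
# the same way examples/test_examples.py does it
SRC_DIRS = [os.path.join(os.path.dirname(__file__), "..", "seq2seq")]
sys.path.extend(SRC_DIRS)

import finetune_trainer  # noqa: E402
```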
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10080/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10080",
"html_url": "https://github.com/huggingface/transformers/pull/10080",
"diff_url": "https://github.com/huggingface/transformers/pull/10080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10080.patch",
"merged_at": 1612816913000
} |
https://api.github.com/repos/huggingface/transformers/issues/10079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10079/comments | https://api.github.com/repos/huggingface/transformers/issues/10079/events | https://github.com/huggingface/transformers/issues/10079 | 803,788,439 | MDU6SXNzdWU4MDM3ODg0Mzk= | 10,079 | Unclear error "NotImplementedError: "while saving tokenizer. How fix it? | {
"login": "MLDovakin",
"id": 78375175,
"node_id": "MDQ6VXNlcjc4Mzc1MTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/78375175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MLDovakin",
"html_url": "https://github.com/MLDovakin",
"followers_url": "https://api.github.com/users/MLDovakin/followers",
"following_url": "https://api.github.com/users/MLDovakin/following{/other_user}",
"gists_url": "https://api.github.com/users/MLDovakin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MLDovakin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MLDovakin/subscriptions",
"organizations_url": "https://api.github.com/users/MLDovakin/orgs",
"repos_url": "https://api.github.com/users/MLDovakin/repos",
"events_url": "https://api.github.com/users/MLDovakin/events{/privacy}",
"received_events_url": "https://api.github.com/users/MLDovakin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @n1t0 can chime in here!",
"> When I output tokenizer name_or_path = nothing is displayed. This is normal?\r\n\r\nI think it is yes, you are loading using `tokenizer_file=` instead of using the normal path with `from_pretrained`. No need to worry about this.\r\n\r\nConcerning the error, I think the way to avoid it is by specifying `legacy_format=False`:\r\n```python\r\ntokenizer.save_pretrained(\"/content/tokennizerrrr\", legacy_format=False)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | Here is my tokenizer code and how I save it to a JSON file "/content/bert-datas7.json":
````
from tokenizers import Tokenizer, normalizers
from tokenizers.models import WordPiece
from tokenizers.normalizers import Lowercase, NFD, StripAccents
from tokenizers.pre_tokenizers import Whitespace

# assumed initialization: the original snippet starts from an already
# constructed `bert_tokenizer`, so this line is an illustrative guess
bert_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
bert_tokenizer.pre_tokenizer = Whitespace()
from tokenizers.processors import TemplateProcessing
bert_tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", 1),
("[SEP]", 2),
("[PAD]", 3),
],
)
from tokenizers.trainers import WordPieceTrainer
trainer = WordPieceTrainer(
vocab_size=30522, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"], pad_to_max_length=True
)
files = [f"/content/For_ITMO.txt" for split in ["test", "train", "valid"]]
bert_tokenizer.train(trainer, files)
model_files = bert_tokenizer.model.save("data", "/content/For_ITMO.txt")
bert_tokenizer.model = WordPiece.from_file(*model_files, unk_token="[UNK]", pad_to_max_length=True)
bert_tokenizer.save("/content/bert-datas7.json")
````
When I print the tokenizer, `name_or_path` is empty. Is this normal?
````
tokenizer = PreTrainedTokenizerFast(tokenizer_file='/content/bert-datas7.json')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
print(tokenizer)
>>> PreTrainedTokenizerFast(name_or_path='', vocab_size=1435, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'pad_token': '[PAD]'})
````
Also, when I try to save my tokenizer, I get an error without explanation. How can I rewrite the code so that saving works?
#9658
#10039
[For_ITMO.txt-vocab (1) (1).txt](https://github.com/huggingface/transformers/files/5945659/For_ITMO.txt-vocab.1.1.txt)
````
tokenizer.save_pretrained("/content/tokennizerrrr")
NotImplementedError Traceback (most recent call last)
<ipython-input-11-efc48254a528> in <module>()
----> 1 tokenizer.save_pretrained("/content/tokennizerrrr")
2 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in save_vocabulary(self, save_directory, filename_prefix)
2042 :obj:`Tuple(str)`: Paths to the files saved.
2043 """
-> 2044 raise NotImplementedError
2045
2046 def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:
NotImplementedError:
````
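Per the fix suggested in the comments on this issue, a minimal sketch of saving the fast tokenizer in the unified JSON format (assuming `tokenizer` is the `PreTrainedTokenizerFast` built above):

````
# legacy_format=False writes a single tokenizer.json instead of the
# slow-tokenizer vocabulary files, so save_vocabulary() -- the method
# raising NotImplementedError above -- is never called
tokenizer.save_pretrained("/content/tokennizerrrr", legacy_format=False)
````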
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10079/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10078/comments | https://api.github.com/repos/huggingface/transformers/issues/10078/events | https://github.com/huggingface/transformers/pull/10078 | 803,762,731 | MDExOlB1bGxSZXF1ZXN0NTY5NjM2OTgz | 10,078 | Replace strided slice with tf.expand_dims | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> When two new dimensions are need, can we use a different operator to make the code more readable? Suggested tf.reshape but there might be something else available?\r\n\r\nYes, this is doable with `tf.reshape`.",
"All the slow tests of the concerned models are ok!"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR aims to replace the strided-slice notation with its explicit TF operator counterpart, as proposed by @mfuntowicz in https://github.com/huggingface/transformers/pull/9890#discussion_r571939682.
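For illustration, a minimal sketch of the substitution (tensor name and shape are illustrative):

```python
import tensorflow as tf

attention_mask = tf.ones((8, 128))

# strided-slice notation (before)
extended_mask = attention_mask[:, tf.newaxis, tf.newaxis, :]

# explicit operator (after): one new axis per tf.expand_dims call
extended_mask = tf.expand_dims(tf.expand_dims(attention_mask, axis=1), axis=1)

# when two new dimensions are needed at once, tf.reshape reads better,
# as suggested in the review comments
shape = tf.shape(attention_mask)
extended_mask = tf.reshape(attention_mask, (shape[0], 1, 1, shape[1]))
```
| {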
"url": "https://api.github.com/repos/huggingface/transformers/issues/10078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10078/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10078",
"html_url": "https://github.com/huggingface/transformers/pull/10078",
"diff_url": "https://github.com/huggingface/transformers/pull/10078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10078.patch",
"merged_at": 1612889308000
} |
https://api.github.com/repos/huggingface/transformers/issues/10077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10077/comments | https://api.github.com/repos/huggingface/transformers/issues/10077/events | https://github.com/huggingface/transformers/pull/10077 | 803,762,030 | MDExOlB1bGxSZXF1ZXN0NTY5NjM2Mzkw | 10,077 | Update tokenizers requirement | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | Bump the `tokenizers` version requirement to use the latest release, while still accepting any newer version below the next potentially breaking one.
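For illustration, the kind of version specifier this implies (the exact bounds are an assumption, not copied from the diff):

```python
# in setup.py: accept new releases within the current minor series,
# but nothing from a possibly breaking next minor version
install_requires = ["tokenizers>=0.10.1,<0.11"]
```
| {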
"url": "https://api.github.com/repos/huggingface/transformers/issues/10077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10077/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10077",
"html_url": "https://github.com/huggingface/transformers/pull/10077",
"diff_url": "https://github.com/huggingface/transformers/pull/10077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10077.patch",
"merged_at": 1612805247000
} |
https://api.github.com/repos/huggingface/transformers/issues/10076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10076/comments | https://api.github.com/repos/huggingface/transformers/issues/10076/events | https://github.com/huggingface/transformers/issues/10076 | 803,740,872 | MDU6SXNzdWU4MDM3NDA4NzI= | 10,076 | [tests] where to put deepspeed + fairscale tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The examples are not tests, so `examples/deepspeed` should not be created to host some deepspeed tests. I'm wondering why we would need an `examples/deepspeed` since the base concept of having deepspeed integrating in our `Trainer` is to have it work out of the box for **all** our examples.\r\n\r\nMy suggestion was to create an `examples/tests` folder where all tests should go (so `test_examples`, and all `seq2seq/test_xxx`), to keep the examples folder themselves clean so that user can easily use them.",
"> The examples are not tests, so `examples/deepspeed` should not be created to host some deepspeed tests. I'm wondering why we would need an `examples/deepspeed` since the base concept of having deepspeed integrating in our `Trainer` is to have it work out of the box for **all** our examples.\r\n\r\nI only meant it as a grouping, same as we did for models. first it was all flat and then we grouped them together under `models`.\r\n\r\n> My suggestion was to create an `examples/tests` folder where all tests should go (so `test_examples`, and all `seq2seq/test_xxx`), to keep the examples folder themselves clean so that user can easily use them.\r\n\r\nSure, as long as you feel that it's OK that we test core integrations under `examples` (as it is now) that works for me.\r\n\r\nCould you pelase clarify, do you prefer most/all of the `examples/tests` to be flat, or would grouping make things easier to make sense of - I'm asking since some tests come with extra files (as is the case with ds_config files) - so `examples/tests/deepspeed`, ...\r\n",
"I agree with Sylvain, fairscale/deepspeed is supposed to work with all of our existing examples, so IMO we shouldn’t add `examples/deepspeed`.\r\n\r\n `examples/tests` makes sense to me.",
"> Could you pelase clarify, do you prefer most/all of the examples/tests to be flat, or would grouping make things easier to make sense of - I'm asking since some tests come with extra files (as is the case with ds_config files) - so examples/tests/deepspeed, ...\r\n\r\nWe can certainly have several files once they are all together in one folder. I'd just like the examples subfolders to be clean, our internal testing should be setup so it's the easiest for us to understand/debug.",
"Thank you for the clarification, @sgugger. I will start working on that transition."
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | As a split off from this comment https://github.com/huggingface/transformers/pull/10039#pullrequestreview-585482462 we need to find a new home for deepspeed + fairscale tests.
Currently they are under `examples/seq2seq` because they rely on `finetune_trainer.py` (`run_seq2seq.py` once the transition is over).
@sgugger suggests to keep the `seq2seq` folder as simple as possible. We also have `ds_config.json` there that could be moved too.
Seeing what's happening in fairscale land, I think we will need a bunch of different tests there in the future too.
So where should we put the deepspeed + fairscale tests?
Ideally they should be put under the main `tests`, since they are part of the trainer core, but I'm not sure whether reaching across the test suite is a clean approach.
My fantasy is that one day transformers will have a few essential tools that aren't examples, and those will then live somewhere in the main tree, perhaps `src/transformers/apps`, and then it'd be easy to have such tests under `tests`.
So suggestions for now:
1. create `examples/deepspeed` and `examples/fairscale`
2. create `examples/distributed` and perhaps have all those extensions tested in one folder
3. create a new 3rd test suite for integrations
4. create `tests/deepspeed` - but as voiced earlier I'm not sure how reaching across a test suite will work - need to try - also this proposes to change the current flat structure of `tests`.
Perhaps you have other ideas.
@sgugger, @patrickvonplaten, @LysandreJik, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10076/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10075/comments | https://api.github.com/repos/huggingface/transformers/issues/10075/events | https://github.com/huggingface/transformers/issues/10075 | 803,732,718 | MDU6SXNzdWU4MDM3MzI3MTg= | 10,075 | assertion failed: [predictions must be >= 0] | {
"login": "ZJaume",
"id": 11339330,
"node_id": "MDQ6VXNlcjExMzM5MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/11339330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZJaume",
"html_url": "https://github.com/ZJaume",
"followers_url": "https://api.github.com/users/ZJaume/followers",
"following_url": "https://api.github.com/users/ZJaume/following{/other_user}",
"gists_url": "https://api.github.com/users/ZJaume/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZJaume/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZJaume/subscriptions",
"organizations_url": "https://api.github.com/users/ZJaume/orgs",
"repos_url": "https://api.github.com/users/ZJaume/repos",
"events_url": "https://api.github.com/users/ZJaume/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZJaume/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @jplu has an idea!",
"I dug up a bit into the model and it seems that the activation is `tanh`. Shouldn't it be a `sigmoid` or a `softmax`? That's why it is producing predictions lower than 0.\r\n\r\n```python\r\nIn [2]: model.classifier.dense\r\nOut[2]: <tensorflow.python.keras.layers.core.Dense at 0x7f6f70357350>\r\n\r\nIn [3]: model.classifier.dense.activation\r\nOut[3]: <function tensorflow.python.keras.activations.tanh(x)>\r\n```",
"Hello!\r\n\r\nFrom what I can see from your dataset example, you have two labels `0` and `1` and not one so that's might be why you get this issue. For regression task, (single label output), they all have to have a float value between `0` and `1`. You can have an example with the `stsb` glue task in our example here https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py",
"Alright, I'll take a look then. Predicting a binary classification with 2 neurons and `linear` or `tanh` activations seemed strange to me. I've always used a single neuron with `sigmoid`.",
"For binary classification, you have two labels and then neurons, it is more intuitive to proceed that way :) but yes you can also do what you propose and round the float value to 0 or 1 depending of the output of your sigmoid activation. Nevertheless, our models don't propose such approach.",
"Figured out that the error was thrown by the `Precision` and `Recall` classes because they require values between 0 and 1. In case someone wants to use them when training with native TensowFlow I managed to add the `argmax` to the the classes with this:\r\n\r\n```python\r\nfrom tensorflow.python.keras.utils import metrics_utils\r\n\r\nclass PrecisionArgmax(Precision):\r\n def update_state(self, y_true, y_pred, sample_weight=None):\r\n y_pred = tf.math.argmax(y_pred, -1)\r\n return metrics_utils.update_confusion_matrix_variables(\r\n {\r\n metrics_utils.ConfusionMatrix.TRUE_POSITIVES: self.true_positives,\r\n metrics_utils.ConfusionMatrix.FALSE_POSITIVES: self.false_positives\r\n },\r\n y_true,\r\n y_pred,\r\n thresholds=self.thresholds,\r\n top_k=self.top_k,\r\n class_id=self.class_id,\r\n sample_weight=sample_weight)\r\n```\r\n\r\nSo the code that I posted works with `num_classes=2` and using the the overridden classes as metrics."
] | 1,612 | 1,642 | 1,612 | NONE | null | Trying to train a binary classifier over sentence pairs with a custom dataset throws a TensorFlow error.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `4.2.2`
- Platform: Ubuntu 18.04
- Python version: `3.7.5`
- PyTorch version (GPU?):
- Tensorflow version (GPU): `2.3.1`
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Nope
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (TFRoberta, TFXLMRoberta...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: https://huggingface.co/transformers/training.html#fine-tuning-in-native-tensorflow-2
* [x] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
from tensorflow.keras.metrics import Precision, Recall  # tf.keras, to match the TF model below
import tensorflow as tf
def build_dataset(tokenizer, filename):
data = [[], [], []]
with open(filename, 'r') as file_:
for line in file_:
fields = line.split('\t')
data[0].append(fields[0].strip())
data[1].append(fields[1].strip())
data[2].append(int(fields[2].strip()))
sentences = tokenizer(data[0], data[1],
padding=True,
truncation=True)
return tf.data.Dataset.from_tensor_slices((dict(sentences),
data[2]))
settings = {
"model": 'roberta-base',
"batch_size": 8,
"n_classes": 1,
"epochs": 10,
"steps_per_epoch": 128,
"patience": 5,
"loss": "binary_crossentropy",
"lr": 5e7,
"clipnorm": 1.0,
}
tokenizer = AutoTokenizer.from_pretrained(settings["model"])
train_dataset = build_dataset(tokenizer, 'train.head')
train_dataset = train_dataset.shuffle(
len(train_dataset)).batch(settings["batch_size"])
dev_dataset = build_dataset(tokenizer, 'dev.head').batch(
settings["batch_size"])
model = TFAutoModelForSequenceClassification.from_pretrained(
settings['model'],
num_labels=1)
model.compile(optimizer='adam',
#loss='binary_crossentropy',
loss=model.compute_loss,
metrics=[Precision(name='p'), Recall(name='r')])
model.summary()
model.fit(train_dataset,
epochs=settings["epochs"],
#steps_per_epoch=steps_per_epoch,
validation_data=dev_dataset,
batch_size=settings["batch_size"],
verbose=1)
```
Gives the following output
```
Some layers of TFRobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Model: "tf_roberta_for_sequence_classification"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
roberta (TFRobertaMainLayer) multiple 124055040
_________________________________________________________________
classifier (TFRobertaClassif multiple 591361
=================================================================
Total params: 124,646,401
Trainable params: 124,646,401
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
Traceback (most recent call last):
File "finetune.py", line 52, in <module>
verbose=1)
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
tmp_logs = train_function(iterator)
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 840, in _call
return self._stateless_fn(*args, **kwds)
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
cancellation_manager=cancellation_manager)
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 550, in call
ctx=ctx)
File "/work/user/bicleaner-neural/venv/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (tf_roberta_for_sequence_classification/classifier/out_proj/BiasAdd:0) = ] [[0.153356239][0.171548933][0.121127911]...] [y (Cast_3/x:0) = ] [0]
[[{{node assert_greater_equal/Assert/AssertGuard/else/_1/assert_greater_equal/Assert/AssertGuard/Assert}}]]
[[assert_greater_equal_1/Assert/AssertGuard/pivot_f/_31/_205]]
(1) Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (tf_roberta_for_sequence_classification/classifier/out_proj/BiasAdd:0) = ] [[0.153356239][0.171548933][0.121127911]...] [y (Cast_3/x:0) = ] [0]
[[{{node assert_greater_equal/Assert/AssertGuard/else/_1/assert_greater_equal/Assert/AssertGuard/Assert}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_20780]
Function call stack:
train_function -> train_function
```
The dataset examples look like this:
```python
print(list(train_dataset.take(1).as_numpy_iterator()))
```
```
[({'input_ids': array([[ 0, 133, 864, ..., 1, 1, 1],
[ 0, 133, 382, ..., 1, 1, 1],
[ 0, 1121, 645, ..., 1, 1, 1],
...,
[ 0, 133, 864, ..., 1, 1, 1],
[ 0, 1121, 144, ..., 1, 1, 1],
[ 0, 495, 21046, ..., 1, 1, 1]], dtype=int32), 'attention_mask': array([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], dtype=int32)}, array([0, 0, 0, 0, 1, 0, 0, 0], dtype=int32))]
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
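For reference, a minimal sketch of the working setup from the resolution comments on this issue — two labels plus metrics that argmax the logits first. `PrecisionArgmax` is the subclass defined in those comments; `RecallArgmax` is an assumed analogue built the same way, and the rest follows the reproduction script above:

```python
model = TFAutoModelForSequenceClassification.from_pretrained(
    settings["model"],
    num_labels=2,  # two output neurons for binary classification
)

model.compile(
    optimizer="adam",
    loss=model.compute_loss,
    # wrappers that argmax the two-class logits before computing the metric
    metrics=[PrecisionArgmax(name="p"), RecallArgmax(name="r")],
)
```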
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10075/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10074/comments | https://api.github.com/repos/huggingface/transformers/issues/10074/events | https://github.com/huggingface/transformers/issues/10074 | 803,647,850 | MDU6SXNzdWU4MDM2NDc4NTA= | 10,074 | AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'new_ones' | {
"login": "Tortoise17",
"id": 36593708,
"node_id": "MDQ6VXNlcjM2NTkzNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36593708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tortoise17",
"html_url": "https://github.com/Tortoise17",
"followers_url": "https://api.github.com/users/Tortoise17/followers",
"following_url": "https://api.github.com/users/Tortoise17/following{/other_user}",
"gists_url": "https://api.github.com/users/Tortoise17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tortoise17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tortoise17/subscriptions",
"organizations_url": "https://api.github.com/users/Tortoise17/orgs",
"repos_url": "https://api.github.com/users/Tortoise17/repos",
"events_url": "https://api.github.com/users/Tortoise17/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tortoise17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you fix this, and if so, how? I'm having the same problem and nothing's working for me so far.\r\n\r\n**EDIT**: Fixed it, I was using `tf` tensors with a (PyTorch) `AutoModel` instead of a `TFAutoModel`."
] | 1,612 | 1,618 | 1,612 | NONE | null | I have environment with following
```
torch=1.7.1+cpu
tensorflow=2.2.0
transformers=4.2.2
Python=3.6.12
```
and I am using the commands below
```
input_ids = tokenizer.encode('accident', return_tensors='tf')
greedy_output = model.generate(input_ids, max_length=50)
print("Output:\n" + 100 * '-')
```
but I get the error below
```
File "/home/anaconda3/envs/myenv/lib/python3.6/site-packages/transformers/generation_utils.py", line 368, in _prepare_attention_mask_for_generation
return input_ids.new_ones(input_ids.shape)
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'new_ones'
```
Earlier, until October, the same code was working perfectly. May I ask for help to understand which error or conflict I am making?
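Per the fix noted in the comments, a minimal sketch of matching the tensor framework to the model class (the `gpt2` checkpoint is an illustrative assumption — the snippet above does not show which model was loaded):

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# a TF model class accepts the tf tensors from return_tensors='tf';
# alternatively, keep a PyTorch AutoModel and use return_tensors='pt'
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer.encode("accident", return_tensors="tf")
greedy_output = model.generate(input_ids, max_length=50)
print("Output:\n" + 100 * "-")
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))
```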
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10074/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10073/comments | https://api.github.com/repos/huggingface/transformers/issues/10073/events | https://github.com/huggingface/transformers/pull/10073 | 803,610,938 | MDExOlB1bGxSZXF1ZXN0NTY5NTEwMjQ5 | 10,073 | Added integration tests for Pytorch implementation of the ELECTRA model | {
"login": "spatil6",
"id": 6419011,
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spatil6",
"html_url": "https://github.com/spatil6",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"repos_url": "https://api.github.com/users/spatil6/repos",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | Added integration tests for Pytorch implementation of the ELECTRA model
Fixes #9949
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
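For context, a minimal sketch of the kind of integration test this adds (the checkpoint name is real, but the input ids and expected shape are illustrative placeholders, not values copied from the PR):

```python
import unittest

import torch

from transformers import ElectraModel
from transformers.testing_utils import require_torch, slow


@require_torch
class ElectraModelIntegrationTest(unittest.TestCase):
    @slow
    def test_inference_no_head(self):
        model = ElectraModel.from_pretrained("google/electra-small-discriminator")
        input_ids = torch.tensor([[101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])
        output = model(input_ids)[0]
        # electra-small uses a hidden size of 256
        self.assertEqual(output.shape, torch.Size((1, 8, 256)))
```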
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10073/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10073",
"html_url": "https://github.com/huggingface/transformers/pull/10073",
"diff_url": "https://github.com/huggingface/transformers/pull/10073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10073.patch",
"merged_at": 1612816945000
} |
https://api.github.com/repos/huggingface/transformers/issues/10072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10072/comments | https://api.github.com/repos/huggingface/transformers/issues/10072/events | https://github.com/huggingface/transformers/pull/10072 | 803,588,395 | MDExOlB1bGxSZXF1ZXN0NTY5NDkxNzEx | 10,072 | Fixing model templates | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One of the model templates test (the pull request target) will fail because it doesn't take into account the changes made in the PR. Following this in https://github.com/huggingface/transformers/issues/10065",
"Thanks for fixing!"
] | 1,612 | 1,612 | 1,612 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10072/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10072",
"html_url": "https://github.com/huggingface/transformers/pull/10072",
"diff_url": "https://github.com/huggingface/transformers/pull/10072.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10072.patch",
"merged_at": 1612793222000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10071/comments | https://api.github.com/repos/huggingface/transformers/issues/10071/events | https://github.com/huggingface/transformers/pull/10071 | 803,574,702 | MDExOlB1bGxSZXF1ZXN0NTY5NDgwNTEy | 10,071 | Fix mlflow param overflow clean | {
"login": "noise-field",
"id": 14188757,
"node_id": "MDQ6VXNlcjE0MTg4NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/14188757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noise-field",
"html_url": "https://github.com/noise-field",
"followers_url": "https://api.github.com/users/noise-field/followers",
"following_url": "https://api.github.com/users/noise-field/following{/other_user}",
"gists_url": "https://api.github.com/users/noise-field/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noise-field/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noise-field/subscriptions",
"organizations_url": "https://api.github.com/users/noise-field/orgs",
"repos_url": "https://api.github.com/users/noise-field/repos",
"events_url": "https://api.github.com/users/noise-field/events{/privacy}",
"received_events_url": "https://api.github.com/users/noise-field/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the issue #8849 where MLflow logging failed due to parameters logged being too long. Now the MLflow logger also fetches the limits directly from MLflow validation utility.
Fixes #8849
An example using run_seq2seq.py: https://colab.research.google.com/drive/1Sof7YtueI5MNcm9rn0wOKkFvWSeqK-Sy?usp=sharing
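For illustration, a simplified sketch of the approach (not the exact diff; `combined_dict` stands in for the flattened args/config dict the callback logs):

```python
import mlflow
from mlflow.utils.validation import MAX_PARAM_VAL_LENGTH, MAX_PARAMS_TAGS_PER_BATCH

# stand-in for the flattened TrainingArguments + model config dict
combined_dict = {"model_name_or_path": "t5-small", "task_specific_params": "x" * 5000}

# skip values MLflow would reject for being too long, instead of crashing
params = {k: v for k, v in combined_dict.items() if len(str(v)) <= MAX_PARAM_VAL_LENGTH}

# MLflow also caps how many params fit into one call, so log in batches
items = list(params.items())
for i in range(0, len(items), MAX_PARAMS_TAGS_PER_BATCH):
    mlflow.log_params(dict(items[i : i + MAX_PARAMS_TAGS_PER_BATCH]))
```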
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10071/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10071",
"html_url": "https://github.com/huggingface/transformers/pull/10071",
"diff_url": "https://github.com/huggingface/transformers/pull/10071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10071.patch",
"merged_at": 1612803483000
} |
https://api.github.com/repos/huggingface/transformers/issues/10070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10070/comments | https://api.github.com/repos/huggingface/transformers/issues/10070/events | https://github.com/huggingface/transformers/pull/10070 | 803,568,905 | MDExOlB1bGxSZXF1ZXN0NTY5NDc1Njcx | 10,070 | remove token_type_ids from TokenizerBertGeneration output | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes #10045 by removing `token_type_ids` from the tokenizer output, as it is not needed for `BertGenerationModel`.
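For illustration, a sketch of one way to drop `token_type_ids` from the encoding output (whether this is the exact mechanism used in the diff is an assumption on my part):

```python
from transformers import PreTrainedTokenizer


class BertGenerationTokenizer(PreTrainedTokenizer):
    # only these keys are produced by __call__/encode_plus;
    # token_type_ids is intentionally absent
    model_input_names = ["input_ids", "attention_mask"]
```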
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10070/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10070",
"html_url": "https://github.com/huggingface/transformers/pull/10070",
"diff_url": "https://github.com/huggingface/transformers/pull/10070.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10070.patch",
"merged_at": 1612807533000
} |
https://api.github.com/repos/huggingface/transformers/issues/10069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10069/comments | https://api.github.com/repos/huggingface/transformers/issues/10069/events | https://github.com/huggingface/transformers/pull/10069 | 803,517,444 | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNjk0 | 10,069 | Fix TF template | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik I think there is a problem with the Template test. It doesn't seem to take into account the changes in the current PR.",
"Good catch @jplu, thanks for fixing! Yes indeed, this test needs to be reworked. Tracking it in https://github.com/huggingface/transformers/issues/10065"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
Fix the TF template. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10069/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10069",
"html_url": "https://github.com/huggingface/transformers/pull/10069",
"diff_url": "https://github.com/huggingface/transformers/pull/10069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10069.patch",
"merged_at": 1612789851000
} |
https://api.github.com/repos/huggingface/transformers/issues/10068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10068/comments | https://api.github.com/repos/huggingface/transformers/issues/10068/events | https://github.com/huggingface/transformers/issues/10068 | 803,492,684 | MDU6SXNzdWU4MDM0OTI2ODQ= | 10,068 | Integrating GPT-2 model with Web page | {
"login": "states786",
"id": 64096105,
"node_id": "MDQ6VXNlcjY0MDk2MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/64096105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/states786",
"html_url": "https://github.com/states786",
"followers_url": "https://api.github.com/users/states786/followers",
"following_url": "https://api.github.com/users/states786/following{/other_user}",
"gists_url": "https://api.github.com/users/states786/gists{/gist_id}",
"starred_url": "https://api.github.com/users/states786/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/states786/subscriptions",
"organizations_url": "https://api.github.com/users/states786/orgs",
"repos_url": "https://api.github.com/users/states786/repos",
"events_url": "https://api.github.com/users/states786/events{/privacy}",
"received_events_url": "https://api.github.com/users/states786/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | Hi,
I would like to integrate a GPT-2 model with web technologies such as HTML and JavaScript, in order to build an editor similar to https://transformer.huggingface.co/doc/gpt2-large.
Can you please guide me on how I can achieve this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10068/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10067/comments | https://api.github.com/repos/huggingface/transformers/issues/10067/events | https://github.com/huggingface/transformers/issues/10067 | 803,479,834 | MDU6SXNzdWU4MDM0Nzk4MzQ= | 10,067 | "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. | {
"login": "zakidotai",
"id": 21052344,
"node_id": "MDQ6VXNlcjIxMDUyMzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/21052344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zakidotai",
"html_url": "https://github.com/zakidotai",
"followers_url": "https://api.github.com/users/zakidotai/followers",
"following_url": "https://api.github.com/users/zakidotai/following{/other_user}",
"gists_url": "https://api.github.com/users/zakidotai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zakidotai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zakidotai/subscriptions",
"organizations_url": "https://api.github.com/users/zakidotai/orgs",
"repos_url": "https://api.github.com/users/zakidotai/repos",
"events_url": "https://api.github.com/users/zakidotai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zakidotai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Can you try installing the brand new v4.3.0 to see if it resolves your issue?",
"same problem here and my transformers is v4.3.0, still not working",
"Could you try to load the model/tokenizer and specify the `local_files_only=True` kwarg to the `from_pretrained` method, before passing them to the pipeline directly?\r\n\r\ni.e., instead of:\r\n\r\n```py\r\npipeline('sentiment-analysis')('I love you')\r\n```\r\n\r\ntry:\r\n\r\n```py\r\nfrom transformers import pipeline, AutoModelForSequenceClassification, AutoTokenizer\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"distilbert-base-uncased-finetuned-sst-2-english\", local_files_only=True)\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased-finetuned-sst-2-english\", local_files_only=True)\r\n\r\npipeline('sentiment-analysis', model=model, tokenizer=tokenizer)('I love you')\r\n```",
"Thanks @LysandreJik.\r\n\r\nTried your suggestions. \r\nNot working. Tried by keeping local_files_only both True and False while loading the model. It did not work.",
"@PrinceMohdZaki were you able to find a solution for this error?",
"Can anyone having those kind of issues, please try #10235, and let us know if it provides more insight into the cause (networking issue, proxy error, etc.)?\r\n\r\nThanks!",
"> @PrinceMohdZaki were you able to find a solution for this error?\r\n\r\nYeah. Finally we resolved it by exporting the https_proxy same as http_proxy as shown here : https://stackoverflow.com/questions/56628194/sslerror-installing-with-pip/56628419 ",
"I am facing the same issue when trying to do `spacy.load`. Is there an obvious solution to that?\r\n\r\nI am following [this](https://turbolab.in/build-a-custom-ner-model-using-spacy-3-0/) tutorial to build a custom NER pipeline on an HPC cluster where the compute nodes do not have access to the internet. \r\n\r\nHere's the log:\r\n\r\n```\r\npython3 -m spacy train data/config.cfg --paths.train ./train.spacy --paths.dev ./valid.spacy --output ./models/output --gpu-id 0\r\nℹ Saving to output directory: models/output\r\nℹ Using GPU: 0\r\n\r\n=========================== Initializing pipeline ===========================\r\n[2022-11-18 15:12:08,973] [INFO] Set up nlp object from config\r\n[2022-11-18 15:12:08,982] [INFO] Pipeline: ['transformer', 'ner']\r\n[2022-11-18 15:12:08,984] [INFO] Created vocabulary\r\n[2022-11-18 15:12:08,986] [INFO] Finished initializing nlp object\r\nTraceback (most recent call last):\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy/__main__.py\", line 4, in <module>\r\n setup_cli()\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy/cli/_util.py\", line 71, in setup_cli\r\n command(prog_name=COMMAND)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/typer/main.py\", line 500, in wrapper\r\n return callback(**use_params) # type: ignore\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy/cli/train.py\", line 45, in train_cli\r\n train(config_path, output_path, use_gpu=use_gpu, overrides=overrides)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy/cli/train.py\", line 72, in train\r\n nlp = init_nlp(config, use_gpu=use_gpu)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy/training/initialize.py\", line 84, in init_nlp\r\n nlp.initialize(lambda: train_corpus(nlp), sgd=optimizer)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy/language.py\", line 1317, in initialize\r\n proc.initialize(get_examples, nlp=self, **p_settings)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy_transformers/pipeline_component.py\", line 355, in initialize\r\n self.model.initialize(X=docs)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/thinc/model.py\", line 299, in initialize\r\n self.init(self, X=X, Y=Y)\r\n File 
\"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy_transformers/layers/transformer_model.py\", line 131, in init\r\n hf_model = huggingface_from_pretrained(name, tok_cfg, trf_cfg)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/spacy_transformers/layers/transformer_model.py\", line 251, in huggingface_from_pretrained\r\n tokenizer = AutoTokenizer.from_pretrained(str_path, **tok_config)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 471, in from_pretrained\r\n tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py\", line 332, in get_tokenizer_config\r\n resolved_config_file = get_file_from_repo(\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/transformers/file_utils.py\", line 2310, in get_file_from_repo\r\n resolved_file = cached_path(\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/transformers/file_utils.py\", line 1921, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/hyadav/.conda/envs/spacy_venv/lib/python3.10/site-packages/transformers/file_utils.py\", line 2177, in get_from_cache\r\n raise ValueError(\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n```\r\n\r\n\r\n",
"First you need to run internet on the hpc. Read your hpc's documentation or ask system administrator about it.",
"> First you need to run internet on the hpc. Read your hpc's documentation or ask system administrator about it.\r\n\r\nWe have access to the internet on the login nodes but not on the compute nodes. So, I can download everything on the login nodes I need before I finally start computations on the compute nodes.",
"Follow the same protocol to run internet on the compute nodes. If you are\nrunning the script by submitting a job on compute nodes, insert the\ncommands to run internet in the job script before the python command.\n\nIf proxies are also required to be exported, you can either export them\nbefore the python command or export proxies within the code using os module.\n\nOn Sat, 19 Nov 2022, 00:27 Himanshi Yadav, ***@***.***> wrote:\n\n> First you need to run internet on the hpc. Read your hpc's documentation\n> or ask system administrator about it.\n>\n> We have access to the internet on the login nodes but not on the compute\n> nodes. So, I can download everything on the login nodes I need before I\n> finally start computations on the compute nodes.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/10067#issuecomment-1320540933>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFATXOFCQS2LFR2I2KNVXN3WI7YFFANCNFSM4XI2ROIA>\n> .\n> You are receiving this because you modified the open/close state.Message\n> ID: ***@***.***>\n>\n",
"> Follow the same protocol to run internet on the compute nodes. If you are running the script by submitting a job on compute nodes, insert the commands to run internet in the job script before the python command. If proxies are also required to be exported, you can either export them before the python command or export proxies within the code using os module.\r\n> […](#)\r\n> On Sat, 19 Nov 2022, 00:27 Himanshi Yadav, ***@***.***> wrote: First you need to run internet on the hpc. Read your hpc's documentation or ask system administrator about it. We have access to the internet on the login nodes but not on the compute nodes. So, I can download everything on the login nodes I need before I finally start computations on the compute nodes. — Reply to this email directly, view it on GitHub <[#10067 (comment)](https://github.com/huggingface/transformers/issues/10067#issuecomment-1320540933)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AFATXOFCQS2LFR2I2KNVXN3WI7YFFANCNFSM4XI2ROIA> . You are receiving this because you modified the open/close state.Message ID: ***@***.***>\r\n\r\nThere is no protocol to run the internet on the compute nodes, you **can not** use internet on the compute nodes. ",
"Getting this error, and found out today that **HuggingFace is down** so this is likely not because of the issues mentioned above at the moment",
"Thank you ! \r\nI solved the problem when I set export HTTPS_PROXY and https_proxy",
"changing the default value of force_download=True\r\nin cached_path line 1037 \r\n\\home\\username\\anaconda3\\envs\\punct\\lib\\python3.8\\site-packages\\transformers\\file_utils.py solved it for me\r\n"
] | 1,612 | 1,686 | 1,615 | NONE | null | I am trying to execute this command after installing all the required modules and I ran into this error:
NOTE: We are running this on an HPC cluster.
`python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))"`
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/civil/phd/cez198233/anaconda3/envs/lang2/lib/python3.7/site-packages/transformers/pipelines/__init__.py", line 340, in pipeline
framework = framework or get_framework(model)
File "/home/civil/phd/cez198233/anaconda3/envs/lang2/lib/python3.7/site-packages/transformers/pipelines/base.py", line 66, in get_framework
model = AutoModel.from_pretrained(model, revision=revision)
File "/home/civil/phd/cez198233/anaconda3/envs/lang2/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 724, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/home/civil/phd/cez198233/anaconda3/envs/lang2/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 360, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/civil/phd/cez198233/anaconda3/envs/lang2/lib/python3.7/site-packages/transformers/configuration_utils.py", line 420, in get_config_dict
use_auth_token=use_auth_token,
File "/home/civil/phd/cez198233/anaconda3/envs/lang2/lib/python3.7/site-packages/transformers/file_utils.py", line 1056, in cached_path
local_files_only=local_files_only,
File "/home/civil/phd/cez198233/anaconda3/envs/lang2/lib/python3.7/site-packages/transformers/file_utils.py", line 1235, in get_from_cache
"Connection error, and we cannot find the requested files in the cached path."
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
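For reference, the two workarounds that surface in the comments above can be sketched in Python. First, setting the proxies from inside the script (the thread was ultimately resolved by exporting `https_proxy` alongside `http_proxy`); the proxy address below is a hypothetical placeholder:

```python
import os

# Hypothetical proxy address; replace with your cluster's actual proxy.
proxy = "http://proxy.example.com:3128"

# Model downloads go over HTTPS, so the https variants must be set too;
# the underlying `requests` library honors both spellings.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ[var] = proxy

from transformers import pipeline

print(pipeline("sentiment-analysis")("I love you"))
```

Second, if the compute nodes have no internet route at all, download on a login node and load from a local path on the compute node. A minimal sketch, assuming the pipeline's default English sentiment model and a hypothetical shared path `/scratch/models/sst2`:

```python
# On the login node (has internet): fetch and save locally.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"
AutoModelForSequenceClassification.from_pretrained(name).save_pretrained("/scratch/models/sst2")
AutoTokenizer.from_pretrained(name).save_pretrained("/scratch/models/sst2")

# On the compute node (no internet): point the pipeline at the local copy.
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model="/scratch/models/sst2", tokenizer="/scratch/models/sst2")
print(nlp("I love you"))
```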
**Conda list output:**
conda list
# packages in environment at /home/civil/phd/cez198233/anaconda3/envs/lang2:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
blas 1.0 mkl
brotlipy 0.7.0 py37hb5d75c8_1001 conda-forge
ca-certificates 2020.12.5 ha878542_0 conda-forge
certifi 2020.12.5 py37h89c1867_1 conda-forge
cffi 1.14.4 py37h261ae71_0
chardet 4.0.0 py37h89c1867_1 conda-forge
click 7.1.2 pyh9f0ad1d_0 conda-forge
cryptography 2.9.2 py37hb09aad4_0 conda-forge
cudatoolkit 10.0.130 0
dataclasses 0.7 pyhb2cacf7_7 conda-forge
filelock 3.0.12 pyh9f0ad1d_0 conda-forge
freetype 2.10.4 h5ab3b9f_0
gperftools 2.7 h767d802_2 conda-forge
idna 2.10 pyh9f0ad1d_0 conda-forge
importlib-metadata 3.4.0 py37h89c1867_0 conda-forge
intel-openmp 2020.2 254
joblib 1.0.0 pyhd8ed1ab_0 conda-forge
jpeg 9b h024ee3a_2
lcms2 2.11 h396b838_0
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20191231 h14c3975_1
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_1
lz4-c 1.9.3 h2531618_0
mkl 2020.2 256
mkl-service 2.3.0 py37he8ac12f_0
mkl_fft 1.2.0 py37h23d657b_0
mkl_random 1.1.1 py37h0573a6f_0
ncurses 6.2 he6710b0_1
ninja 1.10.2 py37hff7bd54_0
numpy 1.19.2 py37h54aff64_0
numpy-base 1.19.2 py37hfa32c7d_0
olefile 0.46 py37_0
openssl 1.1.1i h27cfd23_0
packaging 20.9 pyh44b312d_0 conda-forge
perl 5.32.0 h36c2ea0_0 conda-forge
pillow 8.1.0 py37he98fc37_0
pip 20.3.3 py37h06a4308_0
pycparser 2.20 py_2
pyopenssl 19.1.0 py37_0 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pysocks 1.7.1 py37h89c1867_3 conda-forge
python 3.7.9 h7579374_0
python_abi 3.7 1_cp37m conda-forge
pytorch 1.1.0 py3.7_cuda10.0.130_cudnn7.5.1_0 pytorch
readline 8.1 h27cfd23_0
regex 2020.11.13 py37h4abf009_0 conda-forge
requests 2.25.1 pyhd3deb0d_0 conda-forge
sacremoses 0.0.43 pyh9f0ad1d_0 conda-forge
sentencepiece 0.1.92 py37h99015e2_0 conda-forge
setuptools 52.0.0 py37h06a4308_0
six 1.15.0 py37h06a4308_0
sqlite 3.33.0 h62c20be_0
tk 8.6.10 hbc83047_0
tokenizers 0.9.4 py37h17e0dd7_1 conda-forge
torchvision 0.3.0 py37_cu10.0.130_1 pytorch
tqdm 4.56.0 pyhd8ed1ab_0 conda-forge
transformers 4.2.2 pyhd8ed1ab_0 conda-forge
typing_extensions 3.7.4.3 py_0 conda-forge
urllib3 1.26.3 pyhd8ed1ab_0 conda-forge
wheel 0.36.2 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zipp 3.4.0 py_0 conda-forge
zlib 1.2.11 h7b6447c_3
zstd 1.4.5 h9ceee32_0
**Conda info output:**
conda info --all
active environment : lang2
active env location : /home/civil/phd/cez198233/anaconda3/envs/lang2
shell level : 1
user config file : /home/civil/phd/cez198233/.condarc
populated config files : /home/civil/phd/cez198233/.condarc
conda version : 4.8.3
conda-build version : 3.18.11
python version : 3.7.6.final.0
virtual packages : __glibc=2.17
base environment : /home/civil/phd/cez198233/anaconda3 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/civil/phd/cez198233/anaconda3/pkgs
/home/civil/phd/cez198233/.conda/pkgs
envs directories : /home/civil/phd/cez198233/anaconda3/envs
/home/civil/phd/cez198233/.conda/envs
platform : linux-64
user-agent : conda/4.8.3 requests/2.22.0 CPython/3.7.6 Linux/3.10.0-957.el7.x86_64 centos/7.6.1810 glibc/2.17
UID:GID : 86941:11302
netrc file : None
offline mode : False
# conda environments:
#
base /home/civil/phd/cez198233/anaconda3
9pytorch /home/civil/phd/cez198233/anaconda3/envs/9pytorch
lang2 * /home/civil/phd/cez198233/anaconda3/envs/lang2
tf-gpu /home/civil/phd/cez198233/anaconda3/envs/tf-gpu
sys.version: 3.7.6 (default, Jan 8 2020, 19:59:22)
...
sys.prefix: /home/civil/phd/cez198233/anaconda3
sys.executable: /home/civil/phd/cez198233/anaconda3/bin/python
conda location: /home/civil/phd/cez198233/anaconda3/lib/python3.7/site-packages/conda
conda-build: /home/civil/phd/cez198233/anaconda3/bin/conda-build
conda-convert: /home/civil/phd/cez198233/anaconda3/bin/conda-convert
conda-debug: /home/civil/phd/cez198233/anaconda3/bin/conda-debug
conda-develop: /home/civil/phd/cez198233/anaconda3/bin/conda-develop
conda-env: /home/civil/phd/cez198233/anaconda3/bin/conda-env
conda-index: /home/civil/phd/cez198233/anaconda3/bin/conda-index
conda-inspect: /home/civil/phd/cez198233/anaconda3/bin/conda-inspect
conda-metapackage: /home/civil/phd/cez198233/anaconda3/bin/conda-metapackage
conda-render: /home/civil/phd/cez198233/anaconda3/bin/conda-render
conda-server: /home/civil/phd/cez198233/anaconda3/bin/conda-server
conda-skeleton: /home/civil/phd/cez198233/anaconda3/bin/conda-skeleton
conda-verify: /home/civil/phd/cez198233/anaconda3/bin/conda-verify
user site dirs: ~/.local/lib/python3.6
~/.local/lib/python3.7
CIO_TEST: <not set>
CONDA_DEFAULT_ENV: lang2
CONDA_EXE: /home/civil/phd/cez198233/anaconda3/bin/conda
CONDA_PREFIX: /home/civil/phd/cez198233/anaconda3/envs/lang2
CONDA_PROMPT_MODIFIER: (lang2)
CONDA_PYTHON_EXE: /home/civil/phd/cez198233/anaconda3/bin/python
CONDA_ROOT: /home/civil/phd/cez198233/anaconda3
CONDA_SHLVL: 1
HTTPS_PROXY: <set>
HTTP_PROXY: <set>
MANPATH: /usr/share/Modules/3.2.10/share/man::/opt/pbs/19.2.4/share/man
MODULEPATH: /home/soft/modules
PATH: /home/civil/phd/cez198233/anaconda3/bin:/home/civil/phd/cez198233/anaconda3/envs/lang2/bin:/home/civil/phd/cez198233/anaconda3/bin:/home/civil/phd/cez198233/anaconda3/bin:/home/civil/phd/cez198233/anaconda3/condabin:/opt/am/bin:/opt/am/sbin:/opt/pbs/default/bin:/opt/pbs/default/sbin:/usr/share/Modules/3.2.10/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/ibutils/bin:/root/bin:/opt/pbs/19.2.4/bin:/home/civil/phd/cez198233/bin
REQUESTS_CA_BUNDLE: <not set>
SSL_CERT_FILE: <not set>
ftp_proxy: <set>
http_proxy: <set>
https_proxy: <set> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10067/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10067/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10066/comments | https://api.github.com/repos/huggingface/transformers/issues/10066/events | https://github.com/huggingface/transformers/pull/10066 | 803,478,192 | MDExOlB1bGxSZXF1ZXN0NTY5NDAxMDMw | 10,066 | Removing run_pl_glue.py from text classification docs, include run_xnli.py & run_tf_text_classification.py | {
"login": "cbjuan",
"id": 2938045,
"node_id": "MDQ6VXNlcjI5MzgwNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2938045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cbjuan",
"html_url": "https://github.com/cbjuan",
"followers_url": "https://api.github.com/users/cbjuan/followers",
"following_url": "https://api.github.com/users/cbjuan/following{/other_user}",
"gists_url": "https://api.github.com/users/cbjuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cbjuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbjuan/subscriptions",
"organizations_url": "https://api.github.com/users/cbjuan/orgs",
"repos_url": "https://api.github.com/users/cbjuan/repos",
"events_url": "https://api.github.com/users/cbjuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cbjuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sure! Change applied",
"Done, command `make style` applied. Thanks for the guidance",
"Build still not succeeding. I will check ",
"Ah yes, those links don't need the underscores! Good catch and sorry for giving you the wrong example to follow. Just waiting for the last tests and we can merge :-)",
"No worries, @sgugger. Thanks for the guidance :)"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
Since `run_pl_glue.py` is not part of `text-classification` examples after #9010, this PR removes it from the text-classification docs. Also, it adds `run_xnli.py` and `run_tf_text_classification.py` scripts, which are in that folder now.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger may be interested in the PR, he's responsible for docs and the author of #9010
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10066/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10066",
"html_url": "https://github.com/huggingface/transformers/pull/10066",
"diff_url": "https://github.com/huggingface/transformers/pull/10066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10066.patch",
"merged_at": 1612807462000
} |
https://api.github.com/repos/huggingface/transformers/issues/10065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10065/comments | https://api.github.com/repos/huggingface/transformers/issues/10065/events | https://github.com/huggingface/transformers/issues/10065 | 803,442,429 | MDU6SXNzdWU4MDM0NDI0Mjk= | 10,065 | Model templates tests are run twice | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,612 | 1,614 | 1,614 | MEMBER | null | The CI currently runs the model template tests twice when opening a PR from a branch of the huggingface/transformers repo.
The `pull_request_target` should only trigger on external pull requests, or we should remove the `push` target so that the suite only runs once.
Additionally, the `pull_request_target` suite doesn't take into account the changes the PR makes.
ETA until resolved ~ 2 weeks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10065/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10065/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10064/comments | https://api.github.com/repos/huggingface/transformers/issues/10064/events | https://github.com/huggingface/transformers/pull/10064 | 803,440,078 | MDExOlB1bGxSZXF1ZXN0NTY5MzY4OTc5 | 10,064 | Fix model template typo | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for fixing!"
] | 1,612 | 1,612 | 1,612 | MEMBER | null | Fix typo introduced in https://github.com/huggingface/transformers/pull/10033 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10064",
"html_url": "https://github.com/huggingface/transformers/pull/10064",
"diff_url": "https://github.com/huggingface/transformers/pull/10064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10064.patch",
"merged_at": 1612782126000
} |
https://api.github.com/repos/huggingface/transformers/issues/10063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10063/comments | https://api.github.com/repos/huggingface/transformers/issues/10063/events | https://github.com/huggingface/transformers/pull/10063 | 803,419,880 | MDExOlB1bGxSZXF1ZXN0NTY5MzUyNDQ4 | 10,063 | [Finetune Seq2Seq Trainer] fix bert2bert test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"failing test is unrelated -> waiting for @sgugger's approval before merging though."
] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
This PR fixes the slow test `tests/test_trainer_seq2seq.py::Seq2seqTrainerTester::test_finetune_bert2bert`, which was failing because `rouge_score` was not added to the dependencies. In this PR I remove the usage of `datasets.load("rouge")`, since it is unnecessary here: we are only testing that training does not throw an error, and for that it doesn't matter whether we use the rouge metric or accuracy. Removing `rouge` also removes a dependency, which is the better way here IMO.
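For illustration, a dependency-free `compute_metrics` along these lines could look like the sketch below; the shapes and the `-100` label-padding convention are the usual ones in these examples, not this PR's exact diff:

```python
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    preds = np.argmax(logits, axis=-1)  # (batch, seq_len, vocab_size) -> token ids
    mask = labels != -100               # ignore padded label positions
    return {"accuracy": float((preds[mask] == labels[mask]).mean())}
```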
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10063",
"html_url": "https://github.com/huggingface/transformers/pull/10063",
"diff_url": "https://github.com/huggingface/transformers/pull/10063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10063.patch",
"merged_at": 1612789469000
} |
https://api.github.com/repos/huggingface/transformers/issues/10062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10062/comments | https://api.github.com/repos/huggingface/transformers/issues/10062/events | https://github.com/huggingface/transformers/pull/10062 | 803,410,667 | MDExOlB1bGxSZXF1ZXN0NTY5MzQ0NzU2 | 10,062 | Disable temporarily too slow tests (Longformer/LED) | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jplu feel free to merge after fixing the style!"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
The SavedModel tests take way too long for Longformer and LED. To avoid timing out the CI, we disable them for now and will see how to better handle them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10062/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10062",
"html_url": "https://github.com/huggingface/transformers/pull/10062",
"diff_url": "https://github.com/huggingface/transformers/pull/10062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10062.patch",
"merged_at": 1612783952000
} |
https://api.github.com/repos/huggingface/transformers/issues/10061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10061/comments | https://api.github.com/repos/huggingface/transformers/issues/10061/events | https://github.com/huggingface/transformers/issues/10061 | 803,404,708 | MDU6SXNzdWU4MDM0MDQ3MDg= | 10,061 | Dimension error while finetuning longformer with roberta-large EncoderDecoderModel | {
"login": "amiyamandal-dev",
"id": 42173775,
"node_id": "MDQ6VXNlcjQyMTczNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/42173775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amiyamandal-dev",
"html_url": "https://github.com/amiyamandal-dev",
"followers_url": "https://api.github.com/users/amiyamandal-dev/followers",
"following_url": "https://api.github.com/users/amiyamandal-dev/following{/other_user}",
"gists_url": "https://api.github.com/users/amiyamandal-dev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amiyamandal-dev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amiyamandal-dev/subscriptions",
"organizations_url": "https://api.github.com/users/amiyamandal-dev/orgs",
"repos_url": "https://api.github.com/users/amiyamandal-dev/repos",
"events_url": "https://api.github.com/users/amiyamandal-dev/events{/privacy}",
"received_events_url": "https://api.github.com/users/amiyamandal-dev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @amiyamandal-dev, you can only combine `longformer-base-4096` with `roberta-base` since those two models have the same `hidden_size`. Combining `longformer-base-4096` with `roberta-large` will necessarily lead to errors.",
"Thank you for your reply @patrickvonplaten,\r\nAble to understand why it's not working. If I want to run train a model with Encoder `longformer` and Decoder some big model like `roberta-large` , so what steps to follow. That would be a great help.",
"I think you should be able to use longformer-large, e.g. https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa for your case",
"Thanks @patrickvonplaten it worked"
] | 1,612 | 1,612 | 1,612 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.11
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
- maintained examples (not research project or legacy): @patrickvonplaten
Models:
- allenai/longformer-base-4096 with roberta-large
- allenai/longformer-base-4096 with xlm-roberta-large
## To reproduce
Steps to reproduce the behavior:
I have followed the steps on https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16
but switched the decoder model from roberta-base to roberta-large.
**CODE:-**
```
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-large")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
```
model params
```
# enable gradient checkpointing for longformer encoder
model.encoder.config.gradient_checkpointing = True
# set decoding params
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.early_stopping = True
model.length_penalty = 2.0
model.num_beams = 4
encoder_length = 2048
decoder_length = 128
batch_size = 16
```
training params
```
training_args = TrainingArguments(
output_dir="./",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
#predict_from_generate=True,
#evaluate_during_training=True,
do_train=True,
do_eval=True,
logging_steps=1000,
save_steps=1000,
eval_steps=1000,
overwrite_output_dir=True,
warmup_steps=2000,
save_total_limit=3,
fp16=True,
)
# instantiate trainer
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
# start training
trainer.train()
```
**ERROR:-**
```
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 430, in forward
**kwargs_decoder,
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 928, in forward
return_dict=return_dict,
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 808, in forward
return_dict=return_dict,
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 505, in forward
output_attentions,
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 424, in forward
output_attentions,
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 328, in forward
output_attentions,
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 198, in forward
key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/home/jovyan/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1692, in linear
output = input.matmul(weight.t())
RuntimeError: mat1 dim 1 must match mat2 dim 0
```
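As the comments above explain, `EncoderDecoderModel` feeds the encoder's hidden states straight into the decoder's cross-attention, so the two checkpoints need matching `hidden_size` values (768 for the base checkpoints, 1024 for the large ones). A minimal sketch of the pairing that resolved this thread, with a config check up front:

```python
from transformers import AutoConfig, EncoderDecoderModel

enc, dec = "allenai/longformer-large-4096", "roberta-large"

# Both should print hidden_size == 1024; a mismatch here reproduces the error above.
for name in (enc, dec):
    print(name, AutoConfig.from_pretrained(name).hidden_size)

model = EncoderDecoderModel.from_encoder_decoder_pretrained(enc, dec)
```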
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10061/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10060/comments | https://api.github.com/repos/huggingface/transformers/issues/10060/events | https://github.com/huggingface/transformers/pull/10060 | 803,401,639 | MDExOlB1bGxSZXF1ZXN0NTY5MzM3MzI3 | 10,060 | [BART Tests] Fix Bart mask filling pipeline tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Test failure is unrelated. Merging "
] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
Following PR https://github.com/huggingface/transformers/pull/9783, some slow Bart tests were not updated. This PR updates the mask-filling Bart tests accordingly.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10060/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10060",
"html_url": "https://github.com/huggingface/transformers/pull/10060",
"diff_url": "https://github.com/huggingface/transformers/pull/10060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10060.patch",
"merged_at": 1612779909000
} |
https://api.github.com/repos/huggingface/transformers/issues/10059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10059/comments | https://api.github.com/repos/huggingface/transformers/issues/10059/events | https://github.com/huggingface/transformers/pull/10059 | 803,384,426 | MDExOlB1bGxSZXF1ZXN0NTY5MzIyOTUz | 10,059 | Fix slow dpr test | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10059",
"html_url": "https://github.com/huggingface/transformers/pull/10059",
"diff_url": "https://github.com/huggingface/transformers/pull/10059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10059.patch",
"merged_at": 1612777406000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/10058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10058/comments | https://api.github.com/repos/huggingface/transformers/issues/10058/events | https://github.com/huggingface/transformers/issues/10058 | 803,326,382 | MDU6SXNzdWU4MDMzMjYzODI= | 10,058 | When encoding text to feature vectors - Would be awesome to be able to use the simplest tokenizer with a split on spaces | {
"login": "Svito-zar",
"id": 15908492,
"node_id": "MDQ6VXNlcjE1OTA4NDky",
"avatar_url": "https://avatars.githubusercontent.com/u/15908492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Svito-zar",
"html_url": "https://github.com/Svito-zar",
"followers_url": "https://api.github.com/users/Svito-zar/followers",
"following_url": "https://api.github.com/users/Svito-zar/following{/other_user}",
"gists_url": "https://api.github.com/users/Svito-zar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Svito-zar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Svito-zar/subscriptions",
"organizations_url": "https://api.github.com/users/Svito-zar/orgs",
"repos_url": "https://api.github.com/users/Svito-zar/repos",
"events_url": "https://api.github.com/users/Svito-zar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Svito-zar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This solution seems to be working:\r\n```\r\nfrom transformers import DistilBertModel, DistilBertTokenizer\r\nimport torch\r\n\r\ntext_str = \"also du fängst an mit der Stadtrundfahrt\"\r\n\r\n# create DistilBERT tokenizer and model\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-german-cased')\r\nmodel = DistilBertModel.from_pretrained('distilbert-base-german-cased')\r\n\r\n# check if tokens are correct\r\ntokens = tokenizer.basic_tokenizer.tokenize(text_str)\r\nprint(\"Tokens: \", tokens)\r\n\r\n# Encode the curent text\r\ninput_ids = torch.tensor(tokenizer.encode(tokens)).unsqueeze(0)\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0]\r\nprint(last_hidden_states[0,1:-1].shape)\r\n```\r\n\r\nWhat do you think? Is it correct usage?",
"I just realized it was not correct. [Users of StackOverflow](https://stackoverflow.com/questions/66064503/in-huggingface-tokenizers-how-can-i-split-a-sequence-simply-on-spaces/) indicated that.\r\n\r\nRunning the following command: `tokenizer.convert_ids_to_tokens(input_ids.tolist()[0])` indicates that \"fängst\" and \"Stadtrundfahrt\" are encoded with the same id because they are not part of the dictionary :( ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
When using the `DistilBertTokenizer` (or `BertTokenizer`) I would love to tokenize my text by simply splitting it on spaces:
```
['also', 'du', 'fängst', 'an', 'mit', 'der', 'Stadtrundfahrt']
```
instead of the default behavior, which is splitting it into sub-parts:
```
['also', 'du', 'f', '##ängst', 'an', 'mit', 'der', 'Stadt', '##rund', '##fahrt']
```
## Motivation
That's needed in order for the feature-vector length to be the same as the number of words in the text, so that we can have a 1-to-1 correspondence between words and their features.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I have seen that such tokenization can be done using `tokenizer.basic_tokenizer` and I have tried to use it to encode the text:
```
from transformers import DistilBertModel, DistilBertTokenizer
import torch
text_str = "also du fängst an mit der Stadtrundfahrt"
# create DistilBERT tokenizer and model
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-german-cased')
model = DistilBertModel.from_pretrained('distilbert-base-german-cased')
# check if tokens are correct
tokens = tokenizer.basic_tokenizer.tokenize(text_str)
print("Tokens: ", tokens)
# Encode the current text
input_ids = torch.tensor(tokenizer.basic_tokenizer.encode(text_str)).unsqueeze(0)
outputs = model(input_ids)
last_hidden_states = outputs[0]
```
But this code raises an error because `BasicTokenizer` does not have an `encode` attribute yet:
```
Traceback (most recent call last):
File "/home/tarask/Desktop/Work/Code/Git/probabilistic-gesticulator/my_code/data_processing/annotations/feat/test.py", line 15, in <module>
input_ids = torch.tensor(tokenizer.basic_tokenizer.encode(text_str)).unsqueeze(0)
AttributeError: 'BasicTokenizer' object has no attribute 'encode'
```
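The follow-up comments show why encoding basic-tokenized words directly does not work: out-of-vocabulary words all collapse onto the unknown-token id. A common workaround for the requested 1-to-1 word/feature correspondence, using the standard `transformers` API rather than a new tokenizer option, is to pass pre-split words to a fast tokenizer with `is_split_into_words=True` and then average the sub-token vectors per word via `word_ids()`. A minimal sketch:

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-german-cased")
model = DistilBertModel.from_pretrained("distilbert-base-german-cased")

words = "also du fängst an mit der Stadtrundfahrt".split()

# The fast tokenizer still sub-tokenizes each word, but word_ids()
# remembers which word every sub-token came from.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
hidden = model(**enc)[0].squeeze(0)  # (num_subtokens, hidden_size)

word_ids = enc.word_ids()  # None for special tokens, else the word index
features = torch.stack([
    hidden[torch.tensor([w == i for w in word_ids])].mean(dim=0)
    for i in range(len(words))
])
print(features.shape)  # (len(words), hidden_size): one vector per word
```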
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10058/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10057/comments | https://api.github.com/repos/huggingface/transformers/issues/10057/events | https://github.com/huggingface/transformers/pull/10057 | 803,267,505 | MDExOlB1bGxSZXF1ZXN0NTY5MjIzNjcx | 10,057 | Fixed docs for the shape of `scores` in `generate()` | {
"login": "kylie-box",
"id": 29100716,
"node_id": "MDQ6VXNlcjI5MTAwNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/29100716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylie-box",
"html_url": "https://github.com/kylie-box",
"followers_url": "https://api.github.com/users/kylie-box/followers",
"following_url": "https://api.github.com/users/kylie-box/following{/other_user}",
"gists_url": "https://api.github.com/users/kylie-box/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylie-box/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylie-box/subscriptions",
"organizations_url": "https://api.github.com/users/kylie-box/orgs",
"repos_url": "https://api.github.com/users/kylie-box/repos",
"events_url": "https://api.github.com/users/kylie-box/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylie-box/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey @kylie-box, \r\n\r\nCould you run `make style` to fix a problem with the code quality. I think we can merge afterward :-)",
"Hey @patrickvonplaten,\r\n\r\nIt's fixed now. :) \r\n",
"Thanks a lot!"
] | 1,612 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
Fixed the documented shape of `scores` in the `generate()` outputs to be `max_length - 1` for all output classes: the first token, `decoder_start_token_id`, is not generated by the model and therefore has no score.
[Scores in generate()](https://discuss.huggingface.co/t/scores-in-generate/3450)
[Generation Probabilities: How to compute probabilities of output scores for GPT2](https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175)
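A quick sketch illustrating the corrected shape, using `t5-small` (any encoder-decoder model behaves the same way):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: I love you", return_tensors="pt").input_ids
out = model.generate(input_ids, max_length=8, return_dict_in_generate=True, output_scores=True)

# `sequences` starts with decoder_start_token_id, which is never scored,
# so `scores` holds exactly one entry per generated token.
print(out.sequences.shape[-1])  # <= max_length, start token included
print(len(out.scores))          # sequence length - 1
```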
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10057/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10057",
"html_url": "https://github.com/huggingface/transformers/pull/10057",
"diff_url": "https://github.com/huggingface/transformers/pull/10057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10057.patch",
"merged_at": 1619943047000
} |
https://api.github.com/repos/huggingface/transformers/issues/10056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10056/comments | https://api.github.com/repos/huggingface/transformers/issues/10056/events | https://github.com/huggingface/transformers/pull/10056 | 803,056,275 | MDExOlB1bGxSZXF1ZXN0NTY5MDQ5NDQz | 10,056 | Play around with mask-filling of original model | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10056/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10056",
"html_url": "https://github.com/huggingface/transformers/pull/10056",
"diff_url": "https://github.com/huggingface/transformers/pull/10056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10056.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10055/comments | https://api.github.com/repos/huggingface/transformers/issues/10055/events | https://github.com/huggingface/transformers/issues/10055 | 803,048,099 | MDU6SXNzdWU4MDMwNDgwOTk= | 10,055 | Cannnot train Roberta: 2 different errors | {
"login": "neel04",
"id": 11617870,
"node_id": "MDQ6VXNlcjExNjE3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neel04",
"html_url": "https://github.com/neel04",
"followers_url": "https://api.github.com/users/neel04/followers",
"following_url": "https://api.github.com/users/neel04/following{/other_user}",
"gists_url": "https://api.github.com/users/neel04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neel04/subscriptions",
"organizations_url": "https://api.github.com/users/neel04/orgs",
"repos_url": "https://api.github.com/users/neel04/repos",
"events_url": "https://api.github.com/users/neel04/events{/privacy}",
"received_events_url": "https://api.github.com/users/neel04/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please follow the instructions in the template and do not tag more than three people. In this case you are sending notifications to seven different persons for a problem no one can help you solve since you did not give enough information. Let's see why:\r\n\r\nThe first error seems to indicate your labels are strings, which cannot be known for sure since you did not provide an example of what your data look like. Just saying \"My data is private so I can't share it with you\" is not helpful. You could give us the first line of the dataset, potentially masking some private content.\r\n\r\nIf your labels indeed are strings, you need to convert them to some IDs (going from 0 to your number of labels) before trying to train your model with them. You model should also be instantiated with the correct number of labels by passing along `num_labels=xxx` (otherwise you will get other errors down the line).\r\n\r\nThe second error has nothing to do with transformers, you are passing `val.shuffle` as validation data where `val` is a pandas DataFrame and therefore as no `shuffle` method.",
"Sorry for tagging more than 3 people :( my bad\r\nAbout the labels, it is actually a string, and there are about 20 unique labels. Does that mean I should hot encode them (like 20 columns and the value `1` in the correct column) or just simple like:-\r\n\r\n```\r\nID_UNiqe_23, \"Lorem Ipsum .....\", 2\r\nID_UNiqe_2314, \"Lorem Lorem .....\", 13\r\n```\r\nNote that I want to do simple classification, NOT multi-label classification. I shall update you with the problem regarding the `shuffle` method because the error was before I added `validation_data` argument in the fit function. \r\n\r\nLastly, where should the `num_labels` argument be put, since I can't find any reference to that for `TFTrainingArguments`. \r\n\r\nThanx a ton for your help!\r\n",
"Hello @neel04! \r\n\r\nYes, all your labels have to be ids and not strings. You can find a complete example among the examples, and more precisely the text classification one, https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_text_classification.py I suggest you to thoroughly read it as it contains what you need to know for the labels :)\r\n\r\nFor you second example, you cannot call `compille` and `fit` directly on the models, you have to re-create a model by specifying the inputs and output. Such as:\r\n```\r\nr_model = TFRobertForSequenceClassification(....)\r\ninput_ids = tf.keras.layers.Input([None,], dtype=tf.int32, name=\"input_ids\")\r\nattention_mask = tf.keras.layers.Input([None,], dtype=tf.int32, name=\"attention_mask\")\r\ntoken_type_ids = tf.keras.layers.Input([None,], dtype=tf.int32, name=\"token_type_ids\")\r\noutput = model([input_ids, attention_mask, token_type_ids])\r\nmodel = tf.keras.models.Model(inputs=[input_ids, attention_mask, token_type_ids], output=output)\r\n\r\nmodel.compile(....)\r\nmodel.fit(....)\r\n``` ",
"So I converted the labels to Python Integers and tried using the `Trainer()` method but I am getting this error:-\r\n\r\n```\r\nSome weights of the model checkpoint at Roberta-base were not used when initializing TFRobertaForSequenceClassification: ['lm_head']\r\n- This IS expected if you are initializing TFRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing TFRobertaForSequenceClassification from the checkpoint of a model that you expect to be identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of TFRobertaForSequenceClassification were not initialized from the model checkpoint at Roberta-base and are newly initialized: ['classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n<ipython-input-54-f86f69d7497b> in <module>()\r\n 22 )\r\n 23 \r\n---> 24 trainer.train()\r\n\r\n11 frames\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in train(self)\r\n 410 if self.args.past_index >= 0:\r\n 411 self._past = None\r\n--> 412 for step, training_loss in enumerate(self._training_steps(train_ds, optimizer)):\r\n 413 self.global_step = iterations.numpy()\r\n 414 self.epoch_logging = epoch_iter - 1 + (step + 1) / steps_per_epoch\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in _training_steps(self, ds, optimizer)\r\n 457 Returns a generator over training steps (i.e. 
parameters update).\r\n 458 \"\"\"\r\n--> 459 for i, loss in enumerate(self._accumulate_next_gradients(ds)):\r\n 460 if i % self.args.gradient_accumulation_steps == 0:\r\n 461 self._apply_gradients(optimizer)\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in _accumulate_next_gradients(self, ds)\r\n 490 while True:\r\n 491 try:\r\n--> 492 yield _accumulate_next()\r\n 493 except tf.errors.OutOfRangeError:\r\n 494 break\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)\r\n 826 tracing_count = self.experimental_get_tracing_count()\r\n 827 with trace.Trace(self._name) as tm:\r\n--> 828 result = self._call(*args, **kwds)\r\n 829 compiler = \"xla\" if self._experimental_compile else \"nonXla\"\r\n 830 new_tracing_count = self.experimental_get_tracing_count()\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)\r\n 869 # This is the first call of __call__, so we have to initialize.\r\n 870 initializers = []\r\n--> 871 self._initialize(args, kwds, add_initializers_to=initializers)\r\n 872 finally:\r\n 873 # At this point we know that the initialization is complete (or less\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)\r\n 724 self._concrete_stateful_fn = (\r\n 725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n--> 726 *args, **kwds))\r\n 727 \r\n 728 def invalid_creator_scope(*unused_args, **unused_kwds):\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)\r\n 2967 args, kwargs = None, None\r\n 2968 with self._lock:\r\n-> 2969 graph_function, _ = self._maybe_define_function(args, kwargs)\r\n 2970 return graph_function\r\n 2971 \r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)\r\n 3359 \r\n 3360 self._function_cache.missed.add(call_context_key)\r\n-> 3361 graph_function = self._create_graph_function(args, kwargs)\r\n 3362 self._function_cache.primary[cache_key] = graph_function\r\n 3363 \r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)\r\n 3204 arg_names=arg_names,\r\n 3205 override_flat_arg_shapes=override_flat_arg_shapes,\r\n-> 3206 capture_by_value=self._capture_by_value),\r\n 3207 self._function_attributes,\r\n 3208 function_spec=self.function_spec,\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)\r\n 988 _, original_func = tf_decorator.unwrap(python_func)\r\n 989 \r\n--> 990 func_outputs = python_func(*func_args, **func_kwargs)\r\n 991 \r\n 992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)\r\n 632 xla_context.Exit()\r\n 633 else:\r\n--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n 635 return out\r\n 636 \r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in 
wrapper(*args, **kwargs)\r\n 975 except Exception as e: # pylint:disable=broad-except\r\n 976 if hasattr(e, \"ag_error_metadata\"):\r\n--> 977 raise e.ag_error_metadata.to_exception(e)\r\n 978 else:\r\n 979 raise\r\n\r\nAttributeError: in user code:\r\n\r\n /usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:488 _accumulate_next *\r\n return self._accumulate_gradients(per_replica_features, per_replica_labels)\r\n /usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:498 _accumulate_gradients *\r\n per_replica_loss = self.args.strategy.experimental_run_v2(\r\n\r\n AttributeError: 'OneDeviceStrategy' object has no attribute 'experimental_run_v2'\r\n\r\n```\r\nAny Idea what it might be? code is still the same.\r\n```\r\n\r\nfrom transformers import TFTrainingArguments, TFTrainer\r\n\r\ntraining_args = TFTrainingArguments(\r\n output_dir='./results', # output directory\r\n num_train_epochs=3, # total number of training epochs\r\n per_device_train_batch_size=16, # batch size per device during training\r\n per_device_eval_batch_size=64, # batch size for evaluation\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=10,\r\n)\r\n\r\nwith training_args.strategy.scope():\r\n model = TFRobertaForSequenceClassification.from_pretrained(\"roberta-base\")\r\n\r\ntrainer = TFTrainer(\r\n model=model, # the instantiated Transformers model to be trained\r\n args=training_args, # training arguments, defined above\r\n train_dataset=train_dataset, # training dataset\r\n eval_dataset=val_dataset # evaluation dataset\r\n)\r\n\r\ntrainer.train()\r\n\r\n```",
"This error should be fixed if you use the latest trainer 4.3.1 version of transformers :) ",
"I am on `4.4.0.dev0`. Downgrade?",
"from what I can see you have the error ` AttributeError: 'OneDeviceStrategy' object has no attribute 'experimental_run_v2'`. This error has been fixed in 4.2. So this tells me that you are using an outdated version of transformers.",
"Shouldn't the latest \"Bleeding-edge\" version at `4.4.0` be better than that of `4.2`?\r\nOr is this just a bug in the latest?\r\n\r\n**EDIT:-** I used the specific version (`4.3.1`) from Pypi and ran the code. This time it just produced some warnings and stopped (i.e the cell completed execution). It didn't start training despite calling `trainer.train()`.\r\n\r\nThis is the output:-\r\n\r\nAll model checkpoint layers were used when initializing TFRobertaForSequenceClassification.\r\n```\r\nSome layers of TFRobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\n\r\n```\r\nGPU memory is still occupied but Usage drops to `0%`. Any Idea what causes it not to train? @jplu ",
"That was a bug in the versions < 4.2. So you might have some conflicts in your env with multiple versions installed.",
"So the bug still remains - **I cannot train the model with `Trainer()`** but using native Tensorflow+edit for instantiating the number of labels, it's now training alright. Thanx a lot @jplu and @sgugger for your help! :+1: :1st_place_medal: \r\n\r\nNow its just to figure out how to get Trainer to work - I want to do a Hyperparameter search which apparently Trainer can be used for interfacing. Let's see how that's gonna be resolved",
"> Now its just to figure out how to get Trainer to work - I want to do a Hyperparameter search which apparently Trainer can be used for interfacing. \r\n\r\nThis isn't implemented on the TensorFlow side, only PyTorch.",
"Well that's bad. Any other way I could use HypSearch in TF?",
"> > Now its just to figure out how to get Trainer to work - I want to do a Hyperparameter search which apparently Trainer can be used for interfacing.\r\n> \r\n> This isn't implemented on the TensorFlow side, only PyTorch.\r\n\r\nAlso, if this isn't implemented on TF side, then why is there a toggle button for TF version, and why don't the docs tell that?\r\nhttps://huggingface.co/transformers/training.html?highlight=fine%20tune#trainer",
"Fine-tuning is implemented for both PyTorch and TF. You were talking about hyper-parameter search.",
"I am talking about using `Trainer()`. I can't use it - the cell executes successfully but it never starts training\r\n",
"Also, I found that the model has pretty low validation accuracy (~1-2%) and it doesn't go any further (this was achieved in 1 epoch). I suspect that the problem could be the activation function not being `Sigmoid` which is required for `Categorical Crossentropy loss`. Should I post my questions here or should I make a new issue? @jplu @sgugger "
] | 1,612 | 1,613 | 1,613 | NONE | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No (Single GPU) --> **COLAB**
### Who can help
I am not sure, since the error is very vague and untraceable
## Information
Model I am using (Bert, XLNet ...): Roberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
It is a private dataset, so I am not at liberty to share it. However, I can give an idea of what the `csv` looks like:
```
,ID,Text,Label
......................
```
> I do not think there can be anything wrong with the DataFrame as I am taking data from specific columns and converting them to numpy arrays for the rest of the steps in the HF "Fine-tuning" guide.
## To reproduce
Steps to reproduce the behavior:
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers
!pip install -e .
```

```
train_text = list(train['Text'].values)
train_label = list(train['Label'].values)
val_text = list(val['Text'].values)
val_label = list(val['Label'].values)
from transformers import RobertaTokenizer, TFRobertaForSequenceClassification
import tensorflow as tf
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base')
train_encodings = tokenizer(train_text, truncation=True, padding=True)
val_encodings = tokenizer(val_text, truncation=True, padding=True)
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(train_encodings),
train_label
))
val_dataset = tf.data.Dataset.from_tensor_slices((
dict(val_encodings),
val_label
))
```
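As the discussion above points out, the first error further down stems from these labels being strings. A hedged sketch of converting them to integer ids before building the datasets (the column name comes from the CSV header above; everything else is illustrative):

```
label_names = sorted(train['Label'].unique())
label2id = {name: idx for idx, name in enumerate(label_names)}
train_label = [label2id[label] for label in train['Label']]
val_label = [label2id[label] for label in val['Label']]

# The model also needs to know how many classes to expect:
# TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=len(label_names))
```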
All this code is common. However, the errors now differ depending on the training method.
### Training using `TFTrainer`
Code:
```
from transformers import TFTrainingArguments, TFTrainer
training_args = TFTrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
with training_args.strategy.scope():
model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")
trainer = TFTrainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train()
```
**ERROR:-**
```
All model checkpoint layers were used when initializing TFRobertaForSequenceClassification.
Some layers of TFRobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-52-f86f69d7497b> in <module>()
22 )
23
---> 24 trainer.train()
10 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in train(self)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
826 tracing_count = self.experimental_get_tracing_count()
827 with trace.Trace(self._name) as tm:
--> 828 result = self._call(*args, **kwds)
829 compiler = "xla" if self._experimental_compile else "nonXla"
830 new_tracing_count = self.experimental_get_tracing_count()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
869 # This is the first call of __call__, so we have to initialize.
870 initializers = []
--> 871 self._initialize(args, kwds, add_initializers_to=initializers)
872 finally:
873 # At this point we know that the initialization is complete (or less
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
724 self._concrete_stateful_fn = (
725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 726 *args, **kwds))
727
728 def invalid_creator_scope(*unused_args, **unused_kwds):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2967 args, kwargs = None, None
2968 with self._lock:
-> 2969 graph_function, _ = self._maybe_define_function(args, kwargs)
2970 return graph_function
2971
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3359
3360 self._function_cache.missed.add(call_context_key)
-> 3361 graph_function = self._create_graph_function(args, kwargs)
3362 self._function_cache.primary[cache_key] = graph_function
3363
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3204 arg_names=arg_names,
3205 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3206 capture_by_value=self._capture_by_value),
3207 self._function_attributes,
3208 function_spec=self.function_spec,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
988 _, original_func = tf_decorator.unwrap(python_func)
989
--> 990 func_outputs = python_func(*func_args, **func_kwargs)
991
992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
632 xla_context.Exit()
633 else:
--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
635 return out
636
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in bound_method_wrapper(*args, **kwargs)
3885 # However, the replacer is still responsible for attaching self properly.
3886 # TODO(mdan): Is it possible to do it here instead?
-> 3887 return wrapped_fn(*args, **kwargs)
3888 weak_bound_method_wrapper = weakref.ref(bound_method_wrapper)
3889
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
975 except Exception as e: # pylint:disable=broad-except
976 if hasattr(e, "ag_error_metadata"):
--> 977 raise e.ag_error_metadata.to_exception(e)
978 else:
979 raise
TypeError: in user code:
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:669 distributed_training_steps *
nb_instances_in_batch = self._compute_nb_instances(batch)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:681 _compute_nb_instances *
nb_instances = tf.reduce_sum(tf.cast(labels != -100, dtype=tf.int32))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:1786 tensor_not_equals
return gen_math_ops.not_equal(self, other, incompatible_shape_error=False)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py:6412 not_equal
name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:531 _apply_op_helper
repr(values), type(values).__name__, err))
TypeError: Expected string passed to parameter 'y' of op 'NotEqual', got -100 of type 'int' instead. Error: Expected string, got -100 of type 'int' instead.
```
### Using native TensorFlow code (from the official example)
CODE:
```
from transformers import TFRobertaForSequenceClassification
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base')
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(16), validation_data=val.shuffle(1000).batch(16), epochs=3, batch_size=16)
```
**ERROR:-**
```
All model checkpoint layers were used when initializing TFRobertaForSequenceClassification.
Some layers of TFRobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-51-a13d177c752e> in <module>()
5 optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
6 model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn
----> 7 model.fit(train_dataset.shuffle(1000).batch(16), validation_data=val.shuffle(1000).batch(16), epochs=3, batch_size=16)
/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py in __getattr__(self, name)
5139 if self._info_axis._can_hold_identifiers_and_holds_name(name):
5140 return self[name]
-> 5141 return object.__getattribute__(self, name)
5142
5143 def __setattr__(self, name: str, value) -> None:
AttributeError: 'DataFrame' object has no attribute 'shuffle'
```
This is quite surprising since the two errors are very different and I can't find many fixes online. I checked the datatypes of the input data and they seem correct.
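For reference, a hedged sketch combining the two fixes suggested in the discussion above: integer labels with `num_labels` set at load time, and passing the tokenized `val_dataset` (a `tf.data.Dataset`) rather than the raw `val` DataFrame to `fit`. The value `num_labels=20` is an assumption based on the dataset having about 20 unique labels:

```
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=20)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss)
model.fit(
    train_dataset.shuffle(1000).batch(16),
    validation_data=val_dataset.batch(16),  # the tf.data.Dataset, not the DataFrame
    epochs=3,
)
```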
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The model should start training on this `SequenceClassification` task and achieve good accuracy on it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10055/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10054/comments | https://api.github.com/repos/huggingface/transformers/issues/10054/events | https://github.com/huggingface/transformers/issues/10054 | 803,020,120 | MDU6SXNzdWU4MDMwMjAxMjA= | 10,054 | Error: "Transformers CLI tool: error: unrecognized arguments: kvantorium-small" while deploying machine learning model to hugging face profile | {
"login": "MLDovakin",
"id": 78375175,
"node_id": "MDQ6VXNlcjc4Mzc1MTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/78375175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MLDovakin",
"html_url": "https://github.com/MLDovakin",
"followers_url": "https://api.github.com/users/MLDovakin/followers",
"following_url": "https://api.github.com/users/MLDovakin/following{/other_user}",
"gists_url": "https://api.github.com/users/MLDovakin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MLDovakin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MLDovakin/subscriptions",
"organizations_url": "https://api.github.com/users/MLDovakin/orgs",
"repos_url": "https://api.github.com/users/MLDovakin/repos",
"events_url": "https://api.github.com/users/MLDovakin/events{/privacy}",
"received_events_url": "https://api.github.com/users/MLDovakin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Since version v4.0.0, we recommend using git and git-lfs to upload your models. Could you take a look at the following documentation page: [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html) and try what is shown in that document? Thank you!",
"> Hello! Since version v4.0.0, we recommend using git and git-lfs to upload your models. Could you take a look at the following documentation page: [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html) and try what is shown in that document? Thank you!\r\n\r\n@LysandreJik That is, I first need to connect github to my google colab and only then upload my model files to the hugging face? Thank you very much in advance\r\n\r\n",
"No, you'll only need a huggingface hub account to upload to it.",
"@LysandreJik Error Again\r\n````\r\n!transformers-cli upload https://huggingface.co/Fidlobabovic/your-model-name\r\n\r\n2021-02-08 19:24:21.906281: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\r\nusage: transformers-cli <command> [<args>]\r\nTransformers CLI tool: error: unrecognized arguments: Fidlobabovic/your-model-name\r\n````",
"Hi @IndianMLGay. I'm sorry but I don't understand what is the issue. Nowhere in the file I linked is there a `transformers-cli upload` command.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | I work with Hugging Face on Google Colab
My process of training and tuning the model looks like this: the model is successfully trained and saved in the models folder, but when I try to upload it to my Hugging Face repository, an error occurs.
````
tokenizer = PreTrainedTokenizerFast(tokenizer_file='/content/bert-datas7.json')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="/content/For_ITMO (1).txt",block_size=128)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=mlm, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="models/kvantorium-small",
overwrite_output_dir=True,
num_train_epochs=1000,
per_gpu_train_batch_size=64,
save_steps=10_000,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset)
trainer.train()
````
#8480
When I try to upload it to my Hugging Face repository, this error occurs. Why is it raised, and what does the correct usage look like according to the documentation? My folder in the Google Colab directory is called "/content/models/kvantorium-small"; this is the folder where I save the model after training.
````
!transformers-cli login
!transformers-cli upload "/content/models/kvantorium-small"
2021-02-07 16:56:32.089450: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
usage: transformers-cli <command> [<args>]
Transformers CLI tool: error: unrecognized arguments: /content/models/kvantorium-small
````
Is this a Google Colab problem, and how can I rewrite the command or fix the error?
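As suggested in the discussion above, since v4.0.0 the documented upload path is git + git-lfs rather than `transformers-cli upload`. A hedged sketch of that flow in Colab (the repo name and paths are assumptions based on this issue; `git push` will prompt for huggingface.co credentials):
````
!sudo apt-get install git-lfs
!git lfs install
!transformers-cli login
!transformers-cli repo create kvantorium-small
!git clone https://huggingface.co/Fidlobabovic/kvantorium-small
!cp -r /content/models/kvantorium-small/* kvantorium-small/
%cd kvantorium-small
!git add . && git commit -m "add model" && git push
````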
Sorry in advance if my issue seems to be incorrect to you, I'm new to git.
I also attached my data and my tokenizer file
[For_ITMO (1).txt](https://github.com/huggingface/transformers/files/5939713/For_ITMO.1.txt)
[For_ITMO.txt-vocab (1) (1).txt](https://github.com/huggingface/transformers/files/5939719/For_ITMO.txt-vocab.1.1.txt)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10054/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10053/comments | https://api.github.com/repos/huggingface/transformers/issues/10053/events | https://github.com/huggingface/transformers/pull/10053 | 803,010,706 | MDExOlB1bGxSZXF1ZXN0NTY5MDE1MjU2 | 10,053 | Add CharacterBERT model [WIP] | {
"login": "helboukkouri",
"id": 36409068,
"node_id": "MDQ6VXNlcjM2NDA5MDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/36409068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helboukkouri",
"html_url": "https://github.com/helboukkouri",
"followers_url": "https://api.github.com/users/helboukkouri/followers",
"following_url": "https://api.github.com/users/helboukkouri/following{/other_user}",
"gists_url": "https://api.github.com/users/helboukkouri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helboukkouri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helboukkouri/subscriptions",
"organizations_url": "https://api.github.com/users/helboukkouri/orgs",
"repos_url": "https://api.github.com/users/helboukkouri/repos",
"events_url": "https://api.github.com/users/helboukkouri/events{/privacy}",
"received_events_url": "https://api.github.com/users/helboukkouri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"I will, thanks for the ping @helboukkouri!",
"Hi @helboukkouri, how are you doing? What do you think of the proposed changes above? Would you like us to take over from here?",
"> Hi @helboukkouri, how are you doing? What do you think of the proposed changes above? Would you like us to take over from here?\r\n\r\nHi @LysandreJik, sorry for the delay. No need to take over, I have been working on another topic (pre-training code for CharacterBERT) but I'll go back to the PR as soon as possible (beginning of next week at the latest).\r\n\r\nI will fix the documentation then move on to the tests - about which I wanted to know if it is okay to change some of the code that is common to all models. I think that most tests that do not pass with CharacterBERT don't because they expect the input to be shaped as `(batch size, seq length)` instead of `(batch size, seq length, token length)`.\r\n\r\nCheers!",
"Really cool, looking forward to it!\r\n\r\nFor tests in the common tests that you believe don't apply to CharacterBERT, I would recommend you override them in the CharacterBERT test file directly. For example TAPAS's tokenizer didn't fit to most common tests as it is a table-based model/tokenizer, so we reimplemented most tests in the test file directly:\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/tests/test_tokenization_tapas.py#L49\r\n\r\nThis is an example for the tokenizer, but you can apply the same for the model!",
"I still need to fix the tests. I'll try to progress on that asap 😊",
"Not a bug, just a general comment:\r\n\r\nThis is perfectly working when using a (potential future) CharacterBERT model, that is cased:\r\n\r\nTokenizer config:\r\n\r\n```json\r\n{\r\n \"do_lower_case\": false,\r\n \"strip_accents\":false\r\n}\r\n```\r\n\r\n(place it under a folder e.g. `./convbert-tokenizer-test`.\r\n\r\nThen:\r\n\r\n```python\r\nIn [1]: from transformers import CharacterBertTokenizer\r\n\r\nIn [2]: tokenizer = CharacterBertTokenizer.from_pretrained(\"./convbert-tokenizer-test\")\r\n\r\nIn [3]: tokenizer.tokenize(\"Nice weather in Munich töday!\")\r\nOut[3]: ['Nice', 'weather', 'in', 'Munich', 'töday', '!']\r\n```\r\n\r\nis perfectly working: no lowercasing and accent stripping (as defined in tokenizer configuration) is done.",
"Hi @LysandreJik, so I made sure all the tests in `tests/test_modeling_character_bert.py` pass by:\r\n\r\n- Fixing some issues regarding the input shape\r\n- Removing the inheritance to ModelTesterMixin\r\n\r\nThe command I used to run the tests was:\r\n```python\r\npython -m pytest ./tests/test_modeling_character_bert.py\r\n```\r\n\r\nNow I guess I could copy/paste and adapt the tests from `ModelTesterMixin` to include them back. But this seems against the whole point of having common tests (and will make the test file for CharacterBERT pretty verbose). Should I do it anyway ? Is it necessary ?\r\n\r\nWanted your input before moving forward. 😊\r\n\r\nAlso, at some point I will probably need to add the hardcoded hyperparameters like the maximum token length and the character embedding dimension (basically all of [this](https://github.com/helboukkouri/transformers/blob/add-character-bert/src/transformers/models/character_bert/modeling_character_bert.py#L232-L249)) to the `CharacterBertConfig` class.",
"Hi @helboukkouri, that's fantastic news!!\r\n\r\nRegarding the `ModelTesterMixin`, a better approach would be to keep your class as a child of it, but to re-implement directly in that class the tests that need to receive changes. By overriding those tests, you can choose what happens in them.\r\n\r\nRegarding the configuration, by all means do! You can add as many CharacterBERT specific configuration properties as you need, and remove the ones that you don't need, of course.\r\n",
"> Regarding the `ModelTesterMixin`, a better approach would be to keep your class as a child of it, but to re-implement directly in that class the tests that need to receive changes.\r\n\r\nOf course! Should've thought of that :)\r\n\r\nSo, I added the tests from `ModelTesterMixin`. They all pass with the exception of those related to embedding tying which I bypass as `CharacterBERT` does not have a WordPiece embedding layer.\r\n\r\nI will now complete `CharacterBertConfig` to include the parameters from the CharacterCNN module.\r\nIs there anything else to do ? Please let me know 😊",
"Hey @helboukkouri ,\r\n\r\ndo you think it makes sense to add the `max_word_length` value to that configuration :thinking: \r\n\r\nAs we discussed this parameter in the corresponding CharacterBERT issue, I'm not sure if it ever will be changed/adjusted :thinking: \r\n\r\nOn the other side it is included in the ELMo [configuration files](https://s3-us-west-2.amazonaws.com/allennlp/models/elmo/2x4096_512_2048cnn_2xhighway_5.5B/elmo_2x4096_512_2048cnn_2xhighway_5.5B_options.json) and it also can be seen as a \"magic number\" in the code...",
"> do you think it makes sense to add the `max_word_length` value to that configuration 🤔\r\n\r\nHi @stefan-it,\r\n\r\nI'm actually working on it right now. It will be a parameter in `tokenizer_config.json` as well as a possible argument for `CharacterBertTokenizer`. This way, it possible for anybody to choose to train models that handle shorter or longer words.\r\n\r\nI'm also adding a whole bunch of parameters in the model configuration like the character embeddings dimension and number of highway layers in the `CharacterCnn` module. The only fixed value will be the character \"vocabulary size\". This will stay at 263 for the 256 possible utf-8 bytes (I'm not sure I'm using the right term here) + the special symbols for [CLS], [SEP], [MASK], [PAD], Beginning/EndOfWord, CharacterPadding.",
"@LysandreJik Please let me know if there is anything else I can do. 😊",
"It seems most of the tests failing are solved on `master`, do you mind rebasing on `master`? I'll give a deeper look ASAP!",
"Sorry Lysandre, I'm not really used to doing merges and rebases. I guess this is good practice ^^\r\nPlease let me know if I somehow managed to do what you needed me to do 😊",
"Flair team here 😅\r\n\r\nThis is currently not working:\r\n\r\n```python\r\nfrom transformers import CharacterBertTokenizer\r\n\r\ntokenizer = CharacterBertTokenizer() \r\n\r\ntokenized_string = tokenizer.tokenize(\"Hello from Munich!\")\r\n\r\nencoded_inputs = tokenizer.encode_plus(tokenized_string, max_length=1024, \r\n truncation=True, stride=512, return_overflowing_tokens=True)\r\n```\r\n\r\nProblem comes with the `return_overflowing_tokens` argument, it throws the following error message:\r\n\r\n```\r\n/mnt/character-bert-pretraining/external/transformers/src/transformers/tokenization_utils_base.py in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)\r\n 2357 )\r\n 2358\r\n-> 2359 return self._encode_plus(\r\n 2360 text=text,\r\n 2361 text_pair=text_pair,\r\n\r\n/mnt/character-bert-pretraining/external/transformers/src/transformers/tokenization_utils.py in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)\r\n 440 second_ids = get_input_ids(text_pair) if text_pair is not None else None\r\n 441\r\n--> 442 return self.prepare_for_model(\r\n 443 first_ids,\r\n 444 pair_ids=second_ids,\r\n\r\n/mnt/character-bert-pretraining/external/transformers/src/transformers/tokenization_utils_base.py in prepare_for_model(self, ids, pair_ids, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, prepend_batch_axis, **kwargs)\r\n 2807 # Padding\r\n 2808 if padding_strategy != PaddingStrategy.DO_NOT_PAD or return_attention_mask:\r\n-> 2809 encoded_inputs = self.pad(\r\n 2810 encoded_inputs,\r\n 2811 max_length=max_length,\r\n\r\n/mnt/character-bert-pretraining/external/transformers/src/transformers/tokenization_utils_base.py in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)\r\n 2635\r\n 2636 batch_size = len(required_input)\r\n-> 2637 assert all(\r\n 2638 len(v) == batch_size for v in encoded_inputs.values()\r\n 2639 ), \"Some items in the output dictionary have a different batch size than others.\"\r\n\r\nAssertionError: Some items in the output dictionary have a different batch size than others.\r\n```\r\n\r\nWhen a user wants to use the `encode_plus` function we should maybe add some additional checks to avoid these errors :thinking: \r\n",
"It seems that `attention_scores` and `attention_mask` have different shapes, I just tried the following example from the PR:\r\n\r\n```python\r\nfrom transformers import CharacterBertTokenizer, CharacterBertForNextSentencePrediction\r\nimport torch\r\n\r\ntokenizer = CharacterBertTokenizer.from_pretrained('helboukkouri/character-bert')\r\nmodel = CharacterBertForNextSentencePrediction.from_pretrained('helboukkouri/character-bert')\r\n\r\nprompt = \"In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced.\"\r\nnext_sentence = \"The sky is blue due to the shorter wavelength of blue light.\"\r\n\r\nencoding = tokenizer(prompt, next_sentence, return_tensors='pt')\r\noutputs = model(**encoding, labels=torch.LongTensor([1]))\r\nlogits = outputs.logits\r\n\r\nassert logits[0, 0] < logits[0, 1] # next sentence was random\r\n```\r\n\r\nThis throws:\r\n\r\n```bash\r\n File \"nsp.py\", line 11, in <module>\r\n outputs = model(**encoding, labels=torch.LongTensor([1]))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 744, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/character-bert-pretraining/external/transformers/src/transformers/models/character_bert/modeling_character_bert.py\", line 1611, in forward\r\n outputs = self.character_bert(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 744, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/character-bert-pretraining/external/transformers/src/transformers/models/character_bert/modeling_character_bert.py\", line 1149, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 744, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/character-bert-pretraining/external/transformers/src/transformers/models/character_bert/modeling_character_bert.py\", line 742, in forward\r\n layer_outputs = layer_module(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 744, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/character-bert-pretraining/external/transformers/src/transformers/models/character_bert/modeling_character_bert.py\", line 629, in forward\r\n self_attention_outputs = self.attention(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 744, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/character-bert-pretraining/external/transformers/src/transformers/models/character_bert/modeling_character_bert.py\", line 557, in forward\r\n self_outputs = self.self(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 744, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/mnt/character-bert-pretraining/external/transformers/src/transformers/models/character_bert/modeling_character_bert.py\", line 480, in forward\r\n attention_scores = attention_scores + attention_mask\r\nRuntimeError: The size of tensor a (35) must match the size of tensor b (50) at non-singleton dimension 3\r\n\r\n```",
"Hi @helboukkouri, the rebase was exactly what I wanted you to do, so that's great! How is it going? Can we help in any way regarding the failing tests/Stefan's comments?",
"> Hi @helboukkouri, the rebase was exactly what I wanted you to do, so that's great! How is it going? Can we help in any way regarding the failing tests/Stefan's comments?\r\n\r\nHi @LysandreJik, so right now `CharacterBertTokenizer` works well if you simply do tokenize/convert_token_to_ids, but I still need to make sure other methods work well (e.g. encode_plus - i.e. stefan's comments). I'll work on it when I get the chance. 😊\r\n\r\nOther than that I'm not really sure why there are still tests that do not pass. After rebasing, only:\r\n\r\n- run_tests_templates\r\n- build_doc\r\n- check_code_quality \r\n\r\nhad issues. But now I see that more tests break... I'll try to investigate the other tests but in the meantime, do you have any pointers for solving the three tests listed above ?\r\n\r\nCheers!",
"> I'll try to investigate the other tests\r\n\r\nIs there any chance that I need to rebase again ? There seems to be some conflicts in\r\n\r\n- src/transformers/__init__.py\r\n- src/transformers/models/auto/configuration_auto.py\r\n- src/transformers/models/auto/modeling_auto.py",
"You might have to rebase once again indeed, the issue here is because of torch 1.8.0. Let me know if you would want me to handle it, happy to!\r\n\r\n- Regarding the templates, that's actually the same issue as the code quality, so you can ignore that\r\n- The build_doc gives you a bit of information when you open the failure:\r\n```\r\n/home/circleci/transformers/src/transformers/models/character_bert/configuration_character_bert.py:docstring of transformers.CharacterBertConfig:17:Unexpected indentation.\r\n```\r\n- And finally, you have some code quality issues. In order to do that, you should:\r\n - Install the code quality tools with `pip install -e .[quality]` at the root of the repo\r\n - Run `make fixup` which should tell you the issues with the style in your code\r\n\r\nPlease let me know if I can be of any help, happy to!",
"Hi, I've been playing around with the implementation as well, specifically the `CharacterBertForMaskedLM`. I keep running into issues with the decoding, which (as I've read in the code) is not currently implemented. Specifically, I'm having a hard time to understand how you are aligning the MLM vocab size (100k in the available model snapshot) with the character-level representations, and how you would (schematically) re-label predictions from your model.\r\n\r\nIf there is any way to help out with the MLM setup specifically, let me know!",
"> I'm having a hard time to understand how you are aligning the MLM vocab size (100k in the available model snapshot) with the character-level representations\r\n\r\nHi @dennlinger, glad to hear you're interested in CharacterBERT. I added a MLM vocabulary just as a workaround to allow me to do masked language modeling since CharacterBERT does not have a wordpiece vocab, which in the case of BERT is re-used at the output layer during MLM. So in my case, it is only used for this purpose.\r\n\r\nHow are you trying to use CharacterBERT ? In a seq2seq context ? When do you need decoding ? ",
"The specific use case is literally just to predict masked tokens, which I'm using in the following example right now:\r\n\r\n```python\r\nfrom transformers import CharacterBertTokenizer, CharacterBertForMaskedLM, BertTokenizer\r\nimport torch\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer = CharacterBertTokenizer.from_pretrained(\"helboukkouri/character-bert\")\r\n model = CharacterBertForMaskedLM.from_pretrained(\"helboukkouri/character-bert\")\r\n \r\n tokens = tokenizer.tokenize(\"[CLS] This is a [MASK] [SEP]\")\r\n input_tensor = torch.tensor(tokenizer.convert_tokens_to_ids(tokens)).unsqueeze(0)\r\n\r\n with torch.no_grad():\r\n outputs = model(input_tensor)\r\n predictions = outputs[0]\r\n\r\n # How can we interpret the output idx from this?\r\n predicted_index = torch.argmax(predictions[0, 4, :]).item()\r\n # This fails currently with NotImplementedError\r\n predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n```\r\n\r\nI think the main issue I have is the [empty vocab file](https://huggingface.co/helboukkouri/character-bert/blob/main/vocab.txt), since I am assuming that you had a specific 100k vocab during your training, right?",
"> I am assuming that you had a specific 100k vocab during your training, right?\r\n\r\nOh I see now. So, the first thing to know is that the checkpoint in \"helboukkouri/character-bert\" only has weights for the `CharacterBertModel` part. So, no pre-trained weights for the MLM/NSP parts. So, even if the tokenizer worked properly, you would have had meaningless outputs 😊\r\n\r\nOn the other hand, the `convert_ids_to_tokens` method from the tokenizer is indeed \"missing\", and that is because there are no \"token ids\" with CharacterBERT as each token is seen as a sequence of character/byte ids. However, I think I should implement anyway but in a way where it takes a tensor `(batch, token sequence, character ids)` and returns `(batch, token sequence)`. I'll add it to my todo list :)\r\n\r\nAlso, I think I can manage to recover the mlm vocab / checkpoints and add them at some point to \"helboukkouri/character-bert\" so that the entire `CharacterBertForPretraining` model can be loaded and give meaningful output. I'll also need to find a solution for easily recovering the tokens from the predicted MLM ids (maybe add a `convert_mlm_ids_to_tokens` to the tokenizer ?)\r\n\r\nHope this is helpful",
"Ah, it's making more sense now, the detailed explanation did help a lot! :)\r\nMaybe as a last question: It seems the MLMHead is only able to return `(batch, seq_len, MLM_vocab_size)`, and (as far as I can tell) not the required `(batch, seq_len,character_ids)`. How would one acquire the necessary character predictions from the MLM model?\r\n\r\n I'll continue to watch the PR, thanks for all the effort!",
"> Please let me know if I can be of any help, happy to!\r\n\r\nThanks @LysandreJik! I don't mind continuing to work on PR but since I have some pressing things to handle first, progress may be slow for some time. If you think you can help solve any issues in the meantime please don't hesitate 😊",
"Here's my conversion script that I've used to convert my pre-trained model into a masked lm model:\r\n\r\n```python\r\nimport torch\r\n\r\nfrom transformers import CharacterBertConfig, CharacterBertForMaskedLM\r\n\r\norig_model = torch.load(\"./ckpt_1184.pt\")\r\n\r\norig_state_dict = orig_model[\"model\"]\r\n\r\n# wget https://huggingface.co/helboukkouri/character-bert/resolve/main/config.json\r\nconfig = CharacterBertConfig.from_pretrained(\"./\")\r\n\r\nmodel = CharacterBertForMaskedLM(config)\r\n\r\nignore_keys = [\"character_bert.pooler.dense.weight\",\r\n \"character_bert.pooler.dense.bias\",\r\n \"cls.seq_relationship.weight\",\r\n \"cls.seq_relationship.bias\"]\r\n\r\nfor key in ignore_keys:\r\n del orig_model[\"model\"][key]\r\n\r\nmodel.load_state_dict(orig_model[\"model\"], strict=True)\r\n\r\nmodel.half()\r\n\r\nmodel.save_pretrained(\"./export\")\r\n```\r\n\r\nHowever, when I pass a (masked) sequence to the model, it returns always the same predictions. Here's some example code:\r\n\r\n```python\r\nfrom transformers import CharacterBertTokenizer, CharacterBertForMaskedLM\r\nimport torch\r\n\r\nmodel_name = \"./export\"\r\n\r\ntokenizer = CharacterBertTokenizer.from_pretrained(model_name)\r\nmodel = CharacterBertForMaskedLM.from_pretrained(model_name)\r\n\r\nsentence = \"Heute ist ein [MASK] Tag\"\r\nencoding = tokenizer.encode_plus(sentence, return_tensors=\"pt\")\r\n\r\nmasked_index = 4\r\n\r\npredictions = model(input_ids=encoding[\"input_ids\"])[0]\r\npredicted_index = torch.argmax(predictions[0, masked_index]).item()\r\n\r\nmlm_vocabulary = [line.strip().split()[-1] for line in open(\"mlm_vocab.txt\", \"rt\")]\r\n\r\nprint(\"Predicted token:\", mlm_vocabulary[predicted_index])\r\n```\r\n\r\nI'm currently trying to figure out, where the problem could be :)",
"> How would one acquire the necessary character predictions from the MLM model?\r\n\r\nIt's not possible. The MLM task is at the word level, where each word has an index. If you want to convert these indices into tokens, you need the MLM vocabulary for a lookup. The character_id stuff is only at the input level. Both are dissociated, which I understand is a bit weird since both are essentially the same thing in BERT (but that's more of a convenience thing than a necessary aspect).",
"> How would one acquire the necessary character predictions from the MLM model?\r\n\r\nActually, there might be a way but it's not very natural : you could take the MLM ids -> lookup the token in the MLM vocab -> tokenize it with the CharacterBertTokenizer -> get character ids.\r\n\r\nIf you repeat this too much it may become very slow. But you may be able to cache some things and make some optimizations :)",
"Hey @helboukkouri, please let me know if there remains some issues and you don't have time to work on them - happy to unblock you if that's so!"
] | 1,612 | 1,696 | 1,696 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9061
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
*I'm still missing the updates in the general documentation.*
- [x] Did you write any new necessary tests?
*Some of the tests are currently failing. This is due to CharacterBERT having a different input shape than BERT: (batch size, seq length, token length) instead of (batch size, seq length). Also, some other tests related to reshaping the embedding fail for the same reason. I did not fix these tests, as that would mean changing the way the common tests currently work. For all other cases, I tried my best to implement tests by adapting those from the BERT suite (these pass).*
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
@LysandreJik please have a look at this PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10053/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10053/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10053",
"html_url": "https://github.com/huggingface/transformers/pull/10053",
"diff_url": "https://github.com/huggingface/transformers/pull/10053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10053.patch",
"merged_at": null
} |
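For reference, a minimal sketch of the MLM-id -> token -> character-id roundtrip described in the thread above, with the caching suggested there. It assumes the CharacterBERT PR branch is installed (this model was never merged into a transformers release); the vocab file name and the helper name `mlm_ids_to_character_ids` are illustrative assumptions, not part of the PR:

```python
from transformers import CharacterBertTokenizer  # available on the PR branch only

tokenizer = CharacterBertTokenizer.from_pretrained("helboukkouri/character-bert")
# one token per line, id given by line position (format taken from the thread)
mlm_vocab = [line.strip().split()[-1] for line in open("mlm_vocab.txt", "rt")]
_char_id_cache = {}  # cache per-token tokenizations so repeated lookups stay cheap

def mlm_ids_to_character_ids(mlm_ids):
    char_ids = []
    for idx in mlm_ids:
        token = mlm_vocab[idx]
        if token not in _char_id_cache:
            # each token maps to a sequence of character/byte ids
            _char_id_cache[token] = tokenizer.convert_tokens_to_ids([token])[0]
        char_ids.append(_char_id_cache[token])
    return char_ids
```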
https://api.github.com/repos/huggingface/transformers/issues/10052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10052/comments | https://api.github.com/repos/huggingface/transformers/issues/10052/events | https://github.com/huggingface/transformers/pull/10052 | 802,996,648 | MDExOlB1bGxSZXF1ZXN0NTY5MDA1MTA0 | 10,052 | implementing tflxmertmodel integration test | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@LysandreJik ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @LysandreJik what about this one ",
"Sure! It seems there is no branch linked to this PR, however. The changes are still visible here https://github.com/huggingface/transformers/pull/10052/files but I am unable to reopen."
] | 1,612 | 1,625 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
This PR implements an integration test for `TFLxmertModel`, as requested in #9954.
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10052/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10052",
"html_url": "https://github.com/huggingface/transformers/pull/10052",
"diff_url": "https://github.com/huggingface/transformers/pull/10052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10052.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10051/comments | https://api.github.com/repos/huggingface/transformers/issues/10051/events | https://github.com/huggingface/transformers/issues/10051 | 802,964,243 | MDU6SXNzdWU4MDI5NjQyNDM= | 10,051 | [example] run_ner.py raised error: IndexError: Target 3 is out of bounds. | {
"login": "gongel",
"id": 24390500,
"node_id": "MDQ6VXNlcjI0MzkwNTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/24390500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gongel",
"html_url": "https://github.com/gongel",
"followers_url": "https://api.github.com/users/gongel/followers",
"following_url": "https://api.github.com/users/gongel/following{/other_user}",
"gists_url": "https://api.github.com/users/gongel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gongel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gongel/subscriptions",
"organizations_url": "https://api.github.com/users/gongel/orgs",
"repos_url": "https://api.github.com/users/gongel/repos",
"events_url": "https://api.github.com/users/gongel/events{/privacy}",
"received_events_url": "https://api.github.com/users/gongel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Which version of the datasets library are you using? The script runs fine on my side.",
"Hi @sgugger,\r\nI use Version 1.2.1 of the datasets library.",
"I tried the different version of the datasets library. It seems that it is not due to the version of the datasets library.\r\n- Version 1.2.1, 1.2.0, 1.1.3\r\n```\r\nTraceback (most recent call last):\r\n File \"run_ner.py\", line 443, in <module>\r\n main()\r\n File \"run_ner.py\", line 377, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/Users/bytedance/transformers/src/transformers/trainer.py\", line 940, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/Users/bytedance/transformers/src/transformers/trainer.py\", line 1304, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/Users/bytedance/transformers/src/transformers/trainer.py\", line 1334, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/Users/bytedance/transformers/src/transformers/models/bert/modeling_bert.py\", line 1701, in forward\r\n loss = loss_fct(active_logits, active_labels)\r\n File \"/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py\", line 962, in forward\r\n ignore_index=self.ignore_index, reduction=self.reduction)\r\n File \"/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py\", line 2468, in cross_entropy\r\n return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n File \"/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py\", line 2264, in nll_loss\r\n ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)\r\nIndexError: Target 3 is out of bounds.\r\n```\r\n- Version 1.1.2, 1.1.1, 1.1.0\r\n```\r\nTraceback (most recent call last):\r\n File \"run_ner.py\", line 443, in <module>\r\n main()\r\n File \"run_ner.py\", line 230, in main\r\n if isinstance(features[label_column_name].feature, ClassLabel):\r\nAttributeError: 'Value' object has no attribute 'feature'\r\n```\r\n",
"And to be clear you are just running the non-modified version of `token-classification/run.sh`?",
"> And to be clear you are just running the non-modified version of `token-classification/run.sh`?\r\n\r\nYes, I am just running the non-modified version of token-classification/run.sh.",
"Hi! The error you have for versions 1.2.1, 1.2.0 and 1.1.3 is probably the same as #10050. \r\nSee https://github.com/huggingface/transformers/issues/10050#issuecomment-775034308 for how to resolve it.",
"> Hi! The error you have for versions 1.2.1, 1.2.0 and 1.1.3 is probably the same as #10050.\r\n> See [#10050 (comment)](https://github.com/huggingface/transformers/issues/10050#issuecomment-775034308) for how to resolve it.\r\n\r\nI tried. It doesn't work.😭",
"I am trying to perform token classification task through fine tuning pretrained model. My input data is in conll format in which first column consist token and second column indicate it's grammatical category (value is separated by tab). I am passing these info\r\n\r\nfrom transformers import DistilBertForTokenClassification, Trainer, TrainingArguments\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='./results', # output directory\r\n num_train_epochs=3, # total number of training epochs\r\n per_device_train_batch_size=16, # batch size per device during training\r\n per_device_eval_batch_size=16, # batch size for evaluation\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=10,\r\n)\r\n\r\nmodel = DistilBertForTokenClassification.from_pretrained(\"distilbert-base-uncased\")\r\n\r\ntrainer = Trainer(\r\n model=model, # the instantiated 🤗 Transformers model to be trained\r\n args=training_args, # training arguments, defined above\r\n train_dataset=train_dataset, # training dataset\r\n eval_dataset=val_dataset # evaluation dataset\r\n)\r\n\r\ntrainer.train()\r\n\r\n\r\nafter running the trainer.train() in google colab with GPU runtime I am getting index error \r\n\"IndexError: Target 6 is out of bounds.\"\r\n\r\nHow to get rid of this problem, Can anyone help me to get rid from this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> I am trying to perform token classification task through fine tuning pretrained model. My input data is in conll format in which first column consist token and second column indicate it's grammatical category (value is separated by tab). I am passing these info\r\n> \r\n> from transformers import DistilBertForTokenClassification, Trainer, TrainingArguments\r\n> \r\n> training_args = TrainingArguments(\r\n> output_dir='./results', # output directory\r\n> num_train_epochs=3, # total number of training epochs\r\n> per_device_train_batch_size=16, # batch size per device during training\r\n> per_device_eval_batch_size=16, # batch size for evaluation\r\n> warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n> weight_decay=0.01, # strength of weight decay\r\n> logging_dir='./logs', # directory for storing logs\r\n> logging_steps=10,\r\n> )\r\n> \r\n> model = DistilBertForTokenClassification.from_pretrained(\"distilbert-base-uncased\")\r\n> \r\n> trainer = Trainer(\r\n> model=model, # the instantiated 🤗 Transformers model to be trained\r\n> args=training_args, # training arguments, defined above\r\n> train_dataset=train_dataset, # training dataset\r\n> eval_dataset=val_dataset # evaluation dataset\r\n> )\r\n> \r\n> trainer.train()\r\n> \r\n> after running the trainer.train() in google colab with GPU runtime I am getting index error\r\n> \"IndexError: Target 6 is out of bounds.\"\r\n> \r\n> How to get rid of this problem, Can anyone help me to get rid from this issue.\r\n\r\nHi HuggingFace,\r\nI am having exact same issue. While `running trainer.train()` in Google Colab using GPU I get this error. I tried with TPU as well, get the same issue.\r\n\r\nRunning on my own data with 10 classes. My classes are labelled from 0 to 9. There are 297 training items, 97 testing and 75 Validation items. Please help us me to fix this error. ",
"By the way, I solved it.. \r\n\r\nSolution:\r\nWhile creating the model, pass `num_labels` arugment\r\n\r\nSomething like this \r\n```\r\nmodel = DistilBertForSequenceClassification.from_pretrained(\"distilbert-base-uncased\", num_labels=n_labels)\r\n```"
] | 1,612 | 1,621 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0 or 4.2.2
- Platform: MacOS
- Python version: 3.6
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## To reproduce
Steps to reproduce the behavior:
1. Run `bash` on [run.sh](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh), which launches [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)
## Error
```
[INFO|trainer.py:837] 2021-02-07 22:22:31,755 >> ***** Running training *****
[INFO|trainer.py:838] 2021-02-07 22:22:31,755 >> Num examples = 14041
[INFO|trainer.py:839] 2021-02-07 22:22:31,755 >> Num Epochs = 3
[INFO|trainer.py:840] 2021-02-07 22:22:31,755 >> Instantaneous batch size per device = 8
[INFO|trainer.py:841] 2021-02-07 22:22:31,755 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:842] 2021-02-07 22:22:31,755 >> Gradient Accumulation steps = 1
[INFO|trainer.py:843] 2021-02-07 22:22:31,755 >> Total optimization steps = 5268
0%| | 0/5268 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_ner.py", line 443, in <module>
main()
File "run_ner.py", line 377, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/Users/bytedance/transformers/src/transformers/trainer.py", line 940, in train
tr_loss += self.training_step(model, inputs)
File "/Users/bytedance/transformers/src/transformers/trainer.py", line 1304, in training_step
loss = self.compute_loss(model, inputs)
File "/Users/bytedance/transformers/src/transformers/trainer.py", line 1334, in compute_loss
outputs = model(**inputs)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/bytedance/transformers/src/transformers/models/bert/modeling_bert.py", line 1701, in forward
loss = loss_fct(active_logits, active_labels)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 962, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 2264, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 3 is out of bounds.
Exception ignored in: <bound method tqdm.__del__ of 0%| | 0/5268 [00:01<?, ?it/s]>
Traceback (most recent call last):
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/tqdm/std.py", line 1086, in __del__
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/tqdm/std.py", line 1270, in close
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/tqdm/std.py", line 572, in _decr_instances
File "/Users/bytedance/opt/anaconda3/lib/python3.6/site-packages/tqdm/_monitor.py", line 51, in exit
File "/Users/bytedance/opt/anaconda3/lib/python3.6/threading.py", line 521, in set
File "/Users/bytedance/opt/anaconda3/lib/python3.6/threading.py", line 364, in notify_all
File "/Users/bytedance/opt/anaconda3/lib/python3.6/threading.py", line 347, in notify
TypeError: 'NoneType' object is not callable
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10051/timeline | completed | null | null |
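For reference, the "Target N is out of bounds" error above comes from a classification head whose output size does not match the dataset's label set; the fix discussed in the thread is to pass `num_labels` when building the model. A minimal sketch (the label count of 9 matches CoNLL-2003 NER and is illustrative here):

```python
from transformers import AutoModelForTokenClassification

# conll2003 NER uses 9 labels; a head built with fewer outputs raises
# "IndexError: Target N is out of bounds" inside the cross-entropy loss
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=9
)
```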
https://api.github.com/repos/huggingface/transformers/issues/10050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10050/comments | https://api.github.com/repos/huggingface/transformers/issues/10050/events | https://github.com/huggingface/transformers/issues/10050 | 802,956,607 | MDU6SXNzdWU4MDI5NTY2MDc= | 10,050 | run_ner.py fails when loading a model/checkpoint from a directory | {
"login": "ikergarcia1996",
"id": 18737249,
"node_id": "MDQ6VXNlcjE4NzM3MjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/18737249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikergarcia1996",
"html_url": "https://github.com/ikergarcia1996",
"followers_url": "https://api.github.com/users/ikergarcia1996/followers",
"following_url": "https://api.github.com/users/ikergarcia1996/following{/other_user}",
"gists_url": "https://api.github.com/users/ikergarcia1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikergarcia1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikergarcia1996/subscriptions",
"organizations_url": "https://api.github.com/users/ikergarcia1996/orgs",
"repos_url": "https://api.github.com/users/ikergarcia1996/repos",
"events_url": "https://api.github.com/users/ikergarcia1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikergarcia1996/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Same [issue](https://github.com/huggingface/transformers/issues/10051) as me 😄 ",
"I'm guessing this has to do with the number of labels your model has. When doing the following:\r\n\r\n```py\r\nmodel = XLMRobertaModel.from_pretrained('xlm-roberta-base')\r\nspath = \"models/xlmroberta\"\r\nmodel.save_pretrained(spath)\r\n```\r\n\r\nyou're saving a model to disk that has no classification head, and the script will therefore use the default number of labels when loading it. I would advise you do the following instead, with `NUM_LABELS` your number of labels:\r\n\r\n```py\r\nmodel = XLMRobertaForTokenClassification.from_pretrained('xlm-roberta-base', num_labels=NUM_LABELS)\r\nspath = \"models/xlmroberta\"\r\nmodel.save_pretrained(spath)\r\n```\r\n\r\nPlease let me know if this fixes your issue.",
"@LysandreJik problem solved. Thank you!!"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): XLMRobertaModel
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behaviour:
The run_ner.py seems to be unable to load models from a directory. The following command works as expected:
```
python transformers/examples/token-classification/run_ner.py --dataset_name conll2003 --model_name_or_path xlm-roberta-base --output_dir output --num_train_epochs 10 --per_device_train_batch_size 32 --per_device_eval_batch_size 32 --learning_rate 5e-05 --seed 29 --save_steps 9223372036854775807 --do_train --do_eval --overwrite_cache --overwrite_output_dir --fp16
```
However, if we load 'xlm-roberta-base', save the model to a directory, and try to run the script with that directory set as the model path, the script fails. (Same behaviour if we use XLMRobertaForTokenClassification instead of XLMRobertaModel in step 1.)
1.
```
from transformers import XLMRobertaTokenizer, XLMRobertaModel
tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')
model = XLMRobertaModel.from_pretrained('xlm-roberta-base')
spath = "models/xlmroberta"
tokenizer.save_pretrained(spath)
model.save_pretrained(spath)
```
2.
```
python transformers/examples/token-classification/run_ner.py --dataset_name conll2003 --model_name_or_path models/xlmroberta --cache_dir models/xlmroberta --output_dir output --num_train_epochs 10 --per_device_train_batch_size 32 --per_device_eval_batch_size 32 --learning_rate 5e-05 --seed 29 --do_train --do_eval --overwrite_cache --overwrite_output_dir
```
Error message (running the command on CPU to be able to see it):
```
INFO|tokenization_utils_base.py:1688] 2021-02-07 14:38:00,146 >> Model name 'models/xlmroberta' not found in model shortcut name list (xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german). Assuming 'models/xlmroberta' is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1721] 2021-02-07 14:38:00,146 >> Didn't find file models/xlmroberta/tokenizer.json. We won't load it.
[INFO|tokenization_utils_base.py:1721] 2021-02-07 14:38:00,147 >> Didn't find file models/xlmroberta/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1784] 2021-02-07 14:38:00,147 >> loading file models/xlmroberta/sentencepiece.bpe.model
[INFO|tokenization_utils_base.py:1784] 2021-02-07 14:38:00,148 >> loading file None
[INFO|tokenization_utils_base.py:1784] 2021-02-07 14:38:00,148 >> loading file None
[INFO|tokenization_utils_base.py:1784] 2021-02-07 14:38:00,148 >> loading file models/xlmroberta/special_tokens_map.json
[INFO|tokenization_utils_base.py:1784] 2021-02-07 14:38:00,148 >> loading file models/xlmroberta/tokenizer_config.json
[INFO|modeling_utils.py:1025] 2021-02-07 14:38:02,120 >> loading weights file models/xlmroberta/pytorch_model.bin
[INFO|modeling_utils.py:1143] 2021-02-07 14:38:13,379 >> All model checkpoint weights were used when initializing XLMRobertaForTokenClassification.
[WARNING|modeling_utils.py:1146] 2021-02-07 14:38:13,380 >> Some weights of XLMRobertaForTokenClassification were not initialized from the model checkpoint at models/xlmroberta and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
100%|###################################################################################################################################################################################################################################################################################################################################################################################################| 15/15 [00:01<00:00, 7.55ba/s]
100%|#####################################################################################################################################################################################################################################################################################################################################################################################################| 4/4 [00:00<00:00, 8.69ba/s]
100%|#####################################################################################################################################################################################################################################################################################################################################################################################################| 4/4 [00:00<00:00, 8.25ba/s]
[INFO|trainer.py:429] 2021-02-07 14:38:18,120 >> The following columns in the training set don't have a corresponding argument in `XLMRobertaForTokenClassification.forward` and have been ignored: tokens, ner_tags, chunk_tags, id, pos_tags.
[INFO|trainer.py:429] 2021-02-07 14:38:18,121 >> The following columns in the evaluation set don't have a corresponding argument in `XLMRobertaForTokenClassification.forward` and have been ignored: tokens, ner_tags, chunk_tags, id, pos_tags.
[INFO|trainer.py:721] 2021-02-07 14:38:18,122 >> Loading model from models/xlmroberta).
[INFO|configuration_utils.py:443] 2021-02-07 14:38:18,123 >> loading configuration file models/xlmroberta/config.json
[INFO|configuration_utils.py:481] 2021-02-07 14:38:18,125 >> Model config XLMRobertaConfig {
"architectures": [
"XLMRobertaModel"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "xlm-roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.3.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 250002
}
[INFO|modeling_utils.py:1025] 2021-02-07 14:38:18,126 >> loading weights file models/xlmroberta/pytorch_model.bin
[INFO|modeling_utils.py:1143] 2021-02-07 14:38:30,381 >> All model checkpoint weights were used when initializing XLMRobertaForTokenClassification.
[WARNING|modeling_utils.py:1146] 2021-02-07 14:38:30,381 >> Some weights of XLMRobertaForTokenClassification were not initialized from the model checkpoint at models/xlmroberta and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[INFO|trainer.py:832] 2021-02-07 14:38:30,394 >> ***** Running training *****
[INFO|trainer.py:833] 2021-02-07 14:38:30,394 >> Num examples = 14041
[INFO|trainer.py:834] 2021-02-07 14:38:30,394 >> Num Epochs = 10
[INFO|trainer.py:835] 2021-02-07 14:38:30,394 >> Instantaneous batch size per device = 32
[INFO|trainer.py:836] 2021-02-07 14:38:30,394 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:837] 2021-02-07 14:38:30,395 >> Gradient Accumulation steps = 1
[INFO|trainer.py:838] 2021-02-07 14:38:30,395 >> Total optimization steps = 4390
0%| | 0/4390 [00:00<?, ?it/s]Traceback (most recent call last):
File "third_party/transformers/examples/token-classification/run_ner.py", line 454, in <module>
main()
File "third_party/transformers/examples/token-classification/run_ner.py", line 388, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/transformers/trainer.py", line 931, in train
tr_loss += self.training_step(model, inputs)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/transformers/trainer.py", line 1295, in training_step
loss = self.compute_loss(model, inputs)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/transformers/trainer.py", line 1325, in compute_loss
outputs = model(**inputs)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/transformers/models/roberta/modeling_roberta.py", line 1349, in forward
loss = loss_fct(active_logits, active_labels)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 962, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/torch/nn/functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/ikerlariak/igarcia945/envs/transformers422/lib/python3.6/site-packages/torch/nn/functional.py", line 2264, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 3 is out of bounds.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10050/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10049/comments | https://api.github.com/repos/huggingface/transformers/issues/10049/events | https://github.com/huggingface/transformers/issues/10049 | 802,954,190 | MDU6SXNzdWU4MDI5NTQxOTA= | 10,049 | Installing tf2.0 in my env but still get ImportError in my code | {
"login": "YangHan-Morningstar",
"id": 67748964,
"node_id": "MDQ6VXNlcjY3NzQ4OTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/67748964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YangHan-Morningstar",
"html_url": "https://github.com/YangHan-Morningstar",
"followers_url": "https://api.github.com/users/YangHan-Morningstar/followers",
"following_url": "https://api.github.com/users/YangHan-Morningstar/following{/other_user}",
"gists_url": "https://api.github.com/users/YangHan-Morningstar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YangHan-Morningstar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YangHan-Morningstar/subscriptions",
"organizations_url": "https://api.github.com/users/YangHan-Morningstar/orgs",
"repos_url": "https://api.github.com/users/YangHan-Morningstar/repos",
"events_url": "https://api.github.com/users/YangHan-Morningstar/events{/privacy}",
"received_events_url": "https://api.github.com/users/YangHan-Morningstar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tried BertForSequenceClassification instead of TFBertForSequenceClassification and it works, I'm confused now",
"Hi! Version v4.2.2 supports pytorch >=1.3.0 and TensorFlow >=2.3.0. Could you install TensorFlow 2.3.0 and let me know if it fixes your issue?\r\n\r\nYou can import `BertForSequenceClassification` because that's a PyTorch model and you have your environment correctly setup for torch, but not `TFBertForSequenceClassification` as you have TensorFlow <2.3.0.",
"After updating tensorflow from 2.0.0 to 2.4.1, it works! Thanks a lot! And I also want to know where can I find version correspondence between huggingface-transformers and tensorflow(or pytorch), for example, if I have tf2.0.0 which version of transformers I should install. Thanks!",
"This is a good question, and unfortunately we don't have an answer for that except checking the history of the `setup.py`. @jplu can you chime in on what is the last version that was working with the version v2.0.0 of TensorFlow?",
"The last was v4.1.1",
"I install the transformers v4.1.1 and my tf is v2.0.0, but when I run the demo, I got an error which says \"AttributeError: module 'tensorflow_core.keras.activations' has no attribute 'swish'\"\r\n\r\n\r\n\r\nFirst I went to view the official documentation transformers v4.1.1 but got 404\r\n\r\n\r\n",
"Hello, could you try loading the doc again? It should be back up. Thanks!",
"Ah yes, the `swish` activation needs at least TF 2.1. Then you should be able to run 2.0 with at least Transformers v3.1.0",
"> Hello, could you try loading the doc again? It should be back up. Thanks!\r\n\r\nI tried in my iPhone and it could be loaded but when I tried it in my mac, it failed...",
"> Ah yes, the `swish` activation needs at least TF 2.1. Then you should be able to run 2.0 with at least Transformers v3.1.0\r\n\r\nTF2.0 and Transformers3.1.0 and there are another Error happend: \"ImportError: cannot import name 'Parallel' from 'joblib' (unknown location)\"",
"I think you have to downgrade your version of joblib as well.",
"Now I can import it correctly but still got an error that \"OSError: Unable to load weights from h5 file. If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True.\"\r\n\r\nI downloaded the file \"bert-base-uncased\" from the official website but there is only tf_model.h5 instead of tf_model.hdf5.\r\n\r\nMy code:\r\n\r\nfrom transformers import BertTokenizer, TFBertForSequenceClassification\r\nimport tensorflow as tf\r\n\r\npretrain_path = \"/root/huggingface-pretrain/bert-base-uncased\"\r\ntokenizer = BertTokenizer.from_pretrained(pretrain_path)\r\nmodel = TFBertForSequenceClassification.from_pretrained(pretrain_path)\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"tf\")\r\ninputs[\"labels\"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1\r\noutputs = model(inputs)\r\nloss = outputs.loss\r\nlogits = outputs.logits",
"Did you try with `TFBertForSequenceClassification.from_pretrained(\"bert-base-uncased\")` instead? What is the content of your `/root/huggingface-pretrain/bert-base-uncased` folder?",
"I have tried but it didn't work",
"I suggest that you can open multiple version including the Code and Model Format so that everyone can use them just using their TensorFlow version.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | Hi, I have installed tf2.0 in my env and I followed the readme, which says that if you have installed tf2.0 you can just run `pip install transformers`. But I got this error: "ImportError: cannot import name 'TFBertForSequenceClassification' from 'transformers' (unknown location)"
My code:

```
from transformers import BertTokenizer, TFBertForSequenceClassification
import tensorflow as tf

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1))  # Batch size 1
outputs = model(inputs)
loss = outputs.loss
logits = outputs.logits
```
Picture (I have tf2.0 and transformers): *(screenshot)*
And I also ran `conda install -c huggingface transformers` but it still doesn't work. Could you help me? Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10049/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10048/comments | https://api.github.com/repos/huggingface/transformers/issues/10048/events | https://github.com/huggingface/transformers/issues/10048 | 802,944,161 | MDU6SXNzdWU4MDI5NDQxNjE= | 10,048 | ImportError: cannot import name 'list_datasets' | {
"login": "hassani24",
"id": 29934287,
"node_id": "MDQ6VXNlcjI5OTM0Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/29934287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hassani24",
"html_url": "https://github.com/hassani24",
"followers_url": "https://api.github.com/users/hassani24/followers",
"following_url": "https://api.github.com/users/hassani24/following{/other_user}",
"gists_url": "https://api.github.com/users/hassani24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hassani24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hassani24/subscriptions",
"organizations_url": "https://api.github.com/users/hassani24/orgs",
"repos_url": "https://api.github.com/users/hassani24/repos",
"events_url": "https://api.github.com/users/hassani24/events{/privacy}",
"received_events_url": "https://api.github.com/users/hassani24/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @lhoestq can chime in here!",
"Hi ! It might come from a conflict. Somehow python is loading a bad `datasets` module. Can you check that you don't have a folder named \"datasets\" in your working directory or in a directory of you python path ?\r\n\r\nCan you try to run this as well ?\r\n```python\r\n# if `datasets` points to the right `datasets` module, this should print the location of the module\r\nprint(datasets.__file__)\r\n# if `datasets` points to a bad `datasets` module, this should print the location of the folder named \"datasets\"\r\nprint(datasets.__path__)\r\n```",
"Thanks for the quick response. I had something in my python path from a long time ago with a 'datasets' folder. I was able to find it thanks to your suggestions (and learned something new :) ) so this problem is solved."
] | 1,612 | 1,612 | 1,612 | NONE | null | I'm having an unusual issue on one computer and I'm hoping that someone out there has seen something like this before. The issue does not exist on another computer. Both computers are Windows 10 machines, using Python 3.6.4, virtualenv, and Visual Studio Code as the IDE.
I have created a clean virtualenv and installed only datasets. I get an import error when I try to import any of the built-in functions: list_datasets, load_dataset, etc.
I have tried installing different versions of datasets, and I have tried installing datasets from source instead of through pip, with no success.
Has anyone seen anything like this? Any suggestions for something I can try to help debug?
Here's the code:
```
import sys

print(sys.version)

from datasets import list_datasets

for i, ds in enumerate(list_datasets()):
    print(f"{i}: {ds}")
```
Here is the output:
```
3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)]
Traceback (most recent call last):
  File "c:\code\dataset_test\main.py", line 5, in <module>
    from datasets import list_datasets
ImportError: cannot import name 'list_datasets'
```
Here is the pip list:
```
Package Version
------------------ ---------
certifi 2020.12.5
chardet 4.0.0
dataclasses 0.8
datasets 1.2.1
dill 0.3.3
idna 2.10
importlib-metadata 3.4.0
multiprocess 0.70.11.1
numpy 1.19.5
object-detection 0.1
pandas 1.1.5
pip 21.0.1
pyarrow 3.0.0
python-dateutil 2.8.1
pytz 2021.1
requests 2.25.1
setuptools 53.0.0
six 1.15.0
tqdm 4.49.0
typing-extensions 3.7.4.3
urllib3 1.26.3
wheel 0.36.2
xxhash 2.0.0
zipp 3.4.0
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10048/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10047/comments | https://api.github.com/repos/huggingface/transformers/issues/10047/events | https://github.com/huggingface/transformers/issues/10047 | 802,838,319 | MDU6SXNzdWU4MDI4MzgzMTk= | 10,047 | Can you give some suggestion about add features with input_ids to token-classification model ? | {
"login": "svjack",
"id": 27874014,
"node_id": "MDQ6VXNlcjI3ODc0MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svjack",
"html_url": "https://github.com/svjack",
"followers_url": "https://api.github.com/users/svjack/followers",
"following_url": "https://api.github.com/users/svjack/following{/other_user}",
"gists_url": "https://api.github.com/users/svjack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svjack/subscriptions",
"organizations_url": "https://api.github.com/users/svjack/orgs",
"repos_url": "https://api.github.com/users/svjack/repos",
"events_url": "https://api.github.com/users/svjack/events{/privacy}",
"received_events_url": "https://api.github.com/users/svjack/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The more general question is that dose huggingface model support some interface about\r\nadd other char level feature as auxiliary features ?",
"And i search some bert with ner implementations, they all not use pos tag features.\r\nBut when i view the following code :\r\n**https://github.com/sberbank-ai/ner-bert/blob/master/modules/models/bert_models.py**\r\ni think if add pos tag feature in lstm level (use bert embedding as input)\r\nit seems more suitable.\r\nDo you think this is a general solution to combine token level feature with huggingface features.\r\nOr the future releases will support some feature combination model above bert constructions \r\nas options ? ",
"Or i can use set_input_embeddings and\r\nhttps://github.com/plasticityai/magnitude\r\nAdditional Featurization \r\nto the original embedding?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | Hi, I want to add POS TAG labels, alongside input_ids, as an auxiliary feature in an NER model.
How can I get this entry in the forward function?
And I reviewed some implementations that use a BERT model to perform NER, such as
https://github.com/monologg/JointBERT/blob/master/model/modeling_jointbert.py
It seems that if I want to add this feature (POS TAG) I must rewrite the forward function in BertModel,
and it seems I would have to train the edited model from scratch.
Can you give some suggestions about editing the model structure and partially loading weights from
huggingface pre-trained models?
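For reference, one common pattern (a sketch, not an official transformers interface) is to wrap the pretrained encoder and concatenate a learned POS-tag embedding to each token representation before the classification head. The pretrained weights load into the encoder only; the new POS embedding and classifier train from scratch. It assumes `pos_tag_ids` is already aligned one-to-one with `input_ids` at the wordpiece level:

```python
import torch
from torch import nn
from transformers import BertModel

class BertWithPosForTokenClassification(nn.Module):
    """Hypothetical wrapper: pretrained weights load into the encoder;
    only the POS embedding and classifier are trained from scratch."""

    def __init__(self, pretrained_name, num_labels, num_pos_tags, pos_dim=32):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained_name)
        self.pos_embeddings = nn.Embedding(num_pos_tags, pos_dim)
        self.classifier = nn.Linear(self.bert.config.hidden_size + pos_dim, num_labels)

    def forward(self, input_ids, attention_mask=None, pos_tag_ids=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        # concatenate the POS embedding to each token's contextual representation
        fused = torch.cat([hidden, self.pos_embeddings(pos_tag_ids)], dim=-1)
        return self.classifier(fused)  # (batch, seq_len, num_labels)
```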
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10047/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10046/comments | https://api.github.com/repos/huggingface/transformers/issues/10046/events | https://github.com/huggingface/transformers/pull/10046 | 802,815,657 | MDExOlB1bGxSZXF1ZXN0NTY4ODcwMTMz | 10,046 | [s2s examples] Replace -100 token ids with the tokenizer pad_id for compute_metrics | {
"login": "olinguyen",
"id": 4341867,
"node_id": "MDQ6VXNlcjQzNDE4Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olinguyen",
"html_url": "https://github.com/olinguyen",
"followers_url": "https://api.github.com/users/olinguyen/followers",
"following_url": "https://api.github.com/users/olinguyen/following{/other_user}",
"gists_url": "https://api.github.com/users/olinguyen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olinguyen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olinguyen/subscriptions",
"organizations_url": "https://api.github.com/users/olinguyen/orgs",
"repos_url": "https://api.github.com/users/olinguyen/repos",
"events_url": "https://api.github.com/users/olinguyen/events{/privacy}",
"received_events_url": "https://api.github.com/users/olinguyen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, \r\nStill running into this issue here with the run_summarization.py file - \r\n\r\n\r\n```Traceback (most recent call last):\r\n File \"run_summarization.py\", line 674, in <module>\r\n main()\r\n File \"run_summarization.py\", line 628, in main\r\n predict_results = trainer.predict(\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer_seq2seq.py\", line 125, in predict\r\n return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer.py\", line 2133, in predict\r\n output = eval_loop(\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer.py\", line 2235, in evaluation_loop\r\n loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer_seq2seq.py\", line 180, in prediction_step\r\n print(self.tokenizer_t5.batch_decode(inputs[\"labels\"], skip_special_tokens=True, clean_up_tokenization_spaces=True))\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py\", line 3047, in batch_decode\r\n return [\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py\", line 3048, in <listcomp>\r\n self.decode(\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py\", line 3086, in decode\r\n return self._decode(\r\n File \"/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_fast.py\", line 507, in _decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\nOverflowError: out of range integral type conversion attempted```\r\n\r\nHas this been addressed in that script - it relies on the trainer_seq2seq.py file, not sure if this issue has been fixed there. Thanks cc @patil-suraj \r\n\r\n"
] | 1,612 | 1,630 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR is a small fix that replaces the -100 token ids with the tokenizer pad_id when decoding sequences to compute metrics as was done in [this HF blog post](https://huggingface.co/blog/warm-starting-encoder-decoder)
## When does this problem occur?
When running `examples/seq2seq/finetune_trainer.py` with padding to the `max_seq_len`, an error is thrown at the evaluation step:
```
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1004, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 96, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1442, in evaluate
output = self.prediction_loop(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1601, in prediction_loop
metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
File "/eai/transformers/examples/seq2seq/utils.py", line 98, in translation_metrics
pred_str, label_str = decode_pred(pred)
File "/eai/transformers/examples/seq2seq/utils.py", line 85, in decode_pred
label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)
File "/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3070, in batch_decode
return [
File "/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3071, in <listcomp>
self.decode(
File "/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3109, in decode
return self._decode(
File "/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 495, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
OverflowError: out of range integral type conversion attempted
```
This is because in the prediction loop, the labels will be padded with -100 if the predictions and labels have different sequence lengths: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1637.
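For reference, a minimal sketch of the fix (the helper name mirrors `decode_pred` from the `utils.py` traceback above, but the signature is simplified here; `np.where` is one common way to do the swap):
```python
import numpy as np

def decode_pred(pred, tokenizer):
    # swap the -100 ids (used only to mask the loss) back to the pad token
    # so that batch_decode doesn't choke on out-of-range ids
    label_ids = np.where(pred.label_ids != -100, pred.label_ids, tokenizer.pad_token_id)
    pred_str = tokenizer.batch_decode(pred.predictions, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
    return pred_str, label_str
```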
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10046/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10046",
"html_url": "https://github.com/huggingface/transformers/pull/10046",
"diff_url": "https://github.com/huggingface/transformers/pull/10046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10046.patch",
"merged_at": 1612796897000
} |
https://api.github.com/repos/huggingface/transformers/issues/10045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10045/comments | https://api.github.com/repos/huggingface/transformers/issues/10045/events | https://github.com/huggingface/transformers/issues/10045 | 802,640,136 | MDU6SXNzdWU4MDI2NDAxMzY= | 10,045 | BertGenerationTokenizer provides an unexpected value for BertGenerationModel | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @sadakmed!\r\n\r\nYou're right, there's no need for token type IDs in this tokenizer. The workaround for this is to remove `token_type_ids` from the model input names, as it is done in the DistilBERT tokenizer:\r\n\r\nhttps://github.com/huggingface/transformers/blob/cdd86592317e7db3bab75555c3837fabc74e3429/src/transformers/models/distilbert/tokenization_distilbert.py#L71\r\n\r\nDo you want to open a PR to fix this?\r\n\r\nRegarding the necessity of sentencepiece module, yes it is necessary. It was previously in the transformers dependencies and we removed it because it was causing compilation issues on some hardware. The error should be straightforward and mention a `sentencepiece` installation is necessary in order to use that tokenizer, so no problem there."
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | - `transformers` version: 4.2.2
- PyTorch version (GPU?): 1.7.0+cu101
- tokenizers: @n1t0, @LysandreJik
## Information
Neither `BertGenerationEncoder` nor `BertGenerationDecoder` needs `token_type_ids`, yet `BertGenerationTokenizer` includes it in its output. This raises an error if you feed the tokenizer results directly into the model with `**`.
If this is intended behaviour that the user should simply be aware of, I think it should at least be noted in the documentation.
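A hedged sketch of the failure mode and a quick user-side workaround (the checkpoint name is the one from the BertGeneration docs, used here only for illustration):
```python
from transformers import BertGenerationEncoder, BertGenerationTokenizer

ckpt = "google/bert_for_seq_generation_L-24_bbc_encoder"
tokenizer = BertGenerationTokenizer.from_pretrained(ckpt)
model = BertGenerationEncoder.from_pretrained(ckpt)

inputs = tokenizer("hello world", return_tensors="pt")
# model(**inputs) errors because of the unexpected token_type_ids key;
# dropping the key first works around it
inputs.pop("token_type_ids", None)
outputs = model(**inputs)
```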
Note: another issue with `BertGenerationTokenizer` is that it requires the `sentencepiece` module. Do you prefer that users install it separately, or should it be included in the transformers dependencies?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10045/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10044/comments | https://api.github.com/repos/huggingface/transformers/issues/10044/events | https://github.com/huggingface/transformers/issues/10044 | 802,628,977 | MDU6SXNzdWU4MDI2Mjg5Nzc= | 10,044 | [s2s examples] dataset porting | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Here is a script to convert these:\r\n\r\n```\r\nimport io\r\nimport json\r\nimport re\r\n\r\nsrc_lang, tgt_lang = [\"en\", \"ro\"]\r\n\r\nfor split in [\"train\", \"val\", \"test\"]:\r\n recs = []\r\n fout = f\"{split}.json\"\r\n with io.open(fout, \"w\", encoding=\"utf-8\") as f:\r\n for type in [\"source\", \"target\"]:\r\n fin = f\"{split}.{type}\"\r\n recs.append([line.strip() for line in open(fin)])\r\n for src, tgt in zip(*recs):\r\n out = {\"translation\": { src_lang: src, tgt_lang: tgt } }\r\n x = json.dumps(out, indent=0, ensure_ascii=False)\r\n x = re.sub(r'\\n', ' ', x, 0, re.M)\r\n f.write(x + \"\\n\")\r\n```",
"The short answer is I don't recall, but my best guess is:\r\n\r\n[en-de](https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-wmt14en2de.sh#L1)\r\n\r\n[en-ro](https://github.com/rsennrich/wmt16-scripts/blob/master/sample/download_files.sh)\r\n\r\n[cnn_dm](https://github.com/abisee/cnn-dailymail#option-1-download-the-processed-data)",
"That's perfect, @sshleifer! Thanks a lot!\r\n\r\nI missed one more entry: https://cdn-datasets.huggingface.co/summarization/xsum.tar.gz\r\n\r\nI found the main source at https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset but not sure about what pre-processing if any went in.\r\n\r\nThank you!",
"Preprocessing: I don't actually know. The author sent me a link (in a github issue I can't find) which leads me to predict with high confidence that the preprocessing is the same as whatever is in the repo.\r\n\r\nThere is a larger *scraping wherever they are scraping is not deterministic* issue discussed in this thread https://github.com/huggingface/datasets/issues/672 (quentin and I run same code get different numbers)\r\nwhich I was far too lazy to do anything more than complain about :)\r\n",
"OK, for xsum this is as good as it gets, and so I will document what you shared. Thank you!\r\n\r\nI was able to to reproduce 100% the datasets you created with the instructions you provided:\r\n- [en-de](https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-wmt14en2de.sh#L1)\r\n- [en-ro](https://github.com/rsennrich/wmt16-scripts/blob/master/sample/download_files.sh)\r\n\r\nThis one doesn't match the instructions on the page you linked to:\r\n- [cnn_dm](https://github.com/abisee/cnn-dailymail#option-1-download-the-processed-data)\r\nthe content is the same but the format is quite different after all the pre-processing steps were applied. Their results are all lower-cased and tagged with `<s></s>`, and it's word-level tokenized. Yours is just clean normal text. So you must have used a different process.\r\n\r\nIf I dig into it, it looks like your source is for sure just the original with new lines removed - in fact this is what it says in the README.md:\r\n```\r\nwget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz\r\ntar -xzvf cnn_dm_v2.tgz # empty lines removed\r\n```\r\ntarget looks like a combination of `@highlight` entries into the abstract. So little pre-processing here, but I think I sorted it out.\r\n\r\nI'm trying to preserve these datasets you created since they are awesome for monitoring any regressions in the code, because they avail themselves to a high bleu scores on even a small sample, so I constantly use them as a reference and I can quickly detect any problems if the score drops. Thank you for creating those in first place, @sshleifer!\r\n\r\nWith all datasets or pretty much any data I make, I include the scripts that created it (or a link to where it can be found), so that it's easy to know how things came to be. Mainly doing it for myself, since I am not good at remembering such details. And I don't like researching the same thing more than once.\r\n\r\n",
"To substitute the old datasets directories with new ones, replace:\r\n\r\n* [x] `--data_dir wmt_en_de` => `--dataset_name wmt14 --dataset_config \"de-en\"` or if you want the highest score use: `--dataset_name stas/wmt14-en-de-pre-processed`\r\n* [x] `--data_dir wmt_en_ro` => `--dataset_name wmt16 --dataset_config \"ro-en\"` \r\n* [x] `--data_dir cnn_dm` => `--dataset_name cnn_dailymail --dataset_config \"3.0.0\"`\r\n* [x] `--data_dir xsum` => `--dataset_name xsum`\r\n\r\nconversion to `datasets` status:\r\n\r\n* [x] `stas/wmt14-en-de-pre-processed` https://huggingface.co/datasets/stas/wmt14-en-de-pre-processed (this dataset version scores about twice as good as the unprocessed one) \r\n* [x] `stas/wmt16-en-ro-pre-processed` https://huggingface.co/datasets/stas/wmt16-en-ro-pre-processed (this dataset version scores identical to the unprocessed one) \r\n\r\nI didn't bother porting the following 2, since their score is just slightly better than the unprocessed versions. So just use the following:\r\n* [x] `--data_dir cnn_dm` is just slightly better than `--dataset_name cnn_dailymail --dataset_config \"3.0.0\"`\r\n* [x] `--data_dir xsum` is just slightly better than `--dataset_name xsum`\r\n\r\nHere are the full benchmarks where I verified that all but wmt14 are OK with unprocessed dataset versions:\r\n\r\n```\r\n\r\n\r\n### wmt16-en-ro-pre-processed\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir \\\r\n--adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps \\\r\n--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 \\\r\n--max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS \\\r\n--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler \\\r\n--task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 \\\r\n--max_train_samples 2000 --max_val_samples 500 \\\r\n--dataset_name stas/wmt16-en-ro-pre-processed --source_prefix \"translate English to Romanian: \"\r\n\r\n02/16/2021 00:01:55 - INFO - __main__ - val_bleu = 24.1319\r\n\r\n\r\nvs normal wmt16-en-ro dataset:\r\n\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 2000 --max_val_samples 500 --dataset_name wmt16 --dataset_config \"ro-en\" --source_prefix \"translate English to Romanian: \"\r\n\r\n02/15/2021 23:59:56 - INFO - __main__ - val_bleu = 24.1319\r\n\r\n\r\nresults: the preprocessed scores identically as the non-preprocessed one\r\n\r\n\r\n\r\n### wmt14-en-de-pre-processed\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir \\\r\n--adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps \\\r\n--label_smoothing 0.1 
--learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 \\\r\n--max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS \\\r\n--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler \\\r\n--task translation_en_to_de --val_max_target_length 128 --warmup_steps 500 \\\r\n--max_train_samples 2000 --max_val_samples 500 \\\r\n--dataset_name stas/wmt14-en-de-pre-processed --source_prefix \"translate English to English: \"\r\n\r\n\r\n02/19/2021 11:53:46 - INFO - __main__ - eval_bleu = 22.2348\r\n\r\nvs normal wmt14-en-de dataset:\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_de --val_max_target_length 128 --warmup_steps 500 --max_train_samples 2000 --max_val_samples 500 --dataset_name wmt14 --dataset_config \"de-en\"\r\n\r\n02/19/2021 11:55:37 - INFO - __main__ - eval_bleu = 10.5513\r\n\r\nresults: the preprocessed one scores significantly better\r\n\r\n\r\n# cnn_dailymail\r\n\r\nwget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz\r\ntar -xzvf cnn_dm_v2.tgz # empty lines removed\r\nmv cnn_cln cnn_dm\r\nexport BS=16 MODEL=t5-small; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 examples/legacy/seq2seq/finetune_trainer.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --data_dir cnn_dm --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 50 --n_train 2000 --n_val 500 --predict_with_generate --task summarization\r\n\r\n\r\n2021-02-19 15:16:41 | INFO | __main__ | ***** val metrics *****\r\n2021-02-19 15:16:41 | INFO | __main__ | val_gen_len = 54.3\r\n2021-02-19 15:16:41 | INFO | __main__ | val_loss = 3432.908\r\n2021-02-19 15:16:41 | INFO | __main__ | val_n_objs = 500\r\n2021-02-19 15:16:41 | INFO | __main__ | val_rouge1 = 30.2151\r\n2021-02-19 15:16:41 | INFO | __main__ | val_rouge2 = 11.1576\r\n2021-02-19 15:16:41 | INFO | __main__ | val_rougeL = 21.545\r\n2021-02-19 15:16:41 | INFO | __main__ | val_rougeLsum = 27.1914\r\n2021-02-19 15:16:41 | INFO | __main__ | val_runtime = 70.1847\r\n2021-02-19 15:16:41 | INFO | __main__ | val_samples_per_second = 7.124\r\n\r\n\r\nvs normal cnn_dailymail 3.0.0 dataset\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir \\\r\n--adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps \\\r\n--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 \\\r\n--max_target_length 128 --num_train_epochs 1 --overwrite_output_dir 
--per_device_eval_batch_size $BS \\\r\n--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler \\\r\n--task summarization --val_max_target_length 128 --warmup_steps 500 \\\r\n--max_train_samples 2000 --max_val_samples 500 \\\r\n--dataset_name cnn_dailymail --dataset_config \"3.0.0\"\r\n\r\n\r\n02/19/2021 15:02:13 - INFO - __main__ - ***** val metrics *****\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_gen_len = 74.902\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_loss = 4.7365\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_rouge1 = 28.3215\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_rouge2 = 9.8609\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_rougeL = 20.1687\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_rougeLsum = 25.0959\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_runtime = 37.8969\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_samples = 500\r\n02/19/2021 15:02:13 - INFO - __main__ - eval_samples_per_second = 13.194\r\n\r\n\r\nresults: the preprocessed one scores slightly better\r\n\r\n\r\n\r\n# xsum\r\n\r\nwget https://cdn-datasets.huggingface.co/summarization/xsum.tar.gz\r\ntar -xzvf xsum.tar.gz\r\n\r\nexport BS=16 MODEL=t5-small; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 examples/legacy/seq2seq/finetune_trainer.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --data_dir xsum --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 50 --n_train 2000 --n_val 500 --predict_with_generate --task summarization\r\n\r\n2021-02-19 15:25:32 | INFO | __main__ | val_gen_len = 42.7\r\n2021-02-19 15:25:32 | INFO | __main__ | val_loss = 2272.3525\r\n2021-02-19 15:25:32 | INFO | __main__ | val_n_objs = 500\r\n2021-02-19 15:25:32 | INFO | __main__ | val_rouge1 = 20.6343\r\n2021-02-19 15:25:32 | INFO | __main__ | val_rouge2 = 2.8416\r\n2021-02-19 15:25:32 | INFO | __main__ | val_rougeL = 14.3483\r\n2021-02-19 15:25:32 | INFO | __main__ | val_rougeLsum = 14.8529\r\n2021-02-19 15:25:32 | INFO | __main__ | val_runtime = 51.8796\r\n2021-02-19 15:25:32 | INFO | __main__ | val_samples_per_second = 9.638\r\n\r\nvs normal cnn_dailymail 3.0.0 dataset\r\n\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir \\\r\n--adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps \\\r\n--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 \\\r\n--max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS \\\r\n--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler \\\r\n--task summarization --val_max_target_length 128 --warmup_steps 500 \\\r\n--max_train_samples 2000 --max_val_samples 500 \\\r\n--dataset_name xsum\r\n\r\n02/19/2021 15:23:38 - INFO - __main__ - epoch = 1.0\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_gen_len = 56.858\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_loss = 5.2487\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_rouge1 = 18.0063\r\n02/19/2021 15:23:38 - INFO - __main__ - 
eval_rouge2 = 2.276\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_rougeL = 12.8842\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_rougeLsum = 13.9633\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_runtime = 31.2343\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_samples = 500\r\n02/19/2021 15:23:38 - INFO - __main__ - eval_samples_per_second = 16.008\r\n\r\nresults: the preprocessed one scores slightly better\r\n```"
] | 1,612 | 1,615 | 1,613 | CONTRIBUTOR | null | We need to port these to jsonlines and ideally make them part of dataset hub:
https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
https://cdn-datasets.huggingface.co/translation/wmt_en_de.tgz
https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz
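For reference, the target jsonlines format puts one standalone JSON record per line; for the translation pairs each line would look roughly like:
```json
{"translation": {"en": "...", "ro": "..."}}
```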
The problem is that nobody knows how these came to be - they are pre-processed but it's unclear how.
@sshleifer, do you by chance remember how these were created? If we put those on the new datasets hub - it'd be good to explain how these are different from normal wmt datasets.
Also do you remember which wmtXX they came from?
Thank you!
----
The resolution is here: https://github.com/huggingface/transformers/issues/10044#issuecomment-779555741 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10044/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10043/comments | https://api.github.com/repos/huggingface/transformers/issues/10043/events | https://github.com/huggingface/transformers/pull/10043 | 802,622,141 | MDExOlB1bGxSZXF1ZXN0NTY4NzI5NTYz | 10,043 | [s2s examples] README.md fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | This PR:
* fixes a command-line arg typo
* clarifies that it's jsonlines format and not json that's expected
* adds a link explaining jsonlines
@sgugger, how do we apply the auto-re-wrapping to examples README.md files? Currently the new file is all very long lines. Thank you!
@patil-suraj, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10043/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10043",
"html_url": "https://github.com/huggingface/transformers/pull/10043",
"diff_url": "https://github.com/huggingface/transformers/pull/10043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10043.patch",
"merged_at": 1612749095000
} |
https://api.github.com/repos/huggingface/transformers/issues/10042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10042/comments | https://api.github.com/repos/huggingface/transformers/issues/10042/events | https://github.com/huggingface/transformers/issues/10042 | 802,621,215 | MDU6SXNzdWU4MDI2MjEyMTU= | 10,042 | Pegasus ONNX format? | {
"login": "Ashwin367",
"id": 67852248,
"node_id": "MDQ6VXNlcjY3ODUyMjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/67852248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ashwin367",
"html_url": "https://github.com/Ashwin367",
"followers_url": "https://api.github.com/users/Ashwin367/followers",
"following_url": "https://api.github.com/users/Ashwin367/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashwin367/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ashwin367/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashwin367/subscriptions",
"organizations_url": "https://api.github.com/users/Ashwin367/orgs",
"repos_url": "https://api.github.com/users/Ashwin367/repos",
"events_url": "https://api.github.com/users/Ashwin367/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ashwin367/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"follow this [answer](https://stackoverflow.com/a/66117248/13273054), posted on StackOverflow.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | I tried to convert Pegasus to the ONNX format using [this ](https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb#scrollTo=foYlXrSksR_v) guide but it failed. Can Pegasus be converted to the ONNX format or is that not possible yet? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10042/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10041/comments | https://api.github.com/repos/huggingface/transformers/issues/10041/events | https://github.com/huggingface/transformers/pull/10041 | 802,602,417 | MDExOlB1bGxSZXF1ZXN0NTY4NzE0Nzcz | 10,041 | [s2s] Can't mix --fp16 and --device cpu | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not following, `finetune_trainer.py` is replaced by `run_seq2seq.py`.\r\n\r\nWhat replaces these 3?\r\n```\r\nrun_distributed_eval.py\r\nrun_eval.py\r\nrun_eval_search.py\r\n```",
"Not familiar with `run_eval_search` but the new `run_seq2seq` is supposed to run the right evaluation (so no need for `run_eval`) and can do it in a distributed fashion (so no need for `run_eval_distributed`). But I may have missed something.",
"`run_eval_search` uses `run_eval` to find the best hparams. It's not an example, but a real tool to pick the best initial model config hparams when a new model is ported. Written by yours truly.\r\n\r\nSo it probably needs to be ported to use `run_seq2seq` then.\r\n"
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | This PR fixes this user-side error:
```
RuntimeError: "threshold_cpu" not implemented for 'Half'
```
reported at https://github.com/huggingface/transformers/issues/10040
This combination `--fp16 --device cpu` is not possible, as explained here: https://github.com/pytorch/pytorch/issues/48245#issuecomment-730714723
and it's not really usable anyway - it takes minutes to run fp16 on cpu while it takes a split second on the gpu.
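A minimal sketch of the kind of guard this PR adds (argument names illustrative, not the exact diff):
```python
if args.fp16 and args.device == "cpu":
    raise ValueError("Can't mix --fp16 and --device cpu")
```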
full trace:
```
export DATA_DIR=wmt_en_ro; PYTHONPATH=../../src ./run_eval.py t5-base $DATA_DIR/val.source t5_val_generations.txt --reference_path $DATA_DIR/val.target --score_path enro_bleu.json --task translation_en_to_ro --n_obs 100 --device cpu --fp16 --bs 32
Traceback (most recent call last):
File "./run_eval.py", line 176, in <module>
run_generate(verbose=True)
File "./run_eval.py", line 137, in run_generate
runtime_metrics = generate_summaries_or_translations(
File "./run_eval.py", line 67, in generate_summaries_or_translations
summaries = model.generate(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-cpu-no-fp16/src/transformers/generation_utils.py", line 847, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/mnt/nvme1/code/huggingface/transformers-cpu-no-fp16/src/transformers/generation_utils.py", line 379, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-cpu-no-fp16/src/transformers/models/t5/modeling_t5.py", line 946, in forward
layer_outputs = layer_module(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-cpu-no-fp16/src/transformers/models/t5/modeling_t5.py", line 683, in forward
hidden_states = self.layer[-1](hidden_states)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-cpu-no-fp16/src/transformers/models/t5/modeling_t5.py", line 299, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/mnt/nvme1/code/huggingface/transformers-cpu-no-fp16/src/transformers/models/t5/modeling_t5.py", line 258, in forward
hidden_states = F.relu(hidden_states)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/functional.py", line 1206, in relu
result = torch.relu(input)
RuntimeError: "threshold_cpu" not implemented for 'Half'
```
@sgugger, @patil-suraj
Fixes: https://github.com/huggingface/transformers/issues/10040 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10041/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10041",
"html_url": "https://github.com/huggingface/transformers/pull/10041",
"diff_url": "https://github.com/huggingface/transformers/pull/10041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10041.patch",
"merged_at": 1612749260000
} |
https://api.github.com/repos/huggingface/transformers/issues/10040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10040/comments | https://api.github.com/repos/huggingface/transformers/issues/10040/events | https://github.com/huggingface/transformers/issues/10040 | 802,594,219 | MDU6SXNzdWU4MDI1OTQyMTk= | 10,040 | seq2seq: fail gracefully when predicting using --device cpu and --fp16 | {
"login": "PeterAJansen",
"id": 3813268,
"node_id": "MDQ6VXNlcjM4MTMyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterAJansen",
"html_url": "https://github.com/PeterAJansen",
"followers_url": "https://api.github.com/users/PeterAJansen/followers",
"following_url": "https://api.github.com/users/PeterAJansen/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterAJansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterAJansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterAJansen/subscriptions",
"organizations_url": "https://api.github.com/users/PeterAJansen/orgs",
"repos_url": "https://api.github.com/users/PeterAJansen/repos",
"events_url": "https://api.github.com/users/PeterAJansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterAJansen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,612 | 1,612 | 1,612 | NONE | null | When using the recommended seq2seq evaluation procedure in https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md :
```
export DATA_DIR=wmt_en_ro
./run_eval.py t5-base \
$DATA_DIR/val.source t5_val_generations.txt \
--reference_path $DATA_DIR/val.target \
--score_path enro_bleu.json \
--task translation_en_to_ro \
--n_obs 100 \
--device cuda \
--fp16 \
--bs 32
```
If `--device cuda` is switched to `--device cpu` then it eventually fails with a pytorch error if `--fp16` is also enabled:
```
"threshold_cpu" not implemented for 'Half'
```
It seems that fp16 and cpu evaluation may currently be incompatible.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10040/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10039/comments | https://api.github.com/repos/huggingface/transformers/issues/10039/events | https://github.com/huggingface/transformers/pull/10039 | 802,576,192 | MDExOlB1bGxSZXF1ZXN0NTY4Njk0OTcx | 10,039 | [trainer] deepspeed bug fixes and tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"> Thanks for the PR. Concerning the new `test_deepspeed.py` file, the goal is to remove things from the seq2seq folder to make it less intimidating to new users, not add stuff in it ;-)\r\n> \r\n> Maybe we should put all tests in examples/tests/, that would be easier.\r\n\r\nI'm open to suggestions, this is not really an addition but a split or a larger test file as it was becoming unnecessarily complicated. \r\n\r\nThis PR is a bug fix and the scene will evolve to be more user-friendly, but let's discuss this post merge to enable users to do their work.\r\n\r\nI will start a new issue discussing your suggestion. https://github.com/huggingface/transformers/issues/10076",
"Well it's not just a bug fix since you split the test file with it ;-) But I agree we can do the regrouping in another PR.",
"Because I had to add new tests and the previous situation would make things too complicated, so that bug fix required a new test which was the last straw that triggered a need for a dedicated test file. That is, I couldn't easily add a test to the way things were and the test was needed to go with the bug fix. So the split was done out of necessity. "
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | This PR:
* fixes a bug with `model.no_sync()`, which is not supported by DeepSpeed - we had the test but it wasn't running on CI
* fixes a bug when `--train` is not used - we have to `.to(device)` in that case (see the sketch below) - reported here (https://github.com/huggingface/transformers/issues/9996#issuecomment-774232901)
* splits the deepspeed tests into their own dedicated file and will slowly start to build it up - but it's good enough for now - especially since we are going to switch over to `run_seq2seq.py`, except it's not fully ready yet for adoption.
* adds a new test which doesn't use `--train`
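A heavily hedged sketch of the eval-only device-placement fix from the second bullet (attribute names are illustrative; see the PR diff for the real change):
```python
# under DeepSpeed the engine only places the model during training init,
# so an eval-only run has to move it to the device explicitly
if self.args.deepspeed and not self.args.do_train:
    self.model = self.model.to(self.args.device)
```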
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10039/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10039",
"html_url": "https://github.com/huggingface/transformers/pull/10039",
"diff_url": "https://github.com/huggingface/transformers/pull/10039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10039.patch",
"merged_at": 1612806243000
} |
https://api.github.com/repos/huggingface/transformers/issues/10038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10038/comments | https://api.github.com/repos/huggingface/transformers/issues/10038/events | https://github.com/huggingface/transformers/issues/10038 | 802,512,989 | MDU6SXNzdWU4MDI1MTI5ODk= | 10,038 | [examples s2s] run_seq2seq.py tweaks | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"this has been resolved in various PRs. It's looking even better than before."
] | 1,612 | 1,616 | 1,616 | CONTRIBUTOR | null | Would it be possible to sync the new `run_seq2seq.py` with `./finetune_trainer.py` outputs?
Before:
```
2021-02-05 12:59:55 | INFO | __main__ | ***** train metrics *****
2021-02-05 12:59:55 | INFO | __main__ | epoch = 1.0
2021-02-05 12:59:55 | INFO | __main__ | train_n_objs = 60
2021-02-05 12:59:55 | INFO | __main__ | train_runtime = 9.7768
2021-02-05 12:59:55 | INFO | __main__ | train_samples_per_second = 6.137
2021-02-05 12:59:55 | INFO | __main__ | *** Evaluate ***
[INFO|trainer.py:1600] 2021-02-05 12:59:55,434 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-05 12:59:55,434 >> Num examples = 10
[INFO|trainer.py:1602] 2021-02-05 12:59:55,434 >> Batch size = 1
2021-02-05 13:00:00 | INFO | __main__ | ***** val metrics *****
2021-02-05 13:00:00 | INFO | __main__ | epoch = 1.0
2021-02-05 13:00:00 | INFO | __main__ | val_bleu = 33.3125
2021-02-05 13:00:00 | INFO | __main__ | val_gen_len = 50.1
2021-02-05 13:00:00 | INFO | __main__ | val_loss = inf
2021-02-05 13:00:00 | INFO | __main__ | val_n_objs = 10
2021-02-05 13:00:00 | INFO | __main__ | val_runtime = 4.7266
2021-02-05 13:00:00 | INFO | __main__ | val_samples_per_second = 2.116
2021-02-05 13:00:00 | INFO | __main__ | *** Predict ***
```
With the new script:
```
02/05/2021 13:00:41 - INFO - __main__ - ***** Train results *****
02/05/2021 13:00:41 - INFO - __main__ - epoch = 1.0
02/05/2021 13:00:41 - INFO - __main__ - train_runtime = 1.33
02/05/2021 13:00:41 - INFO - __main__ - train_samples_per_second = 3.008
02/05/2021 13:00:41 - INFO - __main__ - *** Evaluate ***
***** Running Evaluation *****
Num examples = 100
Batch size = 32
02/05/2021 13:00:42 - INFO - __main__ - ***** Eval results *****
02/05/2021 13:00:42 - INFO - __main__ - epoch = 1.0
02/05/2021 13:00:42 - INFO - __main__ - eval_bleu = 1.3059269919149237
02/05/2021 13:00:42 - INFO - __main__ - eval_gen_len = 17.66
02/05/2021 13:00:42 - INFO - __main__ - eval_loss = 5.084951400756836
02/05/2021 13:00:42 - INFO - __main__ - eval_runtime = 0.7079
02/05/2021 13:00:42 - INFO - __main__ - eval_samples_per_second = 141.261
```
As you can see:
1. the metric values aren't rounded
2. `*_n_objs` is missing from the metrics
3. logging is inconsistent.
The old script has its own issues with log consistency, and this one introduces its own inconsistencies.
To make the results easy to read, the user-targeted output should ideally be aligned on one side rather than in 2 columns - i.e. left column date etc., right column information. In the new script, `***** Running Evaluation *****` is also missing the logger prefix.
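A minimal sketch of a fix for point (1) (a hypothetical helper, not part of either script):
```python
def round_metrics(metrics, digits=4):
    # round float metric values before logging so the two scripts match
    return {k: round(v, digits) if isinstance(v, float) else v for k, v in metrics.items()}
```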
Thank you!
@patil-suraj, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10038/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10037/comments | https://api.github.com/repos/huggingface/transformers/issues/10037/events | https://github.com/huggingface/transformers/pull/10037 | 802,509,518 | MDExOlB1bGxSZXF1ZXN0NTY4NjQwOTIy | 10,037 | [examples] make run scripts executable | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Totally!\r\n\r\nJust this one:\r\n```\r\n/templates/adding_a_new_example_script/{{cookiecutter.directory_name}}/run_{{cookiecutter.example_shortcut}}.py\r\n```\r\ncorrect?\r\n\r\nAny other scripts that I didn't catch with `examples/*/run_*.py`?\r\n",
"I don't think so, we should be good."
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | For consistency and convenience of not needing to type `python` to run a script, this PR adds a python shebang to all `examples/*/run_*.py` scripts and makes the scripts executable.
Was done with:
```
perl -0777 -pi -e 's|^|#!/usr/bin/env python\n|' */run_*.py
chmod a+x */run_*.py
```
@patil-suraj , @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10037/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10037",
"html_url": "https://github.com/huggingface/transformers/pull/10037",
"diff_url": "https://github.com/huggingface/transformers/pull/10037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10037.patch",
"merged_at": 1612569079000
} |
https://api.github.com/repos/huggingface/transformers/issues/10036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10036/comments | https://api.github.com/repos/huggingface/transformers/issues/10036/events | https://github.com/huggingface/transformers/issues/10036 | 802,491,462 | MDU6SXNzdWU4MDI0OTE0NjI= | 10,036 | [s2s examples] convert existing scripts to run_seq2seq.py from finetune_trainer.py | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"`run_seq2seq.py` didn't survive for long, it's no more in master, so yet another automatic conversion for translation scripts is:\r\n\r\n```\r\nperl -pi -e 's|run_seq2seq.py|run_translation.py|g; s|--task translation_(\\w\\w)_to_(\\w\\w)|--source_lang $1 --target_lang $2|;' process.txt\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi there, I found there might be a typo in your script.\r\n\r\nthe `n_val` param was renamed to `max_eval_samples` in `examples/pytorch/translation/run_translation.py`m not `max_val_samples`\r\n\r\nI'm not sure whether I'm correct, as I'm totally new in funetuning.",
"Examples are just that and there is no guarantee that they are still the same 2.5 years later from when this thread was started, so chances are very high that many things discussed many years ago won't work now and you have to (1) either read the current version and adapt to it (2) use the transformers version from the date this thread was started and then it'd work as discussed in this thread."
] | 1,612 | 1,699 | 1,619 | CONTRIBUTOR | null | As `transformers` examples are evolving it seems that the good old `finetune_trainer.py` is going to be moved into unmaintained `examples/legacy/` area, and `run_seq2seq.py` is to be the new king, so let's automate this process
Assuming your cmd script is `process.txt` (replace with the file name(s) that you have - one or many), let's auto-adjust it:
1. Renames
```
# main name and args rename
perl -pi -e 's|finetune_trainer|run_seq2seq|g; s#--n_(train|val)#--max_$1_samples#g; \
s|--src_lang|--source_lang|g; s|--tgt_lang|--target_lang|g; s|--eval_beams|--num_beams|' process.txt
# drop no longer supported args
perl -pi -e 's|--freeze_embeds||; s|--test_max_target_length[ =]+\d+||;' process.txt
```
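For illustration, here is roughly what a command looks like before and after the rewrites above (a hypothetical invocation, trimmed to the affected arguments):
```
# before
python finetune_trainer.py --src_lang en --tgt_lang ro --n_val 500 --eval_beams 4 --freeze_embeds ...

# after
python run_seq2seq.py --source_lang en --target_lang ro --max_val_samples 500 --num_beams 4 ...
```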
2. T5's automatic prefix-adding has been dropped, so you need to add the prefix manually, e.g.:
```
--source_prefix "translate English to Romanian: "
```
otherwise the results would be terrible.
3. Datasets are handled differently
a. you need to convert the plain-text dataset into JSON Lines (unless the data is already on the datasets hub);
instructions: https://huggingface.co/docs/datasets/loading_datasets.html#json-files
b. new arguments:
instead of
```
--data_dir {data_dir}
```
now you need:
```
--train_file {data_dir}/train.json
--validation_file {data_dir}/val.json
```
Here is an example conversion script for the `wmt_en_ro` dataset:
```
# convert.py
# Converts the pair of plain-text files {split}.source / {split}.target
# into the jsonlines format expected by run_seq2seq.py.
import io
import json
import re

src_lang, tgt_lang = ["en", "ro"]

for split in ["train", "val", "test"]:
    recs = []
    fout = f"{split}.json"
    with io.open(fout, "w", encoding="utf-8") as f:
        # collect the source lines, then the target lines
        for side in ["source", "target"]:
            fin = f"{split}.{side}"
            recs.append([line.strip() for line in open(fin)])
        # pair them up and write one JSON record per line
        for src, tgt in zip(*recs):
            out = {"translation": {src_lang: src, tgt_lang: tgt}}
            x = json.dumps(out, indent=0, ensure_ascii=False)
            x = re.sub(r"\n", " ", x, 0, re.M)
            f.write(x + "\n")
```
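For reference, after this conversion each line of the resulting `train.json` should come out roughly like this (the sentences here are made up):
```
{"translation": { "en": "A sample sentence.", "ro": "O propoziție exemplu." }}
```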
Or, if you find an existing dataset in `datasets`, you can supply it instead of the `--data_dir` arg as follows:
```
--dataset_name wmt16 --dataset_config_name ro-en
```
Here is the full conversion table for the 4 datasets previously recommended in the `examples/seq2seq` folder:
* [x] `--data_dir wmt_en_de` => `--dataset_name wmt14 --dataset_config "de-en"` or if you want the highest score use: `--dataset_name wmt14-en-de-pre-processed`
* [x] `--data_dir wmt_en_ro` => `--dataset_name wmt16 --dataset_config "ro-en"`
* [x] `--data_dir cnn_dm` => `--dataset_name cnn_dailymail --dataset_config "3.0.0"`
* [x] `--data_dir xsum` => `--dataset_name xsum`
You will find more details [here](https://github.com/huggingface/transformers/issues/10044#issuecomment-779555741)
----
t5-specific changes (from https://github.com/huggingface/transformers/pull/10133#issuecomment-778071812):
1. Use the same dataset
2. if using T5, manually pass the `prefix` argument
3. manually copy the `task_specific_params` to `config`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10036/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10035/comments | https://api.github.com/repos/huggingface/transformers/issues/10035/events | https://github.com/huggingface/transformers/issues/10035 | 802,336,117 | MDU6SXNzdWU4MDIzMzYxMTc= | 10,035 | Cannot import DataCollatorForSeq2Seq from Transformers library | {
"login": "Arman-IMRSV",
"id": 70702200,
"node_id": "MDQ6VXNlcjcwNzAyMjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/70702200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arman-IMRSV",
"html_url": "https://github.com/Arman-IMRSV",
"followers_url": "https://api.github.com/users/Arman-IMRSV/followers",
"following_url": "https://api.github.com/users/Arman-IMRSV/following{/other_user}",
"gists_url": "https://api.github.com/users/Arman-IMRSV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arman-IMRSV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arman-IMRSV/subscriptions",
"organizations_url": "https://api.github.com/users/Arman-IMRSV/orgs",
"repos_url": "https://api.github.com/users/Arman-IMRSV/repos",
"events_url": "https://api.github.com/users/Arman-IMRSV/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arman-IMRSV/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you try running this on master ? I just ran this script on master and it's working fine.",
"As mentioned in the README of the [examples folder](https://github.com/huggingface/transformers/tree/master/examples#important-note), all examples require a [source install](https://huggingface.co/transformers/installation.html#installing-from-source).\r\n\r\n`DataCollatorForSeq2Seq` is not in the last release, it was introduced since then.",
"@sgugger @patil-suraj Thanks :) "
] | 1,612 | 1,612 | 1,612 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 1.7.1 - GPU : T4
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
### Who can help
@patil-suraj
@sgugger
## Information
Model I am using (Bert, XLNet ...): mT5
The problem arises when using:
* [ ] the official example scripts: run_seq2seq.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Translation
* [ ] my own task or dataset: My own json dataset
## To reproduce
Steps to reproduce the behavior:
```
python run_seq2seq.py \
--model_name_or_path google/mt5-small \
--do_train \
--do_eval \
--task translation_en_to_fa \
--train_file Persian/seq2seq_train.json \
--validation_file Persian/seq2seq_val.json \
--output_dir translation_task_output \
--overwrite_output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--predict_with_generate \
--text_column text_column_name \
--max_source_length 64 \
--max_target_length 64 \
--max_train_samples 10240 \
--max_val_samples 512 \
--source_lang en \
--target_lang fa \
--eval_beams 1 \
--source_prefix "translate English to Persian: "
```
## Error:
```
File "run_seq2seq.py", line 31, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForSeq2Seq' from 'transformers' (unknown location)
```
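As noted in the replies above, `DataCollatorForSeq2Seq` is not part of the 4.2.x releases, so running this example requires a source install of `transformers`, for instance (one possible invocation):
```
pip install git+https://github.com/huggingface/transformers
```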
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10035/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10034/comments | https://api.github.com/repos/huggingface/transformers/issues/10034/events | https://github.com/huggingface/transformers/pull/10034 | 802,293,471 | MDExOlB1bGxSZXF1ZXN0NTY4NDYwNjk0 | 10,034 | Truncate max length if needed in all examples | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger Thanks for fixing this. I made this mistake again (running many jobs at once with different models) and my jobs didn't crash (meanwhile I had upgraded to the latest transformers 🚀 )"
] | 1,612 | 1,614 | 1,612 | COLLABORATOR | null | # What does this PR do?
As pointed out in #10015, most examples will let the tokenization and training run when `tokenizer.model_max_length < max_seq_length`, and the default value is sometimes bigger than the max length for some models (like BertTweet). This PR addresses that.
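A minimal sketch of the kind of guard being added (the variable names are the usual ones from the example scripts, not necessarily the exact wording of this PR):

```python
def clamp_max_seq_length(max_seq_length: int, tokenizer) -> int:
    """Truncate the requested sequence length to what the tokenizer/model supports."""
    if max_seq_length > tokenizer.model_max_length:
        print(
            f"max_seq_length ({max_seq_length}) is larger than the model's maximum "
            f"({tokenizer.model_max_length}); truncating to the model's maximum."
        )
    return min(max_seq_length, tokenizer.model_max_length)
```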
Fixes #10015 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10034/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10034",
"html_url": "https://github.com/huggingface/transformers/pull/10034",
"diff_url": "https://github.com/huggingface/transformers/pull/10034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10034.patch",
"merged_at": 1612778636000
} |
https://api.github.com/repos/huggingface/transformers/issues/10033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10033/comments | https://api.github.com/repos/huggingface/transformers/issues/10033/events | https://github.com/huggingface/transformers/pull/10033 | 802,270,771 | MDExOlB1bGxSZXF1ZXN0NTY4NDQyMjA3 | 10,033 | A few fixes in the documentation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | COLLABORATOR | null | # What does this PR do?
This PR fixes a few things in the documentation, mainly:
- updates version tags to reflect the latest patch release (4.2.2)
- documents `decode` and `batch_decode` in `PreTrainedTokenizer` and `PreTrainedTokenizerFast`.
Fixes #10019
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10033",
"html_url": "https://github.com/huggingface/transformers/pull/10033",
"diff_url": "https://github.com/huggingface/transformers/pull/10033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10033.patch",
"merged_at": 1612778521000
} |
https://api.github.com/repos/huggingface/transformers/issues/10032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10032/comments | https://api.github.com/repos/huggingface/transformers/issues/10032/events | https://github.com/huggingface/transformers/issues/10032 | 802,263,782 | MDU6SXNzdWU4MDIyNjM3ODI= | 10,032 | generation length always equal to 20 when using run_seq2seq.py script | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @moussaKam \r\n\r\nThank you for reporting this.\r\nCould you try evaluating directly using the generate method and calculate `avg eval_gen_len` ?",
"Hi @patil-suraj\r\n\r\nI tried with the `generate` method, the model is well trained and can generate sequences with more than 20 tokens. Apparently the problem is in the prediction in the `run_seq2seq.py` script.",
"The issue was that the argument `val_max_target_length ` was never passed to `evaluate`, so `generate` used the default value for `max_length` which is 20. It'll be fixed after #10085 ",
"Thanks @patil-suraj "
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-4.4.0-197-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @LysandreJik
Model I am using : any seq2seq model
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: cnn-dailymail / xsum
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --do_train --do_eval --task summarization --dataset_name cnn_dailymail --dataset_config_name 3.0.0 --output_dir tmp --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --max_steps 250 --eval_steps 249 --save_steps 249 --max_val_samples 250 --max_target_length 100 --val_max_target_length 100 --predict_with_generate
```
## Expected behavior
```
02/05/2021 16:36:34 - INFO - __main__ - ***** Eval results *****
02/05/2021 16:36:34 - INFO - __main__ - epoch = 0.01
02/05/2021 16:36:34 - INFO - __main__ - eval_gen_len = 19.0
02/05/2021 16:36:34 - INFO - __main__ - eval_loss = 2.064879894256592
02/05/2021 16:36:34 - INFO - __main__ - eval_rouge1 = 21.837379907220157
02/05/2021 16:36:34 - INFO - __main__ - eval_rouge2 = 7.506564948396541
02/05/2021 16:36:34 - INFO - __main__ - eval_rougeL = 18.074704390199546
02/05/2021 16:36:34 - INFO - __main__ - eval_rougeLsum = 17.99211046381146
02/05/2021 16:36:34 - INFO - __main__ - eval_runtime = 19.079
02/05/2021 16:36:34 - INFO - __main__ - eval_samples_per_second = 13.103
```
`eval_gen_len` never exceeds 20 for some reason. I tried with several models and datasets.
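Following the suggestion in the replies above, one quick way to check the model outside the script is to call `generate` directly with an explicit `max_length` (a minimal sketch; the input text is a placeholder):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = "summarize: " + "Some long article text goes here. " * 20  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
# Without an explicit max_length, generate() falls back to the default of 20 tokens.
out = model.generate(**inputs, max_length=100, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```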
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10032/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10031/comments | https://api.github.com/repos/huggingface/transformers/issues/10031/events | https://github.com/huggingface/transformers/issues/10031 | 802,236,731 | MDU6SXNzdWU4MDIyMzY3MzE= | 10,031 | AttributeError: module 'transformers' has no attribute 'PegasusForCausalLM' | {
"login": "ibrahim-elsawy",
"id": 53919684,
"node_id": "MDQ6VXNlcjUzOTE5Njg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53919684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibrahim-elsawy",
"html_url": "https://github.com/ibrahim-elsawy",
"followers_url": "https://api.github.com/users/ibrahim-elsawy/followers",
"following_url": "https://api.github.com/users/ibrahim-elsawy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibrahim-elsawy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibrahim-elsawy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibrahim-elsawy/subscriptions",
"organizations_url": "https://api.github.com/users/ibrahim-elsawy/orgs",
"repos_url": "https://api.github.com/users/ibrahim-elsawy/repos",
"events_url": "https://api.github.com/users/ibrahim-elsawy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibrahim-elsawy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this model was only added recently, it's not in v4.2.1 (check the [documentation](https://huggingface.co/transformers/model_doc/pegasus.html)). \r\nYou need to install the pre-release:\r\n```\r\npip install transformers --pre\r\n```\r\nor [install from source](https://huggingface.co/transformers/installation.html#installing-from-source) to use it.\r\n\r\n"
] | 1,612 | 1,612 | 1,612 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-5.8.0-40-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger @patil-suraj
Models:
-tuner007/pegasus_paraphrase
## Information
Trying to import `PegasusForCausalLM` from transformers, I got `AttributeError: module 'transformers' has no attribute 'PegasusForCausalLM'`.
## To reproduce
```python
from transformers import PegasusForCausalLM, AutoConfig

model_name = "tuner007/pegasus_paraphrase"
output_logits = True
output_hidden_states = True
p_config = AutoConfig.from_pretrained(
    model_name,
    output_hidden_states=output_hidden_states,
    output_logits=output_logits,
)
pegasus = PegasusForCausalLM.from_pretrained(model_name, config=p_config)
```
```
AttributeError: module transformers has no attribute PegasusForCausalLM
```
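As the reply above points out, `PegasusForCausalLM` was only added after the 4.2.x releases, so after upgrading (`pip install transformers --pre` or a source install) a quick sanity check could look like this (sketch):

```python
import transformers

print(transformers.__version__)  # needs a release newer than 4.2.2 for PegasusForCausalLM
assert hasattr(transformers, "PegasusForCausalLM")
```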
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10031/timeline | completed | null | null |