url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/11744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11744/comments | https://api.github.com/repos/huggingface/transformers/issues/11744/events | https://github.com/huggingface/transformers/pull/11744 | 893,172,542 | MDExOlB1bGxSZXF1ZXN0NjQ1NzMxMjc2 | 11,744 | [BigBird Pegasus] Make tests faster | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
BigBird Pegasus tests are now faster.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11744/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11744",
"html_url": "https://github.com/huggingface/transformers/pull/11744",
"diff_url": "https://github.com/huggingface/transformers/pull/11744.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11744.patch",
"merged_at": 1621247454000
} |
https://api.github.com/repos/huggingface/transformers/issues/11743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11743/comments | https://api.github.com/repos/huggingface/transformers/issues/11743/events | https://github.com/huggingface/transformers/issues/11743 | 893,161,555 | MDU6SXNzdWU4OTMxNjE1NTU= | 11,743 | Wrong output used by RobertaForSequenceClassification classification head | {
"login": "slowwavesleep",
"id": 44175589,
"node_id": "MDQ6VXNlcjQ0MTc1NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slowwavesleep",
"html_url": "https://github.com/slowwavesleep",
"followers_url": "https://api.github.com/users/slowwavesleep/followers",
"following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}",
"gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions",
"organizations_url": "https://api.github.com/users/slowwavesleep/orgs",
"repos_url": "https://api.github.com/users/slowwavesleep/repos",
"events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}",
"received_events_url": "https://api.github.com/users/slowwavesleep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\n\r\nI'd still like to get a comment on whether this is the intended behavior. If it is, then why it is done this way?",
"Hello! Sorry for getting back to this so late.\r\n\r\nWhen porting models over to the `transformers` library, we aim to keep them identical to their original implementation. The original RoBERTa implementation in fairseq uses the same classification head, hence why it was ported like this: https://github.com/pytorch/fairseq/blob/c2e8904b6072d8eddab362ac50b324e374b5951d/fairseq/models/roberta/model.py#L382\r\n\r\nI recommend opening an issue over at fairseq if you have questions relative to how they designed their architecture. Thank you!",
"Ah, I see. Didn't realize it's done the same way in the original implementation. Thank you for your response!"
] | 1,621 | 1,624 | 1,624 | NONE | null | Hi,
According to the [documentation](https://huggingface.co/transformers/model_doc/roberta.html#transformers.RobertaForSequenceClassification), the classification head should work `on top of the pooled output`, which makes sense considering the fact that RoBERTa, unlike BERT, wasn't trained on the Next Sentence Prediction task, so the `<s>` token equivalent to `CLS` is not as useful. However, if you look at the code [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L1166) and [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L1394) you'll see that it doesn't appear to actually use the pooler output for Sequence Classification, as one would expect. Meanwhile, RobertaForMultipleChoice does use it; a short demonstration of the distinction is sketched below.
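A minimal, self-contained check (assuming the stock `roberta-base` checkpoint; the point is only that `pooler_output` differs from the `<s>` hidden state the classification head actually consumes):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

outputs = model(**tokenizer("A short example.", return_tensors="pt"))
s_token_state = outputs.last_hidden_state[:, 0, :]  # what the classification head sees
print(torch.allclose(s_token_state, outputs.pooler_output))  # False: the pooler adds dense + tanh
```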
It's not clear to me whether this is intended or not; however, RoBERTa using a representation of `<s>` for classification may perform considerably _worse_ than a regular BERT on some tasks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11743/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11743/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11742/comments | https://api.github.com/repos/huggingface/transformers/issues/11742/events | https://github.com/huggingface/transformers/pull/11742 | 893,158,827 | MDExOlB1bGxSZXF1ZXN0NjQ1NzE5NzAw | 11,742 | Issue with symbolic tracing for T5 | {
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
This solves the issue with symbolic tracing for T5; an illustrative invocation is sketched below.
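For context, a sketch of the kind of call this PR fixes (`symbolic_trace` lives in `transformers.utils.fx`; the input names here are illustrative):
```python
from transformers import T5ForConditionalGeneration
from transformers.utils.fx import symbolic_trace

model = T5ForConditionalGeneration.from_pretrained("t5-small")
# Tracing previously failed for T5; with this PR the call goes through.
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask", "decoder_input_ids"])
```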
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11742/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11742",
"html_url": "https://github.com/huggingface/transformers/pull/11742",
"diff_url": "https://github.com/huggingface/transformers/pull/11742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11742.patch",
"merged_at": 1621246651000
} |
https://api.github.com/repos/huggingface/transformers/issues/11741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11741/comments | https://api.github.com/repos/huggingface/transformers/issues/11741/events | https://github.com/huggingface/transformers/issues/11741 | 893,134,525 | MDU6SXNzdWU4OTMxMzQ1MjU= | 11,741 | Convert blenderbot checkpoint to tensorflow (TF) | {
"login": "sooftware",
"id": 42150335,
"node_id": "MDQ6VXNlcjQyMTUwMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sooftware",
"html_url": "https://github.com/sooftware",
"followers_url": "https://api.github.com/users/sooftware/followers",
"following_url": "https://api.github.com/users/sooftware/following{/other_user}",
"gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sooftware/subscriptions",
"organizations_url": "https://api.github.com/users/sooftware/orgs",
"repos_url": "https://api.github.com/users/sooftware/repos",
"events_url": "https://api.github.com/users/sooftware/events{/privacy}",
"received_events_url": "https://api.github.com/users/sooftware/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten Can you help me?",
"I convert Parl-AI's checkpoint to huggingface using `convert_blenderbot_original_pytorch_checkpoint_to_pytorch.py`. \r\nAnd I convert pytorch checkpoint to tf checkpoint using `convert_pytorch_checkpoint_to_tf2.py`. \r\n \r\nIf there is something wrong, please comment.",
"Hey @sooftware,\r\n\r\nCould you add a code snippet you are trying to execute here? E.g. which checkpoint do you want to convert exactly?",
"Hi @patrickvonplaten !! \r\n \r\nI want to convert Parl-AI's blanerbot (3B, 9B) models. \r\nI tried to convert Parl-AI to huggingface checkpoint by `convert_blenderbot_original_pytorch_checkpoint_to_pytorch.py`. \r\n \r\nI change some keys. \r\n \r\n```python\r\ndef rename_layernorm_keys(sd): \r\n keys = [ \r\n \"model.encoder.layernorm_embedding.weight\", \r\n \"model.encoder.layernorm_embedding.bias\", \r\n \"model.decoder.layernorm_embedding.weight\", \r\n \"model.decoder.layernorm_embedding.bias\", \r\n ]\r\n```\r\n \r\nEx) `model.encoder.layernorm_embedding.weight` => `encoder.norm_embeddings.weight`. \r\n \r\nAnd I got a config file by `wget https://huggingface.co/facebook/blenderbot-3B/resolve/main/config.json`. \r\n \r\nNext, I tried to convert huggingface pytorch checkpoint to tensorflow checkpoint by `huggingface convert_pytorch_to_tf.py`. \r\nI set `model_type` by `bart`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,623 | 1,623 | NONE | null | Hi! Thank you for a great project.
I wonder if I can convert a Blenderbot checkpoint to TensorFlow; a hedged sketch of one possible route is below.
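One route that may work (an illustrative sketch, not an official recipe): load the converted PyTorch weights directly into the TF class via `from_pt=True`.
```python
from transformers import TFBlenderbotForConditionalGeneration

# The checkpoint name is illustrative; a local directory containing converted
# PyTorch weights should work the same way.
tf_model = TFBlenderbotForConditionalGeneration.from_pretrained(
    "facebook/blenderbot-400M-distill", from_pt=True
)
tf_model.save_pretrained("blenderbot-tf")  # writes a TF checkpoint
```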
If there is a better or official way to convert the checkpoint, please give me some pointers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11741/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11740/comments | https://api.github.com/repos/huggingface/transformers/issues/11740/events | https://github.com/huggingface/transformers/pull/11740 | 893,118,113 | MDExOlB1bGxSZXF1ZXN0NjQ1Njg1NzM5 | 11,740 | Add visual + link to Premium Support webpage | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Built docs is at https://212403-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html\r\n\r\n<img width=\"1360\" alt=\"Screenshot 2021-05-17 at 11 08 58\" src=\"https://user-images.githubusercontent.com/326577/118463639-ff238180-b6cd-11eb-8c45-a6a1e6471b60.png\">\r\n",
"CI failure seems unrelated"
] | 1,621 | 1,621 | 1,621 | MEMBER | null | Close #11635 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11740/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11740",
"html_url": "https://github.com/huggingface/transformers/pull/11740",
"diff_url": "https://github.com/huggingface/transformers/pull/11740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11740.patch",
"merged_at": 1621243737000
} |
https://api.github.com/repos/huggingface/transformers/issues/11739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11739/comments | https://api.github.com/repos/huggingface/transformers/issues/11739/events | https://github.com/huggingface/transformers/pull/11739 | 893,047,267 | MDExOlB1bGxSZXF1ZXN0NjQ1NjI1NTk5 | 11,739 | Remove tapas model card | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | MEMBER | null | the one in https://huggingface.co/google/tapas-base is slightly but not significantly different.
cc @NielsRogge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11739/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11739/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11739",
"html_url": "https://github.com/huggingface/transformers/pull/11739",
"diff_url": "https://github.com/huggingface/transformers/pull/11739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11739.patch",
"merged_at": 1621240957000
} |
https://api.github.com/repos/huggingface/transformers/issues/11738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11738/comments | https://api.github.com/repos/huggingface/transformers/issues/11738/events | https://github.com/huggingface/transformers/pull/11738 | 892,755,441 | MDExOlB1bGxSZXF1ZXN0NjQ1Mzc3NDU4 | 11,738 | Remove extra self from _save_checkpoint call | {
"login": "orf",
"id": 1027207,
"node_id": "MDQ6VXNlcjEwMjcyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1027207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orf",
"html_url": "https://github.com/orf",
"followers_url": "https://api.github.com/users/orf/followers",
"following_url": "https://api.github.com/users/orf/following{/other_user}",
"gists_url": "https://api.github.com/users/orf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orf/subscriptions",
"organizations_url": "https://api.github.com/users/orf/orgs",
"repos_url": "https://api.github.com/users/orf/repos",
"events_url": "https://api.github.com/users/orf/events{/privacy}",
"received_events_url": "https://api.github.com/users/orf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a PR against an older version of Transformers, which we do not accept. This code has been completely removed since then and is now fully integrated into Trainer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,623 | 1,623 | NONE | null | Currently this code is completely broken with non-distributed training. I'm not clear on how it has ever worked:
```
File "run.py", line 152, in <module>
trainer.train() #resume_from_checkpoint=get_last_checkpoint("/opt/ml/checkpoints"))
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1105, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1202, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/opt/conda/lib/python3.6/site-packages/transformers/sagemaker/trainer_sm.py", line 245, in _save_checkpoint
super()._save_checkpoint(self, model, trial, metrics=metrics)
TypeError: _save_checkpoint() got multiple values for argument 'metrics'
```
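For context, a standalone illustration of why the extra `self` breaks the call (class and argument values here are hypothetical stand-ins for the real `Trainer` subclass):
```python
class Base:
    def _save_checkpoint(self, model, trial=None, metrics=None):
        print(model, trial, metrics)

class Child(Base):
    def _save_checkpoint(self, model, trial=None, metrics=None):
        # Buggy form: the explicit `self` fills the `model` slot, so the
        # positional `trial` lands in `metrics` and collides with the keyword:
        #   super()._save_checkpoint(self, model, trial, metrics=metrics)
        #   -> TypeError: _save_checkpoint() got multiple values for argument 'metrics'
        super()._save_checkpoint(model, trial, metrics=metrics)  # fixed call

Child()._save_checkpoint("model", "trial", metrics="metrics")
```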
This is because the `self` argument shouldn't be passed, so `trial` ends up as `metrics` via its position. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11738/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11738",
"html_url": "https://github.com/huggingface/transformers/pull/11738",
"diff_url": "https://github.com/huggingface/transformers/pull/11738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11738.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11737/comments | https://api.github.com/repos/huggingface/transformers/issues/11737/events | https://github.com/huggingface/transformers/pull/11737 | 892,740,640 | MDExOlB1bGxSZXF1ZXN0NjQ1MzY2Mzk1 | 11,737 | Add regression tests for slow sentencepiece tokenizers. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"rebased on master",
"This PR is ready for review please. @LysandreJik @sgugger \r\n\r\nThe failing test is connected to #11731",
"> Cool, thanks a lot for working on these tests! I think that these are already somewhat covered by the common tests, but they're fast and should help identify issues faster.\r\n> \r\n> However, in order to make sure PR #11716 can be merged, I was mentioning integration tests, rather than regression/unit tests. For example the ALBERT integration test:\r\n> \r\n> https://github.com/huggingface/transformers/blob/b8344a274fe13b390fa60c74b76117f5ea8144cb/tests/test_tokenization_albert.py#L108-L152\r\n> \r\n> Those are particularly important when doing refactors that may affect the encoding/decoding aspect of tokenizers.\r\n> \r\n> I think this is a bit of a larger work though, so we can post \"Good first issues\" for the SPM-based tokenizers in a first step so that the community may help.\r\n\r\nOk @LysandreJik .\r\nSo I will extend the PR and add integration tests for the `_tokenizer` function like the one you linked above to all sentencepiece tokenizers.\r\n\r\nDo you think the already written tests can stay as they are? What other steps are needed?",
"Hi @PhilipMay, thanks for offering to do it! Feel free to let us know if you would like us to offer some of these to the community, as it can be a bit of work to get every tokenizer tested.\r\n\r\nOther than the integration tests, I don't think anything is needed.\r\n\r\nAlso, you might be interested in rebasing on the `master` branch - we've solved the issue regarding the `run_tests_torch` timing out yesterday so by rebasing you would have reliable CI feedback.",
"> Hi @PhilipMay, thanks for offering to do it! Feel free to let us know if you would like us to offer some of these to the community, as it can be a bit of work to get every tokenizer tested.\r\n\r\nI was thinking to add integration tests for the tokeinzers that I want to refactor (the sentencepiece). And not foll all tokenizers.\r\n\r\n**What about this:** In this PR I add integration tests for the **sentencepiece** tokeinzers only - a full list see here #11417 \r\n\r\nAfter that has been merged I (or you) will open an issue asking for similar tests for all tokenizers.\r\n\r\n@LysandreJik what do you think?\r\n",
"Yes, sentencepiece tokenizers only, definitely! But even so, that's quite a large number of tokenizers :)",
"> Also, you might be interested in rebasing on the master branch - we've solved the issue regarding the run_tests_torch timing out yesterday so by rebasing you would have reliable CI feedback.\r\n\r\nRebased on master - CI is green again. :-)",
"@LysandreJik I refactored the tokenizer integration test of albert;\r\n\r\nhttps://github.com/German-NLP-Group/transformers/blob/1894bcc5d116d0107150f4659551e6e21111d736/tests/test_tokenization_albert.py#L127\r\n\r\nBy adding a util class to `TokenizerTesterMixin`\r\nhttps://github.com/German-NLP-Group/transformers/blob/1894bcc5d116d0107150f4659551e6e21111d736/tests/test_tokenization_common.py#L186\r\n\r\nAnd also added an integration test to Barthez:\r\nhttps://github.com/German-NLP-Group/transformers/blob/1894bcc5d116d0107150f4659551e6e21111d736/tests/test_tokenization_barthez.py#L99\r\n\r\nWhat do you think about this \"pattern\"? Should I continue in that direction and add the other tokenizers?",
"I like it, I find it very clean!",
"@LysandreJik the reformer tokenizer integration test somehow fails:\r\n\r\n```text\r\nExpected :Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet...) for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between Jax, PyTorch and TensorFlow.\r\nActual :Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert provides general-purpose architectures (BERT, GPT-, RoBERTa, LM, DistilBert, LNet... for Natural Language nderstanding (NL and Natural Language Generation (NLG with over pretrained models in languages and deep interoperability between ax, PyTorch and TensorFlow.\r\n```\r\n\r\nCharacters like \")\" are missing from the vocab. They are converted to `0` or `<unk>`.\r\n@LysandreJik I just pass in an simpler test text to make the test succeed.\r\nOr should we investigate this stange error to discover a possible hidden bug?",
"@LysandreJik while we are here - can we remove this or is it some kind of \"open todo\"?\r\n\r\nhttps://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/tests/test_tokenization_common.py#L178-L184",
"@LysandreJik @sgugger as discussed above the suggested integration tests are added to the sentencepiece tokenizers.\r\n\r\nCI is green, IMO this is done and ready for merge.\r\n\r\nPlease have a look at the strange behavior of the reformer tokenizer: https://github.com/huggingface/transformers/pull/11737#issuecomment-850769064\r\n\r\nAnd this question: https://github.com/huggingface/transformers/pull/11737#issuecomment-850776366\r\n",
"Pinging @patrickvonplaten regarding the Reformer test.\r\n\r\nRegarding https://github.com/huggingface/transformers/pull/11737#issuecomment-850776366 we can just remove this",
"> Regarding #11737 (comment) we can just remove this\r\n\r\nDone.",
"Ok - so this should be ready to be merged so I can continue with #11716 ?",
"Thanks again for all your work!"
] | 1,621 | 1,622 | 1,622 | CONTRIBUTOR | null | This PR adds regression tests for slow sentencepiece tokenizers. These tests are needed for a refactoring in PR #11716
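For a sense of their shape, an illustrative sketch of one such test follows (the method name mirrors the ToDo list below; the actual implementation in this PR may differ):
```python
def test_convert_token_and_id(self):
    """Regression test for `_convert_token_to_id` and `_convert_id_to_token`."""
    token = "<s>"
    token_id = 0  # see "Strange findings": this does not hold for s2t

    self.assertEqual(self.get_tokenizer()._convert_token_to_id(token), token_id)
    self.assertEqual(self.get_tokenizer()._convert_id_to_token(token_id), token)
```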
## Strange findings
- s2t: `_convert_token_to_id` of `"<s>"` does not give 0
## ToDo
- <s>add test for `_convert_token_to_id`</s> - done, see `test_convert_token_and_id`
- <s>add test for `_convert_id_to_token`</s> - done, see `test_convert_token_and_id`
- <s>add test for `get_vocab`</s> - done
- <s>add test for `vocab_size`</s> - done
- <s>add test for `convert_tokens_to_string`</s> - done, see `test_sentencepiece_tokenize_and_convert_tokens_to_string` in `TokenizerTesterMixin`
- <s>add test for pickle</s> - is tested in `test_pickle_subword_regularization_tokenizer`
- <s>manual review</s> - done
- <s>fix / add reformer integration test</s> - see https://github.com/huggingface/transformers/pull/11737#issuecomment-850769064 - done
- <s>add typing</s> - done
- <s>add docstrings</s> - done | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11737/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11737/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11737",
"html_url": "https://github.com/huggingface/transformers/pull/11737",
"diff_url": "https://github.com/huggingface/transformers/pull/11737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11737.patch",
"merged_at": 1622553879000
} |
https://api.github.com/repos/huggingface/transformers/issues/11736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11736/comments | https://api.github.com/repos/huggingface/transformers/issues/11736/events | https://github.com/huggingface/transformers/pull/11736 | 892,657,258 | MDExOlB1bGxSZXF1ZXN0NjQ1MzA2MzM3 | 11,736 | Support for running Gpt-Neo 2.7B with 6 GB vram for inference | {
"login": "arrmansa",
"id": 41120982,
"node_id": "MDQ6VXNlcjQxMTIwOTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41120982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arrmansa",
"html_url": "https://github.com/arrmansa",
"followers_url": "https://api.github.com/users/arrmansa/followers",
"following_url": "https://api.github.com/users/arrmansa/following{/other_user}",
"gists_url": "https://api.github.com/users/arrmansa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arrmansa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arrmansa/subscriptions",
"organizations_url": "https://api.github.com/users/arrmansa/orgs",
"repos_url": "https://api.github.com/users/arrmansa/repos",
"events_url": "https://api.github.com/users/arrmansa/events{/privacy}",
"received_events_url": "https://api.github.com/users/arrmansa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | # What does this PR do?
It adds functionality that allows GPT-Neo 2.7B to run inference in 6 GB of VRAM.
If it detects that some modules are on the GPU and there is not enough VRAM, a dict called `extrastorage` is created which holds the data for `model.transformer.h` in CPU RAM.
These weights are then loaded from RAM to VRAM one block at a time, reducing peak VRAM usage.
Expected speed is around 1 token per 2 s (slower on the first run).
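A minimal sketch of that block-streaming idea (illustrative only; the PR's actual `extrastorage` implementation differs in detail):
```python
import torch

def stream_blocks(hidden_states, blocks, device="cuda"):
    # `blocks` (e.g. model.transformer.h) start out on the CPU.
    for block in blocks:
        block.to(device)  # copy one block's weights into VRAM
        with torch.no_grad():
            hidden_states = block(hidden_states)[0]
        block.to("cpu")  # evict the block again before loading the next one
        torch.cuda.empty_cache()
    return hidden_states
```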
## Usage
1. Have between 5 and 9.5 GB of VRAM.
2. Run:
```
model.eval().half().to("cpu")       # keep the bulk of the model in CPU RAM
model.transformer.wte.to("cuda")    # token embeddings
model.transformer.wpe.to("cuda")    # position embeddings
model.transformer.ln_f.to("cuda")   # final layer norm
model.lm_head.to("cuda")            # output head
torch.cuda.empty_cache()
```
3. Use `model.generate()` or `model(**inputs)`.
## Motivation
Will become faster as RAM-to-VRAM (PCIe) bandwidth increases.
Running larger models on consumer hardware is important.
## Incomplete
I need some help with the documentation. Also, I'm not sure whether `import copy` should be inside an if statement or not (line 769).
## Before submitting
* [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
* [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
* [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
* [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models:
gpt-neo: @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11736",
"html_url": "https://github.com/huggingface/transformers/pull/11736",
"diff_url": "https://github.com/huggingface/transformers/pull/11736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11736.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11735/comments | https://api.github.com/repos/huggingface/transformers/issues/11735/events | https://github.com/huggingface/transformers/issues/11735 | 892,627,871 | MDU6SXNzdWU4OTI2Mjc4NzE= | 11,735 | Problem with mT5 and the official Summarization notebook | {
"login": "demegire",
"id": 62503047,
"node_id": "MDQ6VXNlcjYyNTAzMDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/62503047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/demegire",
"html_url": "https://github.com/demegire",
"followers_url": "https://api.github.com/users/demegire/followers",
"following_url": "https://api.github.com/users/demegire/following{/other_user}",
"gists_url": "https://api.github.com/users/demegire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/demegire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demegire/subscriptions",
"organizations_url": "https://api.github.com/users/demegire/orgs",
"repos_url": "https://api.github.com/users/demegire/repos",
"events_url": "https://api.github.com/users/demegire/events{/privacy}",
"received_events_url": "https://api.github.com/users/demegire/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"disabling fp16 seems to solve the issue of nan loss, but I wouldn't call this issue closed because this doubles the training time :(",
"Hey @demegire,\r\n\r\nSadly MT5 doesn't really work with fp16. There are a bunch of issues regarding this problem...see:\r\n- https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139/5\r\n- https://github.com/huggingface/transformers/issues/10830",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @patil-suraj @sgugger
## Information
I am using mT5-small on the [official summarization notebook](https://github.com/huggingface/transformers/tree/master/examples/pytorch). However, when trained, the model gets NaN loss values and outputs nonsense.
I made some changes to speed up the training, such as loading 5% of the data, changing the max input length from 1024 to 256, and the batch size from 16 to 8; however, my settings work perfectly fine with t5-small, and I get a high ROUGE score with sensible outputs and loss values. The problem seems to be mT5.
## To reproduce
Here is my [Colab notebook](https://colab.research.google.com/drive/16-6yIHFQQ1Q8meVYqFn21Tw2eoG9wliU?usp=sharing), in which you can see the output at the end.
TrainOutput(global_step=1276, training_loss=nan, metrics={'train_runtime': 340.4428, 'train_samples_per_second': 3.748, 'total_flos': 1196714720985600.0, 'epoch': 1.0, 'init_mem_cpu_alloc_delta': 1543725056, 'init_mem_gpu_alloc_delta': 1200707584, 'init_mem_cpu_peaked_delta': 0, 'init_mem_gpu_peaked_delta': 0, 'train_mem_cpu_alloc_delta': 10055680, 'train_mem_gpu_alloc_delta': 1203914240, 'train_mem_cpu_peaked_delta': 65540096, 'train_mem_gpu_peaked_delta': 4225469440})
## Expected behavior
Training loss should not be NaN.
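For reference, the workaround mentioned in the comments above is to disable fp16; a minimal sketch of what that looks like (assuming the notebook's `Seq2SeqTrainingArguments`; the other arguments are illustrative):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-summarization",  # illustrative
    per_device_train_batch_size=8,
    fp16=False,  # mT5 is known to produce NaN losses under fp16
)
```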
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11735/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11734/comments | https://api.github.com/repos/huggingface/transformers/issues/11734/events | https://github.com/huggingface/transformers/issues/11734 | 892,433,426 | MDU6SXNzdWU4OTI0MzM0MjY= | 11,734 | Cant load google/reformer-enwik8 | {
"login": "hadaev8",
"id": 20247085,
"node_id": "MDQ6VXNlcjIwMjQ3MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/20247085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadaev8",
"html_url": "https://github.com/hadaev8",
"followers_url": "https://api.github.com/users/hadaev8/followers",
"following_url": "https://api.github.com/users/hadaev8/following{/other_user}",
"gists_url": "https://api.github.com/users/hadaev8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadaev8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadaev8/subscriptions",
"organizations_url": "https://api.github.com/users/hadaev8/orgs",
"repos_url": "https://api.github.com/users/hadaev8/repos",
"events_url": "https://api.github.com/users/hadaev8/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadaev8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #11649\r\n"
] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null |
OSError: Can't load tokenizer for 'google/reformer-enwik8'. Make sure that:
- 'google/reformer-enwik8' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'google/reformer-enwik8' is the correct path to a directory containing relevant tokenizer files
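For reference, this checkpoint ships no tokenizer files; its model card encodes text at the character level by hand. A sketch of that approach (the `+2` id offset follows the card, but treat the details as assumptions):
```python
import torch

def encode(list_of_strings, pad_token_id=0):
    max_length = max(len(s) for s in list_of_strings)
    input_ids = torch.full((len(list_of_strings), max_length), pad_token_id, dtype=torch.long)
    for idx, s in enumerate(list_of_strings):
        input_ids[idx, : len(s)] = torch.tensor([ord(c) + 2 for c in s])
    return input_ids
```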
https://huggingface.co/google/reformer-enwik8?text=My+name+is+Julien+and+I+like+to | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11734/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11733/comments | https://api.github.com/repos/huggingface/transformers/issues/11733/events | https://github.com/huggingface/transformers/issues/11733 | 892,382,926 | MDU6SXNzdWU4OTIzODI5MjY= | 11,733 | CPU Memory Leak when using RoBERTa for just word vector representation | {
"login": "ZahraGithub",
"id": 25362593,
"node_id": "MDQ6VXNlcjI1MzYyNTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/25362593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZahraGithub",
"html_url": "https://github.com/ZahraGithub",
"followers_url": "https://api.github.com/users/ZahraGithub/followers",
"following_url": "https://api.github.com/users/ZahraGithub/following{/other_user}",
"gists_url": "https://api.github.com/users/ZahraGithub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZahraGithub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZahraGithub/subscriptions",
"organizations_url": "https://api.github.com/users/ZahraGithub/orgs",
"repos_url": "https://api.github.com/users/ZahraGithub/repos",
"events_url": "https://api.github.com/users/ZahraGithub/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZahraGithub/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ZahraGithub, \r\n1) Please use `model.eval()` to reduce your memory consumption.\r\n2) From what I understood, you're processing 64 tokens in one go and total tokens are 1024. This means you'll get vector representation of (16,768) for each document right? How are you storing these representations? You won't be able to load all of them in your RAM hence that CPU memory leak. Try saving these representations to disk for each (that can vary) document to avoid memory leak.",
"Hi @bhavitvyamalik\r\n\r\n1. I did it but again the memory consumption is high.\r\n2. I can not understand what is the amount of 16768?",
"For 2, I think what's happening is you're storing all your representations in a list or something. You should load them off your RAM and store in it your disk (you can save it as .npy file and later load them) so as to avoid 100% memory consumption.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,624 | 1,624 | NONE | null | Hi,
I do not use the model for training or fine-tuning; I just want to feed it strings and take their representations. My dataset is Robust04 (about 2 GB) and the max length of each document is truncated to 1024 tokens. So, I break each document into pieces of 64 tokens, represent each piece, and then concatenate the representations of the 64-token pieces to get a word vector representation of the document with length 1024 (1024×768). I use 64 GB of CPU RAM, but it crashed after about 40% of the documents had been represented. The code used is the following:
`from transformers import AutoTokenizer, AutoModel`
`le = 64  # piece length in tokens; assumed from the description above`
`tokenizer = AutoTokenizer.from_pretrained('roberta-base')`
`model = AutoModel.from_pretrained('roberta-base')`
`cl_text = tokenizer.encode("I am using RoBERTa")`
`piece = tokenizer.decode(cl_text)`
`piece = tokenizer(piece, return_tensors="pt", max_length=le, pad_to_max_length=True, truncation=True, return_attention_mask=True, return_token_type_ids=True, add_special_tokens=False)`
`piece = model(**piece)`
`piece = piece.last_hidden_state`
`piece.detach()`
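For reference, the fix suggested in the comments above amounts to writing each document's representation to disk instead of accumulating all of them in RAM; a minimal sketch (`doc_id` and the file naming are illustrative):
```python
import numpy as np

def save_representation(doc_id, representation):
    # `doc_id` is a hypothetical identifier for the current document.
    np.save(f"doc_{doc_id}.npy", representation.detach().cpu().numpy())
```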
Would you please guide me?
Thanks in advance,
Regards | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11733/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11732/comments | https://api.github.com/repos/huggingface/transformers/issues/11732/events | https://github.com/huggingface/transformers/issues/11732 | 892,370,886 | MDU6SXNzdWU4OTIzNzA4ODY= | 11,732 | Import `SPIECE_UNDERLINE` from `file_utils` instead of WET definition | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am still planning to provide a PR later."
] | 1,621 | 1,623 | null | CONTRIBUTOR | null | Many places define `SPIECE_UNDERLINE` in the code like this: `SPIECE_UNDERLINE = "▁"`
Instead it should be imported: `from transformers.file_utils import SPIECE_UNDERLINE`.
I can provide a PR... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11732/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11731/comments | https://api.github.com/repos/huggingface/transformers/issues/11731/events | https://github.com/huggingface/transformers/issues/11731 | 892,364,143 | MDU6SXNzdWU4OTIzNjQxNDM= | 11,731 | `ci/circleci: run_tests_torch` reaches 10 min. time limit | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"An other one here: https://app.circleci.com/pipelines/github/huggingface/transformers/23647/workflows/b0de5fa6-3f1d-446f-8ce9-11461ff1fb10/jobs/214869",
"@sgugger are you aware of this issue?",
"Yes we are aware. This is something we will work on in the next weeks, we're just wrapping another project first.",
"Seems to be fixed now. Closing."
] | 1,621 | 1,622 | 1,622 | CONTRIBUTOR | null | `ci/circleci: run_tests_torch` reaches 10 min. time limit - see here:
https://app.circleci.com/pipelines/github/huggingface/transformers/23426/workflows/349bd527-b66a-46ed-a168-365794da6856/jobs/211948 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11731/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11730/comments | https://api.github.com/repos/huggingface/transformers/issues/11730/events | https://github.com/huggingface/transformers/issues/11730 | 892,256,350 | MDU6SXNzdWU4OTIyNTYzNTA= | 11,730 | Bert2bert on Swag with very low accuracy | {
"login": "helloworld123-lab",
"id": 75953751,
"node_id": "MDQ6VXNlcjc1OTUzNzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/75953751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helloworld123-lab",
"html_url": "https://github.com/helloworld123-lab",
"followers_url": "https://api.github.com/users/helloworld123-lab/followers",
"following_url": "https://api.github.com/users/helloworld123-lab/following{/other_user}",
"gists_url": "https://api.github.com/users/helloworld123-lab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helloworld123-lab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helloworld123-lab/subscriptions",
"organizations_url": "https://api.github.com/users/helloworld123-lab/orgs",
"repos_url": "https://api.github.com/users/helloworld123-lab/repos",
"events_url": "https://api.github.com/users/helloworld123-lab/events{/privacy}",
"received_events_url": "https://api.github.com/users/helloworld123-lab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @helloworld123-lab,\r\n\r\nThanks for the issue :-) Is there a specific reason to use Bert2bert for SWAG instead of just a BERT model?",
"I am sorry for the issue :) actually i am new in this field. i just started working on models using transformers. T5 is a text-to-text model, I just wanted to try how it can perform with bert2bert. Is this the wrong approach to Swag?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,621 | 1,623 | 1,623 | NONE | null | Hello everyone,
I am trying to build a multiple-choice QA system using Bert2Bert. I follow the SWAG-with-T5 approach given in [https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb](url).
My complete code is here: [https://colab.research.google.com/drive/1MAGCi5TC1S6GNW3CFEB0f2cMkQ5gpxdN?usp=sharing](url).
To integrate the Bert2Bert model, I followed this [https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing](url) notebook.
I created a Bert2BertFineTuner class based on the T5FineTuner class in [https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb](url).
I made the following changes to the T5FineTuner class to adapt it to Bert2Bert. I just add
> EncoderDecoderModel.from_encoder_decoder_pretrained(.)
and
> BertTokenizer.from_pretrained(.)
```
class Bert2BertFineTuner(pl.LightningModule):
    def __init__(self, hparams):
        super(Bert2BertFineTuner, self).__init__()
        self.hparams = hparams
        # self.model = T5ForConditionalGeneration.from_pretrained(hparams.model_name_or_path)
        # self.tokenizer = T5Tokenizer.from_pretrained(hparams.tokenizer_name_or_path)
        self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        self.model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
        self.model.config.decoder_start_token_id = self.tokenizer.bos_token_id
        self.model.config.eos_token_id = self.tokenizer.eos_token_id
        self.model.config.pad_token_id = self.tokenizer.pad_token_id
        # sensible parameters for beam search
        self.model.config.vocab_size = self.model.config.decoder.vocab_size
        self.model.config.max_length = 142
        self.model.config.min_length = 56
        self.model.config.no_repeat_ngram_size = 3
        self.model.config.early_stopping = True
        self.model.config.length_penalty = 2.0
        self.model.config.num_beams = 4

    def is_logger(self):
        return self.trainer.proc_rank <= 0

    def forward(
        self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, lm_labels=None
    ):
        return self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=decoder_attention_mask,
            labels=lm_labels,
        )

    def _step(self, batch):
        lm_labels = batch["target_ids"]
        lm_labels[lm_labels[:, :] == self.tokenizer.pad_token_id] = -100
        outputs = self(
            input_ids=batch["source_ids"],
            attention_mask=batch["source_mask"],
            lm_labels=lm_labels,
            decoder_attention_mask=batch["target_mask"],
            decoder_input_ids=batch["target_ids"],
        )
        loss = outputs[0]
        return loss
```
As above, I have updated the model, config, and tokenizer for the Bert2Bert model.
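One detail I am unsure about (an assumption on my side, not verified): `bert-base-uncased` defines no `bos_token`/`eos_token`, so `self.tokenizer.bos_token_id` above may be `None`. A variant of those three config lines using BERT's own special tokens would be:
```python
# inside __init__: for BERT checkpoints, [CLS]/[SEP] play the roles of <s>/</s>
self.model.config.decoder_start_token_id = self.tokenizer.cls_token_id
self.model.config.eos_token_id = self.tokenizer.sep_token_id
self.model.config.pad_token_id = self.tokenizer.pad_token_id
```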
Also, a sample input and target encoded pair looks like this:
```
data = dataset[6]
print(tokenizer.decode(data['source_ids']))
print("**")
print(tokenizer.decode(data['target_ids']))
```
```
[CLS] context : in what spanish speaking north american country can you get a great cup of coffee? options : 1 : mildred's coffee shop 2 : mexico 3 : diner 4 : kitchen 5 : canteen < / s > [SEP] [PAD] [PAD] [PAD] [PAD] [PAD]
**
[CLS] 2 < / s > [SEP]
```
In the above example, 2 indicates the label. I run the model with the following parameters:
`{'output_dir': 't5_swag', 'model_name_or_path': 'bert2bert', 'tokenizer_name_or_path': 'bert-base', 'max_seq_length': 512, 'learning_rate': 3e-05, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'warmup_steps': 0, 'train_batch_size': 8, 'eval_batch_size': 8, 'num_train_epochs': 4, 'gradient_accumulation_steps': 16, 'n_gpu': 1, 'early_stop_callback': False, 'fp_16': False, 'opt_level': 'O1', 'max_grad_norm': 1.0, 'seed': 42, 'data_dir': ''}`
It finishes execution with the following loss values:
```
Validation sanity check: 100%
5/5 [00:03<00:00, 1.71it/s]
INFO:__main__:LOOKING AT train
INFO:__main__:hello
Epoch 4: 100%
1370/1370 [35:51<00:00, 1.57s/it, loss=0.017, v_num=0, val_loss=0.268]
Validating: 100%
153/153 [01:31<00:00, 1.67it/s]
INFO:__main__:***** Validation results *****
INFO:__main__:avg_val_loss = tensor(0.2726, device='cuda:0')
INFO:__main__:loss = tensor(0.2695, device='cuda:0')
INFO:__main__:train_loss = tensor(0.2695, device='cuda:0')
INFO:__main__:val_loss = tensor(0.2726, device='cuda:0')
Validating: 100%
153/153 [01:31<00:00, 1.67it/s]
INFO:__main__:***** Validation results *****
INFO:__main__:avg_train_loss = tensor(1.1325, device='cuda:0')
INFO:__main__:avg_val_loss = tensor(0.2689, device='cuda:0')
INFO:__main__:epoch = 0
INFO:__main__:loss = tensor(0.2677, device='cuda:0')
INFO:__main__:train_loss = tensor(0.2677, device='cuda:0')
INFO:__main__:val_loss = tensor(0.2689, device='cuda:0')
Validating: 100%
153/153 [01:33<00:00, 1.64it/s]
INFO:__main__:***** Validation results *****
INFO:__main__:avg_train_loss = tensor(0.2719, device='cuda:0')
INFO:__main__:avg_val_loss = tensor(0.2686, device='cuda:0')
INFO:__main__:epoch = 1
INFO:__main__:loss = tensor(0.2674, device='cuda:0')
INFO:__main__:train_loss = tensor(0.2674, device='cuda:0')
INFO:__main__:val_loss = tensor(0.2686, device='cuda:0')
Validating: 100%
153/153 [01:33<00:00, 1.64it/s]
INFO:__main__:***** Validation results *****
INFO:__main__:avg_train_loss = tensor(0.2702, device='cuda:0')
INFO:__main__:avg_val_loss = tensor(0.2684, device='cuda:0')
INFO:__main__:epoch = 2
INFO:__main__:loss = tensor(0.2623, device='cuda:0')
INFO:__main__:train_loss = tensor(0.2623, device='cuda:0')
INFO:__main__:val_loss = tensor(0.2684, device='cuda:0')
```
The validation part:
```
model.model.eval()
outputs = []
targets = []
for batch in tqdm(loader):
    outs = model.model.generate(
        input_ids=batch["source_ids"].cuda(),
        attention_mask=batch["source_mask"].cuda(),
    )
    dec = [tokenizer.decode(ids) for ids in outs]
    target = [tokenizer.decode(ids) for ids in batch["target_ids"]]
    outputs.extend(dec)
    targets.extend(target)
```
```
metrics.accuracy_score(targets1, outputs1)
0.20065520065520065
```
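One thing I am not sure about (just an assumption on my part, treating `targets1`/`outputs1` as the lists built above): `tokenizer.decode` keeps special tokens by default, so the compared strings still contain `[CLS]`/`[SEP]`/`[PAD]` noise. A variant of the decoding step that compares only the answer text would be:
```python
# skip_special_tokens drops [CLS], [SEP] and [PAD] from the decoded strings
dec = [tokenizer.decode(ids, skip_special_tokens=True).strip() for ids in outs]
target = [tokenizer.decode(ids, skip_special_tokens=True).strip() for ids in batch["target_ids"]]
```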
Either way, the accuracy is far too low. What could the reason be? Most probably I am missing something, but I cannot find it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11730/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11729/comments | https://api.github.com/repos/huggingface/transformers/issues/11729/events | https://github.com/huggingface/transformers/issues/11729 | 892,122,258 | MDU6SXNzdWU4OTIxMjIyNTg= | 11,729 | [Benchmark] | {
"login": "oanhphuong",
"id": 70864217,
"node_id": "MDQ6VXNlcjcwODY0MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/70864217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oanhphuong",
"html_url": "https://github.com/oanhphuong",
"followers_url": "https://api.github.com/users/oanhphuong/followers",
"following_url": "https://api.github.com/users/oanhphuong/following{/other_user}",
"gists_url": "https://api.github.com/users/oanhphuong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oanhphuong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oanhphuong/subscriptions",
"organizations_url": "https://api.github.com/users/oanhphuong/orgs",
"repos_url": "https://api.github.com/users/oanhphuong/repos",
"events_url": "https://api.github.com/users/oanhphuong/events{/privacy}",
"received_events_url": "https://api.github.com/users/oanhphuong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11729/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11728/comments | https://api.github.com/repos/huggingface/transformers/issues/11728/events | https://github.com/huggingface/transformers/issues/11728 | 892,066,680 | MDU6SXNzdWU4OTIwNjY2ODA= | 11,728 | ImportError: cannot import name 'load_dataset' from 'datasets' | {
"login": "eadsa1998",
"id": 69226325,
"node_id": "MDQ6VXNlcjY5MjI2MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/69226325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eadsa1998",
"html_url": "https://github.com/eadsa1998",
"followers_url": "https://api.github.com/users/eadsa1998/followers",
"following_url": "https://api.github.com/users/eadsa1998/following{/other_user}",
"gists_url": "https://api.github.com/users/eadsa1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eadsa1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eadsa1998/subscriptions",
"organizations_url": "https://api.github.com/users/eadsa1998/orgs",
"repos_url": "https://api.github.com/users/eadsa1998/repos",
"events_url": "https://api.github.com/users/eadsa1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/eadsa1998/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! When you `import datasets`, python looks at your installed packages, but also at the modules defined in the directory from which you run your code. It is the case because the current working directory is added to your python path when you run your code.\r\n\r\nIn your case I think it tries to load your `datasets.py` in the `equity-analysts-sentiment` folder, since the name is conflicting. If you rename this file you should be good.",
"Ok so I renamed the file and it still wouldn't run. I also tried moving it around to run it in other directories and see if I had better luck but I still got this same error everywhere I tried it.",
"If you're still having this error:\r\n```\r\nImportError: cannot import name 'load_dataset' from 'datasets' (C:\\Users\\bookw\\Dropbox\\Equity-Analyst-Project\\equity-analysts-sentiment\\datasets.py)\r\n```\r\n\r\nThen it probably means that `C:\\Users\\bookw\\Dropbox\\Equity-Analyst-Project\\equity-analysts-sentiment` is still in your python path. Can you check that you didn't add this path to your python path via environment variables or via your IDE ? I know that some of them like PyCharm add project directories to the python path automatically for example.",
"I don't think I'm using a virtual enviroment or IDE, just Jupyter Notebooks. I'll paste my python path below but I don't see that in there.\r\n\r\nC:\\Users\\bookw\\anaconda3;C:\\Users\\bookw\\anaconda3\\Library\\mingw-w64\\bin;C:\\Users\\bookw\\anaconda3\\Library\\usr\\bin;C:\\Users\\book\r\nw\\anaconda3\\Library\\bin;C:\\Users\\bookw\\anaconda3\\Scripts;C:\\Users\\bookw\\anaconda3\\bin;C:\\Users\\bookw\\anaconda3\\condabin;C:\\Use\r\nrs\\bookw\\anaconda3;C:\\Users\\bookw\\anaconda3\\Library\\mingw-w64\\bin;C:\\Users\\bookw\\anaconda3\\Library\\usr\\bin;C:\\Users\\bookw\\anac\r\nonda3\\Library\\bin;C:\\Users\\bookw\\anaconda3\\Scripts;C:\\Program Files\\Common Files\\Oracle\\Java\\javapath;C:\\Windows\\system32;C:\\W\r\nindows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files (x86)\\\r\nNVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\Git\\cmd;C:\\Program Files\\P\r\nuTTY;C:\\Program Files\\dotnet;C:\\Program Files\\Microsoft SQL Server\\130\\Tools\\Binn;C:\\Program Files\\Microsoft SQL Server\\Client\r\n SDK\\ODBC\\170\\Tools\\Binn;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\r\n\\WINDOWS\\System32\\OpenSSH;C:\\Users\\bookw\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\bookw\\AppData\\Local\\Programs\\MiKTeX\\mikt\r\nex\\bin\\x64;C:\\Users\\bookw\\.dotnet\\tools;.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am facing the same issue when trying to follow the datasets tutorial from the Huggingface course. The line\r\n`from datasets import load_dataset`\r\ncauses the following error: \r\n`ImportError: cannot import name 'load_dataset' from 'datasets' (unknown location)`.\r\n\r\nMy environment: \r\n\r\n- macOS Big Sur 11.6. on M1 Macbook\r\n- python 3.8.0\r\n- conda 4.11.0\r\n- transformers 4.16.2\r\n- datasets 1.18.3 (installed with `conda install -c huggingface -c conda-forge datasets`)\r\n- torch 1.10.2\r\n\r\nThe Colab notebook provided by the course works fine. This error occurs only locally. \r\nCould this be an M1 related issue on the Macbook? I have had problems in the past with conda installations and also with tensorflow on the M1. \r\n\r\n\r\n@eadsa1998 Did you manage to resolve the problem?",
"the same issue also",
"I had the same issue and solved it by reinstalling the datasets package.",
"the same issue also now",
"I'm having the same issue, and it still don't work after reinstalling the datasets package.",
"Can you check that you don't have a directory named \"datasets\" or a file \"datasets.py\" in your working directory or in directories in your python path (including the ones that your IDE may be adding) ?\r\n\r\nThe ImportError can also show the location of the diretory/file that is imported instead of the `datasets` package",
"Thank you @lhoestq I had the `datasets` folder. 😅 ",
"try `pip install datasets`"
] | 1,621 | 1,708 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.6.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1 (True)
- Using GPU in script?: Possibly?
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik and @lhoestq helped on the other issues that I looked at so they might be able to help here. But I'll take anyone really.
## Information
I am attempting to run finBERT and am having trouble with the datasets package. I looked at a couple of other issues from people who had similar problems, but none of their solutions worked for me. I'm sorry if I didn't provide some information or missed something obvious; I'm new to programming and very new to machine learning, so I don't quite know what/where everything is yet!
The problem arises when using:
* [ ] my own modified scripts: (give details below)
I am using the first model in this [example script](https://github.com/yya518/FinBERT/blob/master/FinBert%20Model%20Example.ipynb) from the finBERT model developers.
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
I am just trying to use transformers to run finBERT without using the API.
## To reproduce
Steps to reproduce the behavior:
1. Import datasets
That's as far as I'm able to get before I get this error:
""""" ImportError Traceback (most recent call last)
<ipython-input-5-f2837d51185d> in <module>
24 from sklearn.metrics import classification_report
25 import transformers
---> 26 from transformers import AutoModel, BertTokenizerFast
27
28
~\anaconda3\lib\site-packages\transformers\__init__.py in __getattr__(self, name)
2485 if name == "__version__":
2486 return __version__
-> 2487 return super().__getattr__(name)
2488
2489 sys.modules[__name__] = _LazyModule(__name__, _import_structure)
~\anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name)
1698 elif name in self._class_to_module.keys():
1699 module = self._get_module(self._class_to_module[name])
-> 1700 value = getattr(module, name)
1701 else:
1702 raise AttributeError(f"module {self.__name__} has no attribute {name}")
~\anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name)
1697 value = self._get_module(name)
1698 elif name in self._class_to_module.keys():
-> 1699 module = self._get_module(self._class_to_module[name])
1700 value = getattr(module, name)
1701 else:
~\anaconda3\lib\site-packages\transformers\models\auto\__init__.py in _get_module(self, module_name)
196
197 def _get_module(self, module_name: str):
--> 198 return importlib.import_module("." + module_name, self.__name__)
199
200 sys.modules[__name__] = _LazyModule(__name__, _import_structure)
~\anaconda3\lib\importlib\__init__.py in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
129
~\anaconda3\lib\site-packages\transformers\models\auto\modeling_auto.py in <module>
197 from ..pegasus.modeling_pegasus import PegasusForCausalLM, PegasusForConditionalGeneration, PegasusModel
198 from ..prophetnet.modeling_prophetnet import ProphetNetForCausalLM, ProphetNetForConditionalGeneration, ProphetNetModel
--> 199 from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function
200 RagModel,
201 RagSequenceForGeneration,
~\anaconda3\lib\site-packages\transformers\models\rag\modeling_rag.py in <module>
27 from ...utils import logging
28 from .configuration_rag import RagConfig
---> 29 from .retrieval_rag import RagRetriever
30
31
~\anaconda3\lib\site-packages\transformers\models\rag\retrieval_rag.py in <module>
37
38 if is_datasets_available():
---> 39 from datasets import Dataset, load_dataset, load_from_disk
40
41 if is_faiss_available():
ImportError: cannot import name 'load_dataset' from 'datasets' (C:\Users\bookw\Dropbox\Equity-Analyst-Project\equity-analysts-sentiment\datasets.py)
```
## Expected behavior
I would expect the package to import correctly.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11728/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11727/comments | https://api.github.com/repos/huggingface/transformers/issues/11727/events | https://github.com/huggingface/transformers/pull/11727 | 891,965,470 | MDExOlB1bGxSZXF1ZXN0NjQ0NzQ4ODc0 | 11,727 | Improvements to Flax finetuning script | {
"login": "marcvanzee",
"id": 180100,
"node_id": "MDQ6VXNlcjE4MDEwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcvanzee",
"html_url": "https://github.com/marcvanzee",
"followers_url": "https://api.github.com/users/marcvanzee/followers",
"following_url": "https://api.github.com/users/marcvanzee/following{/other_user}",
"gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions",
"organizations_url": "https://api.github.com/users/marcvanzee/orgs",
"repos_url": "https://api.github.com/users/marcvanzee/repos",
"events_url": "https://api.github.com/users/marcvanzee/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcvanzee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,621 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
- Ensures we actually use the `weight_decay` command-line argument
- Simplifies `jax.value_and_grad` by removing the auxiliary output (which we don't use)
- Simplifies the replication logic in the eval step
- Fixes a bug in RNG handling. We weren’t splitting them appropriately before sharding them during training, which is not good practice, RNGs should always be split and not re-used, see: https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#jax-prng
Note the new RNG handling affects the training accuracy, so I reran all experiments and report the new numbers, which aren't much different from the previous ones.
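For context, the pattern recommended there looks roughly like this (an illustrative sketch; variable names are mine):
```python
import jax

rng = jax.random.PRNGKey(0)
# split before every use; never feed the same key twice
rng, dropout_rng = jax.random.split(rng)
# derive one independent key per local device before sharding
dropout_rngs = jax.random.split(dropout_rng, jax.local_device_count())
```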
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11727/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11727",
"html_url": "https://github.com/huggingface/transformers/pull/11727",
"diff_url": "https://github.com/huggingface/transformers/pull/11727.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11727.patch",
"merged_at": 1621239993000
} |
https://api.github.com/repos/huggingface/transformers/issues/11726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11726/comments | https://api.github.com/repos/huggingface/transformers/issues/11726/events | https://github.com/huggingface/transformers/pull/11726 | 891,828,116 | MDExOlB1bGxSZXF1ZXN0NjQ0NjMzMzk3 | 11,726 | [Flax] Correct example script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Removes a useless argument and makes sure the state is not replicated twice in a row. Thanks for spotting it @marcvanzee
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11726/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11726",
"html_url": "https://github.com/huggingface/transformers/pull/11726",
"diff_url": "https://github.com/huggingface/transformers/pull/11726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11726.patch",
"merged_at": 1620990177000
} |
https://api.github.com/repos/huggingface/transformers/issues/11725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11725/comments | https://api.github.com/repos/huggingface/transformers/issues/11725/events | https://github.com/huggingface/transformers/pull/11725 | 891,813,907 | MDExOlB1bGxSZXF1ZXN0NjQ0NjIxNDI3 | 11,725 | Fix#11724 | {
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | Assigning the np.sin/np.cos results back into position_enc is an in-place update, so position_enc ends up with corrupted values | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11725/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11725",
"html_url": "https://github.com/huggingface/transformers/pull/11725",
"diff_url": "https://github.com/huggingface/transformers/pull/11725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11725.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11724/comments | https://api.github.com/repos/huggingface/transformers/issues/11724/events | https://github.com/huggingface/transformers/issues/11724 | 891,808,210 | MDU6SXNzdWU4OTE4MDgyMTA= | 11,724 | A bug in modeling_tf_marian.py and modeling_tf_pegasus.py SinusoidalPositionalEmbedding _init_weight | {
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @JunnYu,\r\n\r\nCould you give a bit more context on what the issue is exactly? And how your solution solves the issue?",
"@patrickvonplaten \r\nAfter this line `position_enc[:, 0 : dim // 2] = np.sin(position_enc[:, 0::2]) ` and\r\n` position_enc[:, 0 : dim // 2] ` value will be overridden in place.\r\nIf we compute `np.cos(position_enc[:, 1::2])` and the result is inconsistent with the expected result.\r\nSo we should init a np.array `table = np.zeros_like(position_enc)` to store the sinusoidalposition embeddings.\r\n\r\n\r\n\r\n",
"True, good catch! Do you mind opening a PR to fix it? We should then also run the slow tests to be sure the model performance is not affected",
"@patrickvonplaten I have opened a PR https://github.com/huggingface/transformers/pull/11897 to fix this. \r\nI think the **pretrained** tf model's performance will **not be affected**. But when we init a **new** tf model, the model's performance will **be affected**!\r\nBecause the pretrained `tf_model.h5` contains the correct `embedding weight` (this embedding weight is converted by `pytorch_model.bin`). When we load a pretrained tf model , the tf model will load this correct `embedding weight` .\r\n \r\n```python\r\n# old code\r\nfrom transformers.models.marian import MarianModel, TFMarianModel\r\nimport torch\r\npt_model = MarianModel.from_pretrained(\r\n \"Helsinki-NLP/opus-mt-en-de\")\r\ntf_model = TFMarianModel.from_pretrained(\r\n \"Helsinki-NLP/opus-mt-en-de\")\r\npt_emb_weight = pt_model.encoder.embed_positions.weight\r\n\r\ntf_emb_weight = torch.from_numpy(\r\n tf_model.model.encoder.embed_positions.weight.numpy())\r\n\r\nprint(pt_emb_weight.equal(tf_emb_weight))\r\n# True\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@patrickvonplaten I have opened a PR #11897. Can you take the time to look at this PR?Thanks.",
"Thanks for pinging me again & super sorry to have this on wait for so long!"
] | 1,620 | 1,626 | 1,626 | CONTRIBUTOR | null |
## Information
We should create a new np.array and then store the np.sin/np.cos results in it:
```python
table = np.zeros_like(position_enc)
# index 0 is all zero
table[:, 0 : dim // 2] = np.sin(position_enc[:, 0::2])
table[:, dim // 2 :] = np.cos(position_enc[:, 1::2])
# convert to tensor
table = tf.convert_to_tensor(table)
table = tf.stop_gradient(table)  # stop_gradient returns a new tensor, so keep the result
```
https://github.com/huggingface/transformers/blob/bd3b599c12cfcf5ef517c5ffe526afbdbaa92539/src/transformers/models/marian/modeling_tf_marian.py#L147-L157
https://github.com/huggingface/transformers/blob/8d43c71a1ca3ad322cc45008eb66a5611f1e017e/src/transformers/models/pegasus/modeling_tf_pegasus.py#L148-L158
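A tiny standalone numpy repro of the aliasing problem (arbitrary values, for illustration only):
```python
import numpy as np

dim = 4
position_enc = np.array([[0.1, 0.2, 0.3, 0.4]])

buggy = position_enc.copy()
buggy[:, 0 : dim // 2] = np.sin(buggy[:, 0::2])  # overwrites columns 0 and 1
buggy[:, dim // 2 :] = np.cos(buggy[:, 1::2])    # column 1 was already overwritten here

table = np.zeros_like(position_enc)
table[:, 0 : dim // 2] = np.sin(position_enc[:, 0::2])
table[:, dim // 2 :] = np.cos(position_enc[:, 1::2])  # reads the untouched input

print(np.allclose(buggy, table))  # False -> the in-place version corrupts the cos half
```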
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11724/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11723/comments | https://api.github.com/repos/huggingface/transformers/issues/11723/events | https://github.com/huggingface/transformers/issues/11723 | 891,778,434 | MDU6SXNzdWU4OTE3Nzg0MzQ= | 11,723 | Warnings about some weights that were not initialized in Greek BERT | {
"login": "m-nlp-q",
"id": 78740032,
"node_id": "MDQ6VXNlcjc4NzQwMDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/78740032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m-nlp-q",
"html_url": "https://github.com/m-nlp-q",
"followers_url": "https://api.github.com/users/m-nlp-q/followers",
"following_url": "https://api.github.com/users/m-nlp-q/following{/other_user}",
"gists_url": "https://api.github.com/users/m-nlp-q/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m-nlp-q/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m-nlp-q/subscriptions",
"organizations_url": "https://api.github.com/users/m-nlp-q/orgs",
"repos_url": "https://api.github.com/users/m-nlp-q/repos",
"events_url": "https://api.github.com/users/m-nlp-q/events{/privacy}",
"received_events_url": "https://api.github.com/users/m-nlp-q/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It tells you that you are initializing a `BertModel` without the heads that were used for pre-training (namely next sentence prediction and masked language modeling). That's not a problem, as you probably don't need these weights for a downstream task of interest (such as question answering or sequence classification). ",
"Thank you @NielsRogge for your quick reply,\r\n\r\nIndeed, I don't need those weighs for my task, but how can you tell that these are the weights of the heads used in pre-training?",
"You can see it based on their names:\r\n\r\n`['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight']`\r\n\r\n=> `cls.seq_relationship` refers to the linear layer used for next sentence prediction.\r\n=> `cls.predictions` refers to the masked language modeling head. It consists of a `transform` layer followed by a `decoder` (which maps to the vocabulary). \r\n\r\nThe author of Greek BERT probably used the `BertForPreTraining` model for pre-training the BERT model. You can see the definition of the heads [here](https://github.com/huggingface/transformers/blob/bd3b599c12cfcf5ef517c5ffe526afbdbaa92539/src/transformers/models/bert/modeling_bert.py#L1011).",
"I see. I saw the names too, but I wasn't sure that these weights correspond to these heads just by reading the names.\r\n\r\nThank you @NielsRogge, you have been very helpful.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | Hello,
in order to use the Greek BERT model, I use the `AutoModel` class, specifically:
`greek_bert = AutoModel.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")`
When I load the model with the above command, I get this warning:
```
Some weights of the model checkpoint at nlpaueb/bert-base-greek-uncased-v1 were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
Why is this warning thrown? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11723/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11722/comments | https://api.github.com/repos/huggingface/transformers/issues/11722/events | https://github.com/huggingface/transformers/issues/11722 | 891,745,768 | MDU6SXNzdWU4OTE3NDU3Njg= | 11,722 | Plug a custom tokenizer into PreTrainedTokenizer | {
"login": "DhruvilKarani",
"id": 28706362,
"node_id": "MDQ6VXNlcjI4NzA2MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/28706362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DhruvilKarani",
"html_url": "https://github.com/DhruvilKarani",
"followers_url": "https://api.github.com/users/DhruvilKarani/followers",
"following_url": "https://api.github.com/users/DhruvilKarani/following{/other_user}",
"gists_url": "https://api.github.com/users/DhruvilKarani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DhruvilKarani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DhruvilKarani/subscriptions",
"organizations_url": "https://api.github.com/users/DhruvilKarani/orgs",
"repos_url": "https://api.github.com/users/DhruvilKarani/repos",
"events_url": "https://api.github.com/users/DhruvilKarani/events{/privacy}",
"received_events_url": "https://api.github.com/users/DhruvilKarani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Would this [documentation](https://huggingface.co/transformers/fast_tokenizers.html) help you out?",
"This is exactly what I was looking for! No idea how I missed it. Thank you :)"
] | 1,620 | 1,620 | 1,620 | NONE | null | Is there a clean way to use a custom tokenizer trained with the tokenizers library through the PreTrainedTokenizer interface?
```
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file(config_file)
```
The config file is generated with the tokenizer.save() method.
I want to use this tokenizer in a PreTrainedTokenizer/PreTrainedTokenizerFast class.
The closest thing I found in the existing issues is [this](https://github.com/huggingface/tokenizers/issues/259#issuecomment-625905930)
When I tried the above solution using a PreTrainedTokenizer class,
```
class CustomTokenizer(PreTrainedTokenizer):
    def __init__(
        self,
        vocab_file=vocab_file,
        merges_file=merges_file,
        bos_token="<s>",
        eos_token="</s>",
        sep_token="</s>",
        cls_token="<s>",
        unk_token="<unk>",
        pad_token="<pad>",
        mask_token="<mask>",
        **kwargs
    ):
        super().__init__(
            tokenizer,
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            sep_token=sep_token,
            cls_token=cls_token,
            pad_token=pad_token,
            mask_token=mask_token,
            **kwargs,
        )
```
I got the following exception:
```
File "/path/custom_tokenizer.py", line 37, in __init__
super().__init__(
TypeError: __init__() takes 1 positional argument but 2 were given
```
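In other words, what I am looking for is something like the following sketch (I do not know whether a constructor argument like `tokenizer_object` exists in my version, so treat this as my intent rather than working code):
```python
from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

config_file = "tokenizer.json"  # hypothetical path produced by tokenizer.save()
tokenizer = Tokenizer.from_file(config_file)
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
```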
Is there a solution/workaround? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11722/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11722/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11721/comments | https://api.github.com/repos/huggingface/transformers/issues/11721/events | https://github.com/huggingface/transformers/issues/11721 | 891,684,212 | MDU6SXNzdWU4OTE2ODQyMTI= | 11,721 | ValueError: could not broadcast input array from shape (2816,384) into shape (2698,384) | {
"login": "zhijxu-MS",
"id": 43435212,
"node_id": "MDQ6VXNlcjQzNDM1MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/43435212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhijxu-MS",
"html_url": "https://github.com/zhijxu-MS",
"followers_url": "https://api.github.com/users/zhijxu-MS/followers",
"following_url": "https://api.github.com/users/zhijxu-MS/following{/other_user}",
"gists_url": "https://api.github.com/users/zhijxu-MS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhijxu-MS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhijxu-MS/subscriptions",
"organizations_url": "https://api.github.com/users/zhijxu-MS/orgs",
"repos_url": "https://api.github.com/users/zhijxu-MS/repos",
"events_url": "https://api.github.com/users/zhijxu-MS/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhijxu-MS/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger i saw you have a PR to fix similar error, could you help to take a look?\r\n\r\n",
"You should upgrade your version of Transformers: this code is not used anymore inside the `Trainer`.",
"i am installing transformers from source, the issue still threr.\r\n\r\nand issue throw from https://github.com/huggingface/transformers/blob/86d5fb0b360e68de46d40265e7c707fe68c8015b/src/transformers/trainer_pt_utils.py#L411, looks like the code is still used in master?\r\n\r\n",
"Indeed, the subclass of the `Trainer` for QA still uses the old code! The PR linked above should fix this.\r\nThanks for flagging!"
] | 1,620 | 1,621 | 1,621 | NONE | null | the command to reproduce:
cd huggingface-transformers/examples/pytorch/question-answering
python -m torch.distributed.launch --nproc_per_node=8 ./run_qa.py \
--model_name_or_path roberta-large \
--dataset_name squad \
--do_train --do_eval \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 256 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir test_result2/$trials --overwrite_output_dir \
--logging_dir test_result2/$trials/tensorboard --logging_first_step --logging_steps 50 \
--fp16
I tried adding "--max_eval_samples 10240", which avoids the error, but the eval results are then quite low (exact_match = 4.9414, f1 = 8.9784). When I run with 1 GPU, the above command succeeds (exact_match = 88.5336, f1 = 94.3266).
the full error is:
File "./transformers/src/transformers/trainer_pt_utils.py", line 410, in _nested_set_tensors
i * slice_len : (i + 1) * slice_len
ValueError: could not broadcast input array from shape (2816,384) into shape (2698,384)
"url": "https://api.github.com/repos/huggingface/transformers/issues/11721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11721/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11720/comments | https://api.github.com/repos/huggingface/transformers/issues/11720/events | https://github.com/huggingface/transformers/issues/11720 | 891,517,085 | MDU6SXNzdWU4OTE1MTcwODU= | 11,720 | RagRetriever fails to find faiss-gpu installed with pip not conda | {
"login": "Berowne",
"id": 23649143,
"node_id": "MDQ6VXNlcjIzNjQ5MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/23649143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Berowne",
"html_url": "https://github.com/Berowne",
"followers_url": "https://api.github.com/users/Berowne/followers",
"following_url": "https://api.github.com/users/Berowne/following{/other_user}",
"gists_url": "https://api.github.com/users/Berowne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Berowne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Berowne/subscriptions",
"organizations_url": "https://api.github.com/users/Berowne/orgs",
"repos_url": "https://api.github.com/users/Berowne/repos",
"events_url": "https://api.github.com/users/Berowne/events{/privacy}",
"received_events_url": "https://api.github.com/users/Berowne/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Got it to work with rebuild... and pip install faiss and faiss-gpu\r\ngit clone https://...github.../rag\r\nexport TOKENIZERS_PARALLELISM=false\r\npip install torch torchvision ray[default] datasets faiss faiss-gpu matplotlib seaborn pandas transformers awscli s3fs scikit-plot\r\npython use_own_knowledge_dataset.py --csv_path ./text.csv --output_dir ./out/text\r\n"
] | 1,620 | 1,621 | 1,621 | CONTRIBUTOR | null | - `transformers` version: 4.5.1
- Platform: Linux-4.14.231-173.361.amzn2.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes nVidia V100 16Gb
- Using distributed or parallel set-up in script?: just single task in //
- rag: @patrickvonplaten, @lhoestq
## Information
Model I am using (RAG Retriever ...):
The problem arises when using:
[* ] the official example scripts: (worked!)
[* ] my own modified scripts: (give details below)
The tasks I am working on is:
[* ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run transformers/examples/research_projects/rag/use_own_knowledge_dataset.py
This step worked fine yesterday prior to reboot.
2. Try to inspect the output dataset directly using the RagRetriever model in Python 3.6:
```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer
retriever = RagRetriever.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base', cache_dir=cache_dir, index_name="custom", indexed_dataset='./rag/out')
ImportError:
RagRetriever requires the faiss library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/facebookresearch/faiss/blob/master/INSTALL.md and follow the ones
that match your environment.
```
Also, if you import faiss, then faiss.__version__ does not exist.
Note for our environment we have to pip install faiss-gpu rather than conda since conda repos are blocked at proxy.
A sample script to query the /path/to/my_knowledge_dataset/ would be handy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11720/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11719/comments | https://api.github.com/repos/huggingface/transformers/issues/11719/events | https://github.com/huggingface/transformers/issues/11719 | 891,488,219 | MDU6SXNzdWU4OTE0ODgyMTk= | 11,719 | parameter `ignore_keys` of `trainer.predict` not accessible in `Trainer` or `TrainingArguments` | {
"login": "shabie",
"id": 30535146,
"node_id": "MDQ6VXNlcjMwNTM1MTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/30535146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shabie",
"html_url": "https://github.com/shabie",
"followers_url": "https://api.github.com/users/shabie/followers",
"following_url": "https://api.github.com/users/shabie/following{/other_user}",
"gists_url": "https://api.github.com/users/shabie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shabie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabie/subscriptions",
"organizations_url": "https://api.github.com/users/shabie/orgs",
"repos_url": "https://api.github.com/users/shabie/repos",
"events_url": "https://api.github.com/users/shabie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shabie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We could add an argument for this (like `ignore_keys_for_eval`) yes. Let me know if you want to tackle this!",
"I very much wanna do this and will get right down to it! 👍 Admittedly though, I am relatively new to making contributions.",
"Hi! As a short update: I will not be able to work on this until 23rd of June... so if anyone wants to pick it up good otherwise I need 3 more weeks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"OK so I'd like to work on this :)",
"Sure! Ping me when you open a PR :-)"
] | 1,620 | 1,625 | 1,625 | CONTRIBUTOR | null | # 🚀 Feature request
The [`predict`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.predict) and [`evaluate`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.evaluate) methods of the Trainer class provide an excellent `ignore_keys` option. Here is a small example:
```python
trainer.predict(dataset, ignore_keys=["ner_loss", "cls_loss", "ner_logits", "cls_logits"])
```
@sgugger
This option is, however, not accessible during the normal setup of the `TrainingArguments` or `Trainer` classes, so a call to `trainer.train()` leads to errors during mid-training evaluation.
## Motivation
I am unable to evaluate model metrics on the validation set **during** training to see if it makes sense to continue.
## Your contribution
I am happy to make a PR if this is seen as a genuine problem. As always, maybe I am missing something. (A sketch of the proposed argument follows.)
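A minimal sketch of what the requested setup-time argument might look like; the name `ignore_keys_for_eval` follows the maintainer's suggestion in the comments and is an assumption, not an existing parameter:
```python
from transformers import Trainer

# Hypothetical API sketch: `ignore_keys_for_eval` is the suggested (assumed) name,
# not a real Trainer parameter at the time of this issue.
# `model`, `training_args`, and the datasets come from the usual setup.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train(ignore_keys_for_eval=["ner_loss", "cls_loss", "ner_logits", "cls_logits"])
```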
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11718/comments | https://api.github.com/repos/huggingface/transformers/issues/11718/events | https://github.com/huggingface/transformers/pull/11718 | 891,284,506 | MDExOlB1bGxSZXF1ZXN0NjQ0MTc1ODEw | 11,718 | Fix loading the best model on the last stage of training | {
"login": "vbyno",
"id": 2923624,
"node_id": "MDQ6VXNlcjI5MjM2MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2923624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbyno",
"html_url": "https://github.com/vbyno",
"followers_url": "https://api.github.com/users/vbyno/followers",
"following_url": "https://api.github.com/users/vbyno/following{/other_user}",
"gists_url": "https://api.github.com/users/vbyno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbyno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbyno/subscriptions",
"organizations_url": "https://api.github.com/users/vbyno/orgs",
"repos_url": "https://api.github.com/users/vbyno/repos",
"events_url": "https://api.github.com/users/vbyno/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbyno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great! Lat thing is to run `make style` on your branch to make the CI pass. Let me know if you run into any issue, I can also push on your branch.",
"> Great! Lat thing is to run `make style` on your branch to make the CI pass. Let me know if you run into any issue, I can also push on your branch.\r\n\r\nCI is already fixed",
"Thanks again for the fix!"
] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
It fixes loading the best model at the last stage of training.
Fixes #11666
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11718/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11718",
"html_url": "https://github.com/huggingface/transformers/pull/11718",
"diff_url": "https://github.com/huggingface/transformers/pull/11718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11718.patch",
"merged_at": 1620936672000
} |
https://api.github.com/repos/huggingface/transformers/issues/11717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11717/comments | https://api.github.com/repos/huggingface/transformers/issues/11717/events | https://github.com/huggingface/transformers/pull/11717 | 891,250,865 | MDExOlB1bGxSZXF1ZXN0NjQ0MTQ3MTAx | 11,717 | Fix T5 beam search when using parallelize | {
"login": "OyvindTafjord",
"id": 6453366,
"node_id": "MDQ6VXNlcjY0NTMzNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6453366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OyvindTafjord",
"html_url": "https://github.com/OyvindTafjord",
"followers_url": "https://api.github.com/users/OyvindTafjord/followers",
"following_url": "https://api.github.com/users/OyvindTafjord/following{/other_user}",
"gists_url": "https://api.github.com/users/OyvindTafjord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OyvindTafjord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OyvindTafjord/subscriptions",
"organizations_url": "https://api.github.com/users/OyvindTafjord/orgs",
"repos_url": "https://api.github.com/users/OyvindTafjord/repos",
"events_url": "https://api.github.com/users/OyvindTafjord/events{/privacy}",
"received_events_url": "https://api.github.com/users/OyvindTafjord/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@OyvindTafjord Hi, I am trying to figure out how to use model parallelization on T5 but having some problems. I tried to reproduce your result but got the following error:\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM \r\ntokenizer = AutoTokenizer.from_pretrained(\"allenai/unifiedqa-t5-small\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"allenai/unifiedqa-t5-small\")\r\ndevice_map = {0: range(0,3), 1: range(3, 6)}\r\nmodel.parallelize(device_map)\r\ninput_string = \"What was the color of the sky?\\\\nIt was a dark stormy night.\"\r\ninput_ids = tokenizer.encode(input_string,return_tensors=\"pt\").to(\"cuda:0\")\r\n\r\noutput = model.generate(input_ids, num_beams=2)\r\n```\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/autograd/grad_mode.py\", line 15, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/generation_utils.py\", line 922, in generate\r\n model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/generation_utils.py\", line 417, in _prepare_encoder_decoder_kwargs_for_generation\r\n model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/models/t5/modeling_t5.py\", line 897, in forward\r\n inputs_embeds = self.embed_tokens(input_ids)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/modules/sparse.py\", line 114, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/functional.py\", line 1724, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: arguments are located on different GPUs at /opt/conda/conda-bld/pytorch_1587428091666/work/aten/src/THC/generic/THCTensorIndex.cu:403\r\n```\r\n* My current environment:\r\ntransformers: 4.7.0.dev0\r\ntorch: 1.5.0\r\n\r\nCould you please help me to figure out the problem and give me some direction that I should start with? I don't have much experience with model parallelization, do I need to modify the `input_ids`?\r\n\r\nThanks in advance.\r\n",
"@bing0037 Hm, I tested with 4.7.0 now and the above code works for me. I noticed my initial set of commands was missing the critical `model.parallelize(device_map)` step, but looks like you made sure to include that? \r\n\r\nYou could double check that `model.encoder.first_device` returns the expected `'cuda:0'`, and then the code at https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py#L870 should make sure the embeddings are also on that same device, so you shouldn't get that line 897 error above.",
"@OyvindTafjord Thank you for your reply. The problem was the inconsistency of my command and the above command works well.\r\nBTW, the above command is for parallelized model **inference**, could you please give me some suggestions for parallelized model **training**? \r\nCurrently, I am trying to finetune **t5-large** model using `run_summarization.py` on multiple GPUs by using model parallelization. \r\n* My test 1: By adding ```model.parallieze()``` directly in `run_summarization.py`, but got the following error:\r\n```\r\n model = AutoModelForSeq2SeqLM.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n revision=model_args.model_revision,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n )\r\n \r\n\r\n+ device_map = {0: [0, 1, 2],\r\n+ 1: [3, 4, 5, 6, 7, 8, 9],\r\n+ 3: [10, 11, 12, 13, 14, 15, 16],\r\n+ 4: [17, 18, 19, 20, 21, 22, 23]}\r\n+ model.parallelize(device_map) # Splits the model across several devices\r\n\r\n model.resize_token_embeddings(len(tokenizer))\r\n\r\n if model.config.decoder_start_token_id is None:\r\n raise ValueError(\"Make sure that `config.decoder_start_token_id` is correctly defined\")\r\n```\r\n```\r\nTraceback (most recent call last):\r\n File \"run_summarization.py\", line 616, in <module>\r\n main()\r\n File \"run_summarization.py\", line 540, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/trainer.py\", line 1300, in train\r\n args.max_grad_norm,\r\n File \"/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/utils/clip_grad.py\", line 30, in clip_grad_norm_\r\n total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type) for p in parameters]), norm_type)\r\nRuntimeError: All input tensors must be on the same device. Received cuda:0 and cuda:7\r\n```\r\n\r\n* My test 2: I referred to this question: https://discuss.huggingface.co/t/transformers-4-6-0-on-sagemaker/6217/4, but still can't change the model parallesim status:\r\n```\r\npip install git+https://github.com/aws/sagemaker-python-sdk.git\r\npip install sagemaker\r\n```\r\n```\r\n>>> from transformers.file_utils import is_sagemaker_mp_enabled\r\n>>> is_sagemaker_mp_enabled()\r\nFalse\r\n```\r\nCould you give me some resources that I could refer to? Thank you!",
"@bing0037 I haven't tried the parallelize functionality in the context of training, so I'm not of much help on that. "
] | 1,620 | 1,624 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
As requested by @patrickvonplaten in conversation on issue https://github.com/huggingface/transformers/issues/9200, this fixes a crash when trying to use beam search on T5 models split across multiple GPUs using `model.parallelize()`. It uses the fix from https://github.com/huggingface/transformers/pull/9219, applied to the T5-specific code (also related is https://github.com/huggingface/transformers/pull/9596 which refactored the `_reorder_cache` functions).
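For readers skimming the thread, the essence of the fix is to move `beam_idx` onto whichever device each layer's cached states live on before calling `index_select`. A self-contained sketch of the idea (my paraphrase, not the literal diff):
```python
import torch

def reorder_layer_cache(layer_past_state: torch.Tensor, beam_idx: torch.Tensor) -> torch.Tensor:
    # With model.parallelize(), each layer's cache may sit on a different GPU,
    # so beam_idx must be moved to that layer's device before index_select.
    return layer_past_state.index_select(0, beam_idx.to(layer_past_state.device))
```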
I tested the fix on a t5-small model. Before:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("allenai/unifiedqa-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/unifiedqa-t5-small")
device_map = {0: range(0,3), 1: range(3, 6)}
model.parallelize(device_map)  # critical step; its omission from the original repro is noted in the comments
input_string = "What was the color of the sky?\\nIt was a dark stormy night."
input_ids = tokenizer.encode(input_string,return_tensors="pt").to("cuda:0")
output = model.generate(input_ids, num_beams=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/transformers/generation_utils.py", line 1044, in generate
return self.beam_search(
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/transformers/generation_utils.py", line 1788, in beam_search
model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx)
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1635, in _reorder_cache
layer_past_state.index_select(0, beam_idx),
RuntimeError: Input, output and indices must be on the current device
```
After:
```
...
output = model.generate(input_ids, num_beams=2)
tokenizer.batch_decode(output, skip_special_tokens=True)
--> ['dark stormy']
```
As far as I know, this small fix shouldn't have any adverse effects. As for why the tests added in https://github.com/huggingface/transformers/pull/9219 didn't catch this: possibly because they're not generally run in multi-GPU setups.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11717/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11717",
"html_url": "https://github.com/huggingface/transformers/pull/11717",
"diff_url": "https://github.com/huggingface/transformers/pull/11717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11717.patch",
"merged_at": 1620985443000
} |
https://api.github.com/repos/huggingface/transformers/issues/11716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11716/comments | https://api.github.com/repos/huggingface/transformers/issues/11716/events | https://github.com/huggingface/transformers/pull/11716 | 891,039,539 | MDExOlB1bGxSZXF1ZXN0NjQzOTcwMzMz | 11,716 | Refactor slow sentencepiece tokenizers. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`SentencePieceProcessor.decode` is doing \"the same but more than `SentencePieceProcessor.decode_pieces`. \r\nThat is why we replace `SentencePieceProcessor.decode_pieces` with `SentencePieceProcessor.decode` in this PR.\r\n\r\nSee here:\r\n\r\nhttps://github.com/google/sentencepiece/blob/6256ef243844e5848499cf519eb2a7e2755e75a1/python/src/sentencepiece/__init__.py#L307",
"rebased on upstrem/master",
"We need to rebase on master after PR #11737 has been merged.",
"Rebased on master - CI is green again. :-) ",
"Rebased on master to get integration tests - see #11737",
"Rebased on master",
"> I think generally speaking we'd like to have methods that are common to all tokenizers in the base class - but not methods that are common to some of them only. I'd also like to keep the number of abstraction layers to a minimum, tokenizers are already quite tough to understand.\r\n\r\n@LysandreJik \r\nYes. I also prefer a low number of abstraction layers. At the same time I like dry code. There is 100% duplicate code in the tokenizers impl. that has just been duplicated by copy & paste. IMO that should be removed by an refactoring. That is what I try to introduce here.",
"The general approach of the library is to keep the number of abstractions as low as possible, and to keep implementations as separate as possible from each other, hence the high amount of copy-pasted code.\r\n\r\nWe want users to be able to experiment with single models/tokenizers without their changes impacting other models or tokenizers - and we want them to be able to understand how a model or tokenizer behaves by simply checking a single file, rather than having to hop around multiple files.\r\n\r\nWe are failing at this with tokenizers as there are already two levels of abstraction, but adding a third one isn't really the direction we want to head to :)\r\n\r\nDoes that make sense?",
"> Does that make sense?\r\n\r\nYes. Sure. Your project, your call.\r\nI will revert my changes and keep it as simple as possible as discussed in the beginning.\r\n",
"@LysandreJik I have redone the PR. Everything is green and the changes are as simple as planned in the issue.\r\nThis is ready for review.\r\n\r\nAverything is tested by setting `test_sentencepiece = True` in the tokenizer test classes and by the following \r\ntestfunction: `TokenizerTesterMixin.test_sentencepiece_tokenize_and_convert_tokens_to_string`"
] | 1,620 | 1,626 | 1,626 | CONTRIBUTOR | null | PR for #11646
## ToDo
- [x] `AlbertTokenizer`
- [x] `BarthezTokenizer`
- [x] `BertGenerationTokenizer`
- [x] `BigBirdTokenizer`
- [x] `CamembertTokenizer`
- [x] `DebertaV2Tokenizer`
- [x] `M2M100Tokenizer`
- [x] `MarianTokenizer`
- [x] `MBart50Tokenizer`
- [x] `PegasusTokenizer`
- [x] `ReformerTokenizer`
- [x] `Speech2TextTokenizer`
- [x] `T5Tokenizer`
- [x] `XLMProphetNetTokenizer`
- [x] `XLM RoBERTa`
- [x] `XLNetTokenizer` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11716",
"html_url": "https://github.com/huggingface/transformers/pull/11716",
"diff_url": "https://github.com/huggingface/transformers/pull/11716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11716.patch",
"merged_at": 1626856111000
} |
https://api.github.com/repos/huggingface/transformers/issues/11715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11715/comments | https://api.github.com/repos/huggingface/transformers/issues/11715/events | https://github.com/huggingface/transformers/issues/11715 | 890,937,304 | MDU6SXNzdWU4OTA5MzczMDQ= | 11,715 | Request for feature for setting batch size in pipeline when inference | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | ```
from transformers import pipeline
from transformers import AutoModelWithLMHead, AutoTokenizer
model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
nlp = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer, device=0)
nlp(ds.df_train.sample(32)['content'].tolist(), max_length=300)
```
I am using a pipeline instance for inference on chunks of sentences.
When the chunk size is small, like 32, it fits into GPU memory without problems.
However, when the input size increases, a memory error occurs:
> RuntimeError: CUDA out of memory.
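One hedged workaround, pending native support, is to chunk the inputs manually; the chunk size of 32 below is an assumption to tune against your GPU memory:
```python
# Sketch of manual chunking; `nlp` and `ds` come from the snippet above.
texts = ds.df_train['content'].tolist()
results = []
for i in range(0, len(texts), 32):  # 32 is an assumed, tunable chunk size
    results.extend(nlp(texts[i:i + 32], max_length=300))
```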
Is there any way to set the batch size inside `nlp()` so that inference automatically fits into GPU memory? (The sketch above shows the manual chunking I mean; native batching support would be nicer.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11715/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11714/comments | https://api.github.com/repos/huggingface/transformers/issues/11714/events | https://github.com/huggingface/transformers/issues/11714 | 890,851,753 | MDU6SXNzdWU4OTA4NTE3NTM= | 11,714 | Blender 9B model | {
"login": "hyunwoongko",
"id": 38183241,
"node_id": "MDQ6VXNlcjM4MTgzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/38183241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyunwoongko",
"html_url": "https://github.com/hyunwoongko",
"followers_url": "https://api.github.com/users/hyunwoongko/followers",
"following_url": "https://api.github.com/users/hyunwoongko/following{/other_user}",
"gists_url": "https://api.github.com/users/hyunwoongko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyunwoongko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyunwoongko/subscriptions",
"organizations_url": "https://api.github.com/users/hyunwoongko/orgs",
"repos_url": "https://api.github.com/users/hyunwoongko/repos",
"events_url": "https://api.github.com/users/hyunwoongko/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyunwoongko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think @patil-suraj talked about it at some point?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"\r\n\r\nI ported myself :)\r\n"
] | 1,620 | 1,623 | 1,623 | CONTRIBUTOR | null | Are there any plans to release the blender 9B model in the transformers library? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11714/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11713/comments | https://api.github.com/repos/huggingface/transformers/issues/11713/events | https://github.com/huggingface/transformers/issues/11713 | 890,835,260 | MDU6SXNzdWU4OTA4MzUyNjA= | 11,713 | Unable to import transformers: ImportError: numpy>=1.17 is required for a normal functioning of this module, but found numpy==1.16.3 | {
"login": "laurence-lin",
"id": 19323465,
"node_id": "MDQ6VXNlcjE5MzIzNDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/19323465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laurence-lin",
"html_url": "https://github.com/laurence-lin",
"followers_url": "https://api.github.com/users/laurence-lin/followers",
"following_url": "https://api.github.com/users/laurence-lin/following{/other_user}",
"gists_url": "https://api.github.com/users/laurence-lin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laurence-lin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laurence-lin/subscriptions",
"organizations_url": "https://api.github.com/users/laurence-lin/orgs",
"repos_url": "https://api.github.com/users/laurence-lin/repos",
"events_url": "https://api.github.com/users/laurence-lin/events{/privacy}",
"received_events_url": "https://api.github.com/users/laurence-lin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What is your working environment? Is it a Colab notebook, a Linux machine?",
"I'm working on my PC with Windows 10 system.\r\n\r\n> What is your working environment? Is it a Colab notebook, a Linux machine?\r\n\r\n",
"Probably your environments have different versions as @LysandreJik mentioned. Can you run the following command **where you get the error message (your working environment)**, and assure you have the correct numpy version.\r\n\r\n pip freeze | findstr \"numpy\"\r\n\r\nYou may try the following, but without locating your correct environment these probably do not help much.\r\n\r\n pip install -I transformers --no-cache-dir --force-reinstall",
"> pip freeze | findstr \"numpy\"\r\n\r\nHello! I'd followed your work, and it returns this:\r\n\r\n\r\n\r\nWhy does this happen?",
"> Probably your environments have different versions as @LysandreJik mentioned. Can you run the following command **where you get the error message (your working environment)**, and assure you have the correct numpy version.\r\n> \r\n> ```\r\n> pip freeze | findstr \"numpy\"\r\n> ```\r\n> \r\n> You may try the following, but without locating your correct environment these probably do not help much.\r\n> \r\n> ```\r\n> pip install -I transformers --no-cache-dir --force-reinstall\r\n> ```\r\n\r\nYour second instruction works! It seems force-install may install all dependencies for transformer, but I still don't know why it couldn't run with the default numpy = 1.20.0\r\n\r\nThank you for your help!",
"Glad it solved your problem. We can close this issue then @laurence-lin.",
"> Glad it solved your problem. We can close this issue then @laurence-lin.\r\n\r\nOK, thank you!",
"ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ndaal4py 2021.5.0 requires daal==2021.4.0, which is not installed.\r\nconda-repo-cli 1.0.4 requires pathlib, which is not installed.\r\nanaconda-project 0.10.2 requires ruamel-yaml, which is not installed.\r\nmxnet 1.7.0.post2 requires numpy<1.17.0,>=1.8.2, but you have numpy 1.22.4 which is incompatible.\r\nmxnet 1.7.0.post2 requires requests<2.19.0,>=2.18.4, but you have requests 2.28.0 which is incompatible.\r\nnumba 0.55.1 requires numpy<1.22,>=1.18, but you have numpy 1.22.4 which is incompatible.\r\njupyter-server 1.13.5 requires pywinpty<2; os_name == \"nt\", but you have pywinpty 2.0.2 which is incompatible.\r\nd2l 0.17.5 requires matplotlib==3.5.1, but you have matplotlib 3.5.2 which is incompatible.\r\nd2l 0.17.5 requires numpy==1.21.5, but you have numpy 1.22.4 which is incompatible.\r\nd2l 0.17.5 requires requests==2.25.1, but you have requests 2.28.0 which is incompatible. \r\n\r\n\r\npls help me out",
"I have the same issue but with `fastai`. Check out my [Git Issue](https://github.com/fastai/fastai/issues/3708).",
"> ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. daal4py 2021.5.0 requires daal==2021.4.0, which is not installed. conda-repo-cli 1.0.4 requires pathlib, which is not installed. anaconda-project 0.10.2 requires ruamel-yaml, which is not installed. mxnet 1.7.0.post2 requires numpy<1.17.0,>=1.8.2, but you have numpy 1.22.4 which is incompatible. mxnet 1.7.0.post2 requires requests<2.19.0,>=2.18.4, but you have requests 2.28.0 which is incompatible. numba 0.55.1 requires numpy<1.22,>=1.18, but you have numpy 1.22.4 which is incompatible. jupyter-server 1.13.5 requires pywinpty<2; os_name == \"nt\", but you have pywinpty 2.0.2 which is incompatible. d2l 0.17.5 requires matplotlib==3.5.1, but you have matplotlib 3.5.2 which is incompatible. d2l 0.17.5 requires numpy==1.21.5, but you have numpy 1.22.4 which is incompatible. d2l 0.17.5 requires requests==2.25.1, but you have requests 2.28.0 which is incompatible.\r\n> \r\n> pls help me out\r\n\r\nI solved it this way:\r\n`pip install -I transformers --no-cache-dir --force-reinstall`, as suggested by @devrimcavusoglu \r\nThe this error appeared:\r\n`ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in [.../.../...']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.`\r\nI did `sudo pip3 uninstall numpy` twice, until no numpy version was found, and then it worked. Tbh I have no idea why, but as long as it's working it is fine.\r\n\r\nHope this helps",
"find the old version numpy , and delete the old version numpy by youself。",
"@devrimcavusoglu I'am facing with the same issue, but the following command does not help: `pip install -I transformers --no-cache-dir --force-reinstall`.\r\n\r\nI am on ubuntu, using a conda env named `py36`, and make sure I was operating in the correct env (as the `(py36)` line in the following logs).\r\n\r\n\r\n\r\n\r\n```\r\n(py36) \r\ncyx@c9 ~ \r\n % pip install -I transformers --no-cache-dir --force-reinstall \r\nxxxxxxx downloading logs xxxxxxxxxxx \r\n Installing collected packages: zipp, typing-extensions, urllib3, pyparsing, importlib-resources, importlib-metadata, idna, charset-normalizer, certifi, tqdm, six, requests, regex, pyyaml, packaging, joblib, filelock, click, tokenizers, sacremoses, numpy, hug gingface-hub, dataclasses, transformers\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\nallennlp 1.0.0 requires jsonnet>=0.10.0; sys_platform != \"win32\", which is not installed. allennlp 1.0.0 requires jsonpickle, which is not installed.\r\nallennlp 1.0.0 requires tensorboardX>=1.2, which is not installed. bminf test requires cupy-cuda9<10,>=9, which is not installed.\r\ntorchvision 0.10.0 requires torch==1.9.0, but you have torch 1.10.0 which is incompatible. thinc 8.0.15 requires typing-extensions<4.0.0.0,>=3.7.4.1; python_version < \"3.8\", but you have typing-extensions 4.1.1 which is incompatible.\r\nsphinx-rtd-theme 0.5.2 requires docutils<0.17, but you have docutils 0.17.1 which is incompatible. spacy 3.2.3 requires typing-extensions<4.0.0.0,>=3.7.4; python_version < \"3.8\", but you have typing-extensions 4.1.1 which is incompatible.\r\npaddlepaddle-tiny 1.6.1 requires numpy<=1.16.4,>=1.12, but you have numpy 1.19.5 which is incompatible. \r\nflake8 5.0.4 requires importlib-metadata<4.3,>=1.1.0; python_version < \"3.8\", but you have importlib-metadata 4.8.3 which is incompatible.\r\ndatasets 1.2.0 requires tqdm<4.50.0,>=4.27, but you have tqdm 4.64.1 which is incompatible. \r\nargcomplete 1.11.1 requires importlib-metadata<2,>=0.23; python_version == \"3.6\", but you have importlib-metadata 4.8.3 which is incompatible. \r\nallennlp 1.0.0 requires filelock<3.1,>=3.0, but you have filelock 3.4.1 which is incompatible. \r\nallennlp 1.0.0 requires overrides==3.0.0, but you have overrides 6.1.0 which is incompatible.\r\nallennlp 1.0.0 requires spacy<2.3,>=2.1.0, but you have spacy 3.2.3 which is incompatible.\r\nallennlp 1.0.0 requires torch<1.6.0,>=1.5.0, but you have torch 1.10.0 which is incompatible.\r\nallennlp 1.0.0 requires transformers<2.12,>=2.9, but you have transformers 4.18.0 which is incompatible. 
\r\nopennmt-py 1.0.0 requires tqdm~=4.30.0, but you have tqdm 4.64.1 which is incompatible.\r\nSuccessfully installed certifi-2022.12.7 charset-normalizer-2.0.12 click-8.0.4 dataclasses-0.8 filelock-3.4.1 huggingface-hub-0.4.0 idna-3.4 importlib-metadata-4.8.3 importlib-resources-5.4.0 joblib-1.1.1 numpy-1.19.5 packaging-21.3 pyparsing-3.0.9 pyyaml-6.0 regex-2022.10.31 requests-2.27.1 sacremoses-0.0.53 six-1.16.0 tokenizers-0.12.1 tqdm-4.64.1 transformers-4.18.0 typing-extensions-4.1.1 urllib3-1.26.14 zipp-3.6.0\r\n```\r\n\r\nThen, I try to import transformers, and get the same error.\r\n```\r\n(py36)\r\ncyx@c9 ~ \r\n % python !10078\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import numpy as np\r\n>>> np.__version__\r\n'1.19.5'\r\n>>> np.__file__ '/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/numpy/__init__.py' \r\n>>> import transformers\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/__init__.py\", line 30, in <module>\r\n from . import dependency_versions_check\r\n File \"/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/dependency_versions_check.py\", line 41, in <module>\r\n require_version_core(deps[pkg])\r\n File \"/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/utils/versions.py\", line 120, in require_version_core\r\n return require_version(requirement, hint)\r\n File \"/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/utils/versions.py\", line 114, in require_version\r\n _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n File \"/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/utils/versions.py\", line 50, in _compare_versions\r\n f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\nImportError: numpy>=1.17 is required for a normal functioning of this module, but found numpy==1.16.4.\r\nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git main\r\n>>> \r\n```",
"\r\nI think I found the answer.\r\n\r\nWhile `pip list | grep numpy ` only returns 1.19.5 version, `conda list | grep numpy ` returns multiple versions:\r\n```\r\n(py36) \r\ncyx@c9 ~ \r\n % conda list | grep numpy \r\nnumpy 1.19.2 py36h54aff64_0 \r\nnumpy 1.16.4 <pip> \r\nnumpy 1.19.5 <pip> \r\nnumpy-base 1.19.2 py36hfa32c7d_0 \r\n```\r\nThen, I went to the conda env dir: `/data/home/cyx/.conda/envs/py36/lib/python3.6/site-packages`, and find there is a folder named `numpy-1.16.4.dist-info` and a folder named `numpy-1.19.5.dist-info`. After removing the 1.16.4 folder, I can import transformers correctly.\r\n\r\nI wonder maybe the version checking function could be updated?",
"I had the same issue and ran:\r\n\r\npip show numpy | grep Location\r\nrm -rvf /usr/local/lib/python3.11/site-packages/numpy\r\npython3.11 -m pip install numpy\r\n\r\nand this resolved it"
] | 1,620 | 1,681 | 1,621 | NONE | null | I'd installed transformers via `pip install transformers`
and made sure my Python packages are up to date. However, when I import transformers it shows this error message:
```
ImportError: numpy>=1.17 is required for a normal functioning of this module, but found numpy==1.16.3.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master
```
I'd made sure my numpy version is 1.20.1, and I tried `pip install transformers -U` as suggested in the message.
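A hedged diagnostic sketch, run from the same interpreter that raises the error, to see which numpy installation is actually being imported; a stale copy earlier on `sys.path` is a common cause of this kind of mismatch:
```python
import numpy

print(numpy.__version__)  # the error reports 1.16.3 even though 1.20.1 was installed
print(numpy.__file__)     # shows which installation wins on sys.path
```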
The upgrade didn't work either. Please help me figure out how to import the package, thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11713/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11712/comments | https://api.github.com/repos/huggingface/transformers/issues/11712/events | https://github.com/huggingface/transformers/issues/11712 | 890,823,994 | MDU6SXNzdWU4OTA4MjM5OTQ= | 11,712 | Reformer for questions answering(squad) | {
"login": "bela572",
"id": 84122759,
"node_id": "MDQ6VXNlcjg0MTIyNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/84122759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bela572",
"html_url": "https://github.com/bela572",
"followers_url": "https://api.github.com/users/bela572/followers",
"following_url": "https://api.github.com/users/bela572/following{/other_user}",
"gists_url": "https://api.github.com/users/bela572/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bela572/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bela572/subscriptions",
"organizations_url": "https://api.github.com/users/bela572/orgs",
"repos_url": "https://api.github.com/users/bela572/repos",
"events_url": "https://api.github.com/users/bela572/events{/privacy}",
"received_events_url": "https://api.github.com/users/bela572/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It hasn't changed yet. The problem is not code, the problem is the lack of a decent pre-trained model.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | I want to use Reformer for question answering. I tried the pretrained model 'google/reformer-crime-and-punishment', following this example https://huggingface.co/transformers/custom_datasets.html#qa-squad with the model swapped for Reformer. I get exceptions related to the pad and cls tokens, sequence length and so on, but that does not matter for now; first I want to know: is it even possible to get good results (for example, the model answering questions with more than 50 percent accuracy)? I see that you are working on this model now, and some functions may not be implemented yet. I saw issues like this: https://github.com/huggingface/transformers/issues/5436 where you wrote that you would be very surprised if Reformer gave any good results for Q&A (but that was a year ago, so maybe it has changed).
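For reference, the library does expose a QA head for Reformer; below is a minimal, untested sketch of instantiating it with this checkpoint. Note the span-prediction head would be freshly initialized (the checkpoint was pretrained only as a language model on a single novel), so accuracy expectations should be low:
```python
from transformers import ReformerForQuestionAnswering, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerForQuestionAnswering.from_pretrained("google/reformer-crime-and-punishment")

# Question/context pair; the QA head on top is newly initialized, not pretrained.
inputs = tokenizer("Who wrote the novel?", "Dostoevsky wrote Crime and Punishment.", return_tensors="pt")
outputs = model(**inputs)  # outputs.start_logits / outputs.end_logits
```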
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11712/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11711/comments | https://api.github.com/repos/huggingface/transformers/issues/11711/events | https://github.com/huggingface/transformers/issues/11711 | 890,820,446 | MDU6SXNzdWU4OTA4MjA0NDY= | 11,711 | How to accelerate the inference speed when using pipeline | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @yananchen1989 did you get a solution yet"
] | 1,620 | 1,646 | 1,624 | NONE | null | > transformers version: 4.6.0.dev0
> torch version: 1.8.1+cu102
I am using the simple pipeline API to do inference, where the inputs are tens of thousands of sentences.
```
from transformers import pipeline  # import added so the snippet is self-contained

nlp = pipeline("text-generation", model="gpt2", device=0, return_full_text=False)
results = nlp(df_train['content'].tolist(), max_length=250, do_sample=True, top_p=0.9, top_k=0, \
repetition_penalty=1, num_return_sequences=64)
```
Take generation, for instance: I want to generate new synthesized samples from each sentence in `df_train`.
The code works well, but it is not fast enough; GPU usage is only 76% to 85%.
Are there any tricks or parameters I can tune to speed it up?
Another question: how can I eliminate this log message:
> Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
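On the second question, a hedged sketch: the pipeline forwards extra keyword arguments to `generate`, so passing `pad_token_id` explicitly should silence that message (50256 is GPT-2's eos token id; generation behaviour is otherwise unchanged):
```python
# Same call as above, with pad_token_id passed explicitly so generate()
# no longer logs the "Setting pad_token_id..." message.
results = nlp(df_train['content'].tolist(), max_length=250, do_sample=True,
              top_p=0.9, top_k=0, repetition_penalty=1,
              num_return_sequences=64, pad_token_id=50256)
```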
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11711/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11711/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11710/comments | https://api.github.com/repos/huggingface/transformers/issues/11710/events | https://github.com/huggingface/transformers/issues/11710 | 890,819,417 | MDU6SXNzdWU4OTA4MTk0MTc= | 11,710 | AssertionError: internal model should be a reference to self.model | {
"login": "zhangxiann",
"id": 12131262,
"node_id": "MDQ6VXNlcjEyMTMxMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/12131262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangxiann",
"html_url": "https://github.com/zhangxiann",
"followers_url": "https://api.github.com/users/zhangxiann/followers",
"following_url": "https://api.github.com/users/zhangxiann/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangxiann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangxiann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangxiann/subscriptions",
"organizations_url": "https://api.github.com/users/zhangxiann/orgs",
"repos_url": "https://api.github.com/users/zhangxiann/repos",
"events_url": "https://api.github.com/users/zhangxiann/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangxiann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should update your version of Transformers to solve this issue.",
"thanks",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.7 CPU
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
When I run `trainer.train()` for the second time in Jupyter, it throws an error:
```
AssertionError Traceback (most recent call last)
<ipython-input-7-3435b262f1ae> in <module>
----> 1 trainer.train()
~/data/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)
933
934 self.control = self.callback_handler.on_epoch_end(self.args, self.state, self.control)
--> 935 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
936
937 if self.args.tpu_metrics_debug or self.args.debug:
~/data/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch)
1006
1007 if self.control.should_save:
-> 1008 self._save_checkpoint(model, trial, metrics=metrics)
1009 self.control = self.callback_handler.on_save(self.args, self.state, self.control)
1010
~/data/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in _save_checkpoint(self, model, trial, metrics)
1012 # In all cases, including ddp/dp/deepspeed, self.model is always a reference to the model we
1013 # want to save.
-> 1014 assert _model_unwrap(model) is self.model, "internal model should be a reference to self.model"
1015
1016 # Save model checkpoint
AssertionError: internal model should be a reference to self.model
```
https://huggingface.co/transformers/training.html
The task I am working on is:
* [ ] sequence classification
* [ ] my own task
## To reproduce
Steps to reproduce the behavior:
1. Run `trainer.train()`.
2. Run `trainer.train()` again (a workaround sketch follows below).
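As noted in the comments, upgrading Transformers is the proper fix; as a hedged, untested workaround sketch on 4.2.2, rebuilding the `Trainer` before the second run gives it a fresh internal model reference:
```python
from transformers import Trainer

# `model`, `training_args`, and `train_dataset` come from the usual setup.
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()

# Workaround sketch: recreate the Trainer so its internal reference is fresh
# before training again (upgrading transformers is the recommended fix).
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```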
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11710/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11709/comments | https://api.github.com/repos/huggingface/transformers/issues/11709/events | https://github.com/huggingface/transformers/pull/11709 | 890,807,218 | MDExOlB1bGxSZXF1ZXN0NjQzNzczMDMy | 11,709 | Fix gpt-2 warnings | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | closes https://github.com/huggingface/transformers/issues/11707 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11709/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11709",
"html_url": "https://github.com/huggingface/transformers/pull/11709",
"diff_url": "https://github.com/huggingface/transformers/pull/11709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11709.patch",
"merged_at": 1620891345000
} |
https://api.github.com/repos/huggingface/transformers/issues/11708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11708/comments | https://api.github.com/repos/huggingface/transformers/issues/11708/events | https://github.com/huggingface/transformers/issues/11708 | 890,637,912 | MDU6SXNzdWU4OTA2Mzc5MTI= | 11,708 | stop at load tokenizer_config.json when run barthez for mrpc | {
"login": "gongzheng0",
"id": 83264840,
"node_id": "MDQ6VXNlcjgzMjY0ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/83264840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gongzheng0",
"html_url": "https://github.com/gongzheng0",
"followers_url": "https://api.github.com/users/gongzheng0/followers",
"following_url": "https://api.github.com/users/gongzheng0/following{/other_user}",
"gists_url": "https://api.github.com/users/gongzheng0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gongzheng0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gongzheng0/subscriptions",
"organizations_url": "https://api.github.com/users/gongzheng0/orgs",
"repos_url": "https://api.github.com/users/gongzheng0/repos",
"events_url": "https://api.github.com/users/gongzheng0/events{/privacy}",
"received_events_url": "https://api.github.com/users/gongzheng0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No problem.It is just loading too long as 9 minute."
] | 1,620 | 1,620 | 1,620 | NONE | null | I run this code:
```shell
python examples/text-classification/run_glue_tune.py --model_name_or_path /home2/zhenggo1/checkpoint/barthez_mrpc --task_name $TASK_NAME --do_eval --tune --max_seq_length 512 --output_dir /home2/zhenggo1/checkpoint/barthez_mrpc --tuned_checkpoint="/home2/zhenggo1/checkpoint/barthez_mrpc"
```
It stops here:
```shell
[INFO|tokenization_utils_base.py:1618] 2021-05-13 10:11:25,859 >> Model name '/home2/zhenggo1/checkpoint/barthez_mrpc' not found in model shortcut name list (moussaKam/mbarthez, moussaKam/barthez, moussaKam/barthez-orangesum-title). Assuming '/home2/zhenggo1/checkpoint/barthez_mrpc' is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1651] 2021-05-13 10:11:25,860 >> Didn't find file /home2/zhenggo1/checkpoint/barthez_mrpc/tokenizer.json. We won't load it.
[INFO|tokenization_utils_base.py:1651] 2021-05-13 10:11:25,860 >> Didn't find file /home2/zhenggo1/checkpoint/barthez_mrpc/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file /home2/zhenggo1/checkpoint/barthez_mrpc/sentencepiece.bpe.model
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file None
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file None
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file /home2/zhenggo1/checkpoint/barthez_mrpc/special_tokens_map.json
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file /home2/zhenggo1/checkpoint/barthez_mrpc/tokenizer_config.json
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11708/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11707/comments | https://api.github.com/repos/huggingface/transformers/issues/11707/events | https://github.com/huggingface/transformers/issues/11707 | 890,571,899 | MDU6SXNzdWU4OTA1NzE4OTk= | 11,707 | Loading Basic GPT-2 model gives warning that attention layers weren't loaded from pre-trained weights | {
"login": "pgarz",
"id": 3752277,
"node_id": "MDQ6VXNlcjM3NTIyNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3752277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pgarz",
"html_url": "https://github.com/pgarz",
"followers_url": "https://api.github.com/users/pgarz/followers",
"following_url": "https://api.github.com/users/pgarz/following{/other_user}",
"gists_url": "https://api.github.com/users/pgarz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pgarz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pgarz/subscriptions",
"organizations_url": "https://api.github.com/users/pgarz/orgs",
"repos_url": "https://api.github.com/users/pgarz/repos",
"events_url": "https://api.github.com/users/pgarz/events{/privacy}",
"received_events_url": "https://api.github.com/users/pgarz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"These warnings mention that buffers are not loaded, which is normal - they're created during the model initialization.\r\n\r\nThere was an attribute missing on the `GPT2Model` which led the warnings to still be raised, I'm fixing this in #11709!",
"Installing from source should resolve the warnings issue :) "
] | 1,620 | 1,620 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-debian-10.9
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes. One GPU on Google Cloud Compute
- Using distributed or parallel set-up in script?: No
Tagging the following people for assistance
- gpt2: @patrickvonplaten, @LysandreJik
## Information
I'm using the vanilla GPT-2 model. Locally on my MacBook Pro, the code runs as expected, but when I run the same Jupyter notebook on GCP (env details of the instance are above), I encounter a warning I have not seen before. I'm simply trying to load the vanilla GPT-2 model, yet I keep getting a warning that the attention layers are not being initialized from the pre-trained weights as intended.
I get the following warning message:
`Some weights of GPT2Model were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.1.attn.masked_bias', 'h.8.attn.masked_bias', 'h.2.attn.masked_bias', 'h.10.attn.masked_bias', 'h.7.attn.masked_bias', 'h.4.attn.masked_bias', 'h.11.attn.masked_bias', 'h.9.attn.masked_bias', 'h.0.attn.masked_bias', 'h.3.attn.masked_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.`
This happens when I attempt to execute the super simple model loading statement:
` pretrained_transformer = GPT2Model.from_pretrained('gpt2')`
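As a sanity check, here is a minimal sketch (assuming `masked_bias` is a buffer created at model init rather than a learned weight, as the maintainer comments above suggest):
```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")

# Buffers are created during __init__ and are not stored in the checkpoint,
# so a "newly initialized" warning about them does not mean trained weights
# were dropped.
buffers = {name for name, _ in model.named_buffers()}
params = {name for name, _ in model.named_parameters()}
print(any("masked_bias" in n for n in buffers))  # expected: True
print(any("masked_bias" in n for n in params))   # expected: False
```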
I've seen similar issues posted before; however, those involve loading one model's weights into a different model type. Here I am strictly loading a GPT-2 weight set into the same GPT-2 model.
I'm worried these warnings are real, since my experiments on GCP do not look the same as the local ones, which would be consistent with the weights not being loaded properly.
## To reproduce
Steps to reproduce the behavior:
1. Make GCP notebook
2. Try to load GPT-2 model
## Expected behavior
The behaviour I want is no warning messages, with all pre-trained weights loaded and usable, just as happens locally.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11707/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11706/comments | https://api.github.com/repos/huggingface/transformers/issues/11706/events | https://github.com/huggingface/transformers/pull/11706 | 890,188,034 | MDExOlB1bGxSZXF1ZXN0NjQzMjQ2MDMz | 11,706 | Add Cloud details to README | {
"login": "marcvanzee",
"id": 180100,
"node_id": "MDQ6VXNlcjE4MDEwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcvanzee",
"html_url": "https://github.com/marcvanzee",
"followers_url": "https://api.github.com/users/marcvanzee/followers",
"following_url": "https://api.github.com/users/marcvanzee/following{/other_user}",
"gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions",
"organizations_url": "https://api.github.com/users/marcvanzee/orgs",
"repos_url": "https://api.github.com/users/marcvanzee/repos",
"events_url": "https://api.github.com/users/marcvanzee/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcvanzee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
Clarifies the date and timezone for retrieving the prices to avoid future complaints about incorrectness.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11706/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11706",
"html_url": "https://github.com/huggingface/transformers/pull/11706",
"diff_url": "https://github.com/huggingface/transformers/pull/11706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11706.patch",
"merged_at": 1621000285000
} |
https://api.github.com/repos/huggingface/transformers/issues/11705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11705/comments | https://api.github.com/repos/huggingface/transformers/issues/11705/events | https://github.com/huggingface/transformers/pull/11705 | 890,141,760 | MDExOlB1bGxSZXF1ZXN0NjQzMjA2NzIz | 11,705 | [Lazy init] Force fall back to slow init for composite models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
Thanks to the great issue #11704, it was discovered that fast initialization currently breaks for all models whose `XXXPreTrainedModel` does not implement an `_init_weights` function and for which parts of the weights are missing when using `.from_pretrained(...)`. This covers essentially all composite models, namely `Rag` and `EncoderDecoder`.
This PR does the vanilla fix of forcing those models to fall back on `_slow_init`, since a better fix requires a careful redesign that is left for a future PR.
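Conceptually, the fallback amounts to a guard like the following minimal sketch (an illustration of the logic this PR describes, not the actual diff):
```python
def should_use_fast_init(model_class, requested_fast_init: bool) -> bool:
    # Composite models such as Rag and EncoderDecoder do not implement
    # `_init_weights`, so fast init cannot re-initialize their missing
    # weights; in that case we must fall back to slow initialization.
    return requested_fast_init and hasattr(model_class, "_init_weights")


# Hypothetical usage: a Rag-like class without `_init_weights` gets slow init.
class RagLike:
    pass

print(should_use_fast_init(RagLike, True))  # False -> slow init
```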
## Future PR
- [ ] Remove hacky `from_pretrained(...)` methods in RAG and EncoderDecoder
- [ ] Refactor the way "fast_init" calls `model._init_weights` for composite models. For Composite models, each part has to be called directly =>
```python
model.encoder._init_weigths(all_missing_keys_of_encoder)
model.decoder._init_weigths(all_missing_keys_of_decoder)
```
- [ ] Add more tests for RAG & EncoderDecoder
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11705/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11705",
"html_url": "https://github.com/huggingface/transformers/pull/11705",
"diff_url": "https://github.com/huggingface/transformers/pull/11705.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11705.patch",
"merged_at": 1620831174000
} |
https://api.github.com/repos/huggingface/transformers/issues/11704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11704/comments | https://api.github.com/repos/huggingface/transformers/issues/11704/events | https://github.com/huggingface/transformers/issues/11704 | 890,078,931 | MDU6SXNzdWU4OTAwNzg5MzE= | 11,704 | [RAG] official facebook example code for RAG is not working anymore. | {
"login": "giobin",
"id": 3843501,
"node_id": "MDQ6VXNlcjM4NDM1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3843501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giobin",
"html_url": "https://github.com/giobin",
"followers_url": "https://api.github.com/users/giobin/followers",
"following_url": "https://api.github.com/users/giobin/following{/other_user}",
"gists_url": "https://api.github.com/users/giobin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giobin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giobin/subscriptions",
"organizations_url": "https://api.github.com/users/giobin/orgs",
"repos_url": "https://api.github.com/users/giobin/repos",
"events_url": "https://api.github.com/users/giobin/events{/privacy}",
"received_events_url": "https://api.github.com/users/giobin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux-3.10.0-1127.10.1.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
rag: @patrickvonplaten, @lhoestq
Models:
RAG model
## Information
Model I am using: RAG
The problem arises when using the official example scripts from https://huggingface.co/facebook/rag-sequence-nq, which I copied here:
```
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("how many countries are in europe", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```
The task I am working on is:
I am trying to run the sample code above. Note that the same error also occurs when fine-tuning RAG with the official code in transformers/examples/research_projects/rag/finetune_rag.sh
## To reproduce
Steps to reproduce the behavior:
1. run the sample code above
The error is:
```
Traceback (most recent call last):
File "prova_rag.py", line 5, in <module>
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever, _fast_init=False)
File "/nlu/users/giovanni_bonetta/transformers/src/transformers/modeling_utils.py", line 1208, in from_pretrained
model, state_dict, pretrained_model_name_or_path
File "/nlu/users/giovanni_bonetta/transformers/src/transformers/modeling_utils.py", line 1278, in _load_state_dict_into_model
model._init_weights(module)
File "/nlu/users/giovanni_bonetta/miniconda2/envs/hf_venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 948, in __getattr__
type(self).__name__, name))
AttributeError: 'RagSequenceForGeneration' object has no attribute '_init_weights'
```
Looking at recent commits, I suppose the error was introduced a couple of weeks ago in "Pytorch - Lazy initialization of models #11471", where the line `model._init_weights(module)` was added.
## Expected behavior
It should initialize the model without errors.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11704/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11704/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11703/comments | https://api.github.com/repos/huggingface/transformers/issues/11703/events | https://github.com/huggingface/transformers/pull/11703 | 890,035,061 | MDExOlB1bGxSZXF1ZXN0NjQzMTIxMTI3 | 11,703 | remove defaults to None if optional | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | PR to fix #11687
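A minimal before/after sketch of the docstring convention the linked issue targets (hypothetical argument names, not lines from the actual diff):
```python
def encode(text, max_length=None):
    """
    Hypothetical docstring illustrating the convention.

    Args:
        max_length (:obj:`int`, `optional`):
            Maximum length to truncate to. Before this change the entry read
            ``(:obj:`int`, `optional`, defaults to :obj:`None`)``; the
            "defaults to None" part is redundant once `optional` is stated.
    """
    return text[:max_length] if max_length is not None else text
```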
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11703/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11703",
"html_url": "https://github.com/huggingface/transformers/pull/11703",
"diff_url": "https://github.com/huggingface/transformers/pull/11703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11703.patch",
"merged_at": 1620825071000
} |
https://api.github.com/repos/huggingface/transformers/issues/11702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11702/comments | https://api.github.com/repos/huggingface/transformers/issues/11702/events | https://github.com/huggingface/transformers/issues/11702 | 890,022,646 | MDU6SXNzdWU4OTAwMjI2NDY= | 11,702 | channel_len specified but not used | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@LysandreJik or @sgugger could you please check this before it gets closed by the \"stale bot\"?",
"cc @kssteven418 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,626 | 1,626 | CONTRIBUTOR | null | Here `channel_len` is specified but not used. Smells like a possible bug.
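A minimal hypothetical illustration of the pattern being reported (invented names, not the actual ibert code; the real occurrence is at the permalink below):
```python
def quantize(x, num_bits, channel_len=None):
    # `channel_len` is accepted in the signature but never read in the body,
    # which is the smell this issue points at.
    n = 2 ** (num_bits - 1) - 1
    return max(-n - 1, min(n, round(x * n)))

print(quantize(0.5, 8))  # channel_len is silently ignored
```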
https://github.com/huggingface/transformers/blob/f063c56d942737d2c7aac93895cd8310afd9c7a4/src/transformers/models/ibert/quant_modules.py#L133 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11701/comments | https://api.github.com/repos/huggingface/transformers/issues/11701/events | https://github.com/huggingface/transformers/pull/11701 | 890,002,758 | MDExOlB1bGxSZXF1ZXN0NjQzMDkzNjY4 | 11,701 | [Flax] Updates README and fixes bug | {
"login": "marcvanzee",
"id": 180100,
"node_id": "MDQ6VXNlcjE4MDEwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcvanzee",
"html_url": "https://github.com/marcvanzee",
"followers_url": "https://api.github.com/users/marcvanzee/followers",
"following_url": "https://api.github.com/users/marcvanzee/following{/other_user}",
"gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions",
"organizations_url": "https://api.github.com/users/marcvanzee/orgs",
"repos_url": "https://api.github.com/users/marcvanzee/repos",
"events_url": "https://api.github.com/users/marcvanzee/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcvanzee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Adds information about costs/pricing for Flax Bert Text Classification example.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11701/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11701",
"html_url": "https://github.com/huggingface/transformers/pull/11701",
"diff_url": "https://github.com/huggingface/transformers/pull/11701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11701.patch",
"merged_at": 1620823973000
} |
https://api.github.com/repos/huggingface/transformers/issues/11700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11700/comments | https://api.github.com/repos/huggingface/transformers/issues/11700/events | https://github.com/huggingface/transformers/issues/11700 | 889,975,269 | MDU6SXNzdWU4ODk5NzUyNjk= | 11,700 | Offline installation of the transformers repo (error message) | {
"login": "hannesoehler",
"id": 72496477,
"node_id": "MDQ6VXNlcjcyNDk2NDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/72496477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hannesoehler",
"html_url": "https://github.com/hannesoehler",
"followers_url": "https://api.github.com/users/hannesoehler/followers",
"following_url": "https://api.github.com/users/hannesoehler/following{/other_user}",
"gists_url": "https://api.github.com/users/hannesoehler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hannesoehler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannesoehler/subscriptions",
"organizations_url": "https://api.github.com/users/hannesoehler/orgs",
"repos_url": "https://api.github.com/users/hannesoehler/repos",
"events_url": "https://api.github.com/users/hannesoehler/events{/privacy}",
"received_events_url": "https://api.github.com/users/hannesoehler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: kaggle
- Python version: 3.7
- PyTorch version (GPU?): 1.7.0 (yes)
- Tensorflow version (GPU?): NA
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Upload github repo as a kaggle dataset
2. Turn off internet
3. Run pip installation in notebook: !pip install /kaggle/input/transformersgithub/transformers
4. An error message about setuptools appears, even though setuptools is already installed: `Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (49.6.0.post20210108)`
Processing /kaggle/input/transformersgithub/transformers
Installing build dependencies ... error
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python3.7 /opt/conda/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-b04ltufk/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel
cwd: None
Complete output (7 lines):
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22390>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22790>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22ad0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22e10>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba16d10>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
ERROR: Could not find a version that satisfies the requirement setuptools>=40.8.0
ERROR: No matching distribution found for setuptools>=40.8.0
----------------------------------------
WARNING: Discarding file:///kaggle/input/transformersgithub/transformers. Command errored out with exit status 1: /opt/conda/bin/python3.7 /opt/conda/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-b04ltufk/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel Check the logs for full command output.
ERROR: Command errored out with exit status 1: /opt/conda/bin/python3.7 /opt/conda/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-b04ltufk/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel Check the logs for full command output.
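The failure looks like pip's build isolation: even with a local package, pip tries to download fresh `setuptools`/`wheel` from PyPI before building, which cannot work offline. A minimal sketch of a likely workaround (the path is the reporter's Kaggle dataset path; substitute your own):
```python
import subprocess
import sys

# --no-build-isolation tells pip to build against the setuptools/wheel that
# are already installed in the environment instead of fetching them.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--no-build-isolation",
    "/kaggle/input/transformersgithub/transformers",
])
```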
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11700/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11699/comments | https://api.github.com/repos/huggingface/transformers/issues/11699/events | https://github.com/huggingface/transformers/issues/11699 | 889,965,255 | MDU6SXNzdWU4ODk5NjUyNTU= | 11,699 | Mixed precision training : link broken | {
"login": "IvanoLauriola",
"id": 6576632,
"node_id": "MDQ6VXNlcjY1NzY2MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6576632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvanoLauriola",
"html_url": "https://github.com/IvanoLauriola",
"followers_url": "https://api.github.com/users/IvanoLauriola/followers",
"following_url": "https://api.github.com/users/IvanoLauriola/following{/other_user}",
"gists_url": "https://api.github.com/users/IvanoLauriola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvanoLauriola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvanoLauriola/subscriptions",
"organizations_url": "https://api.github.com/users/IvanoLauriola/orgs",
"repos_url": "https://api.github.com/users/IvanoLauriola/repos",
"events_url": "https://api.github.com/users/IvanoLauriola/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvanoLauriola/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, it's here: https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification#mixed-precision-training\r\n\r\n",
"Cool, thank you!"
] | 1,620 | 1,620 | 1,620 | NONE | null | A few weeks ago this link
https://github.com/huggingface/transformers/tree/master/examples/text-classification#mixed-precision-training
showed a comparison between models trained with and without mixed precision for a bunch of sequence/text classification tasks.
Now that link is broken; the examples seem to have been moved, but I cannot find the new location of this comparison.
Do you know where I can find it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11699/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11698/comments | https://api.github.com/repos/huggingface/transformers/issues/11698/events | https://github.com/huggingface/transformers/pull/11698 | 889,942,899 | MDExOlB1bGxSZXF1ZXN0NjQzMDQyODI2 | 11,698 | Support ViT model in EncoderDecoder | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"The idea is to use ViT as the encoder and then a LM as the decoder for Image captioning generation, *e.g.*? @patil-suraj and I were also thinking of using `EncoderDecoder` for Speech2Text. \r\n\r\nAt the moment, I see two solutions: -> adapt `EncoderDecoder` to be usable for all modalities not just text2text **or** create new classes, *e.g.* a `SpeechEncoderDecoder` and a `VisionEncoderDecoder` since I'm not sure `EncoderDecoder` will be able to handle all the new use-cases. *E.g.* we might end up ending way too many if-else statements that would make the code unreadable... @patil-suraj @abhi1thakur what do you think? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"Huhu,\r\nany update at this ? :) \r\na VisionEncoderDecoderModel would be a great also for models which follow in future\r\nfor example this one: [TrOCR](https://arxiv.org/abs/2109.10282)",
"We'll add a new `VisionEncoderDecoder` class for this actually",
"@patrickvonplaten nice !\r\nIs there any way or need to contribute ? :)",
"Closing this as [VisionEncoderDecoder](https://huggingface.co/docs/transformers/main/en/model_doc/vision-encoder-decoder#vision-encoder-decoder-models) now exists."
] | 1,620 | 1,681 | 1,651 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11698/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11698",
"html_url": "https://github.com/huggingface/transformers/pull/11698",
"diff_url": "https://github.com/huggingface/transformers/pull/11698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11698.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/11697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11697/comments | https://api.github.com/repos/huggingface/transformers/issues/11697/events | https://github.com/huggingface/transformers/pull/11697 | 889,937,006 | MDExOlB1bGxSZXF1ZXN0NjQzMDM4MDAx | 11,697 | add the chinese ref in place to tackle the memory issue | {
"login": "orctom",
"id": 128631,
"node_id": "MDQ6VXNlcjEyODYzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/128631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orctom",
"html_url": "https://github.com/orctom",
"followers_url": "https://api.github.com/users/orctom/followers",
"following_url": "https://api.github.com/users/orctom/following{/other_user}",
"gists_url": "https://api.github.com/users/orctom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orctom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orctom/subscriptions",
"organizations_url": "https://api.github.com/users/orctom/orgs",
"repos_url": "https://api.github.com/users/orctom/repos",
"events_url": "https://api.github.com/users/orctom/events{/privacy}",
"received_events_url": "https://api.github.com/users/orctom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Both two ways work well on my own process, but `add_columns` is 4x faster than the original method.\r\n\r\nLGTM!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | # What does this PR do?
@wlhgtc
Adding the Chinese refs eats 200+ GB of memory, the process gets OOM-killed, and it can't go further.
My training corpus has 17,067,704 lines (size: 1 GB).
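A minimal sketch of the change (assuming `chinese_ref` is a list aligned one-to-one with the dataset rows; the toy values below are made up):
```python
from datasets import Dataset

train_dataset = Dataset.from_dict({"text": ["句子一", "句子二"]})
chinese_ref = [[1], [2]]  # one ref entry per row

# add_column attaches the new column directly instead of mapping over (and
# copying) every existing row, which is what exhausted memory before.
train_dataset = train_dataset.add_column("chinese_ref", chinese_ref)
print(train_dataset.column_names)  # ['text', 'chinese_ref']
```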
The sketch relies on the new in-place `add_column` function that ships with datasets 1.6.2, which avoids the heavy copying. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11697/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11697/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11697",
"html_url": "https://github.com/huggingface/transformers/pull/11697",
"diff_url": "https://github.com/huggingface/transformers/pull/11697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11697.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11696/comments | https://api.github.com/repos/huggingface/transformers/issues/11696/events | https://github.com/huggingface/transformers/pull/11696 | 889,935,876 | MDExOlB1bGxSZXF1ZXN0NjQzMDM3MDcw | 11,696 | [CLIP] fix example in config doc | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11696/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11696",
"html_url": "https://github.com/huggingface/transformers/pull/11696",
"diff_url": "https://github.com/huggingface/transformers/pull/11696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11696.patch",
"merged_at": 1620827333000
} |
https://api.github.com/repos/huggingface/transformers/issues/11695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11695/comments | https://api.github.com/repos/huggingface/transformers/issues/11695/events | https://github.com/huggingface/transformers/pull/11695 | 889,888,023 | MDExOlB1bGxSZXF1ZXN0NjQyOTk1MDEw | 11,695 | [Flax] Fix BERT initialization & token_type_ids default | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
Fixes initialization of Flax models by disabling `return_dict` during init, since it can sometimes lead to problems in distributed settings. Also, `token_type_ids` should be initialized to 0.
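A minimal JAX sketch of the default being described (an illustration, not the actual diff):
```python
import jax.numpy as jnp

input_ids = jnp.ones((1, 8), dtype="i4")

# If the caller does not pass token_type_ids, default them to zeros of the
# same shape as the inputs, matching PyTorch BERT's behaviour.
token_type_ids = jnp.zeros_like(input_ids)
print(token_type_ids.shape)  # (1, 8)
```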
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11695/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11695",
"html_url": "https://github.com/huggingface/transformers/pull/11695",
"diff_url": "https://github.com/huggingface/transformers/pull/11695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11695.patch",
"merged_at": 1620899899000
} |
https://api.github.com/repos/huggingface/transformers/issues/11694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11694/comments | https://api.github.com/repos/huggingface/transformers/issues/11694/events | https://github.com/huggingface/transformers/pull/11694 | 889,865,709 | MDExOlB1bGxSZXF1ZXN0NjQyOTc1OTE1 | 11,694 | Fix clip docs | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11694/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11694",
"html_url": "https://github.com/huggingface/transformers/pull/11694",
"diff_url": "https://github.com/huggingface/transformers/pull/11694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11694.patch",
"merged_at": 1620813510000
} |
https://api.github.com/repos/huggingface/transformers/issues/11693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11693/comments | https://api.github.com/repos/huggingface/transformers/issues/11693/events | https://github.com/huggingface/transformers/issues/11693 | 889,855,361 | MDU6SXNzdWU4ODk4NTUzNjE= | 11,693 | Flag to disable shuffling for data loader | {
"login": "hasansalimkanmaz",
"id": 49716619,
"node_id": "MDQ6VXNlcjQ5NzE2NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasansalimkanmaz",
"html_url": "https://github.com/hasansalimkanmaz",
"followers_url": "https://api.github.com/users/hasansalimkanmaz/followers",
"following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions",
"organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs",
"repos_url": "https://api.github.com/users/hasansalimkanmaz/repos",
"events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think this is a suitable feature, so I would recommend you override and subclass `get_train_dataloader` to simply return a training dataloader without using a sampler or `shuffle=True`."
] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | # 🚀 Feature request
Currently, Trainer shuffles the train_dataset by default and there is no flag to enable/disable this behavior.
@sgugger
## Motivation
Even though shuffling the dataset brings benefits like preventing overfitting, one may need to disable it for experimental reasons. Currently this isn't possible without overriding the `_get_train_sampler` method of Trainer (see the sketch below). :(
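For reference, this is roughly what the workaround looks like today (a minimal sketch; `SequentialTrainer` is just an illustrative name):
```python
from torch.utils.data import DataLoader, SequentialSampler
from transformers import Trainer


class SequentialTrainer(Trainer):
    def get_train_dataloader(self) -> DataLoader:
        # Iterate over the training set in its original order instead of shuffling.
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.train_batch_size,
            sampler=SequentialSampler(self.train_dataset),
            collate_fn=self.data_collator,
            drop_last=self.args.dataloader_drop_last,
        )
```
A flag (name to be decided) would make this subclassing unnecessary.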
## Your contribution
I can work on this issue (maybe next month) if it gets positive feedback. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11693/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11692/comments | https://api.github.com/repos/huggingface/transformers/issues/11692/events | https://github.com/huggingface/transformers/pull/11692 | 889,825,851 | MDExOlB1bGxSZXF1ZXN0NjQyOTQyMTI2 | 11,692 | fix url for CLIP doc | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11692/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11692",
"html_url": "https://github.com/huggingface/transformers/pull/11692",
"diff_url": "https://github.com/huggingface/transformers/pull/11692.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11692.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11691/comments | https://api.github.com/repos/huggingface/transformers/issues/11691/events | https://github.com/huggingface/transformers/pull/11691 | 889,777,643 | MDExOlB1bGxSZXF1ZXN0NjQyOTAxMjgz | 11,691 | BertForSemanticSimilarity | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,651 | 1,624 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11691/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11691",
"html_url": "https://github.com/huggingface/transformers/pull/11691",
"diff_url": "https://github.com/huggingface/transformers/pull/11691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11691.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/11690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11690/comments | https://api.github.com/repos/huggingface/transformers/issues/11690/events | https://github.com/huggingface/transformers/pull/11690 | 889,560,638 | MDExOlB1bGxSZXF1ZXN0NjQyNzA4OTYy | 11,690 | add --validation_split_percentage for custom dataset | {
"login": "kornosk",
"id": 15230011,
"node_id": "MDQ6VXNlcjE1MjMwMDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/15230011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kornosk",
"html_url": "https://github.com/kornosk",
"followers_url": "https://api.github.com/users/kornosk/followers",
"following_url": "https://api.github.com/users/kornosk/following{/other_user}",
"gists_url": "https://api.github.com/users/kornosk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kornosk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kornosk/subscriptions",
"organizations_url": "https://api.github.com/users/kornosk/orgs",
"repos_url": "https://api.github.com/users/kornosk/repos",
"events_url": "https://api.github.com/users/kornosk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kornosk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | # What does this PR do?
In the current version of the example, `--validation_split_percentage` only works for datasets loaded from the hub, not for custom datasets. If `--do_eval` is set for a custom dataset, it requires `--validation_file`.
This PR makes `--validation_split_percentage` work for a custom dataset when `--validation_file` is not set. A sketch of the idea is shown below.
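The change follows the same percentage-slicing pattern already used for hub datasets (a sketch, not the exact diff; the file path is illustrative):
```python
from datasets import load_dataset

data_files = {"train": "train.txt"}  # hypothetical custom training file
pct = 5  # value passed via --validation_split_percentage

datasets = {
    "validation": load_dataset("text", data_files=data_files, split=f"train[:{pct}%]"),
    "train": load_dataset("text", data_files=data_files, split=f"train[{pct}%:]"),
}
```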
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
<!-- Fixes # (issue) -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11690/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11690",
"html_url": "https://github.com/huggingface/transformers/pull/11690",
"diff_url": "https://github.com/huggingface/transformers/pull/11690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11690.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11689/comments | https://api.github.com/repos/huggingface/transformers/issues/11689/events | https://github.com/huggingface/transformers/issues/11689 | 889,275,357 | MDU6SXNzdWU4ODkyNzUzNTc= | 11,689 | DeBERTa pretraining data preparation | {
"login": "mansimane",
"id": 23171195,
"node_id": "MDQ6VXNlcjIzMTcxMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23171195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansimane",
"html_url": "https://github.com/mansimane",
"followers_url": "https://api.github.com/users/mansimane/followers",
"following_url": "https://api.github.com/users/mansimane/following{/other_user}",
"gists_url": "https://api.github.com/users/mansimane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansimane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansimane/subscriptions",
"organizations_url": "https://api.github.com/users/mansimane/orgs",
"repos_url": "https://api.github.com/users/mansimane/repos",
"events_url": "https://api.github.com/users/mansimane/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansimane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0.dev0
- Platform:
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y
### Who can help
@LysandreJik @BigBird01
## Information
Model I am using (Bert, XLNet ...): DeBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): MLM + SQUAD 1
* [ ] my own task or dataset: (give details below)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I am pretraining DeBERTa Base from scratch on the Wikipedia + BookCorpus dataset. After pretraining for 500K steps I observe a SQuAD 1.1 score of 76, which is much lower than Figure 1(b) in the paper (and although Figure 1(b) reports SQuAD 2.0 numbers, SQuAD 1.1 scores should be even higher, since it is the easier task). I am using the same hyperparameters as reported in the paper. I would like to confirm the preprocessing steps the authors took to prepare the pretraining data.
1. In section 4.4.1, the authors report that they used the Megatron codebase to deduplicate the data. The code provided performs deduplication based on URLs: https://github.com/NVIDIA/Megatron-LM/tree/main/tools/openwebtext Was the deduplication performed on the url -> document set or on shards of the dataset?
2. [This](https://github.com/NVIDIA/Megatron-LM/blob/main/tools/openwebtext/cleanup_dataset.py) codebase also cleans up the dataset and removes non-English characters. Were these cleanup steps performed on the pretraining data?
3. Is it possible to provide scripts used to generate pretraining data? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11689/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11689/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11688/comments | https://api.github.com/repos/huggingface/transformers/issues/11688/events | https://github.com/huggingface/transformers/issues/11688 | 888,496,598 | MDU6SXNzdWU4ODg0OTY1OTg= | 11,688 | Trainer skips training when continuing training with model.from_pretrained() | {
"login": "elenif",
"id": 44734526,
"node_id": "MDQ6VXNlcjQ0NzM0NTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44734526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elenif",
"html_url": "https://github.com/elenif",
"followers_url": "https://api.github.com/users/elenif/followers",
"following_url": "https://api.github.com/users/elenif/following{/other_user}",
"gists_url": "https://api.github.com/users/elenif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elenif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elenif/subscriptions",
"organizations_url": "https://api.github.com/users/elenif/orgs",
"repos_url": "https://api.github.com/users/elenif/repos",
"events_url": "https://api.github.com/users/elenif/events{/privacy}",
"received_events_url": "https://api.github.com/users/elenif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm really sorry to may have bothered you with that. Turns out, it works when using a different GPU :) \r\n\r\nIt struck me when I tested the same code with less data on my local machine, the trainer logged epochs (which it did not before), so I tested it on several devices. I was just a bit confused to be honest, since there were no errors/warnings thrown neither by tensorflow nor the transformers library even when using trainer (debug=True).",
"Hey! Don't worry about it - TFTrainer is currently not very maintained, and we're looking at switching away from it to a pure Keras framework, so some bits of it can be quite confusing right now. Don't be shy about letting us know if you run into other issues, especially if they look like they might be bugs at our end!"
] | 1,620 | 1,620 | 1,620 | NONE | null | I'm fine-tuning the GPT-2 model using TFTrainer.
As I have to conserve computational resources, I needed to split the dataset into parts and train on them one after another.
The first training run (from `TFGPT2LMHeadModel.from_pretrained('gpt2')`) works well. After storing the model in a folder under a different name and reloading it to continue training, the trainer basically skips the training and stores the old, unchanged model in the new model folder.
I don't understand the reason for this behavior.
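A quick way to confirm whether the second run actually updated anything (a sketch; the checkpoint folder names follow the script below):
```python
import numpy as np
from transformers import TFGPT2LMHeadModel

# Compare one weight tensor across the two saved checkpoints.
before = TFGPT2LMHeadModel.from_pretrained("gpt2_trained_1")
after = TFGPT2LMHeadModel.from_pretrained("gpt2_trained_2")
w0 = before.transformer.wte.weights[0].numpy()
w1 = after.transformer.wte.weights[0].numpy()
print("weights changed:", not np.allclose(w0, w1))
```
If this prints `weights changed: False`, the second run did not update the weights.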
transformers: 4.5.0
tensorflow: 2.4.1
@Rocketknight1
Training code:
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, TFTrainer, TFTrainingArguments


def train(file, file_id, batch_size, num_epochs):
    # load_tokenizer and prepare_pre_training_data_set are my own helpers (not shown).
    tokenizer, vocab_size = load_tokenizer()
    max_seq_length = 1000
    args = TFTrainingArguments(
        output_dir='out_gpt',
        num_train_epochs=num_epochs,
        do_train=True,
        per_device_train_batch_size=batch_size,
        gradient_accumulation_steps=2,
        max_grad_norm=1.0,
    )
    data = prepare_pre_training_data_set(file, tokenizer, max_seq_length)
    print("Datasets loaded...")
    with args.strategy.scope():
        model = TFGPT2LMHeadModel.from_pretrained('gpt2_trained_' + str(file_id))
        optimizer = tf.keras.optimizers.Adam(lr=0.0005)
        cat_loss = tf.losses.CategoricalCrossentropy()
        model.compile(optimizer=optimizer, loss=cat_loss)
        trainer = TFTrainer(model=model,
                            train_dataset=data,
                            args=args)
        trainer.train()
        trainer.save_model("gpt2_trained_" + str(file_id + 1))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11688/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11687/comments | https://api.github.com/repos/huggingface/transformers/issues/11687/events | https://github.com/huggingface/transformers/issues/11687 | 888,209,359 | MDU6SXNzdWU4ODgyMDkzNTk= | 11,687 | Remove "`optional`, defaults to :obj:`None`" | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please go ahead if you want to clean this! A quick serach shows me 17 \\`optional\\`, defaults to :obj:\\`None\\` and two `optional`, defaults to None",
"ok - see #11703 @sgugger "
] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | There are some docstrings with "`optional`, defaults to :obj:`None`" arguments.
According to @sgugger this should be avoided: https://github.com/huggingface/transformers/pull/11417#discussion_r629320375
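For whoever picks this up, a quick way to list the remaining occurrences (run from the repository root):
```python
import pathlib

pattern = "`optional`, defaults to :obj:`None`"
for path in pathlib.Path("src/transformers").rglob("*.py"):
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        if pattern in line:
            print(f"{path}:{lineno}: {line.strip()}")
```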
PS: I can provide a PR if wanted... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11687/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11686/comments | https://api.github.com/repos/huggingface/transformers/issues/11686/events | https://github.com/huggingface/transformers/issues/11686 | 887,866,289 | MDU6SXNzdWU4ODc4NjYyODk= | 11,686 | Routing Transformers / Add Google PG-19 Models | {
"login": "GenTxt",
"id": 22547261,
"node_id": "MDQ6VXNlcjIyNTQ3MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/22547261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GenTxt",
"html_url": "https://github.com/GenTxt",
"followers_url": "https://api.github.com/users/GenTxt/followers",
"following_url": "https://api.github.com/users/GenTxt/following{/other_user}",
"gists_url": "https://api.github.com/users/GenTxt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GenTxt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GenTxt/subscriptions",
"organizations_url": "https://api.github.com/users/GenTxt/orgs",
"repos_url": "https://api.github.com/users/GenTxt/repos",
"events_url": "https://api.github.com/users/GenTxt/events{/privacy}",
"received_events_url": "https://api.github.com/users/GenTxt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"There is an open-source pytorch implementation already - https://github.com/lucidrains/routing-transformer\r\nCan't we adapt RT @lucidrains wrote to HF? ",
"I've checked the repo before and was hoping with the release of the models\nthis would be possible.\n\nThe original models may be tf1 and not tf2 format. This requires a custom\nconversion script to pytorch.\n\nPerhaps coders with advanced python skills will show interest in solving\nthe above issues.\n\nOn Wed, Jul 14, 2021 at 7:53 AM vblagoje ***@***.***> wrote:\n\n> There is an open-source pytorch implementation already -\n> https://github.com/lucidrains/routing-transformer\n> Can't we adapt RT @lucidrains <https://github.com/lucidrains> wrote to HF?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/11686#issuecomment-879827190>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFMAWPKXKV5UXM2HFEQ57U3TXV3ERANCNFSM44WG7YGA>\n> .\n>\n"
] | 1,620 | 1,631 | null | NONE | null | # 🌟 New model addition - Google PG-19 Models
## Model description
Model checkpoints finally released, as discussed in "Efficient Content-Based Sparse Attention with Routing Transformers"
by Aurko Roy, Mohammad Saffar, Ashish Vaswani and David Grangier (https://arxiv.org/abs/2003.05997)
## Open source status
* [x] the model implementation is available: (same link as below)
* [x] the model weights are available: (https://github.com/google-research/google-research/tree/master/routing_transformer)
* [x] who are the authors: (see above)
Note: These TF2 models require proper conversion to PyTorch and modifications to scripts to enable training and inference.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11686/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11685/comments | https://api.github.com/repos/huggingface/transformers/issues/11685/events | https://github.com/huggingface/transformers/pull/11685 | 887,815,656 | MDExOlB1bGxSZXF1ZXN0NjQxMDI0NjI5 | 11,685 | [WIP] Add flax generate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,621 | 1,621 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11685/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11685",
"html_url": "https://github.com/huggingface/transformers/pull/11685",
"diff_url": "https://github.com/huggingface/transformers/pull/11685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11685.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11684/comments | https://api.github.com/repos/huggingface/transformers/issues/11684/events | https://github.com/huggingface/transformers/pull/11684 | 887,797,422 | MDExOlB1bGxSZXF1ZXN0NjQxMDA3MjA1 | 11,684 | Add new model RoFormer (use rotary position embedding ) | {
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj I have updated some codes, please review again. Thanks~",
"@patil-suraj \r\n- I fixed the docstrings format and the build_doc tests pass\r\n- I have resolved the merge conflicts\r\n- I have run make style and make quality\r\n\r\nThank you for reviewing on this PR. ∩▂∩\r\n",
"@patrickvonplaten i have done it,thanks;)",
"Tests are fine I think (PyTorch times out :-/).\r\nGood to merge for me",
"Thanks a lot @JunnYu, fantastic addition!"
] | 1,620 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Add new model RoFormer
[RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
The original code can be found [here](https://github.com/ZhuiyiTechnology/roformer).
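Once merged, usage should look like the other models in the library (a sketch; the checkpoint name is the one uploaded with this PR and may change):
```python
from transformers import RoFormerModel, RoFormerTokenizer

tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerModel.from_pretrained("junnyu/roformer_chinese_base")

inputs = tokenizer("今天天气非常好。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```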
## The abstract from the paper is the following:
*Position encoding in transformer architecture provides supervision for dependency modeling between elements at
different positions in the sequence. We investigate various methods to encode positional information in
transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The
proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative
position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of
being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and
capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced
transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We
release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing
experiment for English benchmark will soon be updated.*
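For reviewers unfamiliar with RoPE, a minimal sketch of the rotation applied to the query/key vectors (illustrative only, not the exact code in this PR):
```python
import torch


def apply_rope(x: torch.Tensor) -> torch.Tensor:
    """Rotate (seq_len, dim) query/key vectors by position-dependent angles; dim must be even."""
    seq_len, dim = x.shape
    # Frequencies as in the paper: theta_i = 10000^(-2i/dim).
    inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.arange(seq_len).float()[:, None] * inv_freq[None, :]
    sin, cos = angles.sin(), angles.cos()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # rotate each 2D pair (x1, x2)
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```
Because the rotation is applied to queries and keys before the dot product, the attention score depends only on the relative offset between positions.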
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11684/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11684/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11684",
"html_url": "https://github.com/huggingface/transformers/pull/11684",
"diff_url": "https://github.com/huggingface/transformers/pull/11684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11684.patch",
"merged_at": 1621512034000
} |
https://api.github.com/repos/huggingface/transformers/issues/11683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11683/comments | https://api.github.com/repos/huggingface/transformers/issues/11683/events | https://github.com/huggingface/transformers/issues/11683 | 887,754,100 | MDU6SXNzdWU4ODc3NTQxMDA= | 11,683 | Issue getting prediction_scores from TransfoXLHeadLM model when labels are provided | {
"login": "RainIwakura",
"id": 8593585,
"node_id": "MDQ6VXNlcjg1OTM1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8593585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RainIwakura",
"html_url": "https://github.com/RainIwakura",
"followers_url": "https://api.github.com/users/RainIwakura/followers",
"following_url": "https://api.github.com/users/RainIwakura/following{/other_user}",
"gists_url": "https://api.github.com/users/RainIwakura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RainIwakura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RainIwakura/subscriptions",
"organizations_url": "https://api.github.com/users/RainIwakura/orgs",
"repos_url": "https://api.github.com/users/RainIwakura/repos",
"events_url": "https://api.github.com/users/RainIwakura/events{/privacy}",
"received_events_url": "https://api.github.com/users/RainIwakura/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @RedneckedCrake,\r\n\r\nI think I agree with you here! It should be a very easy fix simply by change this line: https://github.com/huggingface/transformers/blob/6ee1a4fd3e80feef8fe7dc65aabb4c5270524f8a/src/transformers/models/transfo_xl/modeling_transfo_xl.py#L1100. Would you like to give it a try to fix it?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Colab
- Python version: 3.8.1
- PyTorch version (GPU?): 1.8.1+cu101 (Yes), same bug reproduced on CPU side (windows 10)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TransfoXL
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Slightly modified example from TransfoXL docs (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel)
## To reproduce
Steps to reproduce the behavior:
```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
with torch.no_grad():
    inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
    print(inputs)
    outputs = model(inputs["input_ids"], return_dict=True, labels=inputs["input_ids"])
    print(outputs['prediction_scores'])
```
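A temporary workaround is to run a second forward pass without labels to recover the logits (a sketch, not the intended API behavior):
```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    loss_out = model(inputs["input_ids"], labels=inputs["input_ids"], return_dict=True)
    score_out = model(inputs["input_ids"], return_dict=True)

print(loss_out["losses"])               # per-token losses (present with labels)
print(score_out["prediction_scores"])   # logits (currently only present without labels)
```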
## Expected behavior
```outputs['prediction_scores']``` should return a torch.FloatTensor, not ```()```. In this example
```
tensor([[[ -4.4980, -4.7363, -3.8697, ..., -18.4604, -20.6320, -15.2920],
[ -4.0868, -3.7895, -2.9193, ..., -19.4917, -20.0318, -15.7870],
[ -4.4769, -4.7728, -1.5619, ..., -21.3586, -22.2751, -18.7071],
[ -6.1670, -6.8841, -0.6857, ..., -21.4503, -22.3682, -19.5937],
[ -7.3567, -3.1381, -2.7641, ..., -18.3717, -20.6145, -17.4109],
[ -7.1151, -6.4929, -0.9753, ..., -21.8517, -21.9864, -20.3518]]])
```
is returned correctly when ```labels=None``` but not when ```labels=inputs["input_ids"]```. I've tested an almost identical example with GPT-2, and it did return (albeit unnormalized) logits regardless of whether labels were provided. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11683/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11682/comments | https://api.github.com/repos/huggingface/transformers/issues/11682/events | https://github.com/huggingface/transformers/pull/11682 | 887,693,479 | MDExOlB1bGxSZXF1ZXN0NjQwOTA4Nzky | 11,682 | Test checkpointing | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | COLLABORATOR | null | # What does this PR do?
This fixes how weights are loaded when resuming training from a checkpoint, in the instances where some weights are tied with others (and thus not saved). It also adds a test in the common tests to make sure the mechanism used is not broken by mistake.
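For illustration, the failure mode this guards against involves models whose input and output embeddings are tied, so only one copy of the weight is on disk and the loader must re-tie it on resume. A minimal sketch (the tiny checkpoint name is an assumption chosen for speed, not taken from this PR):
```python
from transformers import AutoModelForCausalLM

# GPT-2-style models tie the LM head to the input embeddings; a checkpoint
# therefore stores the weight once, and resuming must restore the tie.
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")
tied = (
    model.get_input_embeddings().weight.data_ptr()
    == model.get_output_embeddings().weight.data_ptr()
)
print("weights tied:", tied)  # expected: True
```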
Fixes #11666 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11682/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11682",
"html_url": "https://github.com/huggingface/transformers/pull/11682",
"diff_url": "https://github.com/huggingface/transformers/pull/11682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11682.patch",
"merged_at": 1620748968000
} |
https://api.github.com/repos/huggingface/transformers/issues/11681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11681/comments | https://api.github.com/repos/huggingface/transformers/issues/11681/events | https://github.com/huggingface/transformers/issues/11681 | 887,637,650 | MDU6SXNzdWU4ODc2Mzc2NTA= | 11,681 | Cannot reproduce results from zeroshot demo app | {
"login": "buhrmann",
"id": 190342,
"node_id": "MDQ6VXNlcjE5MDM0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/190342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buhrmann",
"html_url": "https://github.com/buhrmann",
"followers_url": "https://api.github.com/users/buhrmann/followers",
"following_url": "https://api.github.com/users/buhrmann/following{/other_user}",
"gists_url": "https://api.github.com/users/buhrmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buhrmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buhrmann/subscriptions",
"organizations_url": "https://api.github.com/users/buhrmann/orgs",
"repos_url": "https://api.github.com/users/buhrmann/repos",
"events_url": "https://api.github.com/users/buhrmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/buhrmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's subtle but frustratingly important: your hypothesis template needs to have a period at the end. `This text is about {}.` Try that and let me know.",
"With the period at the end (and the proper argument name `hypothesis_template`), I'm getting\r\n\r\n```\r\n[\r\n ('science and electronics', 0.1435241997241974),\r\n ('science and cryptography', 0.13319259881973267),\r\n ('science and space', 0.13153083622455597),\r\n ('religion or christianity or atheism', 0.12961038947105408),\r\n ('politics and middle east', 0.12887051701545715),\r\n ('science and medicine', 0.11486212909221649),\r\n ('politics', 0.11052283644676208),\r\n ('politics and guns', 0.10788644850254059)\r\n]\r\n```\r\n\r\nThe code for the streamlit app is not public, is it? To see what other differences there may be...\r\n\r\n(In general, but that's perhaps just the nature of this approach to zeroshot, the results seem quite sensitive to how the hypothesis is formulated. Even seemingly harmless changes may mean that the religious category is suddenly the most probable, which is kind of surprising for the sample text, but yeah that's another story...)",
"Btw, I'm assuming this message can be ignored\r\n\r\n```\r\nSome weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nbecause of how the original model is repurposed?",
"Yes you can disregard that message safely. And it's not 100% up to date but the code for the demo is public: https://github.com/joeddav/zero-shot-demo",
"Great, thanks! For some reason I didn't find that repo in your account when I looked. In any case, the only difference I can see is in the creation of the pipeline (the more manual creation using model and tokenizer instances instead of simply the model name). But that doesn't affect the result at all in my tests. So unless the app is somehow using a different version of the model I assume the difference is in how it is executed in different environments.",
"Hmm so the code in my repo is out of date because we now use the inference API as the backend and it looks like there's a discrepancy between inference API outputs and the pipeline outputs. I'll look into it.",
"Ok, let me know if I can test anything (small-scale) here to help.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi, I am dealing too with significant differences between streamlit example and my local testing, is there any update regarding this issue?",
"@shimonhaf How significant are the changes? It appears that this might be due to the quantization done by the inference API which the demo uses. cc @Narsil ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,626 | 1,626 | NONE | null | Using the same text and the same labels I cannot "exactly" reproduce the result from the zeroshot app here https://huggingface.co/zero-shot/, using the "XLM Roberta XNLI" option. See below for my attempt to get the same result.
## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.6.0-1055-oem-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@joeddav
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
``` python
from transformers import pipeline
model = "joeddav/xlm-roberta-large-xnli"
classifier = pipeline("zero-shot-classification", model=model, framework="pt")
txt = "There SHOULD be a connection of the GROUND wire to a ground in the breaker box. There also should be a connection of the NEUTRAL wire to a ground in the breaker box. There should be no other place in the building where such a connection occurs (i.e. not in any of the outlet boxes). The NEUTRAL (white) wire is a 'grounding conductor' for the plug, and is NOT safe to touch, while the GROUND (green) wire is a 'protective ground' and carries no current unless some kind of electrical fault has occurred. It's safe to touch the protective ground, but not to touch the grounding conductor (because there is current in the grounding conductor, its outlet-box end will not be at the same ground potential as its breaker-box end)."
template = "This text is about {}"
custom_labels = [
"politics",
"politics and guns",
"politics and middle east",
"religion or christianity or atheism",
"science and cryptography",
"science and electronics",
"science and medicine",
"science and space"
]
res = classifier(txt, candidate_labels=custom_labels, template=template, multi_label=False)
list(zip(res["labels"], res["scores"]))
```
```
[
('science and electronics', 0.17324578762054443),
('religion or christianity or atheism', 0.15423095226287842),
('politics and middle east', 0.12779277563095093),
('science and space', 0.1238853707909584),
('science and cryptography', 0.12293272465467453),
('science and medicine', 0.10926352441310883),
('politics', 0.09960934519767761),
('politics and guns', 0.08903954923152924)
]
```
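As noted in the EDIT at the end of this issue, `template` is not a real keyword here and is silently swallowed by `**kwargs`. For completeness, a corrected call would look like the sketch below (the trailing period follows the maintainer's comment about the demo's template; the exact scores may still differ across environments):
```python
res = classifier(
    txt,
    candidate_labels=custom_labels,
    hypothesis_template="This text is about {}.",  # correct keyword, note the period
    multi_label=False,
)
```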
## Expected behavior
I was hoping to get something similar to the result from pasting the same text and labels into the app, namely
```
[
('science and electronics', 14.4%),
('politics', 14%),
('science and space', 13.7%),
('politics and guns', 13.1%)
('politics and middle east', 12.7%),
('science and medicine', 11.8%),
('science and cryptography', 10.7%),
('religion or christianity or atheism', 9.6%)
]
```
Small differences would be expected, I guess, because of potential platform/framework differences etc., but the fact that the "religion or christianity or atheism" category leads to such different results makes me wonder if I'm not using the same model as the app, or a different prompt perhaps?
Neither of the two gives particularly great results in this case, but knowing the origin of this difference would be useful for better evaluating the pipeline.
EDIT: I've noticed I've passed the template with the wrong argument name (which didn't fail since `__call__` accepts arbitrary **kwargs), but using the correct one doesn't make the result any more similar to the one from the app. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11681/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11680/comments | https://api.github.com/repos/huggingface/transformers/issues/11680/events | https://github.com/huggingface/transformers/pull/11680 | 887,453,769 | MDExOlB1bGxSZXF1ZXN0NjQwNjg0NzMw | 11,680 | [TokenClassification] Label realignment for subword aggregation | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you @Narsil, I'll take a look!\r\n\r\n@francescorubbo, @elk-cloner, this PR originated from yours and is based on the same approach. If you approve of this PR, can I add you as co-authors, as you've greatly contributed to its current shape? ",
"> @francescorubbo, @elk-cloner, this PR originated from yours and is based on the same approach. If you approve of this PR, can I add you as co-authors, as you've greatly contributed to its current shape?\r\n\r\nSure!\r\n",
"> Thank you @Narsil, I'll take a look!\r\n> \r\n> @francescorubbo, @elk-cloner, this PR originated from yours and is based on the same approach. If you approve of this PR, can I add you as co-authors, as you've greatly contributed to its current shape?\r\n\r\nSure!",
"Great, I will! \r\n\r\nPinging also @joshdevins and @cceyda for feedback",
"Looking good, thank you for all the work! I'm wondering if you can include test cases for the original 3 examples provided in https://github.com/huggingface/transformers/issues/10263#issue-811193366 ? The new test examples here look correct but I'm not sure they cover the scope of the first examples. Maybe just stub out a real model and test with the labels for each sub-word token as provided in the example. This will exercise just the aggregation logic then as in theory a model could output any of these example labels.",
"@joshdevins \r\n\r\nDo you mind giving an example displaying the issue ?\r\n\r\nI tried this, but I don't think it exhibits what you mention in the original issue: ( https://github.com/huggingface/transformers/issues/10263#issue-811193366)\r\n\r\n```python\r\n NER_MODEL = \"elastic/distilbert-base-cased-finetuned-conll03-english\"\r\n model = AutoModelForTokenClassification.from_pretrained(NER_MODEL)\r\n tokenizer = AutoTokenizer.from_pretrained(NER_MODEL, use_fast=True)\r\n sentence = \"\"\"Accenture is a company. Max Mustermann is someone, Elasticsearch is something.\"\"\"\r\n nlp_ner = pipeline(\"ner\", model=model, tokenizer=tokenizer)\r\n output = nlp_ner(sentence)\r\n print(output)\r\n self.assertEqual(\r\n nested_simplify(output),\r\n [\r\n {\"entity\": \"B-PER\", \"score\": 0.9953969, \"index\": 9, \"word\": \"Max\", \"start\": 24, \"end\": 27},\r\n {\"entity\": \"I-PER\", \"score\": 0.9773876, \"index\": 10, \"word\": \"Must\", \"start\": 28, \"end\": 32},\r\n {\"entity\": \"I-PER\", \"score\": 0.9924896, \"index\": 11, \"word\": \"##erman\", \"start\": 32, \"end\": 37},\r\n {\"entity\": \"I-PER\", \"score\": 0.9860034, \"index\": 12, \"word\": \"##n\", \"start\": 37, \"end\": 38},\r\n {\"entity\": \"B-ORG\", \"score\": 0.99201995, \"index\": 16, \"word\": \"El\", \"start\": 51, \"end\": 53},\r\n {\"entity\": \"B-ORG\", \"score\": 0.99391395, \"index\": 17, \"word\": \"##astic\", \"start\": 53, \"end\": 58},\r\n {\"entity\": \"B-ORG\", \"score\": 0.9962443, \"index\": 18, \"word\": \"##sea\", \"start\": 58, \"end\": 61},\r\n {\"entity\": \"B-ORG\", \"score\": 0.9924281, \"index\": 19, \"word\": \"##rch\", \"start\": 61, \"end\": 64},\r\n ],\r\n )\r\n```",
"I think we should just wait for the test with `elastic` model and we' re good to go.",
"@Narsil I've since retrained that model (by labelling all sub-word tokens instead of padding) and it appears to work better with new domain data like \"Elasticsearch\".\r\n\r\nThe point was more to decouple the fix from a specific model and to be robust to possible outputs of a model particularly for words/sub-words that are out-of-domain for a model and relying on sub-word token classification which can be less predictable (in my experience). This was why I suggested stubbing out the model and just putting in the sub-word labels directly to the aggregator to see if the expected behaviour matches the actual new behaviour.",
"@joshdevins Yes, then I think I already added those tests here:\r\n\r\n```python\r\n def test_aggregation_strategy_example2(self):\r\n model_name = self.small_models[0]\r\n tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)\r\n nlp = pipeline(task=\"ner\", model=model_name, tokenizer=tokenizer, framework=\"pt\")\r\n # Just to understand scores indexes in this test\r\n self.assertEqual(\r\n nlp.model.config.id2label,\r\n {0: \"O\", 1: \"B-MISC\", 2: \"I-MISC\", 3: \"B-PER\", 4: \"I-PER\", 5: \"B-ORG\", 6: \"I-ORG\", 7: \"B-LOC\", 8: \"I-LOC\"},\r\n )\r\n example = [\r\n {\r\n # Necessary for AVERAGE\r\n \"scores\": np.array([0, 0.55, 0, 0.45, 0, 0, 0, 0, 0, 0]),\r\n \"is_subword\": False,\r\n \"index\": 1,\r\n \"word\": \"Ra\",\r\n \"start\": 0,\r\n \"end\": 2,\r\n },\r\n {\r\n \"scores\": np.array([0, 0, 0, 0.2, 0, 0, 0, 0.8, 0, 0]),\r\n \"is_subword\": True,\r\n \"word\": \"##ma\",\r\n \"start\": 2,\r\n \"end\": 4,\r\n \"index\": 2,\r\n },\r\n {\r\n # 4th score will have the higher average\r\n # 4th score is B-PER for this model\r\n # It's does not correspond to any of the subtokens.\r\n \"scores\": np.array([0, 0, 0, 0.4, 0, 0, 0.6, 0, 0, 0]),\r\n \"is_subword\": True,\r\n \"word\": \"##zotti\",\r\n \"start\": 11,\r\n \"end\": 13,\r\n \"index\": 3,\r\n },\r\n ]\r\n self.assertEqual(\r\n nlp.aggregate(example, AggregationStrategy.NONE),\r\n [\r\n {\"end\": 2, \"entity\": \"B-MISC\", \"score\": 0.55, \"start\": 0, \"word\": \"Ra\", \"index\": 1},\r\n {\"end\": 4, \"entity\": \"B-LOC\", \"score\": 0.8, \"start\": 2, \"word\": \"##ma\", \"index\": 2},\r\n {\"end\": 13, \"entity\": \"I-ORG\", \"score\": 0.6, \"start\": 11, \"word\": \"##zotti\", \"index\": 3},\r\n ],\r\n )\r\n\r\n self.assertEqual(\r\n nlp.aggregate(example, AggregationStrategy.FIRST),\r\n [{\"entity_group\": \"MISC\", \"score\": 0.55, \"word\": \"Ramazotti\", \"start\": 0, \"end\": 13}],\r\n )\r\n self.assertEqual(\r\n nlp.aggregate(example, AggregationStrategy.MAX),\r\n [{\"entity_group\": \"LOC\", \"score\": 0.8, \"word\": \"Ramazotti\", \"start\": 0, \"end\": 13}],\r\n )\r\n self.assertEqual(\r\n nested_simplify(nlp.aggregate(example, AggregationStrategy.AVERAGE)),\r\n [{\"entity_group\": \"PER\", \"score\": 0.35, \"word\": \"Ramazotti\", \"start\": 0, \"end\": 13}],\r\n )\r\n```\r\n",
"@Narsil Ah cool, I missed those examples in my read-through. LGTM 🎉",
"@LysandreJik I changed the co-authors, I'll merge after you check I've done it correctly.",
"Looks good to me, feel free to merge!"
] | 1,620 | 1,621 | 1,621 | CONTRIBUTOR | null | # What does this PR do?
Tentative to replace #11622
- Added `AggregationStrategy`
- `ignore_subwords` and `grouped_entities` arguments are now fused
  into `aggregation_strategy`. This makes more sense, since
  `ignore_subwords=True` with `grouped_entities=False` had no meaning
  anyway.
- Added 2 new ways to aggregate which are MAX, and AVERAGE
- AVERAGE requires a bit more information than the others; for now this
  case is handled slightly specially, and we should keep that in mind for
  future changes.
- Testing has been modified to reflect new argument, and to check the
correct deprecation and the new aggregation_strategy.
- Put the testing arguments and testing results for `aggregation_strategy`
  close together, so that readers can understand what is supposed to
  happen.
- `aggregate` is now only tested on a small model, as it is not
  meaningful to test it globally for all models.
- Previous tests are unchanged in desired output.
- Added a new test case that better showcases the difference between the
  FIRST, MAX, and AVERAGE strategies (see the usage sketch below).
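For orientation, a usage sketch of the new argument. This is illustrative only: the checkpoint is an arbitrary public NER model, and the lowercase strings are assumed to map onto the `AggregationStrategy` members introduced here.
```python
from transformers import pipeline

# aggregation_strategy replaces the old grouped_entities/ignore_subwords pair;
# candidate values: "none", "simple", "first", "max", "average".
ner = pipeline(
    "ner",
    model="dbmdz/bert-large-cased-finetuned-conll03-english",
    aggregation_strategy="first",
)
print(ner("Max Mustermann works at Elasticsearch."))
```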
Fixes #10263, #10763
See also #10568
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11680/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11680",
"html_url": "https://github.com/huggingface/transformers/pull/11680",
"diff_url": "https://github.com/huggingface/transformers/pull/11680.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11680.patch",
"merged_at": 1621324401000
} |
https://api.github.com/repos/huggingface/transformers/issues/11679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11679/comments | https://api.github.com/repos/huggingface/transformers/issues/11679/events | https://github.com/huggingface/transformers/pull/11679 | 887,371,072 | MDExOlB1bGxSZXF1ZXN0NjQwNjA3MDU2 | 11,679 | Grammar and style edits for the frontpage README | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | I'm one of those people who always spots apostrophes out of place, I'm sorry! I went through the frontpage README and fixed things up. I also reran the code examples when I had to change the text inside them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11679/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11679",
"html_url": "https://github.com/huggingface/transformers/pull/11679",
"diff_url": "https://github.com/huggingface/transformers/pull/11679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11679.patch",
"merged_at": 1620744574000
} |
https://api.github.com/repos/huggingface/transformers/issues/11678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11678/comments | https://api.github.com/repos/huggingface/transformers/issues/11678/events | https://github.com/huggingface/transformers/issues/11678 | 887,105,950 | MDU6SXNzdWU4ODcxMDU5NTA= | 11,678 | Zeroshot pipeline performance worse on CPU when processing multiple texts as "batch" | {
"login": "buhrmann",
"id": 190342,
"node_id": "MDQ6VXNlcjE5MDM0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/190342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buhrmann",
"html_url": "https://github.com/buhrmann",
"followers_url": "https://api.github.com/users/buhrmann/followers",
"following_url": "https://api.github.com/users/buhrmann/following{/other_user}",
"gists_url": "https://api.github.com/users/buhrmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buhrmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buhrmann/subscriptions",
"organizations_url": "https://api.github.com/users/buhrmann/orgs",
"repos_url": "https://api.github.com/users/buhrmann/repos",
"events_url": "https://api.github.com/users/buhrmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/buhrmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The reason is likely padding. When passed through as a batch, the shorter sequences have to be padded to the length of the longest sequence. On GPU, you'll still get a speedup because batching allows so much more parallel computing to happen that it makes up for the padding. But on CPU you just end up with more pad tokens to be processed without as much parallelization speedup. I bet if your 5 sequences were all approx the same length, the difference in compute times would be far smaller or the batched might even be faster.",
"That makes perfect sense. Thanks for the quick response! May be a good idea to try breaking up larger texts into similarly sized chunks then I guess. I'll try that if I get around to it.",
"I think on CPU, the simplest and best solution is probably going to be to just pass each sequence one at a time (at least if you have wildly variable-length sequences like in your example).",
"That's what I'm doing for now, thanks. I think you can close this issue then (unless you want to keep it around to add something to the docs at some point, but I guess low-resource inference using CPU only is not the typical use case anyway)."
] | 1,620 | 1,620 | 1,620 | NONE | null | Hi, I'm getting weird performance results using the zeroshot pipeline on a laptop with CPU. Essentially piping 5 texts through it at the same time is about 3x _slower_ than just iterating over the texts one by one:
``` python
from time import time  # imports added for completeness; not in the original snippet
from transformers import pipeline

texts = ...  # the 5 sample texts listed under "To reproduce" below
labels, template = ..., ...  # elided in the original report
classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli")
t0 = time()
res1 = classifier(texts, labels, template, multi_label=False)
t1 = time()
res2 = [classifier(txt, labels, template, multi_label=False) for txt in texts]
t2 = time()
print(t1-t0, t2-t1)
```
```
>>> 85.13976335525513 27.092346906661987
```
The results are the same (other than some decimals in probabilities). In both cases 4 CPUs are utilized pretty much constantly at 100%. I don't know the code, but perhaps there is an attempt to parallelize at the level of texts, which is being blocked by the GIL or something? Perhaps it's just a documentation issue and batch processing is not supported on CPU?
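Based on the padding explanation that came up in the comment thread, one possible mitigation is to batch texts of similar length so less compute is wasted on pad tokens. A hypothetical sketch (not part of the original report):
```python
# Sort by length so each mini-batch only pads to a similar-length neighbour.
texts_by_len = sorted(texts, key=len)
results = []
for i in range(0, len(texts_by_len), 2):  # small batches of similar length
    out = classifier(texts_by_len[i:i + 2], labels, template, multi_label=False)
    results.extend(out if isinstance(out, list) else [out])
```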
## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.6.0-1055-oem-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@joeddav
## Information
Model I am using: joeddav/xlm-roberta-large-xnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
See example above. For reference, the 5 texts are random samples from the 20 newsgroups dataset:
```python
['I have to disagree with you on this one. It is anything BUT common. In the 4 or 5 years I have been watching hockey I have NEVER seen this happen EVER. I am not sure what league you have been watching. :-) Anyone else agree with this?',
'About a month ago there was a photo posted on alt.binaries.pictures.misc of a 17.5-inch Northern Pike which had been caught on a lure made of 256K SIMMs. --',
"You can't. But good luck trying.",
": The cops/feds do *not* need to be able to get hold of your private key to : listen in to cellular conversations. Encryption is not end-to-end, but : cellphone to base-station - it *has* to be this way so that cellular users : and fixed installations can talk to each other. For cellular to cellular : calls, the transmission is decrypted at the base-station, passed to another : base-station and re-encrypted. The cops/feds can listen to the unscrambled : call *provided* they get a warrant to tap into the cellular provider's : equipment. The only reason for wanting a crackable system is so they can : listen without having to obtain a warrant. : But, maybe the Clipper system is secure, and they really do need a warrant : to get the key out of escrow before they can listen in using a scanner (see : above - they don't *have* to go down this route anyway). I have my doubts, : but even if true once they have the key they will *never* again need a : warrant to tap into that particular phone whenever they want. `Well, Judge, : it appears he wasn't a drug-dealer after all, so naturally we'll stop : listening in'... That was true for the UK Paul, but I'm fairly sure they're talking about building end-to-end encryption phones out of this chip. It's *not* for cellular (though it certainly could be used there in the way you suggest)",
'I am trying to get a copy of the _official_ rules of baseball. Someone once sent me the ISBN number of it, but I have since lost it. Can anyone give me this information, or tell me where I can find the book? None of my local bookstores have it.']
```
## Expected behavior
Piping multiple texts through the pipeline should be at least as fast, and ideally faster, than iterating over individual texts.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11678/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11677/comments | https://api.github.com/repos/huggingface/transformers/issues/11677/events | https://github.com/huggingface/transformers/pull/11677 | 886,760,248 | MDExOlB1bGxSZXF1ZXN0NjQwMDI1NDI1 | 11,677 | Identify issue in slow torch tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 2991663546,
"node_id": "MDU6TGFiZWwyOTkxNjYzNTQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Testing",
"name": "Testing",
"color": "19A601",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"And one more possible venue of just tracing start/stop of each test, so perhaps this can help to identify which test doesn't complete.\r\n\r\nHere is a poor man's start/stop tracer\r\n\r\n```\r\n# add to conftest.py\r\nimport pytest\r\nimport os\r\ntrace = os.environ.get('TRACE_START_STOP', \"\")\r\[email protected](tryfirst=True, hookwrapper=True)\r\ndef pytest_runtest_makereport(item, call):\r\n outcome = yield\r\n res = outcome.get_result()\r\n file_name, _, test_name = res.location\r\n test = f\"{file_name} {test_name}\"\r\n if res.when == \"setup\" and res.passed:\r\n if len(trace):\r\n print(f\"\\nTRACE {test} start\")\r\n elif res.when == \"call\" and not res.passed:\r\n pass\r\n elif res.when == \"teardown\":\r\n if len(trace):\r\n print(f\"\\nTRACE {test} stop\")\r\n```\r\nnow run as:\r\n```\r\nTRACE_START_STOP=1 pytest tests/test_trainer.py\r\n```\r\noutput:\r\n```\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_can_resume_training start\r\n.\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_can_resume_training stop\r\n\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_custom_optimizer start\r\n.\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_custom_optimizer stop\r\n\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_data_is_not_parallelized_when_model_is_parallel start\r\n.\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_data_is_not_parallelized_when_model_is_parallel stop\r\n\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_dynamic_shapes start\r\n.\r\nTRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_dynamic_shapes stop\r\n```",
"Update: from further investigation, it was identified that the main culprit was the `test_trainer_seq2seq.py` file which is based on the `cnn_dailymail` dataset.\r\n\r\nThe issue is that this dataset contains a lot of examples and that it was cached on the shared disk, which is not necessarily in the same region as the machine. My intuition tells me reading large files such as model files is fine as the download/upload speed to the disk should be good - however, I doubt the latency holds up when looking for a lot of different small files. When processing the dataset, the machine did it at a rate of 10 examples per second - vs my laptop PC which handles them at a rate of 12,000 examples per second. Maybe @lhoestq has already encountered such an issue in the past.\r\n\r\nProposal to resolve the issue:\r\n\r\nRight now I have patched this test by processing the dataset directly on the machine's disk, then moved it to the shared disk. When re-running the test, the machine picks the preprocessed dataset from the shared disk and passes the test in a total of 53 seconds, which is great.\r\n\r\nWhat we learned with this endeavor is that:\r\n- Having clean test outputs with :white_check_mark: everywhere is nice in theory, but when we have an issue we at least need the test names to be able to identify where it hangs\r\n- Having the `pytest-timeout` dependency is a lifesaver as it can automatically kill the hanging test, like in this case.\r\n\r\nI propose we keep the setup as it currently is, and find a way for the Slack CI feedback to say explicitly when there was a timeout. This would help to identify cases such as this one - and if such cases happen often, then we should re-think how the CI handles dataset storage, shared disk storage, or both.",
"Great work, @LysandreJik, at sorting it out!\r\n\r\n> * Having clean test outputs with white_check_mark everywhere is nice in theory, but when we have an issue we at least need the test names to be able to identify where it hangs\r\n\r\nWe actually weren't getting this working fully, since pytest only prints the test name once at least one of the tests has completed. So for example if you get a pytest crash it will never print the test name if it was the first test in it.\r\n\r\nSo we should probably keep this one handy: https://github.com/huggingface/transformers/pull/11677#issuecomment-838846226\r\nsince it prints the test name as soon as the test starts (may even need to add a flush should it be buffered but usually the pytest print is unbuffered) \r\n\r\nBut also using `pytest -sv` will also start with a printout of each full test name, before the test is run, albeit it'd be very very noisy. But in a pinch that is a possible quick solution if you want to know which test started and hasn't finished.",
"I haven't experienced such speed differences (12 000 vs 10 samples per sec) on my side.\r\nNote that the recent patch updates (1.6.1 and 1.6.2) fixed memory issues that could have led to slowdowns in some cases, have you tried updating `datasets` ?\r\n\r\nAlso let me know if I can help you on this",
"This should be closed as resolved due to revamped testing infrastructure (#15725, #15726, #15727, #15728, #15729)."
] | 1,620 | 1,651 | 1,651 | MEMBER | null | Pull request to try and identify the source of the hangs in the torch slow CI. Torch slow CI was taking three hours per run until a few days ago, and has since jumped to 6+ hours, for an unknown reason. The job ends up being killed as it goes over the timeout, so the resulting time might end up being even larger than six hours.
Example of run that took 3 hours (April 20, 2021): https://github.com/huggingface/transformers/actions/runs/765376348
Example of run that took 6+ hours (April 21, 2021: https://github.com/huggingface/transformers/actions/runs/768949009
Here is an example of a run that took 6+ hours, while completing the full common tests: https://github.com/huggingface/transformers/runs/2443524960?check_suite_focus=true
The common tests took 5h56 minutes to complete, and the pipeline tests took more than 4 hours to complete before being apparently killed by CI, so there was clearly something going wrong here.
In order to investigate the root cause of the issue, opening a PR here. Tests will be conducted on a testing machine with the exact same configuration as the other CI machines. Investigating on a single run, on a single GPU machine.
The approach is discussed with @stas00, who is helping out and offered some of the steps below.
## Step 1
The first step is ensuring this is not an error linked to the machine itself, so we first start by running the job on the machine without changing anything to it. We only add a 240-minute timeout so that it can go on to step 2 if it goes over the 4 hour mark (as we know it should take less than 3 hours to complete)
See run for first step here: https://github.com/huggingface/transformers/runs/2554755801
Edit: First run errored out at 6 hours like on other machines. I do not think it is a setup issue.
## Step 2 (if step 1 doesn't resolve the issue)
The second step is twofold: removing `pytest-xdist` as we do not leverage it (we're using a single worker), and adding `pytest-timeout` with a timeout of 300 seconds.
See run for second step here: https://github.com/huggingface/transformers/runs/2554760360
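For reference, a hypothetical way the 300-second timeout could be wired in via a `conftest.py` hook (equivalent to passing `--timeout=300` once `pytest-timeout` is installed; this is a sketch, not the actual CI change):
```python
# conftest.py
import pytest

def pytest_collection_modifyitems(items):
    # pytest-timeout honours the `timeout` marker; apply it to every collected test.
    for item in items:
        item.add_marker(pytest.mark.timeout(300))
```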
## Step 3 (if step 1 & 2 don't resolve the issue)
Do a manual run - at the 3 hour mark, it should be hanging.
As it is hanging, try to retrieve information about what is happening. For example, with the following:
```
pip install py-spy
# dumps traceback for each thread
sudo py-spy dump --pid PID
```
## Step 4 (if no step above resolves the issue)
The diff between the two jobs (3hr and 6hr) doesn't seem to have anything that would make the tests hang - but reverting to the previous repository state could help us identify the culprit. Diff: https://github.com/huggingface/transformers/compare/95037a1..95dab34
Additionally, Stas identified two differences in dependencies between the two runs:
```
-datasets-1.5.0
+datasets-1.6.0
-nltk-3.6.1
+nltk-3.6.2
```
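A quick hypothetical way to rule these out is to pin the versions from the 3-hour run and re-run the suite:
```python
import subprocess, sys

# Bisect the environment: restore the dependency versions from the 3-hour run.
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "datasets==1.5.0", "nltk==3.6.1"]
)
```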
Both dependency changes should be investigated at the same time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11677/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11677",
"html_url": "https://github.com/huggingface/transformers/pull/11677",
"diff_url": "https://github.com/huggingface/transformers/pull/11677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11677.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11676/comments | https://api.github.com/repos/huggingface/transformers/issues/11676/events | https://github.com/huggingface/transformers/pull/11676 | 886,710,669 | MDExOlB1bGxSZXF1ZXN0NjM5OTc4MTUw | 11,676 | Merge strings that are being concatenated in the same line | {
"login": "orestisfl",
"id": 5778622,
"node_id": "MDQ6VXNlcjU3Nzg2MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5778622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orestisfl",
"html_url": "https://github.com/orestisfl",
"followers_url": "https://api.github.com/users/orestisfl/followers",
"following_url": "https://api.github.com/users/orestisfl/following{/other_user}",
"gists_url": "https://api.github.com/users/orestisfl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orestisfl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orestisfl/subscriptions",
"organizations_url": "https://api.github.com/users/orestisfl/orgs",
"repos_url": "https://api.github.com/users/orestisfl/repos",
"events_url": "https://api.github.com/users/orestisfl/events{/privacy}",
"received_events_url": "https://api.github.com/users/orestisfl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | After fa84ae26d6, some strings are needlessly concatenated even though
they are not wrapped across multiple lines.
This change makes the code slightly less confusing and more grepable.
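An illustrative before/after (a hypothetical example, not an actual hunk from this diff):
```python
# before: two adjacent literals on one line, implicitly concatenated by Python
message = "Unable to parse the config: " "unexpected key"

# after: a single literal, easier to read and to grep for
message = "Unable to parse the config: unexpected key"
```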
# What does this PR do?
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11676/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11676",
"html_url": "https://github.com/huggingface/transformers/pull/11676",
"diff_url": "https://github.com/huggingface/transformers/pull/11676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11676.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11675/comments | https://api.github.com/repos/huggingface/transformers/issues/11675/events | https://github.com/huggingface/transformers/pull/11675 | 886,532,114 | MDExOlB1bGxSZXF1ZXN0NjM5ODA4NTQz | 11,675 | Fix TF Roberta for mixed precision training | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It looks good to me too. Thanks for the PR!"
] | 1,620 | 1,686 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the TF Roberta model for mixed precision training, so it is now aligned with the other models.
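For context, a minimal sketch of the setting under which the bug surfaced (assuming TF >= 2.4, where the mixed-precision API is stable):
```python
import tensorflow as tf
from transformers import TFRobertaModel

tf.keras.mixed_precision.set_global_policy("mixed_float16")
model = TFRobertaModel.from_pretrained("roberta-base")
outputs = model(model.dummy_inputs)  # should now run without dtype errors
```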
# Fixes
#11282 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11675/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11675",
"html_url": "https://github.com/huggingface/transformers/pull/11675",
"diff_url": "https://github.com/huggingface/transformers/pull/11675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11675.patch",
"merged_at": 1620748864000
} |
https://api.github.com/repos/huggingface/transformers/issues/11674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11674/comments | https://api.github.com/repos/huggingface/transformers/issues/11674/events | https://github.com/huggingface/transformers/pull/11674 | 886,440,000 | MDExOlB1bGxSZXF1ZXN0NjM5NzIyMDk0 | 11,674 | Add MacOS TF version | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As far as I have been tested until now, yes looks working quite well! I would even say that the work done by the Apple team on it is very impressive!!\r\n\r\nI will push new PRs if I encounter new issues with it :)",
"Great, thanks a lot @jplu :) "
] | 1,620 | 1,686 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
This PR adds the macOS TensorFlow version, mostly for Apple M1 laptops, where it is the recommended version to use.
"url": "https://api.github.com/repos/huggingface/transformers/issues/11674/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/11674/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11674",
"html_url": "https://github.com/huggingface/transformers/pull/11674",
"diff_url": "https://github.com/huggingface/transformers/pull/11674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11674.patch",
"merged_at": 1620726141000
} |
https://api.github.com/repos/huggingface/transformers/issues/11673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11673/comments | https://api.github.com/repos/huggingface/transformers/issues/11673/events | https://github.com/huggingface/transformers/pull/11673 | 886,241,363 | MDExOlB1bGxSZXF1ZXN0NjM5NTMzOTU3 | 11,673 | Add --text_column to run_summarization_no_trainer | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there and thanks for the PR! It doesn't really make sense to have `text_column` without `summary_column` to go with it. Could you add this one too?",
"Hi @sgugger ! `summary_column` is already there, it's only `text_column` missing. 😄 "
] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Add the `--text_column` option to `run_summarization_no_trainer.py`
(mostly copied from `run_summarization.py`)
Also removed a duplicated line:
`padding = "max_length" if args.pad_to_max_length else False`
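The added argument presumably mirrors the Trainer-based script; a self-contained sketch of the argparse definition (copied in spirit from `run_summarization.py`):
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--text_column",
    type=str,
    default=None,
    help="The name of the column in the datasets containing the full texts (for summarization).",
)
```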
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11673/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11673",
"html_url": "https://github.com/huggingface/transformers/pull/11673",
"diff_url": "https://github.com/huggingface/transformers/pull/11673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11673.patch",
"merged_at": 1620734318000
} |
https://api.github.com/repos/huggingface/transformers/issues/11672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11672/comments | https://api.github.com/repos/huggingface/transformers/issues/11672/events | https://github.com/huggingface/transformers/pull/11672 | 886,211,473 | MDExOlB1bGxSZXF1ZXN0NjM5NTA1NjUw | 11,672 | Fix docstring of description about input_ids | {
"login": "nxznm",
"id": 55944993,
"node_id": "MDQ6VXNlcjU1OTQ0OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/55944993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nxznm",
"html_url": "https://github.com/nxznm",
"followers_url": "https://api.github.com/users/nxznm/followers",
"following_url": "https://api.github.com/users/nxznm/following{/other_user}",
"gists_url": "https://api.github.com/users/nxznm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nxznm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nxznm/subscriptions",
"organizations_url": "https://api.github.com/users/nxznm/orgs",
"repos_url": "https://api.github.com/users/nxznm/repos",
"events_url": "https://api.github.com/users/nxznm/events{/privacy}",
"received_events_url": "https://api.github.com/users/nxznm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the docstring describing `input_ids` in the `DistilBertForSequenceClassification` class.
Fixes #11659
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11672/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11672",
"html_url": "https://github.com/huggingface/transformers/pull/11672",
"diff_url": "https://github.com/huggingface/transformers/pull/11672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11672.patch",
"merged_at": 1620735122000
} |
https://api.github.com/repos/huggingface/transformers/issues/11671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11671/comments | https://api.github.com/repos/huggingface/transformers/issues/11671/events | https://github.com/huggingface/transformers/issues/11671 | 886,097,812 | MDU6SXNzdWU4ODYwOTc4MTI= | 11,671 | Why run_translation.py automatically runs on CPU? | {
"login": "ustcwhy",
"id": 47163302,
"node_id": "MDQ6VXNlcjQ3MTYzMzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/47163302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ustcwhy",
"html_url": "https://github.com/ustcwhy",
"followers_url": "https://api.github.com/users/ustcwhy/followers",
"following_url": "https://api.github.com/users/ustcwhy/following{/other_user}",
"gists_url": "https://api.github.com/users/ustcwhy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ustcwhy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ustcwhy/subscriptions",
"organizations_url": "https://api.github.com/users/ustcwhy/orgs",
"repos_url": "https://api.github.com/users/ustcwhy/repos",
"events_url": "https://api.github.com/users/ustcwhy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ustcwhy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of https://github.com/huggingface/transformers/issues/11548#issuecomment-831159016\r\n\r\nCould you please answer to \r\n\r\n> Hi! Is your CUDA environment correctly set up? What is the output of the following in your environment?\r\n> \r\n> `python -c \"import torch;print(torch.cuda.is_available())\"`\r\n\r\nand how do you identify that it's not running on GPU? Could you put the accompanying logs? At which point do you see it running on CPU when you think it should be running on GPU?",
"> Duplicate of [#11548 (comment)](https://github.com/huggingface/transformers/issues/11548#issuecomment-831159016)\r\n> \r\n> Could you please answer to\r\n> \r\n> > Hi! Is your CUDA environment correctly set up? What is the output of the following in your environment?\r\n> > `python -c \"import torch;print(torch.cuda.is_available())\"`\r\n> \r\n> and how do you identify that it's not running on GPU? Could you put the accompanying logs? At which point do you see it running on CPU when you think it should be running on GPU?\r\n\r\nTrue…\r\nWhile the program is running, the GPU utilization is close to 0%, and the CPU utilization is close to 10%. After loading weights, the cmd window keeps showing ? it/s"
] | 1,620 | 1,620 | 1,620 | NONE | null | I use examples/pytorch/translation/run_translation.py to fine-tune mbart-large-cc25 on my own datasets, but it automatically runs on the CPU. I have 2 GPUs, but only one is an Nvidia card (an RTX 2080 Super).
```bash
python main.py \
    --model_name_or_path facebook/mbart-large-cc25 \
    --do_train \
    --do_eval \
    --source_lang en_XX \
    --target_lang zh_CN \
    --train_file /data/2WangHongyu/bioNMT_WHY/train.json \
    --validation_file /data/2WangHongyu/bioNMT_WHY/dev.json \
    --output_dir /output \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate \
    --cache_dir /model/2WangHongyu/mbart-large
```
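For reference, here is a minimal check of whether PyTorch can see the GPU at all (independent of the script itself):
```python
import torch

# If this prints False, the Trainer silently falls back to the CPU.
print(torch.cuda.is_available())
print(torch.cuda.device_count())
```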
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11670/comments | https://api.github.com/repos/huggingface/transformers/issues/11670/events | https://github.com/huggingface/transformers/issues/11670 | 885,915,266 | MDU6SXNzdWU4ODU5MTUyNjY= | 11,670 | license missing for xlm-roberta-large, and bert-base-spanish-wwm models | {
"login": "beomseok-lee",
"id": 13184386,
"node_id": "MDQ6VXNlcjEzMTg0Mzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/13184386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beomseok-lee",
"html_url": "https://github.com/beomseok-lee",
"followers_url": "https://api.github.com/users/beomseok-lee/followers",
"following_url": "https://api.github.com/users/beomseok-lee/following{/other_user}",
"gists_url": "https://api.github.com/users/beomseok-lee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beomseok-lee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beomseok-lee/subscriptions",
"organizations_url": "https://api.github.com/users/beomseok-lee/orgs",
"repos_url": "https://api.github.com/users/beomseok-lee/repos",
"events_url": "https://api.github.com/users/beomseok-lee/events{/privacy}",
"received_events_url": "https://api.github.com/users/beomseok-lee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"xlm-roberta should be licensed under MIT like all models available through [fairseq](https://github.com/pytorch/fairseq#license). @aconneau could you please confirm?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | CONTRIBUTOR | null | Hi HuggingFace team,
I am gratefully using many Hugging Face models, but I have found that the 'xlm-roberta-large' model is missing its license information.
The same applies to 'dccuchile/bert-base-spanish-wwm-uncased' and 'dccuchile/bert-base-spanish-wwm-cased':
[xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
[dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased)
[dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased)
Could you please add the license information via the README, the model card, or this thread?
Thanks ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11670/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11669/comments | https://api.github.com/repos/huggingface/transformers/issues/11669/events | https://github.com/huggingface/transformers/issues/11669 | 885,432,554 | MDU6SXNzdWU4ODU0MzI1NTQ= | 11,669 | [Question] How to serialize and load a trained RoBERTa model? | {
"login": "hapazv",
"id": 70067770,
"node_id": "MDQ6VXNlcjcwMDY3Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/70067770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hapazv",
"html_url": "https://github.com/hapazv",
"followers_url": "https://api.github.com/users/hapazv/followers",
"following_url": "https://api.github.com/users/hapazv/following{/other_user}",
"gists_url": "https://api.github.com/users/hapazv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hapazv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hapazv/subscriptions",
"organizations_url": "https://api.github.com/users/hapazv/orgs",
"repos_url": "https://api.github.com/users/hapazv/repos",
"events_url": "https://api.github.com/users/hapazv/events{/privacy}",
"received_events_url": "https://api.github.com/users/hapazv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | First, I apologize for my English; I am still learning the language. I am also just starting to learn about Transformer networks, TensorFlow, Keras, BERT, and RoBERTa, so I am new to all of this.
For a Kaggle challenge, I wrote code based on RoBERTa and the results were very good. However, when I tried to replicate the same code in Colab Pro, it was not possible due to the TPU version available in that tool, so I decided to save the weights from the Kaggle training run and load them into Colab. I have not been able to do it; I am doing something wrong and I don't understand what is going on.
At the end I will leave a link to the code on Kaggle; below is a broad-strokes version of how it is written:
```python
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import GlobalAveragePooling1D, Dense
from tensorflow.keras import Model
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from transformers import AutoTokenizer, TFAutoModel

modelo = 'joeddav/xlm-roberta-large-xnli'
tokenizer = AutoTokenizer.from_pretrained(modelo)

def token(x):
    tokens = list(tokenizer.tokenize(x))
    tokens.append('</s>')
    t = tokenizer.convert_tokens_to_ids(tokens)
    return t

def roberta_encode(hypotheses, premises, tokenizer):
    # `max_len` is defined earlier in the notebook (120 here).
    Pad = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
    sentence1 = tf.ragged.constant([token(s) for s in np.array(hypotheses)], dtype=tf.int32)
    sentence2 = tf.ragged.constant([token(s) for s in np.array(premises)], dtype=tf.int32)
    cls = [tokenizer.convert_tokens_to_ids([tokenizer.cls_token])] * sentence1.shape[0]
    tokens = tf.concat([cls, sentence1, sentence2], axis=-1)
    tokens = tokens[:, :max_len]  # remove for the full version
    tokens = tokens.to_tensor(default_value=Pad)
    pad = max_len - tf.shape(tokens)[1]
    tokens = tf.pad(tokens, [[0, 0], [0, pad]], constant_values=Pad)
    input_word_ids = tf.reshape(tokens, [-1, max_len])
    input_mask = tf.cast(input_word_ids != Pad, tf.int32)
    input_mask = tf.reshape(input_mask, [-1, max_len])
    input_type_ids = tf.concat(
        [tf.zeros_like(cls), tf.zeros_like(sentence1), tf.ones_like(sentence2)],
        axis=-1).to_tensor()

    inputs = {
        'input_word_ids': input_word_ids,
        'input_mask': input_mask,
        'input_type_ids': input_type_ids}
    return inputs

def build_dataset(x, y, mode, batch_size):  # function seen in several notebooks
    if mode == "train":
        dataset = (
            tf.data.Dataset
            .from_tensor_slices((x, y))
            .repeat()
            .shuffle(5678)
            .batch(batch_size)
            .prefetch(tf.data.experimental.AUTOTUNE)
        )
    elif mode == "valid":
        dataset = (
            tf.data.Dataset
            .from_tensor_slices((x, y))
            .batch(batch_size)
            .cache()
            .prefetch(tf.data.experimental.AUTOTUNE)
        )
    elif mode == "test":
        dataset = (
            tf.data.Dataset
            .from_tensor_slices(x)
            .batch(batch_size)
        )
    else:
        raise NotImplementedError
    return dataset

def build_model(model, max_len):
    # `strategy` is the TPU distribution strategy created earlier in the notebook.
    tf.keras.backend.clear_session()
    tf.random.set_seed(0)
    with strategy.scope():
        input_word_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
        model = TFAutoModel.from_pretrained(modelo)
        roberta = model([input_word_ids])[0]
        output = GlobalAveragePooling1D()(roberta)
        output = Dense(3, activation='softmax')(output)
        model = Model(inputs=[input_word_ids], outputs=output)
        model.compile(optimizer=Adam(lr=1e-5), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
        model.summary()
    return model

model = build_model(modelo, max_len)
```
Output:
```
Some layers from the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing TFXLMRobertaModel: ['classifier']
- This IS expected if you are initializing TFXLMRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFXLMRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFXLMRobertaModel were initialized from the model checkpoint at joeddav/xlm-roberta-large-xnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFXLMRobertaModel for predictions without further training.
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_word_ids (InputLayer)  [(None, 120)]             0
_________________________________________________________________
tfxlm_roberta_model (TFXLMRo TFBaseModelOutputWithPool 559890432
_________________________________________________________________
global_average_pooling1d (Gl (None, 1024)              0
_________________________________________________________________
dense (Dense)                (None, 3)                 3075
=================================================================
Total params: 559,893,507
Trainable params: 559,893,507
Non-trainable params: 0
_________________________________________________________________
```
```python
steps_per_epoch = len(x_train) // batch_size
stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', verbose=1, patience=2, mode='min', restore_best_weights=True)
model.fit(train_dataset, validation_data=valid_dataset, steps_per_epoch=steps_per_epoch, epochs=4, callbacks=[stop])
```
```
Epoch 1/4
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py:595: UserWarning: Input dict contained keys ['input_mask', 'input_type_ids'] which did not match any model input. They will be ignored by the model.
  [n for n in tensors.keys() if n not in ref_input_names])
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/indexed_slices.py:430: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 256002048 elements. This may consume a large amount of memory.
  num_elements)
2487/2487 [==============================] - 852s 284ms/step - loss: 0.2515 - accuracy: 0.9074 - val_loss: 2.3169 - val_accuracy: 0.4474
Epoch 2/4
2487/2487 [==============================] - 683s 275ms/step - loss: 0.1742 - accuracy: 0.9391 - val_loss: 2.1128 - val_accuracy: 0.4446
Epoch 3/4
2487/2487 [==============================] - 685s 276ms/step - loss: 0.1359 - accuracy: 0.9527 - val_loss: 2.6941 - val_accuracy: 0.4377
Epoch 4/4
2487/2487 [==============================] - 685s 276ms/step - loss: 0.1070 - accuracy: 0.9631 - val_loss: 2.6835 - val_accuracy: 0.4423
Restoring model weights from the end of the best epoch.
Epoch 00004: early stopping
<tensorflow.python.keras.callbacks.History at 0x7f51f47b7150>
```
I tried saving the weights like this:
`model.save('RobertaClasi.hdf5')`
but on one occasion it gave me an error message, and on another occasion it was not possible to load the model back.
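One alternative I am considering, based on the Keras docs (file names here are my own): save only the weights, then rebuild the same architecture before loading them back. This is a sketch of the idea, not something I have verified on TPU:
```python
# Save just the weights; full-model serialization can be fragile with
# custom transformer layers inside a Keras model.
model.save_weights('roberta_weights.h5')

# Later (e.g. in Colab): rebuild the identical architecture first.
model = build_model(modelo, max_len)
model.load_weights('roberta_weights.h5')
```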
I really appreciate any pointers; I leave the link to the code on Kaggle below.
[https://www.kaggle.com/hugoarmandopazvivas/contradictory-my-dear-watson-hapv?scriptVersionId=62195702](url)
@Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11669/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11668/comments | https://api.github.com/repos/huggingface/transformers/issues/11668/events | https://github.com/huggingface/transformers/issues/11668 | 884,681,155 | MDU6SXNzdWU4ODQ2ODExNTU= | 11,668 | KeyError: 'bigbird_pegasus' | {
"login": "loretoparisi",
"id": 163333,
"node_id": "MDQ6VXNlcjE2MzMzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loretoparisi",
"html_url": "https://github.com/loretoparisi",
"followers_url": "https://api.github.com/users/loretoparisi/followers",
"following_url": "https://api.github.com/users/loretoparisi/following{/other_user}",
"gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions",
"organizations_url": "https://api.github.com/users/loretoparisi/orgs",
"repos_url": "https://api.github.com/users/loretoparisi/repos",
"events_url": "https://api.github.com/users/loretoparisi/events{/privacy}",
"received_events_url": "https://api.github.com/users/loretoparisi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @loretoparisi,\r\n\r\nIt's working perfectly for me when running this:\r\n\r\n```shell\r\npip3 uninstall transformers\r\npip3 install git+https://github.com/huggingface/transformers@master\r\n```\r\n\r\n```python\r\nfrom transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer, AutoModelForSeq2SeqLM\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/bigbird-pegasus-large-arxiv\")\r\n# or\r\nmodel = BigBirdPegasusForConditionalGeneration.from_pretrained(\"google/bigbird-pegasus-large-arxiv\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/bigbird-pegasus-large-arxiv\")\r\n```\r\n\r\nBigBird pegasus is not having `BigBirdPegasusTokenizer` so use `AutoTokenizer` only.\r\n",
"@vasudevgupta7 thank you it worked, they was `pip3 uninstall transformers`.",
"@vasudevgupta7 sorry, I'm a bit confused with the masked model, like in the case of BERT/RoBERTa:\r\n\r\n```python\r\n# by default its in `block_sparse` mode with num_random_blocks=3, block_size=64\r\nmodel = BigBirdModel.from_pretrained(\"google/bigbird-roberta-large\", \r\n block_size=64, \r\n num_random_blocks=3,\r\n cache_dir=os.getenv(\"cache_dir\", \"../../models\"))\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/bigbird-pegasus-large-arxiv\",\r\n cache_dir=os.getenv(\"cache_dir\", \"../../models\"))\r\n\r\ntext = \"Paris is the [MASK] of France.\"\r\nencoded_input = tokenizer(text, return_tensors='pt')\r\noutput = model(**encoded_input)\r\nprint(output)\r\ndecoded = tokenizer.decode(tokenizer.convert_tokens_to_ids(output))\r\nprint(decoded)\r\n```\r\n\r\nright way to decode model's output?\r\nThank you!",
"@loretoparisi, you are using a `BigBird Roberta` model as the model and `BigBird Pegagus` as the tokenizer -> those are two different checkpoints.\r\n\r\nAlso, it would be very nice if you could use the [forum](https://discuss.huggingface.co/) for \"How to do ....\" questions as we try to keep the github issues for actual issues with the models. Thank you :-)",
"@patrickvonplaten typo in the code thanks. My two cents: models cards are missing the decoding part, while it should be there because it is not trivial.",
"> Hey @loretoparisi,\r\n> \r\n> It's working perfectly for me when running this:\r\n> \r\n> ```shell\r\n> pip3 uninstall transformers\r\n> pip3 install git+https://github.com/huggingface/transformers@master\r\n> ```\r\n> \r\n> ```python\r\n> from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer, AutoModelForSeq2SeqLM\r\n> model = AutoModelForSeq2SeqLM.from_pretrained(\"google/bigbird-pegasus-large-arxiv\")\r\n> # or\r\n> model = BigBirdPegasusForConditionalGeneration.from_pretrained(\"google/bigbird-pegasus-large-arxiv\")\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"google/bigbird-pegasus-large-arxiv\")\r\n> ```\r\n> \r\n> BigBird pegasus is not having `BigBirdPegasusTokenizer` so use `AutoTokenizer` only.\r\n\r\n\r\nI have the same problem\r\nI tried the code\r\n`pip3 uninstall transformers pip3 install git+https://github.com/huggingface/transformers@master`\r\nand then get\r\n\r\n```\r\nWARNING: Did not find branch or tag 'master', assuming revision or ref.\r\nRunning command git checkout -q master\r\nerror: pathspec 'master' did not match any file(s) known to git\r\nerror: subprocess-exited-with-error\r\n\r\n× git checkout -q master did not run successfully.\r\n│ exit code: 1\r\n╰─> See above for output.\r\n\r\nnote: This error originates from a subprocess, and is likely not a problem with pip.\r\nerror: subprocess-exited-with-error\r\n\r\n× git checkout -q master did not run successfully.\r\n│ exit code: 1\r\n╰─> See above for output.\r\n\r\nnote: This error originates from a subprocess, and is likely not a problem with pip.\r\n```\r\nAfter running the code, the problem is still there",
"Relaunch my notebook, problem solved 😐"
] | 1,620 | 1,651 | 1,620 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0.dev0
- Platform: Linux-5.10.25-linuxkit-x86_64-with-debian-10.1
- Python version: 3.7.4
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: (False)
- Using distributed or parallel set-up in script?: none
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): `google/bigbird-pegasus-large-arxiv`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
```python
import os
import torch
from datasets import load_dataset
from transformers import pipeline
from transformers import AutoTokenizer, AutoModel

dataset = load_dataset(
    "patrickvonplaten/scientific_papers_dummy", "arxiv",
    cache_dir=os.getenv("cache_dir", "../../models"))
paper = dataset["validation"]["article"][1]

tokenizer = AutoTokenizer.from_pretrained(
    'google/bigbird-pegasus-large-arxiv',
    cache_dir=os.getenv("cache_dir", "../../models"))
model = AutoModel.from_pretrained(
    'google/bigbird-pegasus-large-arxiv',
    cache_dir=os.getenv("cache_dir", "../../models"))

summarizer = pipeline(
    'summarization',
    model=model,
    tokenizer=tokenizer)
```
Steps to reproduce the behavior:
1. Run the provided script
2. output:
```
2021-05-10 17:11:53.523744: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-05-10 17:11:53.523858: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Reusing dataset scientific_papers (models/scientific_papers/arxiv/1.1.1/051d70b9811c81480cbf2a238b499f7713ba4e19acdaeeb92320007d68b6d098)
Traceback (most recent call last):
File "src/bigbird/run.py", line 17, in <module>
cache_dir=os.getenv("cache_dir", "../../models"))
File "/usr/local/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 398, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 421, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'bigbird_pegasus'
```
I have also tried this import
```python
from transformers import BigBirdPegasusForConditionalGeneration, BigBirdPegasusTokenizer
```
as described in the docs [here](https://huggingface.co/google/bigbird-pegasus-large-arxiv), but in this case I get another error:
```
from transformers import BigBirdPegasusForConditionalGeneration, BigBirdPegasusTokenizer
ImportError: cannot import name 'BigBirdPegasusForConditionalGeneration' from 'transformers' (unknown location)
```
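For completeness, this is the loading pattern I would expect to work once the classes are available (it mirrors the model card, assuming a recent enough source install):
```python
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")
```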
## Expected behavior
no error | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11668/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11667/comments | https://api.github.com/repos/huggingface/transformers/issues/11667/events | https://github.com/huggingface/transformers/pull/11667 | 884,578,223 | MDExOlB1bGxSZXF1ZXN0NjM3OTQ4ODQ1 | 11,667 | [BigBird Pegasus] Add config to auto tokenizer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds the BigBirdPegasus config to the auto tokenizer mapping.
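In practice this means `AutoTokenizer` can resolve BigBirdPegasus checkpoints directly (a small usage sketch; the checkpoint name is taken from the related issue):
```python
from transformers import AutoTokenizer

# Resolves via the config's model_type now that the mapping entry exists.
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
print(type(tokenizer))  # a Pegasus tokenizer class, as BigBirdPegasus reuses it
```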
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11667/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11667",
"html_url": "https://github.com/huggingface/transformers/pull/11667",
"diff_url": "https://github.com/huggingface/transformers/pull/11667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11667.patch",
"merged_at": 1620664697000
} |
https://api.github.com/repos/huggingface/transformers/issues/11666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11666/comments | https://api.github.com/repos/huggingface/transformers/issues/11666/events | https://github.com/huggingface/transformers/issues/11666 | 884,572,891 | MDU6SXNzdWU4ODQ1NzI4OTE= | 11,666 | GPTNeoForCausalLM: resuming Trainer from checkpoint causes Missing key(s) in state_dict: "lm_head.weight" | {
"login": "xusky69",
"id": 22157766,
"node_id": "MDQ6VXNlcjIyMTU3NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/22157766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xusky69",
"html_url": "https://github.com/xusky69",
"followers_url": "https://api.github.com/users/xusky69/followers",
"following_url": "https://api.github.com/users/xusky69/following{/other_user}",
"gists_url": "https://api.github.com/users/xusky69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xusky69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xusky69/subscriptions",
"organizations_url": "https://api.github.com/users/xusky69/orgs",
"repos_url": "https://api.github.com/users/xusky69/repos",
"events_url": "https://api.github.com/users/xusky69/events{/privacy}",
"received_events_url": "https://api.github.com/users/xusky69/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@xusky69 , does the fix work for you? I'm still getting an error upon Gpt-neo training:\r\n```bash\r\n...\r\n File \"huggingface/transformers_local/src/transformers/trainer.py\", line 1366, in train\r\n self.model.load_state_dict(state_dict)\r\n File \"huggingface-SJGCx2Wk/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1224, in load_state_dict\r\n self.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for GPTNeoForCausalLM:\r\n Missing key(s) in state_dict: \"lm_head.weight\".\r\n```\r\n\r\n\r\n",
"The traceback shows you are not using a version of the library that has the fix (the line `self.model.load_state_dict(state_dict)` has been changed in the PR mentioned). Make sure to use a source install or upgrade to the latest release (4.6.0).",
"@sgugger , thanks for the response! I use the latest version, therefore the script fails in a different place (line 1365 – when the best model is loaded, but the PR fixes initial loading from the checkpoint). I've created a [new one](https://github.com/huggingface/transformers/pull/11718) – could you please take a look? "
] | 1,620 | 1,620 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.6.0.dev0 (also happens with pip 4.5.1)
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic (Google Colab)
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): Not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
- trainer: @sgugger
## Information
Resuming training from a `Trainer` checkpoint for `GPTNeoForCausalLM` causes the following runtime error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-3b03205cdcc2> in <module>()
2 ### %%%%%%%%%%%%%%%%%%%%%%%% TRAINING %%%%%%%%%%%%%%%%%%%%%%%%% ###
3 ### %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ###
----> 4 trainer.train(checkpoint)
1 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
1222 if len(error_msgs) > 0:
1223 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1224 self.__class__.__name__, "\n\t".join(error_msgs)))
1225 return _IncompatibleKeys(missing_keys, unexpected_keys)
1226
RuntimeError: Error(s) in loading state_dict for GPTNeoForCausalLM:
Missing key(s) in state_dict: "lm_head.weight".
```
This happens with the 125M model; I haven't tested with 1.3B and 2.7B. Loading the model manually using `.from_pretrained()` and commenting out the following lines in `/transformers/trainer.py`
```
else:
    # We load the model state dict on the CPU to avoid an OOM error.
    state_dict = torch.load(os.path.join(resume_from_checkpoint, WEIGHTS_NAME), map_location="cpu")
    # If the model is on the GPU, it still works!
    self.model.load_state_dict(state_dict)
```
allows me to resume training.
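My working theory (unverified) is that GPT-Neo ties `lm_head` to the input embeddings, so the tied weight is simply not stored in the checkpoint, and the strict `load_state_dict` call then fails. If that is right, a tolerant load followed by re-tying should be a safe workaround; this is only a sketch, with a placeholder checkpoint path:
```python
import os
import torch
from transformers import GPTNeoForCausalLM

checkpoint_dir = "output/checkpoint-500"  # placeholder path
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

state_dict = torch.load(os.path.join(checkpoint_dir, "pytorch_model.bin"), map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # tolerate the absent tied lm_head.weight
model.tie_weights()  # re-tie lm_head to the input embeddings
```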
## To reproduce
Steps to reproduce the behavior:
1. Initialize training via `Trainer` for `GPTNeoForCausalLM` and save a checkpoint
2. Reset env and try to resume training from such checkpoint
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
For the training to resume correctly | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11666/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11665/comments | https://api.github.com/repos/huggingface/transformers/issues/11665/events | https://github.com/huggingface/transformers/issues/11665 | 884,522,679 | MDU6SXNzdWU4ODQ1MjI2Nzk= | 11,665 | [Question] How to move and reuse preprocessed dataset? | {
"login": "AtmaHou",
"id": 15045402,
"node_id": "MDQ6VXNlcjE1MDQ1NDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AtmaHou",
"html_url": "https://github.com/AtmaHou",
"followers_url": "https://api.github.com/users/AtmaHou/followers",
"following_url": "https://api.github.com/users/AtmaHou/following{/other_user}",
"gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions",
"organizations_url": "https://api.github.com/users/AtmaHou/orgs",
"repos_url": "https://api.github.com/users/AtmaHou/repos",
"events_url": "https://api.github.com/users/AtmaHou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AtmaHou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @lhoestq the preprocessed dataset should be cached, right?",
"Hi ! Yes it should, as long as you didn't change any parameter passed to the `map` function. They must be exactly the same.\r\n\r\nCould you open an issue on the `datasets` repo at https://github.com/huggingface/datasets if you want to discuss this caching issue in more details ?",
"> Hi ! Yes it should, as long as you didn't change any parameter passed to the `map` function. They must be exactly the same.\r\n> \r\n> Could you open an issue on the `datasets` repo at https://github.com/huggingface/datasets if you want to discuss this caching issue in more details ?\r\n\r\nSure thanks, the new issue is at here:\r\nhttps://github.com/huggingface/datasets/issues/2345",
"> Hi ! Yes it should, as long as you didn't change any parameter passed to the `map` function. They must be exactly the same.\r\n> \r\n> Could you open an issue on the `datasets` repo at https://github.com/huggingface/datasets if you want to discuss this caching issue in more details ?\r\n\r\nI tried to re-run the example [script ](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) after a success running (preprocess finished and start training) it still re-preprocess all data.\r\n\r\n**Details:**\r\n(1) It re-preprocess data even after showing: `05/11/2021 11:47:01 - WARNING - datasets.builder - Reusing dataset text (/home/cache/text/default-7083a0557f2cff9e/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)`\r\n(2) I didn't --overwrite_cache\r\n\r\nSo, how to reuse preprocessed data? Is there any option I need to open for the scripts? @lhoestq \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,623 | 1,623 | NONE | null | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess).
I tried to:
1. copy `path_to_cache_dir/datasets` to `new_cache_dir/datasets`
2. set `export HF_DATASETS_CACHE="new_cache_dir/"`
but the program still re-preprocesses the whole dataset without loading the cache.
I also tried `torch.save(lm_datasets, fw)`, but the saved file is only 14MB.
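For reference, what I am now trying instead of copying the cache directory is the explicit `datasets` serialization API (paths below are placeholders; `lm_datasets` is the object produced by the preprocessing step):
```python
# After preprocessing (e.g. at the end of the run_clm.py preprocessing step):
lm_datasets.save_to_disk("/path/to/preprocessed")  # writes the actual Arrow data

# Later, possibly on another machine:
from datasets import load_from_disk
lm_datasets = load_from_disk("/path/to/preprocessed")
```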
What is the proper way to do this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11665/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11664/comments | https://api.github.com/repos/huggingface/transformers/issues/11664/events | https://github.com/huggingface/transformers/issues/11664 | 884,502,370 | MDU6SXNzdWU4ODQ1MDIzNzA= | 11,664 | RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC | {
"login": "kamil-bentounes",
"id": 45245189,
"node_id": "MDQ6VXNlcjQ1MjQ1MTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45245189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamil-bentounes",
"html_url": "https://github.com/kamil-bentounes",
"followers_url": "https://api.github.com/users/kamil-bentounes/followers",
"following_url": "https://api.github.com/users/kamil-bentounes/following{/other_user}",
"gists_url": "https://api.github.com/users/kamil-bentounes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamil-bentounes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamil-bentounes/subscriptions",
"organizations_url": "https://api.github.com/users/kamil-bentounes/orgs",
"repos_url": "https://api.github.com/users/kamil-bentounes/repos",
"events_url": "https://api.github.com/users/kamil-bentounes/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamil-bentounes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @Kamilbentounes,\r\n\r\nIt looks like your `config.vocab_size` does not match the `config.vocab_size` of the fine-tuned French Wav2Vec2 model. It looks like you want to initialize the model with 132 characters, but the original vocab size is 123. Could you try to align the vocab size to the one of the fine-tuned model? :-) \r\n\r\nIf this doesn't fix the problem, please ping me here again",
"Hey @patrickvonplaten \r\n\r\nThanks for your quick reply. Yes, before writing this issue I tried it but I had an error during training phase :/ ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.5.0
- Platform: Ubuntu 18.04
- Python version: 3.7.4
### Who can help
- @patrickvonplaten
## Information
The model I'm using is Wav2Vec 2.0.
The problem arises when loading the pretrained/fine-tuned model:
```
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([123, 768]) from checkpoint, the shape in current model is torch.Size([132, 768]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([123]) from checkpoint, the shape in current model is torch.Size([132]).
```
The task I am working on is:
* Fine-tuning Wav2Vec 2.0 on my own data, starting from the fine-tuned French XLSR model.
## To reproduce
I followed the steps mentioned [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). I first fine-tuned the base model on my own dataset. Then, I tried to fine-tune it starting from the fine-tuned French XLSR model.
With fairseq, this problem is fixed by adding the `--restore` argument to indicate that the model is fine-tuned from an existing X-architecture checkpoint.
Any idea how we can do this with Transformers?
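In the meantime, here is the sanity check I am running, comparing the tokenizer vocabulary with the `vocab_size` stored in the checkpoint's config (the checkpoint name below is a placeholder for the French XLSR model I start from):
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

ckpt = "path/to/finetuned-french-xlsr"  # placeholder
processor = Wav2Vec2Processor.from_pretrained(ckpt)
model = Wav2Vec2ForCTC.from_pretrained(ckpt)

# These must match (123 for this checkpoint); my new vocab of 132 characters
# no longer lines up with the lm_head stored in the checkpoint.
print(len(processor.tokenizer), model.config.vocab_size)
```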
Here is the whole error:
```
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([123, 768]) from checkpoint, the shape in current model is torch.Size([132, 768]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([123]) from checkpoint, the shape in current model is torch.Size([132]).
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11664/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11663/comments | https://api.github.com/repos/huggingface/transformers/issues/11663/events | https://github.com/huggingface/transformers/pull/11663 | 884,381,728 | MDExOlB1bGxSZXF1ZXN0NjM3NzY4ODU5 | 11,663 | Save scaler state dict when checkpointing | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | COLLABORATOR | null | # What does this PR do?
One last thing was missing for resuming from checkpoints and getting exactly the same results as an uninterrupted training run: the gradient scaler state when using mixed precision with AMP in PyTorch. This PR addresses that.
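Concretely, the mechanics are roughly the following (a simplified sketch; the actual file name and guards live in `Trainer`, and the directory path here is a placeholder):
```python
import os
import torch

output_dir = "output/checkpoint-500"  # placeholder
scaler = torch.cuda.amp.GradScaler()

# When writing a checkpoint:
torch.save(scaler.state_dict(), os.path.join(output_dir, "scaler.pt"))

# When resuming from it:
scaler.load_state_dict(torch.load(os.path.join(output_dir, "scaler.pt")))
```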
Fixes #11323 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11663/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11663",
"html_url": "https://github.com/huggingface/transformers/pull/11663",
"diff_url": "https://github.com/huggingface/transformers/pull/11663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11663.patch",
"merged_at": 1620658710000
} |
https://api.github.com/repos/huggingface/transformers/issues/11662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11662/comments | https://api.github.com/repos/huggingface/transformers/issues/11662/events | https://github.com/huggingface/transformers/issues/11662 | 884,365,157 | MDU6SXNzdWU4ODQzNjUxNTc= | 11,662 | IBERT: Testing the speedup | {
"login": "fdlci",
"id": 73292708,
"node_id": "MDQ6VXNlcjczMjkyNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/73292708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdlci",
"html_url": "https://github.com/fdlci",
"followers_url": "https://api.github.com/users/fdlci/followers",
"following_url": "https://api.github.com/users/fdlci/following{/other_user}",
"gists_url": "https://api.github.com/users/fdlci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdlci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdlci/subscriptions",
"organizations_url": "https://api.github.com/users/fdlci/orgs",
"repos_url": "https://api.github.com/users/fdlci/repos",
"events_url": "https://api.github.com/users/fdlci/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdlci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You may find the discussion in https://github.com/huggingface/transformers/issues/11312 by @kssteven418 interesting!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I want to test IBERT's, and I have done exactly what is said in https://huggingface.co/kssteven/ibert-roberta-base. For the quantization part, when I set quant_mode to true and run the evaluation again, I get a much low accuracy model. What am I doing wrong?"
] | 1,620 | 1,635 | 1,623 | NONE | null | Hi,
I want to test I-BERT's speedup, and I have done exactly what is described in https://huggingface.co/kssteven/ibert-roberta-base. For the quantization part, when I set `quant_mode` to `True` and run the evaluation again, I get a much slower model. What am I doing wrong?
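For reference, this is how I am switching quantization on, following the model card (model name as published on the Hub):
```python
from transformers import IBertConfig, IBertForSequenceClassification

# Reload the already fine-tuned full-precision checkpoint with quant_mode enabled.
config = IBertConfig.from_pretrained("kssteven/ibert-roberta-base", quant_mode=True)
model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base", config=config)
```
My suspicion is that the integer operations are simulated in floating point inside the library, which would explain a slowdown rather than a speedup, but I am not sure.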
Thank you for your reply! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11662/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11661/comments | https://api.github.com/repos/huggingface/transformers/issues/11661/events | https://github.com/huggingface/transformers/pull/11661 | 884,178,831 | MDExOlB1bGxSZXF1ZXN0NjM3NTg0NzM3 | 11,661 | Update pretrained_models.rst | {
"login": "mnskim",
"id": 48931129,
"node_id": "MDQ6VXNlcjQ4OTMxMTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/48931129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnskim",
"html_url": "https://github.com/mnskim",
"followers_url": "https://api.github.com/users/mnskim/followers",
"following_url": "https://api.github.com/users/mnskim/following{/other_user}",
"gists_url": "https://api.github.com/users/mnskim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnskim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnskim/subscriptions",
"organizations_url": "https://api.github.com/users/mnskim/orgs",
"repos_url": "https://api.github.com/users/mnskim/repos",
"events_url": "https://api.github.com/users/mnskim/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnskim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @patil-suraj , \r\nI added the (N encoder and decoder layers) to the existing descriptions for facebook/bart-base and facebook/bart-large. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,623 | 1,623 | NONE | null | # What does this PR do?
Updates the descriptions of facebook/bart-base and facebook/bart-large in the Pretrained models page to specify the number of encoder and decoder layers, according to #11574. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11661/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11661",
"html_url": "https://github.com/huggingface/transformers/pull/11661",
"diff_url": "https://github.com/huggingface/transformers/pull/11661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11661.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11660/comments | https://api.github.com/repos/huggingface/transformers/issues/11660/events | https://github.com/huggingface/transformers/pull/11660 | 884,087,772 | MDExOlB1bGxSZXF1ZXN0NjM3NTAyNDI2 | 11,660 | run_text_classification.py fix | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | MEMBER | null | This is the fix to the TF run_text_classification.py script suggested by @bhadreshpsavani in #10482. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11660/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11660",
"html_url": "https://github.com/huggingface/transformers/pull/11660",
"diff_url": "https://github.com/huggingface/transformers/pull/11660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11660.patch",
"merged_at": 1620653285000
} |
https://api.github.com/repos/huggingface/transformers/issues/11659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11659/comments | https://api.github.com/repos/huggingface/transformers/issues/11659/events | https://github.com/huggingface/transformers/issues/11659 | 884,070,670 | MDU6SXNzdWU4ODQwNzA2NzA= | 11,659 | [Doc] Something wrong in description of 'DistilBertForSequenceClassification' in doc | {
"login": "nxznm",
"id": 55944993,
"node_id": "MDQ6VXNlcjU1OTQ0OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/55944993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nxznm",
"html_url": "https://github.com/nxznm",
"followers_url": "https://api.github.com/users/nxznm/followers",
"following_url": "https://api.github.com/users/nxznm/following{/other_user}",
"gists_url": "https://api.github.com/users/nxznm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nxznm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nxznm/subscriptions",
"organizations_url": "https://api.github.com/users/nxznm/orgs",
"repos_url": "https://api.github.com/users/nxznm/repos",
"events_url": "https://api.github.com/users/nxznm/events{/privacy}",
"received_events_url": "https://api.github.com/users/nxznm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's correct, thanks for spotting! Could you open a PR to fix this?\r\n\r\nThanks!"
] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | I think the description of input_ids (one of the parameters) of [DistilBertForSequenceClassification](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertforsequenceclassification) is not correct.
I think the input_ids should be `torch.LongTensor of shape (batch_size, sequence_length)`, rather than `torch.LongTensor of shape (batch_size, num_choices)`.
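A quick sketch to illustrate the shape in practice (using the standard distilbert-base-uncased checkpoint as an example):

```python
# Minimal check of the input_ids shape expected by DistilBertForSequenceClassification
from transformers import AutoTokenizer, DistilBertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

inputs = tokenizer(["first example", "second example"], padding=True, return_tensors="pt")
print(inputs["input_ids"].shape)  # torch.Size([2, seq_len]) -> (batch_size, sequence_length)
outputs = model(**inputs)  # works: the model consumes (batch_size, sequence_length) inputs
```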
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11659/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11658/comments | https://api.github.com/repos/huggingface/transformers/issues/11658/events | https://github.com/huggingface/transformers/issues/11658 | 883,995,692 | MDU6SXNzdWU4ODM5OTU2OTI= | 11,658 | NCCL No space left on device Error while training with deepspeed | {
"login": "hasansalimkanmaz",
"id": 49716619,
"node_id": "MDQ6VXNlcjQ5NzE2NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasansalimkanmaz",
"html_url": "https://github.com/hasansalimkanmaz",
"followers_url": "https://api.github.com/users/hasansalimkanmaz/followers",
"following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions",
"organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs",
"repos_url": "https://api.github.com/users/hasansalimkanmaz/repos",
"events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am closing the issue, because I see that it is a docker shm issue that is not related to HF. I have solved the issue thanks to [this stackoverflow issue](https://stackoverflow.com/a/46434614/11758585)"
] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.4.0
- Platform: docker
- Python version: 3.8
- PyTorch version (GPU?): pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 -f https://download.pytorch.org/whl/torch_stable.html
- Using distributed or parallel set-up in script?: DDP
Models:
- LayoutLMForTokenClassification
Library:
- deepspeed: @stas00
The problem arises when using:
I pass my DeepSpeed config file to `TrainingArguments` when initialising the args object.
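For context, a minimal sketch of how the config is wired up on my side (the argument values here are illustrative, except the config path, which appears in the log below):

```python
# Sketch: pointing the HF Trainer at a DeepSpeed config file (values are assumptions)
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    fp16=True,
    deepspeed="nlp_ner_layoutlm/toplevel_configs/ds_config.json",  # config path from the log below
)
```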
During training, I got a strange error from the NCCL backend. The error message is `No space left on device`, but that shouldn't be possible.
Here is the complete traceback.
```
PyTorch version 1.7.1+cu101 available.
TensorFlow version 2.2.1 available.
Successfully imported onnx version 1.7.0
2021-05-10 10:56:19 DEBUG tensorflow Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
2021-05-10 10:56:24 INFO __main__ Training in distributed mode...
[2021-05-10 10:56:26,344] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-05-10 10:56:26,360] [INFO] [runner.py:360:main] cmd = /usr/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 nlp_ner_layoutlm/train_pipeline/training_step/training_script.py --local_example_folder /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data --model_dir /mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model --window_length 512 --batch_size 8 --weight_decay 0.0 --adam_epsilon 1e-08 --learning_rate 2e-05 --epochs 200 --seed 11046060 --bit_precision_fp16 1 --tagging_scheme BILOU --profile_logs /mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs --patience 50 --gradient_accumulation_steps 2 --warmup_steps 300 --composite 0 --n_transformer_layers 1 --composite_loss_weight 0.5 --self_training 0 --base_model /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface
[2021-05-10 10:56:28,227] [INFO] [launch.py:73:main] 0 NCCL_DEBUG INFO
[2021-05-10 10:56:28,227] [INFO] [launch.py:73:main] 0 NCCL_VERSION 2.7.8
[2021-05-10 10:56:28,227] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2021-05-10 10:56:28,227] [INFO] [launch.py:86:main] nnodes=1, num_local_procs=2, node_rank=0
[2021-05-10 10:56:28,227] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2021-05-10 10:56:28,227] [INFO] [launch.py:102:main] dist_world_size=2
[2021-05-10 10:56:28,227] [INFO] [launch.py:104:main] Setting CUDA_VISIBLE_DEVICES=0,1
2021-05-10 10:56:30 DEBUG tensorflow Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
2021-05-10 10:56:30 DEBUG tensorflow Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
PyTorch version 1.7.1+cu101 available.
PyTorch version 1.7.1+cu101 available.
TensorFlow version 2.2.1 available.
TensorFlow version 2.2.1 available.
Successfully imported onnx version 1.7.0
Successfully imported onnx version 1.7.0
2021-05-10 10:56:32 INFO common_utils.utils Received the following cli arguments: ['nlp_ner_layoutlm/train_pipeline/training_step/training_script.py', '--local_rank=0', '--local_example_folder', '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', '--model_dir', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', '--window_length', '512', '--batch_size', '8', '--weight_decay', '0.0', '--adam_epsilon', '1e-08', '--learning_rate', '2e-05', '--epochs', '200', '--seed', '11046060', '--bit_precision_fp16', '1', '--tagging_scheme', 'BILOU', '--profile_logs', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', '--patience', '50', '--gradient_accumulation_steps', '2', '--warmup_steps', '300', '--composite', '0', '--n_transformer_layers', '1', '--composite_loss_weight', '0.5', '--self_training', '0', '--base_model', '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface']
2021-05-10 10:56:32 INFO common_utils.utils Received the following cli arguments: ['nlp_ner_layoutlm/train_pipeline/training_step/training_script.py', '--local_rank=1', '--local_example_folder', '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', '--model_dir', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', '--window_length', '512', '--batch_size', '8', '--weight_decay', '0.0', '--adam_epsilon', '1e-08', '--learning_rate', '2e-05', '--epochs', '200', '--seed', '11046060', '--bit_precision_fp16', '1', '--tagging_scheme', 'BILOU', '--profile_logs', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', '--patience', '50', '--gradient_accumulation_steps', '2', '--warmup_steps', '300', '--composite', '0', '--n_transformer_layers', '1', '--composite_loss_weight', '0.5', '--self_training', '0', '--base_model', '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface']
2021-05-10 10:56:32 INFO common_utils.utils Parsed the following parameters: {'local_example_folder': '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', 'model_dir': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', 'window_length': 512, 'batch_size': 8, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'learning_rate': 2e-05, 'epochs': 200, 'seed': 11046060, 'bit_precision_fp16': 1, 'tagging_scheme': 'BILOU', 'profile_logs': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', 'patience': 50, 'base_model': '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface', 'gradient_accumulation_steps': 2, 'warmup_steps': 300, 'old_model_dir': None, 'local_rank': 0, 'sampling_lambda': 0.0, 'self_training': 0, 'composite': 0, 'n_transformer_layers': 1, 'composite_loss_weight': 0.5}
2021-05-10 10:56:32 INFO common_utils.utils Parsed the following parameters: {'local_example_folder': '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', 'model_dir': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', 'window_length': 512, 'batch_size': 8, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'learning_rate': 2e-05, 'epochs': 200, 'seed': 11046060, 'bit_precision_fp16': 1, 'tagging_scheme': 'BILOU', 'profile_logs': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', 'patience': 50, 'base_model': '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface', 'gradient_accumulation_steps': 2, 'warmup_steps': 300, 'old_model_dir': None, 'local_rank': 1, 'sampling_lambda': 0.0, 'self_training': 0, 'composite': 0, 'n_transformer_layers': 1, 'composite_loss_weight': 0.5}
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/added_tokens.json. We won't load it.
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/added_tokens.json. We won't load it.
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer.json. We won't load it.
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer.json. We won't load it.
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/vocab.txt
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/vocab.txt
loading file None
loading file None
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/special_tokens_map.json
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/special_tokens_map.json
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer_config.json
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer_config.json
loading file None
loading file None
2021-05-10 10:56:32 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:32 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.trainers Using base model from /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface
2021-05-10 10:56:35 INFO transformers.configuration_utils loading configuration file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/config.json
2021-05-10 10:56:35 INFO transformers.configuration_utils Model config LayoutLMConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8",
"9": "LABEL_9",
"10": "LABEL_10",
"11": "LABEL_11",
"12": "LABEL_12",
"13": "LABEL_13",
"14": "LABEL_14",
"15": "LABEL_15",
"16": "LABEL_16",
"17": "LABEL_17",
"18": "LABEL_18",
"19": "LABEL_19",
"20": "LABEL_20",
"21": "LABEL_21",
"22": "LABEL_22",
"23": "LABEL_23",
"24": "LABEL_24"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_10": 10,
"LABEL_11": 11,
"LABEL_12": 12,
"LABEL_13": 13,
"LABEL_14": 14,
"LABEL_15": 15,
"LABEL_16": 16,
"LABEL_17": 17,
"LABEL_18": 18,
"LABEL_19": 19,
"LABEL_2": 2,
"LABEL_20": 20,
"LABEL_21": 21,
"LABEL_22": 22,
"LABEL_23": 23,
"LABEL_24": 24,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8,
"LABEL_9": 9
},
"layer_norm_eps": 1e-12,
"max_2d_position_embeddings": 1024,
"max_position_embeddings": 512,
"model_type": "layoutlm",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.4.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.trainers Using base model from /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface
2021-05-10 10:56:35 INFO transformers.modeling_utils loading weights file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/pytorch_model.bin
2021-05-10 10:56:35 INFO transformers.configuration_utils loading configuration file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/config.json
2021-05-10 10:56:35 INFO transformers.configuration_utils Model config LayoutLMConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8",
"9": "LABEL_9",
"10": "LABEL_10",
"11": "LABEL_11",
"12": "LABEL_12",
"13": "LABEL_13",
"14": "LABEL_14",
"15": "LABEL_15",
"16": "LABEL_16",
"17": "LABEL_17",
"18": "LABEL_18",
"19": "LABEL_19",
"20": "LABEL_20",
"21": "LABEL_21",
"22": "LABEL_22",
"23": "LABEL_23",
"24": "LABEL_24"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_10": 10,
"LABEL_11": 11,
"LABEL_12": 12,
"LABEL_13": 13,
"LABEL_14": 14,
"LABEL_15": 15,
"LABEL_16": 16,
"LABEL_17": 17,
"LABEL_18": 18,
"LABEL_19": 19,
"LABEL_2": 2,
"LABEL_20": 20,
"LABEL_21": 21,
"LABEL_22": 22,
"LABEL_23": 23,
"LABEL_24": 24,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8,
"LABEL_9": 9
},
"layer_norm_eps": 1e-12,
"max_2d_position_embeddings": 1024,
"max_position_embeddings": 512,
"model_type": "layoutlm",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.4.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
2021-05-10 10:56:35 INFO transformers.modeling_utils loading weights file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/pytorch_model.bin
2021-05-10 10:56:50 INFO transformers.modeling_utils All model checkpoint weights were used when initializing LayoutLMModel.
2021-05-10 10:56:50 INFO transformers.modeling_utils All the weights of LayoutLMModel were initialized from the model checkpoint at /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LayoutLMModel for predictions without further training.
2021-05-10 10:56:50 INFO transformers.modeling_utils All model checkpoint weights were used when initializing LayoutLMModel.
2021-05-10 10:56:50 INFO transformers.modeling_utils All the weights of LayoutLMModel were initialized from the model checkpoint at /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LayoutLMModel for predictions without further training.
2021-05-10 10:56:53 INFO nlp_ner_layoutlm.layoutlm.trainers training on cuda
2021-05-10 10:56:53 INFO nlp_ner_layoutlm.layoutlm.trainers training on cuda
2021-05-10 10:56:56 INFO transformers.training_args PyTorch: setting up devices
2021-05-10 10:56:56 INFO transformers.training_args PyTorch: setting up devices
[2021-05-10 10:56:56,398] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
[2021-05-10 10:56:56,429] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO Bootstrap : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO NET/Socket : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO Bootstrap : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO NET/Socket : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO Using network Socket
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 00/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 01/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Setting affinity for GPU 1 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Setting affinity for GPU 0 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 00 : 0[100000] -> 1[200000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Channel 00 : 1[200000] -> 0[100000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 01 : 0[100000] -> 1[200000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Channel 01 : 1[200000] -> 0[100000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO comm 0x7ff1ec001060 rank 0 nranks 2 cudaDev 0 busId 100000 - Init COMPLETE
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO Launch mode Parallel
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO comm 0x7ff5d8001060 rank 1 nranks 2 cudaDev 1 busId 200000 - Init COMPLETE
2021-05-10 10:56:59 INFO transformers.trainer Using amp fp16 backend
2021-05-10 10:56:59 INFO transformers.trainer Using amp fp16 backend
2021-05-10 10:56:59 INFO nlp_ner_layoutlm.layoutlm.utils_train Starting to train...
2021-05-10 10:56:59 INFO nlp_ner_layoutlm.layoutlm.utils_train Starting to train...
2021-05-10 10:56:59 INFO transformers.integrations Keeping the `fp16` config from nlp_ner_layoutlm/toplevel_configs/ds_config.json intact, ignoring any fp16-specific cl args
[2021-05-10 10:56:59,844] [WARNING] [config.py:79:_sanity_check] DeepSpeedConfig: cpu_offload is deprecated. Please use offload_optimizer.
[2021-05-10 10:56:59,891] [INFO] [utils.py:11:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2
2021-05-10 10:56:59 INFO transformers.integrations Keeping the `fp16` config from nlp_ner_layoutlm/toplevel_configs/ds_config.json intact, ignoring any fp16-specific cl args
[2021-05-10 10:56:59,932] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.16, git-hash=unknown, git-branch=unknown
[2021-05-10 10:56:59,932] [WARNING] [config.py:79:_sanity_check] DeepSpeedConfig: cpu_offload is deprecated. Please use offload_optimizer.
[2021-05-10 10:56:59,954] [INFO] [utils.py:11:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Channel 00/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Channel 01/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO Setting affinity for GPU 1 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Setting affinity for GPU 0 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Channel 00 : 0[100000] -> 1[200000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO Channel 00 : 1[200000] -> 0[100000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] include/shm.h:28 NCCL WARN Call to posix_fallocate failed : No space left on device
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO include/shm.h:41 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] include/shm.h:48 NCCL WARN Error while creating shared memory segment nccl-shm-recv-8f2537dafaac0775-1-0-1 (size 9637888)
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO transport/shm.cc:101 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO transport.cc:30 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO transport.cc:49 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO init.cc:766 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO init.cc:840 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO group.cc:73 -> 2 [Async thread]
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] include/shm.h:28 NCCL WARN Call to posix_fallocate failed : No space left on device
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO include/shm.h:41 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] include/shm.h:48 NCCL WARN Error while creating shared memory segment nccl-shm-recv-c43b846667d22574-1-1-0 (size 9637888)
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO transport/shm.cc:101 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO transport.cc:30 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO transport.cc:49 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO init.cc:766 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO init.cc:840 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO group.cc:73 -> 2 [Async thread]
2021-05-10 10:56:59 ERROR __main__ NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
2021-05-10 10:56:59 ERROR __main__ NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Killing subprocess 97
Killing subprocess 98
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/launcher/launch.py", line 171, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/launcher/launch.py", line 161, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/usr/local/lib/python3.8/dist-packages/deepspeed/launcher/launch.py", line 139, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', 'nlp_ner_layoutlm/train_pipeline/training_step/training_script.py', '--local_rank=1', '--local_example_folder', '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', '--model_dir', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', '--window_length', '512', '--batch_size', '8', '--weight_decay', '0.0', '--adam_epsilon', '1e-08', '--learning_rate', '2e-05', '--epochs', '200', '--seed', '11046060', '--bit_precision_fp16', '1', '--tagging_scheme', 'BILOU', '--profile_logs', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', '--patience', '50', '--gradient_accumulation_steps', '2', '--warmup_steps', '300', '--composite', '0', '--n_transformer_layers', '1', '--composite_loss_weight', '0.5', '--self_training', '0', '--base_model', '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface']' returned non-zero exit status 1.
2021-05-10 10:57:04 INFO common_utils.message_utils Loading initial config...
2021-05-10 10:57:04 INFO common_utils.message_utils Injecting secrets...
2021-05-10 10:57:05 INFO common_utils.message_utils Done injecting keyvault into config...
2021-05-10 10:57:05 DEBUG common_utils.kafka Initializing kafka producer
2021-05-10 10:57:05 DEBUG common_utils.message_utils Sending exception to Kafka...
2021-05-10 10:57:05 DEBUG common_utils.kafka Message delivered to dev-train-result [1] @ 228
2021-05-10 10:57:05 DEBUG common_utils.message_utils Exception sent to Kafka.
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/layoutlm_train_model.py", line 251, in <module>
run_step(
File "/app/common_utils/kubeflow_utils.py", line 261, in run_step
step_callback(**args.__dict__)
File "nlp_ner_layoutlm/train_pipeline/training_step/layoutlm_train_model.py", line 130, in train_and_save_layoutLM_model
raise InternalError(
common_utils.errors.InternalError: Something went wrong while training in distributed mode. Process finished with Exit Code 1
```
When I set the environment variable `NCCL_SHM_DISABLE=1`, this error doesn't happen, but DeepSpeed doesn't produce better results and I can't train with bigger batches.
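Since `posix_fallocate` fails while creating the NCCL shared-memory segment, and the closing comment above identifies this as a Docker shm limit, the workarounds look like the following (a hedged sketch; the size value is illustrative):

```bash
# Raise the container's /dev/shm limit (Docker's default is only 64MB)
docker run --shm-size=8g <image>
# or share the host's IPC namespace instead
docker run --ipc=host <image>
# fallback with a performance cost: disable NCCL's shared-memory transport
export NCCL_SHM_DISABLE=1
```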
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11658/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11657/comments | https://api.github.com/repos/huggingface/transformers/issues/11657/events | https://github.com/huggingface/transformers/issues/11657 | 883,919,773 | MDU6SXNzdWU4ODM5MTk3NzM= | 11,657 | Memory Leak in Deberta (v1) Base | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @alexvaca0, thank you for opening an issue! Is there a way for you to provide a collab with a reproducer so that we may take a look at the memory issue?\r\n\r\nRegarding the very bad performance, and your query that you hope the \"architecture itself is not wrongly coded\" - rest assured, the architecture was contributed by the author of the model. I believe DeBERTa has historically been hard to pretrain, as I've heard similar reports in the past. Pinging @BigBird01, the author of the model.\r\n\r\nPengcheng, do you have some tips regarding pretaining the DeBERTa model?\r\n\r\nI believe the original repository also contains code for model pretraining: https://github.com/microsoft/DeBERTa\r\nHave you taken a look at the pretraining script in that repository?",
"Yes. We already released our code for pre-training and fine-tuning(SiFT) in our public repo. Please take a look at it. By saying it's hard to pre-train, what do you refer to? Do you mean instability or accuracy of the model?\n\n\nThanks!\nPengcheng\n\n\nFrom: Lysandre Debut ***@***.***>\nSent: Monday, May 10, 2021 12:37 PM\nTo: huggingface/transformers ***@***.***>\nCc: Pengcheng He ***@***.***>; Mention ***@***.***>\nSubject: Re: [huggingface/transformers] Memory Leak in Deberta (v1) Base (#11657)\n\n\nHello @alexvaca0<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Falexvaca0&data=04%7C01%7CPengcheng.H%40microsoft.com%7C8c8ef4b53adf484a7a0d08d913eafcdf%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637562722260776631%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=rBueqZcT0kOvwb%2FjxKIs%2B0V2yyfdcWDo3lq2FwyeHOc%3D&reserved=0>, thank you for opening an issue! Is there a way for you to provide a collab with a reproducer so that we may take a look at the memory issue?\n\nRegarding the very bad performance, and your query that you hope the \"architecture itself is not wrongly coded\" - rest assured, the architecture was contributed by the author of the model. I believe DeBERTa has historically been hard to pretrain, as I've heard similar reports in the past. Pinging @BigBird01<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FBigBird01&data=04%7C01%7CPengcheng.H%40microsoft.com%7C8c8ef4b53adf484a7a0d08d913eafcdf%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637562722260786585%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=fUm4si06Lmfwyv4bWSHWzM%2FCJXPrJpR2E1q6nIuYDEk%3D&reserved=0>, the author of the model.\n\nPengcheng, do you have some tips regarding pretaining the DeBERTa model?\n\nI believe the original repository also contains code for model pretraining: https://github.com/microsoft/DeBERTa<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fmicrosoft%2FDeBERTa&data=04%7C01%7CPengcheng.H%40microsoft.com%7C8c8ef4b53adf484a7a0d08d913eafcdf%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637562722260786585%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=TZlKvOgHNmHzuE9orUxy%2BtIG8GGxxdq%2Bfl%2BBOiG8Ctk%3D&reserved=0>\nHave you taken a look at the pretraining script in that repository?\n\n-\nYou are receiving this because you were mentioned.\nReply to this email directly, view it on GitHub<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fhuggingface%2Ftransformers%2Fissues%2F11657%23issuecomment-837210289&data=04%7C01%7CPengcheng.H%40microsoft.com%7C8c8ef4b53adf484a7a0d08d913eafcdf%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637562722260796544%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=%2FuCIjvI9f%2Bbub6XqcvVLkH0Kdt3Pc83Z1u6X5t9jilw%3D&reserved=0>, or unsubscribe<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fnotifications%2Funsubscribe-auth%2FAJDNDRXY2CW5JCM7FBNM3MLTNAYVXANCNFSM44Q2XEOA&data=04%7C01%7CPengcheng.H%40microsoft.com%7C8c8ef4b53adf484a7a0d08d913eafcdf%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637562722260796544%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=Wbspih0vbzvWLFW7wgPzfCCh2i3Bka1%2Bj8bZpRBfvGo%3D&reserved=0>.\n",
"@BigBird01 @LysandreJik Hi, thanks for the quick response to both of you, I really appreciate your help :) \r\nCurrently I don't think I can find the time to prepare a reproducer, maybe if you have a script for training a model with several configurations in a loop or using Optuna with the hyperparameter search API from Trainer (it also happens there), you can just replace the model string you were using with microsoft/deberta-base. Using one of the example collabs from Transformers would also be useful, as you'd only have to replace the model name. https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb\r\n\r\nI'm glad to know that there are no mistakes in the implementation itself, and therefore the only issue to solve is this memory leak. \r\n\r\nI've taken a look at Deberta repository, but I don't find a pre-training script; where exactly can I find it?. However, in order not to waste all the money already spent in training the model, I think it'd be more appropriate to continue using Transformers code. I've followed all hyperparameters stated in the paper for Deberta-base for pre-training, these doesn't change in your pre-training script , do they? @BigBird01 \r\nAnother issue is that there is no SpanWholeWordMaskCollator in Transformers, therefore we are training with Whole Word Masking... do you think this will severely affect the model performance? On the other hand, if you have code for collating batches with Span Whole Word Masking, do you think it would be possible to put that in transformers data_collator.py code and continue training using that new collator? Or this may lead to divergence of the model? \r\n\r\nThank you again, and sorry about all the questions, I've many doubts regarding this subject.\r\n\r\nRegards,\r\n\r\nAlejandro",
"@BigBird01 other people have had issues with pretraining, this issue comes to mind: https://github.com/huggingface/transformers/issues/11689",
"@alexvaca0 The Transformers library is not intended to become a hsot for data collators specific to all possible tasks so we probably won't add this `SpanWholeWordMaskCollator`. You can however copy it in any of your script and use it.",
"@sgugger I don't think that collator is so rare, in fact many models such as SpanBERT, ALBERT and DEBERTA use this pre-training setup... ",
"Any updates regarding the memory leak? I'm still experiencing it...",
"Hi @alexvaca0, I am trying to reproduce the memory leak you mention but I do not manage to obtain it. Within a loop I create a new model, `TrainingArgument` and `Trainer`, start the training and look at the metrics.\r\n\r\nI also tried only running `trainer.train()` within the loop, and the second iteration gets a slight increase in GPU memory usage but it stabilizes right after.\r\n\r\nI've tried with the hyper-parameter search as well (using `optuna`) but have not managed to replicate the memory increase you mention.\r\n\r\nIf you have a script (even a large one as long as I can run it locally) or a notebook that reproduces the leak, I would happily take a look at it.",
"I could prepare one, as I cannot share my pre-trained deberta model with you... But I think we could replicate it the following way: retraining the deberta-base english model for some steps more, saving that checkpoint, and then using a hyperparameter search with optuna from that checkpoint, not the official deberta base checkpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I'm still experiencing this issue. For example, if you initialize a Trainer in google colab with deberta-base, and then try to change the trainer for using other model, the gpu memory used by deberta is not released, I mean, if the trainer with deberta used 16GB, when I try to change the trainer and set bert-base, for example, the object is not replaced. This brings me to the same conclusion I claimed above: there must be a memory leak in deberta code, it leaves objects in the gpu that cannot be released. @patrickvonplaten @LysandreJik @sgugger ",
"Hello @alexvaca0! As mentioned above, I have tried to reproduce but I have failed to do so. Would you happen to have a script handy so that we may take a look? You do not need to share your model if you do no wish to any DeBERTa model on the hub should do.\r\n\r\nThank you.",
"Could this be a fix to your issue? https://github.com/huggingface/transformers/pull/12718",
"Hi @LysandreJik , as soon as I can I'll try to re-install transformers from source and see if #12718 fixes my issue, although it seems to be related to cpu memory, not gpu memory; moreover, I didn't experience this with any other model but deberta-base, with BERT for example it worked smoothly. I'll also prepare a notebook for you to reproduce, as soon as the workload I have enables me to do so. Thanks! :) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,630 | 1,630 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0.dev0
- Platform: Linux-5.4.0-1047-aws-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@patrickvonplaten @LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...):
I am using DeBERTa-base. First, I pre-trained it on >630M texts in Spanish with a BPE tokenizer trained on the same corpus, which in total is 590M (I've performed more than one epoch), using MLM-WWM (masked language modeling with whole-word masking). Then I tried to use this model for fine-tuning, but I'm facing some issues.
First of all, DeBERTa is supposed to be much better than BERT and RoBERTa; however, I'm seeing very bad performance compared to the other Spanish model, dccuchile/bert-base-spanish-cased (BETO from now on), which supposedly has a weaker architecture and was trained only slightly more than my model. I've tried many different hyperparameters, following the recommendations in the DeBERTa paper, without improving results. For reference, on a problem where BETO achieves 0.97, I'm achieving 0.91 at best. Moreover, as I'm training many models for hyperparameter search (without using your hyperparameter search API), I see that with each new DeBERTa model the GPU memory usage increases, which doesn't happen with BETO. I think this is a memory leak in the implementation of DeBERTa, or at least in its token classification and sequence classification heads. I don't know whether this inefficient memory handling is related to the poor performance of the model. Could you please take a look at it?
I hope the architecture itself is not wrongly coded, because otherwise we've spent thousands of dollars training a Spanish model from scratch for nothing. Please let me know if I can give any further information that can help clear this up. I'm a little anxious over here because the results aren't as expected and because there are clear signs that the DeBERTa implementation has, at least, a memory management problem.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X] my own modified scripts: (give details below): My script consists of a loop for training different versions of my Spanish DeBERTa model on a dataset (each version is the same model with different hyperparameters).
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below): I've tried with PAWS-X, ConLL2002, Ehealth_kd, Muchocine. All these datasets were downloaded from the datasets library.
## To reproduce
Steps to reproduce the behavior:
1. Use the deberta-base model and fine-tune it on a given dataset (it doesn't matter which one)
2. Create a hyperparameter dictionary and get the list of hyperparameters for each run with list(sklearn.ParameterGrid(search_dic))
3. Train the model with the Trainer, using in each run the hyperparameters from the list above. As each model is trained, you will see GPU memory usage increase even after calling torch.cuda.empty_cache() (a minimal sketch of this loop follows).
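A minimal sketch of the loop described above (the checkpoint, grid values and dummy dataset are placeholders; only the pattern matters):

```python
# Hypothetical repro loop: GPU memory grows across runs with DeBERTa
import torch
from sklearn.model_selection import ParameterGrid
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
enc = tokenizer(["some text"] * 64, padding=True, truncation=True)
train_dataset = [{"input_ids": i, "attention_mask": a, "labels": 0}
                 for i, a in zip(enc["input_ids"], enc["attention_mask"])]  # dummy dataset

search_dic = {"learning_rate": [2e-5, 5e-5], "num_train_epochs": [1, 2]}
for params in list(ParameterGrid(search_dic)):
    model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-base")
    args = TrainingArguments(output_dir="out", per_device_train_batch_size=16, **params)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    del trainer, model
    torch.cuda.empty_cache()
    print(torch.cuda.memory_allocated())  # with DeBERTa this keeps growing; with BETO it stays flat
```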
## Expected behavior
It is expected, given the results reported in the DeBERTa paper, that DeBERTa-base works better than BERT-base (the architecture of BETO) with less training; therefore I wouldn't expect that, after training for almost as long as BETO, we get much worse results than it. It is also expected that after each run with the Trainer, after deleting the trainer from memory with del trainer and releasing GPU memory with torch.cuda.empty_cache(), GPU memory usage does not increase from run to run; with other model architectures it doesn't, but with DeBERTa it does. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11657/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11656/comments | https://api.github.com/repos/huggingface/transformers/issues/11656/events | https://github.com/huggingface/transformers/issues/11656 | 883,908,018 | MDU6SXNzdWU4ODM5MDgwMTg= | 11,656 | DISTILBERT: run_squad.py not working | {
"login": "fdlci",
"id": 73292708,
"node_id": "MDQ6VXNlcjczMjkyNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/73292708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdlci",
"html_url": "https://github.com/fdlci",
"followers_url": "https://api.github.com/users/fdlci/followers",
"following_url": "https://api.github.com/users/fdlci/following{/other_user}",
"gists_url": "https://api.github.com/users/fdlci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdlci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdlci/subscriptions",
"organizations_url": "https://api.github.com/users/fdlci/orgs",
"repos_url": "https://api.github.com/users/fdlci/repos",
"events_url": "https://api.github.com/users/fdlci/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdlci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This script is a legacy script that we will not be maintaining anymore. Have you tried with the `run_qa.py` script available [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering)? This script should be simpler to understand and more complete.\r\n\r\nLet us know if you run in any issues, thanks.",
"Thank You! I will try this script then!\r\n",
"Hi!\r\n\r\nI followed your advice and ran DistilBERT with run_qa.py on squad_v2 with the following arguments:\r\n\r\npython transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path distilbert-base-uncased --dataset_name squad_v2 --do_train --do_eval --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --max_seq_length 384 --doc_stride 128 --output_dir output_distilbert\r\n\r\nUnfortunately, for the evaluation I get the following error:\r\nValueError: max() arg is an empty sequence\r\n\r\nDid I forget to add one parameter? Thank you for your answer\r\n",
"Could you provide the full stack trace? Thank you!\r\n\r\n@sgugger ",
"Yes of course! Thank you\r\n\r\nTraceback (most recent call last):\r\n File \"transformers/examples/pytorch/question-answering/run_qa.py\", line 613, in <module>\r\n main()\r\n File \"transformers/examples/pytorch/question-answering/run_qa.py\", line 581, in main\r\n metrics = trainer.evaluate()\r\n File \"/home/ines/Ibert/transformers/examples/pytorch/question-answering/trainer_qa.py\", line 56, in evaluate\r\n metrics = self.compute_metrics(eval_preds)\r\n File \"transformers/examples/pytorch/question-answering/run_qa.py\", line 543, in compute_metrics\r\n return metric.compute(predictions=p.predictions, references=p.label_ids)\r\n File \"/home/ines/Ibert/venv_ibert/lib/python3.7/site-packages/datasets/metric.py\", line 402, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"/home/ines/.cache/huggingface/modules/datasets_modules/metrics/squad/513bf9facd7f12b0871a3d74c6999c866ce28196c9cdb151dcf934848655d77e/squad.py\", line 109, in _compute\r\n score = evaluate(dataset=dataset, predictions=pred_dict)\r\n File \"/home/ines/.cache/huggingface/modules/datasets_modules/metrics/squad/513bf9facd7f12b0871a3d74c6999c866ce28196c9cdb151dcf934848655d77e/evaluate.py\", line 67, in evaluate\r\n exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths)\r\n File \"/home/ines/.cache/huggingface/modules/datasets_modules/metrics/squad/513bf9facd7f12b0871a3d74c6999c866ce28196c9cdb151dcf934848655d77e/evaluate.py\", line 52, in metric_max_over_ground_truths\r\n return max(scores_for_ground_truths)\r\nValueError: max() arg is an empty sequence",
"Ah, since you're using the squad V2 dataset I believe you must also tell the script that it should understand examples that don't have an answer. For this, you can add the `--version_2_with_negative` argument when running your script.\r\n\r\nDoes that help?",
"Yes it does, I thought it was taken into accound with the name of the dataset, but I was wrong. Thank you!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,623 | 1,623 | NONE | null | Hi,
I am trying to use the transformers/examples/legacy/question-answering/run_squad.py script to train and evaluate DistilBERT on SQuAD 2.0. Unfortunately, it throws the following error:
**TypeError: forward() got an unexpected keyword argument 'token_type_ids'**
I used the following tokenizer: distilbert-base-uncased
Is there a way for me to fix this issue? Thank you for your reply! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11656/timeline | completed | null | null |
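As a hedged illustration of the mismatch in this record, the sketch below assumes the public `distilbert-base-uncased` checkpoint: DistilBERT's `forward()` accepts no `token_type_ids`, so feature dictionaries built for BERT-style models must drop that key before the call. The question/context strings are placeholders, not taken from the issue.

```python
import torch
from transformers import DistilBertForQuestionAnswering, DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")

# The fast DistilBERT tokenizer already omits token_type_ids; the pop() below
# is what BERT-built feature dicts (like the legacy script's) would need.
inputs = tokenizer("Who wrote it?", "The report was written by Ada.", return_tensors="pt")
inputs.pop("token_type_ids", None)  # no-op here, required for BERT-style features
with torch.no_grad():
    outputs = model(**inputs)  # start/end logits over the context tokens
```

The maintained `run_qa.py` script sidesteps this by building model inputs from the tokenizer's own output, which only contains the keys DistilBERT accepts.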
https://api.github.com/repos/huggingface/transformers/issues/11655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11655/comments | https://api.github.com/repos/huggingface/transformers/issues/11655/events | https://github.com/huggingface/transformers/pull/11655 | 883,818,743 | MDExOlB1bGxSZXF1ZXN0NjM3MjU4NzY5 | 11,655 | Fine-tuning the Entire RAG Architecture (including DPR retriever) | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Great, I think we are very close to merging this PR :-)\r\n> \r\n> Could we add a test in both `test_modeling_rag` and `test_retrieval_rag` ? We should test there that the model behaves correctly when `set_context_encoder_for_training` is set\r\n\r\non it!",
"Hello, I tried running the code in this pull request because the methodology is something I'm very interested in and I ran into a few issues. These may be things you are aware of but I just wanted to mention them in case you hadn't run into them.\r\n1. When launching the code in distributed mode (CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8) I received the error:\r\n\r\n handle = worker.core_worker.get_named_actor_handle(name)\r\n File \"python/ray/_raylet.pyx\", line 1496, in ray._raylet.CoreWorker.get_named_actor_handle\r\n File \"python/ray/_raylet.pyx\", line 157, in ray._raylet.check_status\r\nValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.\r\n\r\nBased on the traceback it seems to be related to line 733 in finetune_rag.py \r\nnamed_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]\r\nIf I run it with the normal python launch it goes fine however. This is with ray version 1.3.0, pytorch-lightning 1.2.10 and transformers version 4.7.0.dev0 which matches the requirements.txt file. This is an issue because I don't believe the sharded_ddp plugin works without the distributed launch.\r\n\r\n2. pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='val_em') not found in the returned metrics: ['val_loss', 'val_avg_loss', 'val_avg_em', 'val_avg_gen_time', 'val_avg_gen_len', 'step_count', 'loss']. HINT: Did you call self.log('val_em', value) in the LightningModule?\r\n\r\nThis happens at the first validation step after training epoch 0. I believe that the values being passed on line 421 in the finetune_rag.py script are not named correctly? The metrics all have \"_avg_\" in the name however the monitored metric doesn't seem to have that and is just \"val_em\".\r\n\r\n3. It seems that in some locations the ctx_encoder_tokenizer is a required keyword argument in some locations and not in others. I had change line 367 in retrieval_rag.py to: def __init__(self, config, question_encoder_tokenizer, ctx_encoder_tokenizer, generator_tokenizer, index=None, init_retrieval=True): adding the ctx_encoder_tokenizer otherwise it said it was missing the keyword argument.\r\n\r\n4. I had to change the line 528 in modeling_rag.py from \"self.context_encoder_training=False\" to \"self.context_encoder_training=True\" in order to get it to properly do the context_encoding. I could have messed something else up in the training to cause it to not properly set this to True when setting the context encoder but I couldn't get it to work without doing this (threw the error KeyError: 'tokenized_doc_ids')\r\n\r\n5. I had to add \"from transformers import PreTrainedTokenizer\" to retrieval_rag.py also because that doesn't seem to be imported anywhere in the file but is used in the file on line 552. This could be an issue with my transformer version but I still believe it would have to be in the import statements anyways no?\r\n\r\nAny or all of these could be issues with how I'm running it but I figured I'd bring them to your attention because these were all the things I had to change in order to get it to run. 
I can provide more info on any/all of these if you would like but I figured I would give you a list of things I ran into since it hasn't been merged yet so i'd imagine not many people have tried to run it end to end. Thanks for adding this feature though; it's definitely going to be a big upgrade for those of us who are using the model on different datasets and use cases.\r\n",
"@calderma \r\n\r\nYou have done an amazing exploration. I am really sorry :( that apart from the first issue, all other things being already fixed, but I did not push the latest version since I am conducting some experiments to update the README (original RAG vs Updated RAG). \r\n\r\n\r\n\r\nIn the latest version, I added a dummy training example which makes things a lot more clear. \r\n\r\nFor the first issue, I actually do not have a distributed system to test. I think it is something to do with PyTorch-lightning initialization. Try to add distributed parameters to the lightning trainer. **(Trainer(gpus=8, accelerator='ddp', num_nodes=4))**\r\n\r\n**named_actors = [ray.get_actor(\"retrieval_worker_{}\".format(i)) for i in range(args.num_retrieval_workers)]**\r\n\r\nDuring the initialization of RAY workers, we create them only on the master DDP process. So the above line is there to get the retrieval workers for other ddp processes (let's say cuda 1, 2, ..) that have already created during the master process [(check this line)](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L712). \r\n\r\nAs the error shows in your issue, what has happened is the initialization across the nodes somehow hasn't [shared](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L712). So obviously ray gives an error saying, it can't find an already initialized worker by its name. I think this has something to do with how pytorch-lightning execute multi-node training. So I would suggest you to follow their guidelines. Something as follows.\r\n\r\n\r\n\r\nPlease let me know if this works, So I can include it in the latest version. \r\n\r\n\r\n",
"Sure I should be able to try it tomorrow in the early afternoon. I will post in this thread afterwards.",
"> Sure I should be able to try it tomorrow in the early afternoon. I will post in this thread afterwards.\r\n\r\nSince I have only one HPC cluster, I just run the process with CUDA_VISIBLE_DEVICES-0,1,2,3... bash script. It works fine for me. Can you please elaborate a little bit on this statement \" I don't believe the sharded_ddp plugin works without the distributed launch.\" ?",
"Sure I was referencing the plugin for fairscale with pytorch lightning:\r\nhttps://medium.com/pytorch/pytorch-lightning-1-1-model-parallelism-training-and-more-logging-options-7d1e47db7b0b\r\nI was under the impression to use that plugin it had to be launched with the distributed pytorch launch but honestly I've never tried it with just launching it via python. When I trained on the old RAG model i used the distributed command and passed --plugins ddp_sharded but i suppose it might just work with regular python launching. I don't currently have the ability to test it until tomorrow though.",
"I believe I had to alter the original RAG code to work with pytorch lightning 1.1 as it was based on 1.0 but I needed to use fairscale to use a batch size larger than 1. Unfortunately I no longer have access to the repository I was pushing to at that time, which was a few months ago.",
"@calderma Exactly, I also think it is something to do with the launching of the script with multiple nodes. \r\n\r\nI am asking this because PL has changed a lot compared to the version that used in the original RAG. Especially plugins. In the upgraded RAG, I had to remove [this method with plugins](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag/finetune_rag.py#L84) and use Callbacks since PL at the moment is not that much encouraging us to use custom pluggings. \r\n\r\n\r\n",
"I'd imagine that the distributed launch isn't worth worrying about then but I can still test it tomorrow if you would like to see if it makes the ray workers behave.",
"Thanks a lot.",
"Hello again. I pulled your latest changes and tried running it again. I didn't get the distributed training to work with those changes but I'm by no means an expert at that so maybe someone better at distributed pytorch-lightning could take a look at it. I did notice a few other things with regards to training.\r\n1. It seems the --data_cache_dir option doesn't exist in the code anymore? looking through finetune_rag.py i didn't see it anywhere but when i looked back at previous commits it was there in the arguments.\r\n2. I had to manually create the kb_shards directory to get it to run. Could have easily been an issue on my end regarding permissions or something.\r\n3. I got the error:\r\n callbacks_rag.py\", line 44, in get_checkpoint_callback\r\n every_n_val_epochs=1, # maybe save a checkpoint every time val is run, not just end of epoch.\r\nTypeError: __init__() got an unexpected keyword argument 'every_n_val_epochs'\r\n\r\nThere was an issue with my cluster apparently that should be resolved soon so I'll try again but I don't believe it was related to these errors. ",
"@calderma \r\n\r\n1. I did swamp the data_cache_dir with cache_dir parameter given in lightning_base.py.\r\n2. Now you do not need to create kb-shards directory manually, check [this line, which creates them automatically](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L689)\r\n3. I think you need to update PL. Or you can run the code by [swapping those parameters with **period**](https://pytorch-lightning.readthedocs.io/en/latest/extensions/generated/pytorch_lightning.callbacks.ModelCheckpoint.html#pytorch_lightning.callbacks.ModelCheckpoint).\r\n\r\n\r\nFor the distributed running, I checked with PyTorch lightning examples. Seem like it depends on how you have constructed the cluster. Code-wise we do not need to change anything other than adding num_nodes and accelerator parameters to the trainer. \r\n**trainer = Trainer(gpus=8, num_nodes=4, accelerator='ddp')**, which I already did. [ See these examples, it seems pretty straightforward](https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster). \r\n\r\nNow you can also run the original [RAG with new PyTorch lightning.](https://github.com/huggingface/transformers/pull/11806)\r\n\r\n\r\n",
"great sounds good. The only thing that might need to be switched with regard to the first one is I believe the finetune_rag_ray_end2end.sh script still passes \"--data_cache_dir\". Regarding the PL version I was going by what was in the requirements file. my pip list shows pytorch-lightning == 1.2.10 which seems to be what's in the requirements. Thanks for the help with the distributed PL!",
"Omg yeah. My bad! Thanks a lot highlighting them. I will quickly change it\nand push it.\n\nOn Sat, May 22, 2021, 11:09 calderma ***@***.***> wrote:\n\n> great sounds good. The only thing that might need to be switched with\n> regard to the first one is I believe the finetune_rag_ray_end2end.sh script\n> still passes \"--data_cache_dir\". Regarding the PL version I was going by\n> what was in the requirements file. my pip list shows pytorch-lightning ==\n> 1.2.10 which seems to be what's in the requirements. Thanks for the help\n> with the distributed PL!\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/11655#issuecomment-846303671>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGT7TIRQFUPWMOCHBSTTO3R3DANCNFSM44QW2KSA>\n> .\n>\n",
"Great! I'm going to run it over the weekend so I will let you know if I hit any other roadblocks or if it finishes without issue.",
"\r\n\r\nHi Patrick and Quentin ,\r\n**I added the testing files in the test_run folder.** [test_rag_new_features.sh ](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/test_run/test_rag_new_features.sh#L6 )tests if the new functions are working and test_finetune.sh trains with a dummy dataset.\r\n\r\nAdditionally, We also did a small experiment with the SQuAD dataset using all context passages as the knowledge base. The increase in the EM-Scores was better than we expected. Users also can compare these two methods.\r\nCheers, @patrickvonplaten @lhoestq ",
"Just to close the loop on this, my test run of the end2end went smoothly and I had no additional roadblocks. Thanks!",
"@calderma , I had some problems with version control. So I created a revamp pull request .. can you just run in and let me know :) \r\nhttps://github.com/huggingface/transformers/pull/11893\r\n",
"sure i'll run it today",
"In this version I added the following:\r\n\r\n1. Added test functions as requested in [this comment](https://github.com/huggingface/transformers/pull/11655#pullrequestreview-658830866). \r\n2. Added results section to the README.\r\n\r\n",
"Closing in favor of #11893",
"@calderma \r\n\r\n\r\n> 1. When launching the code in distributed mode (CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8) I received the error:\r\n> handle = worker.core_worker.get_named_actor_handle(name)\r\n> File \"python/ray/_raylet.pyx\", line 1496, in ray._raylet.CoreWorker.get_named_actor_handle\r\n> File \"python/ray/_raylet.pyx\", line 157, in ray._raylet.check_status\r\n> ValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.\r\n> \r\n\r\n\r\n@calderma I think We found the problem when running RAG with RAY on distributed systems. \r\n\r\nIn some distributed systems, **os.environ[\"NODE_RANK\"]** is a string but not an integer. \r\n\r\nSo basically the if the condition can get mess up. So please upgrade the if condition inf finetune.py as follows:\r\n\r\n`\r\n if (\"LOCAL_RANK\" not in os.environ or os.environ[\"LOCAL_RANK\"] == 0) and ( \"NODE_RANK\" not in os.environ or int(os.environ[\"NODE_RANK\"] )== 0 ):`\r\n\r\nHope this solves the problem! \r\n"
] | 1,620 | 1,623 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
The original RAG implementation is able to end-to-end train the question encoder and the generator.
This extension enables the end-to-end training of RAG including the context encoder in the retriever component.
Please read the [accompanying blog post](https://shamanesiri.medium.com/how-to-finetune-the-entire-rag-architecture-including-dpr-retriever-4b4385322552) for details on this implementation.
The original RAG code has also been modified to work with the latest versions of PyTorch Lightning (version 1.2.10) and Ray (version 1.3.0). All other implementation details remain the same as the [original RAG code](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag).
Read more about RAG at https://arxiv.org/abs/2005.11401.
This code can be modified to experiment with other research on retrieval augmented models that include training of the retriever such as [REALM](https://arxiv.org/abs/2002.08909) and [MARGE](https://arxiv.org/abs/2006.15020).
Reviewers @patrickvonplaten @lhoestq | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11655/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11655",
"html_url": "https://github.com/huggingface/transformers/pull/11655",
"diff_url": "https://github.com/huggingface/transformers/pull/11655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11655.patch",
"merged_at": null
} |
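A sketch of the worker setup and rank guard from the final comment in this record: the `retrieval_worker_{i}` names and the `int()` casts follow the discussion, while the actor class, worker count, and `ray.init` arguments are illustrative stand-ins rather than the actual `finetune_rag.py` code.

```python
import os

import ray


@ray.remote
class RetrievalWorker:  # stand-in for the actual RAG retrieval actor
    def ping(self):
        return "ok"


num_retrieval_workers = 4  # illustrative value
ray.init(address="auto", ignore_reinit_error=True)  # connect to the running cluster

# os.environ values are strings on some clusters, hence the int() casts.
is_master = ("LOCAL_RANK" not in os.environ or int(os.environ["LOCAL_RANK"]) == 0) and (
    "NODE_RANK" not in os.environ or int(os.environ["NODE_RANK"]) == 0
)

if is_master:
    # Only the master DDP process creates the named actors ...
    workers = [
        RetrievalWorker.options(name=f"retrieval_worker_{i}").remote()
        for i in range(num_retrieval_workers)
    ]
else:
    # ... every other process looks up the already-created actors by name,
    # which is exactly the lookup that failed in the traceback above.
    workers = [ray.get_actor(f"retrieval_worker_{i}") for i in range(num_retrieval_workers)]
```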
https://api.github.com/repos/huggingface/transformers/issues/11654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11654/comments | https://api.github.com/repos/huggingface/transformers/issues/11654/events | https://github.com/huggingface/transformers/pull/11654 | 883,742,470 | MDExOlB1bGxSZXF1ZXN0NjM3MTkwMzYz | 11,654 | add bigbird-pegasus evaluation notebook | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Add bigbird-pegasus evaluation notebook
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11654/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11654",
"html_url": "https://github.com/huggingface/transformers/pull/11654",
"diff_url": "https://github.com/huggingface/transformers/pull/11654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11654.patch",
"merged_at": 1620636501000
} |
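For context, one hedged way such an evaluation pass could look: single-document summarization with the public `google/bigbird-pegasus-large-arxiv` checkpoint. The checkpoint name, input text, and generation settings here are assumptions for illustration, not the notebook's actual configuration.

```python
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

name = "google/bigbird-pegasus-large-arxiv"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = BigBirdPegasusForConditionalGeneration.from_pretrained(name)

article = "A long scientific article would go here ..."  # placeholder input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```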
https://api.github.com/repos/huggingface/transformers/issues/11653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11653/comments | https://api.github.com/repos/huggingface/transformers/issues/11653/events | https://github.com/huggingface/transformers/pull/11653 | 883,601,253 | MDExOlB1bGxSZXF1ZXN0NjM3MDYxMTcx | 11,653 | Add DETR | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik all comments are addressed, also added 2 community notebooks. PR is ready!"
] | 1,620 | 1,626 | 1,623 | CONTRIBUTOR | null | # What does this PR do?
It adds Facebook AI's DETR model (end-to-end object detection with Transformers). It's a clean PR based on #11506.
The 3 models are called `DetrModel`, `DetrForObjectDetection` and `DetrForSegmentation`. The latter was first called `DetrForPanopticSegmentation`, but as it can also be used to do only instance segmentation, I renamed it.
To do:
- [x] address remaining comments
- [x] fix remaining tests (there are still 2 tests failing for `test_modeling_detr.py`) - here I'd like some help
- [x] add remaining checkpoints to the hub
- [ ] add notebooks to showcase how to do inference/fine-tuning on custom data
- [x] perhaps also write more documentation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11653/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/11653/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11653",
"html_url": "https://github.com/huggingface/transformers/pull/11653",
"diff_url": "https://github.com/huggingface/transformers/pull/11653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11653.patch",
"merged_at": 1623253873000
} |
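An inference sketch for the classes named in this record; the `facebook/detr-resnet-50` checkpoint id and the COCO image URL are assumptions for illustration, not taken from the PR itself.

```python
import requests
from PIL import Image
from transformers import DetrFeatureExtractor, DetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample image
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# outputs.logits holds class scores per object query;
# outputs.pred_boxes holds the normalized predicted boxes.
```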
https://api.github.com/repos/huggingface/transformers/issues/11652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11652/comments | https://api.github.com/repos/huggingface/transformers/issues/11652/events | https://github.com/huggingface/transformers/issues/11652 | 883,540,331 | MDU6SXNzdWU4ODM1NDAzMzE= | 11,652 | [DOC] Fine-Tuning NER Custom Dataset Clarification | {
"login": "jorahn",
"id": 13120204,
"node_id": "MDQ6VXNlcjEzMTIwMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/13120204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorahn",
"html_url": "https://github.com/jorahn",
"followers_url": "https://api.github.com/users/jorahn/followers",
"following_url": "https://api.github.com/users/jorahn/following{/other_user}",
"gists_url": "https://api.github.com/users/jorahn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorahn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorahn/subscriptions",
"organizations_url": "https://api.github.com/users/jorahn/orgs",
"repos_url": "https://api.github.com/users/jorahn/repos",
"events_url": "https://api.github.com/users/jorahn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorahn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"When did you encounter that `ValueError`? Normally, if you pass text into the `__call__` method of a tokenizer with `truncation` set to `True`, you shouldn't encounter any errors, as sequences that are too long are truncated.",
"Hi Niels, thanks for your feedback. The Exception is raised when calling `encode_tags`. I'm following the tutorial code, just my dataset is not WNUT-17.\r\n> train_labels = encode_tags(train_tags, train_encodings)\r\n> val_labels = encode_tags(val_tags, val_encodings)",
"Update: Even when I ensure the number of tokens per sample is <= 512, I get ValueErrors from calling `encode_tags` on some samples. I'll try to understand this better or provide a demo.",
"Update: The function `tokenize_and_align_labels` from the [token classification example notebook](https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb) (cell 23) works fine on my data.",
"Having the same error on my custom dataset, with truncation set to true, with the Camembert tokenizer. \r\n`ValueError: NumPy boolean array indexing assignment cannot assign 1078 input values to the 319 output values where the mask is true`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Having the same issue, from the same tutorial, with a custom dataset. Any ideas on how to fix it?",
"@Famaral97 You may want to take a look here: https://github.com/huggingface/transformers/issues/11652#issuecomment-839496635 \r\nThis worked well for my dataset.",
"@jorahn Thanks for letting us know about the tokenize_and_align_labels() function. But when I follow the method mentioned in the notebook I'm getting an error with data collator saying:\r\nAttributeError: 'tokenizers.Encoding' object has no attribute 'keys'\r\n\r\n"
] | 1,620 | 1,660 | 1,625 | NONE | null | I'm following [this](https://huggingface.co/transformers/custom_datasets.html#tok-ner) guide for fine-tuning for NER with a custom dataset. I struggled with the example code for `def encode_tags()` until I realized that the tokens per sample are limited to 512 and my dataset exceeded this in some instances. This resulted in errors like this:
`ValueError: NumPy boolean array indexing assignment cannot assign 544 input values to the 464 output values where the mask is true`.
I currently assume the limit is due to the specific tokenizer. I'm using `tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased')` as in the example.
I'm proposing to add a clarification about the token limit per sample assumption like this:
https://github.com/huggingface/transformers/edit/master/docs/source/custom_datasets.rst
Line 365 and following:
> Let's write a function to do this. This is where we will use the ``offset_mapping`` from the tokenizer as mentioned
> above. For each sub-token returned by the tokenizer, the offset mapping gives us a tuple indicating the sub-token's
> start position and end position relative to the original token it was split from. That means that if the first position
> in the tuple is anything other than ``0``, we will set its corresponding label to ``-100``. While we're at it, we can
> also set labels to ``-100`` if the second position of the offset mapping is ``0``, since this means it must be a
> special token like ``[PAD]`` or ``[CLS]``.
And append: `Be aware that this example has an upper limit of 512 tokens per sample.`
Let me know your thoughts and I'll open a PR if you find this useful. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11652/timeline | completed | null | null |
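A sketch of the alignment approach from the token-classification notebook that resolved this record: it relies on `word_ids()` from a fast tokenizer rather than offset mappings, masking special tokens and continuation sub-tokens with `-100`, so truncation at 512 tokens can no longer desynchronize labels. The function signature and variable names are illustrative, not the notebook's exact code.

```python
from transformers import DistilBertTokenizerFast

# Requires a fast tokenizer so word_ids() is available.
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-cased")

def tokenize_and_align_labels(words, tags):
    # `words` is one pre-split sentence, `tags` one label id per word.
    encoding = tokenizer(words, is_split_into_words=True, truncation=True)
    labels, previous_word_id = [], None
    for word_id in encoding.word_ids():
        if word_id is None:                 # special tokens such as [CLS]/[SEP]
            labels.append(-100)
        elif word_id != previous_word_id:   # first sub-token of a word
            labels.append(tags[word_id])
        else:                               # continuation sub-token
            labels.append(-100)
        previous_word_id = word_id
    encoding["labels"] = labels
    return encoding
```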
https://api.github.com/repos/huggingface/transformers/issues/11651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11651/comments | https://api.github.com/repos/huggingface/transformers/issues/11651/events | https://github.com/huggingface/transformers/pull/11651 | 883,524,707 | MDExOlB1bGxSZXF1ZXN0NjM2OTkyMDg5 | 11,651 | BigBird on TPU | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, do you guys have any idea whether we can still train with `trainer` on TPU with BigBird? I am still facing this same error.\r\n\r\nIs there any tentative timeline by which this problem can be fixed?",
"BigBird will be merged in Flax soon. It is recommend to use `FlaxBigBird` for TPU ",
"That's great news! Till when can we expect it to be available?",
"It will get merged this week (may be in a day or two)."
] | 1,620 | 1,623 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
This PR enables BigBird to work on TPUs. The problem was happening because we were concatenating tensors of different dtypes; this PR fixes it.
See this notebook to infer BigBird on TPUs: https://colab.research.google.com/drive/1ptZlDuEgmoElWmPmrZXHA7uWjvra9G8T#scrollTo=sR2Yk-HzmGnw
Fixes #11363
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11651/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11651",
"html_url": "https://github.com/huggingface/transformers/pull/11651",
"diff_url": "https://github.com/huggingface/transformers/pull/11651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11651.patch",
"merged_at": 1620903090000
} |
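An illustrative reproduction of the dtype issue described in this record, not the actual model code: `torch.cat` over tensors of mixed dtypes is brittle and in particular fails to lower under XLA on TPU, so the fix is to cast to a common dtype before concatenating. The tensor names and shapes below are placeholders.

```python
import torch

attention_mask = torch.ones(2, 3, dtype=torch.float32)
token_ids = torch.arange(6, dtype=torch.int64).reshape(2, 3)

# Casting to a shared dtype first is the pattern the fix applies;
# concatenating float32 with int64 directly would error out.
merged = torch.cat([attention_mask, token_ids.to(attention_mask.dtype)], dim=-1)
print(merged.dtype, merged.shape)
```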
https://api.github.com/repos/huggingface/transformers/issues/11650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11650/comments | https://api.github.com/repos/huggingface/transformers/issues/11650/events | https://github.com/huggingface/transformers/pull/11650 | 882,742,203 | MDExOlB1bGxSZXF1ZXN0NjM2MjU4NjE5 | 11,650 | [Examples] Fix invalid links after reorg | {
"login": "oToToT",
"id": 8341564,
"node_id": "MDQ6VXNlcjgzNDE1NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8341564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oToToT",
"html_url": "https://github.com/oToToT",
"followers_url": "https://api.github.com/users/oToToT/followers",
"following_url": "https://api.github.com/users/oToToT/following{/other_user}",
"gists_url": "https://api.github.com/users/oToToT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oToToT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oToToT/subscriptions",
"organizations_url": "https://api.github.com/users/oToToT/orgs",
"repos_url": "https://api.github.com/users/oToToT/repos",
"events_url": "https://api.github.com/users/oToToT/events{/privacy}",
"received_events_url": "https://api.github.com/users/oToToT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,620 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
The links in some examples' READMEs were not updated after the examples reorg (#11350).
I simply ran `grep -R https://github.com/huggingface/transformers/blob/master/examples` in `examples` and fixed those invalid links.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger (author of #11350)
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11650/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11650",
"html_url": "https://github.com/huggingface/transformers/pull/11650",
"diff_url": "https://github.com/huggingface/transformers/pull/11650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11650.patch",
"merged_at": 1620625608000
} |
https://api.github.com/repos/huggingface/transformers/issues/11649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11649/comments | https://api.github.com/repos/huggingface/transformers/issues/11649/events | https://github.com/huggingface/transformers/issues/11649 | 882,591,491 | MDU6SXNzdWU4ODI1OTE0OTE= | 11,649 | Reformer inference widget is broken | {
"login": "hamelsmu",
"id": 1483922,
"node_id": "MDQ6VXNlcjE0ODM5MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1483922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamelsmu",
"html_url": "https://github.com/hamelsmu",
"followers_url": "https://api.github.com/users/hamelsmu/followers",
"following_url": "https://api.github.com/users/hamelsmu/following{/other_user}",
"gists_url": "https://api.github.com/users/hamelsmu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamelsmu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamelsmu/subscriptions",
"organizations_url": "https://api.github.com/users/hamelsmu/orgs",
"repos_url": "https://api.github.com/users/hamelsmu/repos",
"events_url": "https://api.github.com/users/hamelsmu/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamelsmu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah Reformer has no tokenizer so this doesn't work...sadly I also think since Reformer doesn't work super well, it's low priority to fix this (cc @Narsil )",
"Seems like the tokenizer fix is not that hard to make given the README (simply byte shift+2).\r\n\r\nHowever the code seems to refer to a spiece.model : https://github.com/huggingface/transformers/blob/master/src/transformers/models/reformer/tokenization_reformer.py#L37\r\n\r\nHow bad is reformer performing ? (I am also curious to know how bad are character level models)\r\n\r\nEdit: OK, this one is different because it is character level, but it's different from regular transformers that uses spiece, I see.",
"The character level Reformer model does quite well at text generation\r\n\r\nHere is a demo (from a while back) for generating Wikipedia like entries: \r\nhttps://colab.research.google.com/drive/1Oao8vBtDkz6v1E1efUhTlug5uuxC_BnE?usp=sharing",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,623 | 1,623 | CONTRIBUTOR | null | From: https://huggingface.co/google/reformer-enwik8?text=My+name+is+Julien+and+I+like+to

@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11649/timeline | completed | null | null |
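A sketch of the "byte shift +2" scheme mentioned in the comments above, following the encode/decode convention documented on the `google/reformer-enwik8` model card; treat the id-offset convention as that card's, not a general tokenizer API.

```python
import torch

def encode(text: str) -> torch.Tensor:
    # Each UTF-8 byte maps to (byte value + 2); ids 0 and 1 are reserved.
    return torch.tensor([[b + 2 for b in text.encode("utf-8")]])

def decode(ids) -> str:
    # Inverse mapping, skipping the reserved ids below 2.
    return bytes(i - 2 for i in ids if i > 1).decode("utf-8", errors="ignore")

print(decode(encode("My name is Julien and I like to")[0].tolist()))
```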
https://api.github.com/repos/huggingface/transformers/issues/11648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11648/comments | https://api.github.com/repos/huggingface/transformers/issues/11648/events | https://github.com/huggingface/transformers/issues/11648 | 882,565,132 | MDU6SXNzdWU4ODI1NjUxMzI= | 11,648 | Very large difference between the results after resume | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please do not open a duplicate issue, you can reopen the old one.",
"Dear Sylvain\r\nIf huggingFace team closes an issue, the user is not able to reopen it, at least on my side, I also attached the screenshot of it. \r\n\r\nhttps://ibb.co/56S00LT\r\n\r\nThank you. ",
"Your screenshot does not show the bottom of the dialog box where there should be the button \"Reopen\" normally.",
"And if not, please ask us to reopen the issue on the issue discussion, do not open a duplicate :-)",
"there is really no reopen on the users sides. Sure I will make sure to ask and I will avoid recreate the issue, thank you very much for the remark :) I pay attention"
] | 1,620 | 1,620 | 1,620 | NONE | null | Dear HuggingFace team,
There is unfortunately no option for me to reopen bugs; the issue I reported here [1] still exists when testing the latest version of transformers. I added my comments on the same ticket. Could you kindly reopen this bug?
The variations are very high after resuming, which makes the results unusable when resuming from a checkpoint. I also tried to make things deterministic in torch, but that could not solve the issue either. I study at an institution where I only have access to GPUs for short hours, and I would very much appreciate your help in making training reproducible after resuming from checkpoints in the Trainer.
@sgugger
[1] https://github.com/huggingface/transformers/issues/11323 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11648/timeline | completed | null | null |
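One common recipe for the determinism attempts mentioned in this record, as a hedged sketch: it covers the usual PyTorch/NumPy/Python seeds and cuDNN flags, but bit-exact resume also depends on the random-number-generator states being saved and restored with the checkpoint, which later versions of the Trainer handle themselves.

```python
import os
import random

import numpy as np
import torch

def set_full_determinism(seed: int = 42) -> None:
    # Seed every RNG the training loop is likely to touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Force deterministic cuDNN kernels (slower, but reproducible).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    os.environ["PYTHONHASHSEED"] = str(seed)
```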
https://api.github.com/repos/huggingface/transformers/issues/11647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11647/comments | https://api.github.com/repos/huggingface/transformers/issues/11647/events | https://github.com/huggingface/transformers/issues/11647 | 882,062,899 | MDU6SXNzdWU4ODIwNjI4OTk= | 11,647 | Key Error: 'pre-processing' during conversion from tatoeba to Marian model | {
"login": "velocityCavalry",
"id": 35786257,
"node_id": "MDQ6VXNlcjM1Nzg2MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/35786257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/velocityCavalry",
"html_url": "https://github.com/velocityCavalry",
"followers_url": "https://api.github.com/users/velocityCavalry/followers",
"following_url": "https://api.github.com/users/velocityCavalry/following{/other_user}",
"gists_url": "https://api.github.com/users/velocityCavalry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/velocityCavalry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/velocityCavalry/subscriptions",
"organizations_url": "https://api.github.com/users/velocityCavalry/orgs",
"repos_url": "https://api.github.com/users/velocityCavalry/repos",
"events_url": "https://api.github.com/users/velocityCavalry/events{/privacy}",
"received_events_url": "https://api.github.com/users/velocityCavalry/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"unstale",
"@patil-suraj - It would be really nice if we could tackle the tatoeba models at some point...\r\n\r\nThis seems to be related: https://github.com/huggingface/transformers/pull/12192\r\nhttps://github.com/huggingface/transformers/issues/10943"
] | 1,620 | 1,632 | null | NONE | null | ## Environment info
- `transformers` version: `4.6.0.dev0`
- Platform: `CentOS Linux release 7.7.1908 (Core)`
- Python version: `3.8.5`
- PyTorch version: `1.8.1 + cuda 10.2`
- Tensorflow version: N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Marian: @patrickvonplaten , @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Marian
The problem arises when using:
* [x] the official example scripts: tatoeba to marian model script
* [ ] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: machine translation
* [ ] my own task or dataset
## To reproduce
Following the script from [scripts/tatoeba/README.md](https://github.com/huggingface/transformers/tree/master/scripts/tatoeba):
1.
```
git clone git@github.com:huggingface/transformers.git
cd transformers
pip install -e .
pip install pandas GitPython wget
```
2.
```
curl https://cdn-datasets.huggingface.co/language_codes/language-codes-3b2.csv > language-codes-3b2.csv
curl https://cdn-datasets.huggingface.co/language_codes/iso-639-3.csv > iso-639-3.csv
```
3. `git clone git@github.com:Helsinki-NLP/Tatoeba-Challenge.git`
4. `python src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models kor-eng eng-kor --save_dir converted/`
Error message:
```
Traceback (most recent call last):
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 1267, in <module>
resolver = TatoebaConverter(save_dir=args.save_dir)
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 58, in __init__
reg = self.make_tatoeba_registry()
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 258, in make_tatoeba_registry
return [(k, v["pre-processing"], v["download"], v["download"][:-4] + ".test.txt") for k, v in results.items()]
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 258, in <listcomp>
return [(k, v["pre-processing"], v["download"], v["download"][:-4] + ".test.txt") for k, v in results.items()]
KeyError: 'pre-processing'
```
## Expected behavior
Conversion of the model from Tatoeba to Marian for the chosen language pair with no errors.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11647/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11647/timeline | null | null | null |
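A defensive variant of the registry construction from the traceback in this record, sketched under the assumption that some Tatoeba metadata entries simply lack the expected keys: it skips those entries instead of raising `KeyError`. The field names follow the traceback; the function shape is illustrative, not the converter's actual code.

```python
def make_tatoeba_registry(results):
    registry = []
    for name, meta in results.items():
        if "pre-processing" not in meta or "download" not in meta:
            continue  # upstream results format changed for this entry
        registry.append(
            (name, meta["pre-processing"], meta["download"], meta["download"][:-4] + ".test.txt")
        )
    return registry
```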
https://api.github.com/repos/huggingface/transformers/issues/11646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11646/comments | https://api.github.com/repos/huggingface/transformers/issues/11646/events | https://github.com/huggingface/transformers/issues/11646 | 881,776,145 | MDU6SXNzdWU4ODE3NzYxNDU= | 11,646 | Strange implementation of `convert_tokens_to_string` in albert tokenizer. | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"Indeed, you're probably right! When updating the ALBERT tokenizer to use the `sentencepiece.decode` instead of the manual handling - do all tests pass? Even the integration test?\r\n\r\nMakes me think we really should have integration tests for all tokenizers, as scenarios like this one are bound to happen.",
"Well yes. While \"adding subword regularization in more tokenizers\": #11417\r\nI recognized that the tokenizers could benefit from some bigger refactoring.\r\nPulling commen functions into a base class would be nice. And while doing this adding tests....\r\nThere is lot of duplicate code there...\r\n\r\nI might do this as a PR the next days (weeks) - we will see.",
"PR with a fix started: #11716",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am still working on this...",
"Fixed in #11716 closing here."
] | 1,620 | 1,626 | 1,626 | CONTRIBUTOR | null | Hi,
the albert tokenizer implements the `convert_tokens_to_string` function:
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/albert/tokenization_albert.py#L222-L223
While the DeBERTa v2 and some other tokenizers just delegate this to the sentencepiece tokenizer:
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L146
IMO it would be better to always delegate to the sentencepiece tokenizer. What do you think?
## PS:
Some more examples here
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/barthez/tokenization_barthez.py#L251-L252
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/camembert/tokenization_camembert.py#L251-L252
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/m2m_100/tokenization_m2m_100.py#L187-L188
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/mbart/tokenization_mbart50.py#L208-L209
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/speech_to_text/tokenization_speech_to_text.py#L169-L173
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L264-L265 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11646/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11645/comments | https://api.github.com/repos/huggingface/transformers/issues/11645/events | https://github.com/huggingface/transformers/issues/11645 | 880,765,948 | MDU6SXNzdWU4ODA3NjU5NDg= | 11,645 | Bad result in fine-tuning XLNet for SQuAD | {
"login": "Timothy023",
"id": 34128356,
"node_id": "MDQ6VXNlcjM0MTI4MzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/34128356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Timothy023",
"html_url": "https://github.com/Timothy023",
"followers_url": "https://api.github.com/users/Timothy023/followers",
"following_url": "https://api.github.com/users/Timothy023/following{/other_user}",
"gists_url": "https://api.github.com/users/Timothy023/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Timothy023/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Timothy023/subscriptions",
"organizations_url": "https://api.github.com/users/Timothy023/orgs",
"repos_url": "https://api.github.com/users/Timothy023/repos",
"events_url": "https://api.github.com/users/Timothy023/events{/privacy}",
"received_events_url": "https://api.github.com/users/Timothy023/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Timothy023 \r\n\r\nPlease use the [forum](https://discuss.huggingface.co/) to post such questions. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,620 | 1,623 | 1,623 | NONE | null | Hello,
I'm fine-tuning XLNet on the SQuAD v1.1 task, but I get a bad result. I got the XLNet checkpoint from the [model hub](https://huggingface.co/xlnet-base-cased).
GPU: a single GeForce RTX 3090 (24 GB)
Running script:
`CUDA_VISIBLE_DEVICES=4 python ./examples/pytorch/question-answering/run_qa.py --model_name_or_path ../pretrained_model/xlnet_base --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --overwrite_output_dir --output_dir ../squad/xlnet_base`
result:
```json
{
  "epoch": 2.0,
  "eval_samples": 10848,
  "exact_match": 12.639545884578997,
  "f1": 14.638577161480404,
  "train_runtime": 7179.3726,
  "train_samples": 88835,
  "train_samples_per_second": 2.062
}
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11645/timeline | completed | null | null |