url (stringlengths 62-66) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses, 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k, ⌀) | author_association (stringclasses, 4 values) | active_lock_reason (stringclasses, 2 values) | body (stringlengths 0-234k, ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses, 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/12444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12444/comments | https://api.github.com/repos/huggingface/transformers/issues/12444/events | https://github.com/huggingface/transformers/issues/12444 | 933,902,336 | MDU6SXNzdWU5MzM5MDIzMzY= | 12,444 | Cannot load model saved with AutoModelForMaskedLM.from_pretrained if state_dict = True | {
"login": "q-clint",
"id": 77411256,
"node_id": "MDQ6VXNlcjc3NDExMjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/77411256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/q-clint",
"html_url": "https://github.com/q-clint",
"followers_url": "https://api.github.com/users/q-clint/followers",
"following_url": "https://api.github.com/users/q-clint/following{/other_user}",
"gists_url": "https://api.github.com/users/q-clint/gists{/gist_id}",
"starred_url": "https://api.github.com/users/q-clint/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/q-clint/subscriptions",
"organizations_url": "https://api.github.com/users/q-clint/orgs",
"repos_url": "https://api.github.com/users/q-clint/repos",
"events_url": "https://api.github.com/users/q-clint/events{/privacy}",
"received_events_url": "https://api.github.com/users/q-clint/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! You can find the documentation regarding `from_pretrained` here: https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained\r\n\r\nNamely, regarding the state dict:\r\n```\r\nstate_dict (Dict[str, torch.Tensor], optional) –\r\n A state dictionary to use instead of a state dictionary loaded from saved weights file.\r\n\r\n This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case \r\n though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.\r\n```\r\nIt accepts a state dict, not a boolean.\r\n\r\nWhat are you trying to do by passing it `state_dict=True`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,625 | 1,628 | 1,628 | NONE | null | Hello,
It seems that setting the `state_dict=True` flag in a model's `.save_pretrained` method breaks the load process for `AutoModelForMaskedLM.from_pretrained`.
The following code works:
```
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained('distilroberta-base')
model.save_pretrained(
save_directory = './deleteme',
save_config = True,
#state_dict = True,
push_to_hub = False,
)
model = AutoModelForMaskedLM.from_pretrained('./deleteme')
```
However, the following code does not:
```
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained('distilroberta-base')
model.save_pretrained(
save_directory = './deleteme',
save_config = True,
state_dict = True,
push_to_hub = False,
)
model = AutoModelForMaskedLM.from_pretrained('./deleteme')
```
with error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-17-3ab36ee4e5d3> in <module>
8 push_to_hub = False,
9 )
---> 10 model = AutoModelForMaskedLM.from_pretrained('./deleteme')
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
393 if type(config) in cls._model_mapping.keys():
394 model_class = _get_model_class(config, cls._model_mapping)
--> 395 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
396 raise ValueError(
397 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1217 )
1218
-> 1219 model, missing_keys, unexpected_keys, error_msgs = cls._load_state_dict_into_model(
1220 model, state_dict, pretrained_model_name_or_path, _fast_init=_fast_init
1221 )
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/transformers/modeling_utils.py in _load_state_dict_into_model(cls, model, state_dict, pretrained_model_name_or_path, _fast_init)
1243 old_keys = []
1244 new_keys = []
-> 1245 for key in state_dict.keys():
1246 new_key = None
1247 if "gamma" in key:
AttributeError: 'bool' object has no attribute 'keys'
```
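The traceback shows `from_pretrained` iterating over `state_dict.keys()`, i.e. the boolean ends up being treated as a weights dictionary. For reference, a minimal sketch of what the argument is meant to receive (illustrative only, not taken from the issue itself):
```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained('distilroberta-base')

# `state_dict` expects an actual dict of tensors rather than a flag; passing the
# model's own state dict is equivalent to the default behaviour and reloads cleanly.
model.save_pretrained(
    save_directory = './deleteme',
    save_config = True,
    state_dict = model.state_dict(),
    push_to_hub = False,
)
model = AutoModelForMaskedLM.from_pretrained('./deleteme')
```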
Environment:
```
>>> transformers.__version__
'4.8.2'
>>> torch.__version__
'1.8.0+cu111'
```
Fixes tried (but same error):
- tried with transformers 4.8.1
- tried with absolute paths (rather than relative)
- tried also saving configuration with:
```
from transformers import AutoConfig
config = AutoConfig.from_pretrained('distilroberta-base')
config.save_pretrained('./deleteme')
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12444/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12443/comments | https://api.github.com/repos/huggingface/transformers/issues/12443/events | https://github.com/huggingface/transformers/issues/12443 | 933,889,837 | MDU6SXNzdWU5MzM4ODk4Mzc= | 12,443 | [Wav2Vec2] Better names for internal classes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"This also concerns a badly chosen duplicate of `Wav2Vec2FeatureExtractor` as part of `modeling_wav2vec2.py` since it's also the \"FeatureExtractor\" in `feature_extraction_wav2Vec2.py` -> should be corrected as well.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,625 | 1,629 | null | MEMBER | null | Wav2Vec2's classes have overly long names, *e.g.* `FlaxWav2Vec2EncoderLayerStableLayerNormCollection`.
We should make those names simpler (reminder for myself @patrickvonplaten to do this). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12443/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12443/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12442/comments | https://api.github.com/repos/huggingface/transformers/issues/12442/events | https://github.com/huggingface/transformers/pull/12442 | 933,782,775 | MDExOlB1bGxSZXF1ZXN0NjgwOTM5NjI3 | 12,442 | Add to talks section | {
"login": "suzana-ilic",
"id": 27798583,
"node_id": "MDQ6VXNlcjI3Nzk4NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/27798583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzana-ilic",
"html_url": "https://github.com/suzana-ilic",
"followers_url": "https://api.github.com/users/suzana-ilic/followers",
"following_url": "https://api.github.com/users/suzana-ilic/following{/other_user}",
"gists_url": "https://api.github.com/users/suzana-ilic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzana-ilic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzana-ilic/subscriptions",
"organizations_url": "https://api.github.com/users/suzana-ilic/orgs",
"repos_url": "https://api.github.com/users/suzana-ilic/repos",
"events_url": "https://api.github.com/users/suzana-ilic/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzana-ilic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | Added DeepMind details and switched the time slots for Ben Wang and DeepMind | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12442/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12442",
"html_url": "https://github.com/huggingface/transformers/pull/12442",
"diff_url": "https://github.com/huggingface/transformers/pull/12442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12442.patch",
"merged_at": 1625065083000
} |
https://api.github.com/repos/huggingface/transformers/issues/12441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12441/comments | https://api.github.com/repos/huggingface/transformers/issues/12441/events | https://github.com/huggingface/transformers/pull/12441 | 933,695,474 | MDExOlB1bGxSZXF1ZXN0NjgwODY0OTg5 | 12,441 | Add template for adding flax models | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's mostly done. I can successfully create a new model with flax. See #12454\r\nTODO:\r\n* Finish modeling_flax for encoder-decoder architecture\r\n* Tests are mapped from tf to np, need fixes.\r\n* Refactor coockie-cutter workflow, make flax blend in with pytorch, tensorflow.\r\n* ??",
"I think only the test for template is failing now, can someone take a quick look? Thanks!\r\n@patil-suraj @patrickvonplaten @LysandreJik ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Maybe I should rebase and open another PR?",
"Sorry for being so late on this one! I'll take a look tomorrow!",
"yaaaay - finally all the tests are green! @cccntu - amazing work. I've done a lot of small fixes that are quite time-consuming and tedious to make everything pass, but the main work was all done by you - thanks a mille!",
"Thank you very much @patrickvonplaten! 🤗\r\nThis is my biggest open source contribution so far, so I really appreciate you help fixing so many stuff and writing the test! (I merely copied the tf version and did some simple substitution.)\r\n\r\nI am curious how do you run the template tests? It was confusing to run them locally because the it would overwrite the tracked files and leave unused files, making the work tree messy. I guess `commit -> test -> clean, reset --hard` would work, but I always fear that I would accidentally delete the wrong thing."
] | 1,625 | 1,630 | 1,630 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12440
## From Patrick
@LysandreJik @sgugger Previously, both the PT and TF encoder-decoder template tests weren't run because the test name didn't match the "*template*" regex - I've fixed that here, as pointed out in the comment below.
@cccntu added both FlaxBERT and FlaxBART templates and we've added two tests to make sure those work as expected.
The PR should be good for review now :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12441/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12441/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12441",
"html_url": "https://github.com/huggingface/transformers/pull/12441",
"diff_url": "https://github.com/huggingface/transformers/pull/12441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12441.patch",
"merged_at": 1630482544000
} |
https://api.github.com/repos/huggingface/transformers/issues/12440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12440/comments | https://api.github.com/repos/huggingface/transformers/issues/12440/events | https://github.com/huggingface/transformers/issues/12440 | 933,666,639 | MDU6SXNzdWU5MzM2NjY2Mzk= | 12,440 | cookiecutter template for adding flax model | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,625 | 1,630 | 1,630 | CONTRIBUTOR | null | # 🚀 Feature request
Add cookiecutter template for adding flax model.
## Motivation
There is no cookiecutter template for adding a flax model.
## Your contribution
I am trying to add a flax model (#12411), and I think it's a good opportunity to create a template at the same time. Can I work on this? Any suggestions?
@patil-suraj @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12440/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12440/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12439/comments | https://api.github.com/repos/huggingface/transformers/issues/12439/events | https://github.com/huggingface/transformers/issues/12439 | 933,657,370 | MDU6SXNzdWU5MzM2NTczNzA= | 12,439 | Expand text-generation pipeline support for other causal models e.g., BigBirdForCausalLM | {
"login": "AliOskooeiTR",
"id": 60223746,
"node_id": "MDQ6VXNlcjYwMjIzNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/60223746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AliOskooeiTR",
"html_url": "https://github.com/AliOskooeiTR",
"followers_url": "https://api.github.com/users/AliOskooeiTR/followers",
"following_url": "https://api.github.com/users/AliOskooeiTR/following{/other_user}",
"gists_url": "https://api.github.com/users/AliOskooeiTR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AliOskooeiTR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AliOskooeiTR/subscriptions",
"organizations_url": "https://api.github.com/users/AliOskooeiTR/orgs",
"repos_url": "https://api.github.com/users/AliOskooeiTR/repos",
"events_url": "https://api.github.com/users/AliOskooeiTR/events{/privacy}",
"received_events_url": "https://api.github.com/users/AliOskooeiTR/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,625 | 1,625 | null | NONE | null | # 🚀 Feature request
Tried using the text-generation pipeline (TextGenerationPipeline) with BigBirdForCausalLM, but it seems the pipeline currently only supports a limited number of models. Is there a reason for this? Is there a workaround short of implementing the pipeline myself? Thank you.
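One possible stopgap, sketched here under the assumption that a causal BigBird checkpoint is available (the checkpoint name below is purely illustrative), is to skip the pipeline and call `generate()` on the model directly:
```python
from transformers import AutoTokenizer, BigBirdForCausalLM

# Illustrative checkpoint; any BigBird checkpoint loaded as a decoder should behave similarly.
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForCausalLM.from_pretrained("google/bigbird-roberta-base", is_decoder=True)

inputs = tokenizer("The meaning of life is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```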
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12439/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12438/comments | https://api.github.com/repos/huggingface/transformers/issues/12438/events | https://github.com/huggingface/transformers/issues/12438 | 933,649,789 | MDU6SXNzdWU5MzM2NDk3ODk= | 12,438 | IndexError: index out of bound, MLM+XLA (pre-training) | {
"login": "neel04",
"id": 11617870,
"node_id": "MDQ6VXNlcjExNjE3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neel04",
"html_url": "https://github.com/neel04",
"followers_url": "https://api.github.com/users/neel04/followers",
"following_url": "https://api.github.com/users/neel04/following{/other_user}",
"gists_url": "https://api.github.com/users/neel04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neel04/subscriptions",
"organizations_url": "https://api.github.com/users/neel04/orgs",
"repos_url": "https://api.github.com/users/neel04/repos",
"events_url": "https://api.github.com/users/neel04/events{/privacy}",
"received_events_url": "https://api.github.com/users/neel04/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @lhoestq has an idea for the error in `datasets`",
"@lhoestq Any possible leads as to who can solve this bug?",
"This is the full traceback BTW, If it may help things going. I am also willing to create a reproducible Colab if you guys want:-\r\n````js\r\n06/28/2021 17:23:13 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False\r\n06/28/2021 17:23:13 - WARNING - datasets.builder - Using custom data configuration default-e8bc7b301aa1b353\r\n06/28/2021 17:23:13 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)\r\nDownloading and preparing dataset text/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5...\r\nDataset text downloaded and prepared to /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5. Subsequent calls will reuse this data.\r\n\r\nWARNING:root:TPU has started up successfully with version pytorch-1.9\r\nWARNING:root:TPU has started up successfully with version pytorch-1.9\r\nWARNING:run_mlm:Process rank: -1, device: xla:1, n_gpu: 0distributed training: False, 16-bits training: False\r\nINFO:run_mlm:Training/evaluation parameters TrainingArguments(\r\n_n_gpu=0,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.98,\r\nadam_epsilon=1e-08,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_find_unused_parameters=None,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=True,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=10,\r\neval_steps=50,\r\nevaluation_strategy=IntervalStrategy.STEPS,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\ngradient_accumulation_steps=1,\r\ngreater_is_better=True,\r\ngroup_by_length=False,\r\nignore_data_skip=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=0.0003,\r\nlength_column_name=length,\r\nload_best_model_at_end=True,\r\nlocal_rank=-1,\r\nlog_level=-1,\r\nlog_level_replica=-1,\r\nlog_on_each_node=True,\r\nlogging_dir=./logs,\r\nlogging_first_step=False,\r\nlogging_steps=50,\r\nlogging_strategy=IntervalStrategy.STEPS,\r\nlr_scheduler_type=SchedulerType.LINEAR,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=validation,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=5.0,\r\noutput_dir=./results,\r\noverwrite_output_dir=True,\r\npast_index=-1,\r\nper_device_eval_batch_size=1,\r\nper_device_train_batch_size=1,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=results,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=None,\r\nremove_unused_columns=True,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=./results,\r\nsave_steps=500,\r\nsave_strategy=IntervalStrategy.EPOCH,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=8,\r\nuse_legacy_prediction_loop=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=1000,\r\nweight_decay=0.01,\r\n)\r\nWARNING:datasets.builder:Using custom data configuration default-e8bc7b301aa1b353\r\nINFO:datasets.utils.filelock:Lock 139795201622480 acquired on 
/root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock\r\nINFO:datasets.utils.filelock:Lock 139795201622480 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock\r\nINFO:datasets.utils.filelock:Lock 139795201622864 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock\r\nINFO:datasets.builder:Generating dataset text (/root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)\r\n100%|██████████| 2/2 [00:00<00:00, 2330.17it/s]\r\nINFO:datasets.utils.download_manager:Downloading took 0.0 min\r\nINFO:datasets.utils.download_manager:Checksum Computation took 0.0 min\r\n100%|██████████| 2/2 [00:00<00:00, 920.91it/s]\r\nINFO:datasets.utils.info_utils:Unable to verify checksums.\r\nINFO:datasets.builder:Generating split train\r\nINFO:datasets.arrow_writer:Done writing 8 examples in 172 bytes /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete/text-train.arrow.\r\nINFO:datasets.builder:Generating split validation\r\nINFO:datasets.arrow_writer:Done writing 8 examples in 172 bytes /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete/text-validation.arrow.\r\nINFO:datasets.utils.info_utils:Unable to verify splits sizes.\r\nINFO:datasets.utils.filelock:Lock 139795201625808 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete.lock\r\nINFO:datasets.utils.filelock:Lock 139795201625808 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete.lock\r\nINFO:datasets.utils.filelock:Lock 139795201622864 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock\r\nINFO:datasets.builder:Constructing Dataset for split train, validation, from /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5\r\n100%|██████████| 2/2 [00:00<00:00, 458.74it/s]\r\n[INFO|configuration_utils.py:528] 2021-06-28 17:23:13,619 >> loading configuration file ./config/config.json\r\n[INFO|configuration_utils.py:566] 2021-06-28 17:23:13,619 >> Model config BigBirdConfig {\r\n \"architectures\": [\r\n \"BigBirdForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"attention_type\": \"block_sparse\",\r\n \"block_size\": 64,\r\n \"bos_token_id\": 1,\r\n \"eos_token_id\": 2,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu_new\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 16000,\r\n \"model_type\": \"big_bird\",\r\n \"num_attention_heads\": 4,\r\n 
\"num_hidden_layers\": 4,\r\n \"num_random_blocks\": 3,\r\n \"pad_token_id\": 0,\r\n \"rescale_embeddings\": false,\r\n \"sep_token_id\": 66,\r\n \"transformers_version\": \"4.9.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_bias\": true,\r\n \"use_cache\": true,\r\n \"vocab_size\": 40000\r\n}\r\n\r\n[INFO|tokenization_utils_base.py:1651] 2021-06-28 17:23:13,620 >> Didn't find file ./tokenizer/spiece.model. We won't load it.\r\n[INFO|tokenization_utils_base.py:1651] 2021-06-28 17:23:13,620 >> Didn't find file ./tokenizer/added_tokens.json. We won't load it.\r\n[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file None\r\n[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file ./tokenizer/tokenizer.json\r\n[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file None\r\n[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file ./tokenizer/special_tokens_map.json\r\n[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file ./tokenizer/tokenizer_config.json\r\nINFO:run_mlm:Training new model from scratch\r\nException in device=TPU:6: Default process group has not been initialized, please make sure to call init_process_group.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 329, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 323, in _start_fn\r\n fn(gindex, *args)\r\n File \"/content/run_mlm.py\", line 529, in _mp_fn\r\n main()\r\n File \"/content/run_mlm.py\", line 386, in main\r\n with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n File \"/usr/lib/python3.7/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/training_args.py\", line 1005, in main_process_first\r\n torch.distributed.barrier()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py\", line 2523, in barrier\r\n default_pg = _get_default_group()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py\", line 358, in _get_default_group\r\n raise RuntimeError(\"Default process group has not been initialized, \"\r\nRuntimeError: Default process group has not been initialized, please make sure to call init_process_group.\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .\r\nINFO:datasets.arrow_writer:Done writing 0 indices in 0 bytes .\r\nException in device=TPU:0: Default process group has not been initialized, please make sure to call init_process_group.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/training_args.py\", line 1006, in main_process_first\r\n yield\r\n File \"/content/run_mlm.py\", line 393, in main\r\n desc=\"Running tokenizer on dataset line_by_line\",\r\n File 
\"/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py\", line 489, in map\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py\", line 489, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 1664, in map\r\n for rank in range(num_proc)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 1664, in <listcomp>\r\n for rank in range(num_proc)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2664, in shard\r\n writer_batch_size=writer_batch_size,\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 186, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py\", line 397, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2254, in select\r\n return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 2170, in _new_dataset_with_indices\r\n fingerprint=fingerprint,\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py\", line 297, in __init__\r\n self._indices.column(0)[0].type\r\n File \"pyarrow/table.pxi\", line 162, in pyarrow.lib.ChunkedArray.__getitem__\r\n File \"pyarrow/array.pxi\", line 549, in pyarrow.lib._normalize_index\r\nIndexError: index out of bounds\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 329, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 323, in _start_fn\r\n fn(gindex, *args)\r\n File \"/content/run_mlm.py\", line 529, in _mp_fn\r\n main()\r\n File \"/content/run_mlm.py\", line 393, in main\r\n desc=\"Running tokenizer on dataset line_by_line\",\r\n File \"/usr/lib/python3.7/contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/training_args.py\", line 1011, in main_process_first\r\n torch.distributed.barrier()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py\", line 2523, in barrier\r\n default_pg = _get_default_group()\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py\", line 358, in _get_default_group\r\n raise RuntimeError(\"Default process group has not been initialized, \"\r\nRuntimeError: Default process group has not been initialized, please make sure to call init_process_group.\r\nTraceback (most recent call last):\r\n File \"xla_spawn.py\", line 85, in <module>\r\n main()\r\n File \"xla_spawn.py\", line 81, in main\r\n xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 394, in spawn\r\n start_method=start_method)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py\", line 188, in start_processes\r\n while not context.join():\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py\", line 
144, in join\r\n exit_code=exitcode\r\ntorch.multiprocessing.spawn.ProcessExitedException: process 6 terminated with exit code 17\r\n```",
"Hi !\r\nThis might be because `num_proc` is set to a value higher than the size of the dataset (so you end up with an empty dataset in one process).\r\nThis has recently been solved by this PR https://github.com/huggingface/datasets/pull/2566. There will be a new release of `datasets` today to make this fix available. In the meantime, you can try using a bigger dataset or reduce the number of data processing workers.",
"Hmmm...my dataset is about 25k sequences, which I cut down to 15k to save memory :thinking: so the `num_proc` shouldn't pose any issue. Right now, following up on your suggestion I ve set it to the default.\r\n\r\nAnyways, following up with the suggestion made by @LysandreJik, it seems that there might be some inconsistency while creating the dataset - putting it at a `max_length` of `512` and a few other flags for grad accumulation seems that it can train properly. \r\n\r\n> Could you try this out for me: set the max_seq_length value to something low, like 512 or 256. Does it still crash then?\r\n\r\nFor such lower values, it definitely doesn't crash which means you might be right. I would look to double-check my dataset generation process, but it still irks me why I can't see `max_seq_length` in the accepted TrainingArguments. Also, even if there aren't enough tokens to generate the require `16k` limit, why doesn't `pad_to_max_length` flag act here in this case, and pad till the max length?",
"In such case, should I crop long sequences and pad smaller sequences manually - or is this supposed to be done automatically by the dataset processing part of the script?",
"> it still irks me why I can't see max_seq_length in the accepted TrainingArguments.\r\n\r\n`max_seq_length` isn't a `TrainingArguments`, it's a `DataTrainingArguments`. The difference is that the former is used by the `Trainer`, while the latter is only used by the script to do pre/post-processing, and is [not passed to the `Trainer`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py#L470).\r\n\r\n> why doesn't pad_to_max_length flag act here in this case, and pad till the max length?\r\n\r\nI'm thinking the issue is happening earlier than the `pad_to_max_length` flax is consumed. I can reproduce with the following:\r\n\r\n```bash\r\necho \"This is a random sentence\" > small_file.txt\r\npython ~/transformers/examples/pytorch/language-modeling/run_mlm.py \\\r\n --output_dir=output_dir \\\r\n --model_name_or_path=google/bigbird-roberta-base \\\r\n --train_file=small_file.txt \\\r\n --do_train\r\n```\r\n\r\nThe error comes from the dataset map that is calling the [`group_text`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py#L414-L426) method. This method tries to put all of the tokenized examples in the `result` dictionary, but drops the small remainder. As we don't have enough data to complete a single sequence, then this method returns an empty result:\r\n```\r\n{'attention_mask': [], 'input_ids': [], 'special_tokens_mask': []}\r\n```\r\n\r\n@sgugger can chime in if my approach is wrong, but the following modifications to the `group_texts` method seems to do the trick:\r\n\r\n```diff\r\n def group_texts(examples):\r\n # Concatenate all texts.\r\n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can\r\n # customize this part to your needs.\r\n- total_length = (total_length // max_seq_length) * max_seq_length\r\n+ truncated_total_length = (total_length // max_seq_length) * max_seq_length\r\n # Split by chunks of max_len.\r\n- result = {\r\n- k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n- for k, t in concatenated_examples.items()\r\n- }\r\n+ if total_length == 0:\r\n+ result = {\r\n+ k: [t[i : i + max_seq_length] for i in range(0, truncated_total_length, max_seq_length)]\r\n+ for k, t in concatenated_examples.items()\r\n+ }\r\n+ else:\r\n+ result = {\r\n+ k: [t[i: i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n+ for k, t in concatenated_examples.items()\r\n+ }\r\n return result\r\n``` ",
"That clears up a lot of things @LysandreJik! Thanx a ton :rocket: :cake: :1st_place_medal:\r\n\r\n~~Just a minor peek, When running the scripts it apparently doesn't log anything to the Colab's cell output. tried using different logging levels and setting to defaults to no avail~~ (no bother, simply bash piped it to a file to save time and `tail -f` to view updates to file in real-time)",
"I don't understand your diff @LysandreJik . If `total_length==0` then `truncated_total_length` is also 0. I think you meant something more like this maybe?\r\n```diff\r\n- total_length = (total_length // max_seq_length) * max_seq_length\r\n+ if total_length >= max_seq_length:\r\n+ total_length = (total_length // max_seq_length) * max_seq_length\r\n```",
"Ah I think I did a typo when copying the code, my local code has the following:\r\n`if truncated_total_length != 0:` instead of `if total_length == 0:`.\r\n\r\nThis way, if the truncated total length is equal to 0 (like in this case), then it will use the `total_length` (which is of 7) to create the example.\r\n\r\nIf the truncated total length is not 0, then it will use this value to create the example; which was the case before.\r\n\r\nFeel free to modify as you wish so that it's clearer for you!\r\n",
"Yes, then it's equivalent to my suggestion. Thanks!",
"@LysandreJik I may be misunderstanding how argument parsing works, but for flags like `evaluation_strategy`, it doesn't seem that the script parses it at all? I have a logging problem (https://discuss.huggingface.co/t/no-step-wise-logging-for-xla-mlm-scripts-in-colab-jupyter/8134) which seems to ignore the arguments/fails to override them. I am getting log of loss only at the start of epoch (`0.19`) somewhere again (`epoch-1.89`) and never again, when set for 5 epochs.\r\n\r\nThis seems strange, nor can I judge my models as to how they are performing. any ideas?"
] | 1,625 | 1,625 | 1,625 | NONE | null | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.13
- JaxLib version: 0.1.66
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False (Only `TPU` cores)
### Who can help
Not sure who might be the most appropriate person
## Information
Model I am using (Bert, XLNet ...): `BigBird` (MLM)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
This is an error with the `MLM` script (PyTorch) when attempting to pre-train BigBird on TPUs over XLA. The dataset in question is a custom dataset, and the model config and tokenizer have been initialized appropriately.
This is a continuation of [this unanswered](https://discuss.huggingface.co/t/indexerror-index-out-of-bounds/2859) Forum post that faces the same error.
Command used to run the script:-
```py
%%bash
python xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir="./results" \
--model_type="big_bird" \
--config_name="./config" \
--tokenizer_name="./tokenizer" \
--train_file="./dataset.txt" \
--validation_file="./val.txt" \
--line_by_line="True" \
--max_seq_length="16000" \
--weight_decay="0.01" \
--per_device_train_batch_size="1" \
--per_device_eval_batch_size="1" \
--learning_rate="3e-4" \
--tpu_num_cores='8' \
--warmup_steps="1000" \
--overwrite_output_dir \
--pad_to_max_length \
--num_train_epochs="5" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--do_train \
--do_eval \
--logging_steps="50" \
--evaluation_strategy="steps" \
--eval_accumulation_steps='10' \
--report_to="tensorboard" \
--logging_dir='./logs' \
--save_strategy="epoch" \
--load_best_model_at_end='True' \
--metric_for_best_model='validation' \
--preprocessing_num_workers='15'
```
I am facing two errors, to be precise:
```py
Exception in device=TPU:0: Default process group has not been initialized, please make sure to call init_process_group.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1006, in main_process_first
yield
File "/content/run_mlm.py", line 393, in main
desc="Running tokenizer on dataset line_by_line",
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in map
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in <dictcomp>
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in map
for rank in range(num_proc)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in <listcomp>
for rank in range(num_proc)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2664, in shard
writer_batch_size=writer_batch_size,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 186, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2254, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2170, in _new_dataset_with_indices
fingerprint=fingerprint,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 297, in __init__
self._indices.column(0)[0].type
File "pyarrow/table.pxi", line 162, in pyarrow.lib.ChunkedArray.__getitem__
File "pyarrow/array.pxi", line 549, in pyarrow.lib._normalize_index
IndexError: index out of bounds
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/run_mlm.py", line 529, in _mp_fn
main()
File "/content/run_mlm.py", line 393, in main
desc="Running tokenizer on dataset line_by_line",
File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1011, in main_process_first
torch.distributed.barrier()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 2523, in barrier
default_pg = _get_default_group()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 358, in _get_default_group
raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
```
I haven't modified the script to call `init_process_group` yet; I'm focusing on the earlier index-out-of-bounds error. Clearly, the problem arises from my own dataset - which was working before, however. Interestingly, it shows up during the tokenizing stage.
At some point while constructing the Arrow dataset it fails. I have no idea about Apache Arrow, so I can't debug further :sweat_smile:
As for the dataset to use, a few simple lines of code with random numbers are more than enough to reproduce it.
```py
!touch dataset.txt
import random
f = open('./dataset.txt', 'w')
for lines in range(50):
f.write(' '.join(m for m in [str(random.randint(0, 40000)) for i in range(16000)]) + '\n') #16000 words/(numbers) in one line, with random numbers from 0-40000 only.
f.close()
```
Can anyone give me some guidance on where I should start investigating the error, and some possible leads as to its origin?
Any ideas how I can solve it?
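For reference, a minimal sketch of the `group_texts` guard discussed in the comments above (assuming `max_seq_length` is defined in the enclosing scope, as it is in `run_mlm.py`); this only matters when the grouped path is used rather than `--line_by_line`:
```python
def group_texts(examples):
    # Concatenate all texts, then split the result into chunks of max_seq_length.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # Only drop the small remainder when at least one full chunk exists; otherwise keep
    # the short sequence so the mapped dataset does not end up empty.
    if total_length >= max_seq_length:
        total_length = (total_length // max_seq_length) * max_seq_length
    return {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated_examples.items()
    }
```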
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12438/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12437/comments | https://api.github.com/repos/huggingface/transformers/issues/12437/events | https://github.com/huggingface/transformers/pull/12437 | 933,594,990 | MDExOlB1bGxSZXF1ZXN0NjgwNzc4MTc2 | 12,437 | Add test for a WordLevel tokenizer model | {
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
In this PR I propose to add a test for the feature developed in PR #12361. Unless I'm mistaken, no language model tested currently uses a tokenizer that would use the WordLevel model.
The tokenizer created for this test is hosted here: [https://hf.co/robot-test/dummy-tokenizer-wordlevel](https://huggingface.co/robot-test/dummy-tokenizer-wordlevel)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12437",
"html_url": "https://github.com/huggingface/transformers/pull/12437",
"diff_url": "https://github.com/huggingface/transformers/pull/12437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12437.patch",
"merged_at": 1625135827000
} |
https://api.github.com/repos/huggingface/transformers/issues/12436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12436/comments | https://api.github.com/repos/huggingface/transformers/issues/12436/events | https://github.com/huggingface/transformers/issues/12436 | 933,588,599 | MDU6SXNzdWU5MzM1ODg1OTk= | 12,436 | Add DEBERTA-base model for usage in EncoderDecoderModel. | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Great idea! Do you want to take a stab at it?",
"Hi @alexvaca0,\r\nThis is an interesting feature, \r\nBut I was curious that Deberta, Bert, and Roberta are encoder-based models so there is no decoder part right? I checked their model class and I could not find the Decoder / EncoderDecoder class!\r\nCan you please give more insight into it? ",
"That's right, there's no decoder in those models, but there is a class in Transformers, EncoderDecoderModel, that enables to create encoder-decoder architectures from encoder-only architectures :)\r\n\r\nPerfect, let me have a look at it and see if I can code that adaptation @LysandreJik",
"Great! If you run into any blockers, feel free to ping us. If you want to add the possibility for DeBERTa to be a decoder, you'll probably need to add the cross attention layers. \r\n\r\ncc @patrickvonplaten and @patil-suraj which have extensive experience with enc-dec models.",
"Hey, is this feature being worked on by someone? If not then I can pick it up! @LysandreJik ",
"Would be great if you could pick it up @manish-p-gupta :-) ",
"Great!. Any specific things I should go through before taking it up? I'm familiar with the Code of conduct and contributing guidelines. I'll also open a draft PR to carry on the discussions there. Let me know if you think I need to look at anything else. @patrickvonplaten ",
"@ArthurZucker has been working with DeBERTa models recently and can likely help and give advice!",
"Yes! Feel free to ping me for an early review if you have any doubts"
] | 1,625 | 1,674 | null | NONE | null | # 🚀 Feature request
Add DEBERTA-base model as an option for creating an EncoderDecoderModel.
## Motivation
Currently only BERT and RoBERTa models can be turned into a Seq2Seq model via the EncoderDecoder class, and for those of us developing DeBERTa models from scratch it would be wonderful to be able to generate a Seq2Seq model from them. Also, the DeBERTa-base model works much better than BERT and RoBERTa.
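For context, a minimal sketch of the existing pattern this request would extend to DeBERTa (checkpoint names are only illustrative):
```python
from transformers import EncoderDecoderModel

# Pairs two encoder-only checkpoints; the decoder copy gets cross-attention layers
# and is configured as a decoder so the pair can be fine-tuned as a Seq2Seq model.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
```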
## Your contribution
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12436/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12436/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12435/comments | https://api.github.com/repos/huggingface/transformers/issues/12435/events | https://github.com/huggingface/transformers/issues/12435 | 933,586,798 | MDU6SXNzdWU5MzM1ODY3OTg= | 12,435 | Using huggingface Pipeline in industry | {
"login": "Lassehhansen",
"id": 54820693,
"node_id": "MDQ6VXNlcjU0ODIwNjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/54820693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lassehhansen",
"html_url": "https://github.com/Lassehhansen",
"followers_url": "https://api.github.com/users/Lassehhansen/followers",
"following_url": "https://api.github.com/users/Lassehhansen/following{/other_user}",
"gists_url": "https://api.github.com/users/Lassehhansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lassehhansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lassehhansen/subscriptions",
"organizations_url": "https://api.github.com/users/Lassehhansen/orgs",
"repos_url": "https://api.github.com/users/Lassehhansen/repos",
"events_url": "https://api.github.com/users/Lassehhansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lassehhansen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The pipeline is a simple wrapper over the model and tokenizer. When using a pipeline in production you should be aware of:\r\n- The pipeline is doing simple pre and post-processing reflecting the model's behavior. If you want to understand how the pipeline behaves, you should understand how the model behaves as the pipeline isn't doing anything fancy (true for sentiment analysis, less true for token classification or QA which have very specific post-processing).\r\n- Pipelines are very simple to use, but they remain an abstraction over the model and tokenizer. If you're looking for performance and you have a very specific use-case, you will get on par or better performance when using the model and tokenizer directly.\r\n\r\nDoes that answer your question?",
"I think my question was minded for whether the pipeline is free to use in industry, and or whether there is any security issues with using it in an automated work process for industry projects? Not so much minded for the bias of the model.\r\n\r\nThanks in advance\r\nLasse",
"For industry usage including security guidance etc I would recommend getting in touch with our Expert Acceleration Program at https://huggingface.co/support\r\n\r\nCheers",
"Perfect, I will try and do that. Thanks for the help\r\n"
] | 1,625 | 1,625 | 1,625 | NONE | null | Hi,
I was wondering if there is anything to be aware of when using your sentiment-analysis pipeline for industry projects at work. Are there any limitations to what I can or cannot do?
Thank you for your always amazing service.
Lasse | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12435/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12434/comments | https://api.github.com/repos/huggingface/transformers/issues/12434/events | https://github.com/huggingface/transformers/issues/12434 | 933,535,430 | MDU6SXNzdWU5MzM1MzU0MzA= | 12,434 | TPU not initialized when running official `run_mlm_flax.py` example. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Given that when running this command I'm getting ~30 training iterations per second, I'm assuming though that it's just a warning and not an error.",
"I am getting the same errors here. Also noting that the parameters from TrainingArguments are written at the start of the run_mlm_script, I see a few weird settings, like:\r\n```\r\nno_cuda=False,\r\ntpu_num_cores=None,\r\n``` \r\nI am not getting a stable report on iterations per second, so it is hard to see if this is a going well. Progress is still at 0%. Occationally, I am also getting error messages like this written to the screen roughly each minute:\r\n\r\n```\r\ntcmalloc: large alloc 25737314304 bytes == 0x770512000 @ 0x7ff6c92be680 0x7ff6c92df824 0x7ff6c92dfb8a 0x7ff49fbb6417 0x7ff49a9c43d0 0x7ff49a9d1ef4 0x7ff49a9d4e77 0x7ff49a9261dd 0x7ff49a6a0563 0x7ff49a68e460 0x5f5b29 0x5f66f6 0x50ad17 0x570296 0x56951a 0x5f60b3 0x5f6b6b 0x664e8d 0x5f556e 0x56ca9e 0x56951a 0x5f60b3 0x5f54e7 0x56ca9e 0x5f5ed6 0x56b3fe 0x5f5ed6 0x56b3fe 0x56951a 0x5f60b3 0x5f54e7\r\n```\r\nIt might also be mentioned that I am initially getting \"Missing XLA configuration\". I had to manually set the environment variable: `export XRT_TPU_CONFIG=\"localservice;0;localhost:51011\".` for the script to run. I am not sure if this really does the right thing. Maybe also the tpu driver needs to be specified?\r\n\r\n",
"...and ... I see this warning, tat does not look good:\r\n`[10:28:02] - WARNING - absl - No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)`",
"If of interest, this is my notes on installing this. It is mainly based on Patricks tutorial:\r\nhttps://github.com/NBAiLab/notram/blob/master/guides/flax.md",
"The simplest way to see if you're running on TPU is to call `jax.devices()`. E.g. you may see:\r\n\r\n```\r\n[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0),\r\n TpuDevice(id=1, process_index=0, coords=(0,0,0), core_on_chip=1),\r\n TpuDevice(id=2, process_index=0, coords=(1,0,0), core_on_chip=0),\r\n TpuDevice(id=3, process_index=0, coords=(1,0,0), core_on_chip=1),\r\n TpuDevice(id=4, process_index=0, coords=(0,1,0), core_on_chip=0),\r\n TpuDevice(id=5, process_index=0, coords=(0,1,0), core_on_chip=1),\r\n TpuDevice(id=6, process_index=0, coords=(1,1,0), core_on_chip=0),\r\n TpuDevice(id=7, process_index=0, coords=(1,1,0), core_on_chip=1)]\r\n ```",
"@avital. \"import jax;jax.devices()\" gives me exactly the same response.\r\n\r\nI am also able to make simple calculations on the TPU. The problem seem only to be related to the run_mlm_flax-script.",
"@avital. I also found this log file. It seems to think that the TPU is busy.\r\n```\r\nE0630 13:24:28.491443 20778 kernel_dma_mapper.cc:88] Error setting number simples with FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.491648 20778 tensor_node.cc:436] [0000:00:04.0 PE0 C0 MC-1 TN0] Failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.491660 20778 driver.cc:806] [0000:00:04.0 PE0 C0 MC-1] tensor node 0 open failed: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.491678 20778 driver.cc:194] [0000:00:04.0 PE0 C0 MC-1] Device has failed. Status:FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.492048 20777 kernel_dma_mapper.cc:88] Error setting number simples with FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.492093 20777 tensor_node.cc:436] [0000:00:05.0 PE0 C1 MC-1 TN0] Failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.492100 20777 driver.cc:806] [0000:00:05.0 PE0 C1 MC-1] tensor node 0 open failed: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.492110 20777 driver.cc:194] [0000:00:05.0 PE0 C1 MC-1] Device has failed. Status:FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.492996 20778 driver.cc:165] [0000:00:04.0 PE0 C0 MC-1] Transitioned to State::FAILED, dumping core\r\nE0630 13:24:28.493215 20777 driver.cc:165] [0000:00:05.0 PE0 C1 MC-1] Transitioned to State::FAILED, dumping core\r\nE0630 13:24:28.494112 20786 kernel_dma_mapper.cc:88] Error setting number simples with FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\nE0630 13:24:28.494225 20786 tensor_node.cc:436] [0000:00:07.0 PE0 C3 MC-1 TN0] Failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']\r\n```\r\n",
"I am now able to get this to run, and think I understand why this is happening.\r\n\r\nThe initial error I am seeing is: \"RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:273 : Missing XLA configuration\"\r\n\r\nI have been getting around this error by setting \"export XRT_TPU_CONFIG=\"localservice;0;localhost:51011\". This is probably the right way of doing it on torch system, but it also leads to torch stealing the TPU from Jax, and letting the run_mlm_flax.py train on CPU.\r\n\r\nHowever, it seems like there is a function in transformers/training_args.py called \"is_torch_tpu_available\". If this returns TRUE, it also asks for the XRT_TPU_CONFIG. I am really not sure why it is returning TRUE on my system but it might be because the VM I am using have other preinstalled software. \r\n\r\nLots of ways of fixing this of course. You guys probably know the best way. ",
"I've seen the same `INFO` message from `absl`:\r\n\r\n\r\n\r\nBut training seems to work (batch size 128 as from the mlm flax documentation)",
"Closing this issue now as it is expected",
"I am having the exact same issue described here. I see that @peregilk found a work around, but he/she hasn't shared what it was.\r\n\r\nCould you describe how you overcame this issue? @peregilk ",
"@erensezener I think a lot has changed in the code here since this was written. I am linking to my internal notes above. I have repeated that one several times, and know it gets a working system up and running.\r\n\r\nJust a wild guess: Have you tried setting ```export USE_TORCH=False```",
"> Just a wild guess: Have you tried setting `export USE_TORCH=False`\r\n\r\nThis solves the issue indeed! Thank you, you saved me many more hours of debugging :)\r\n\r\n"
] | 1,625 | 1,635 | 1,626 | MEMBER | null | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@avital @marcvanzee
## Information
I am setting up a new TPU VM according to the [Cloud TPU VM JAX quickstart](https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm) and following the installation steps described here: https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries to install `flax`, `jax`, `transformers`, and `datasets`.
Then, when running a simple example using the [`run_mlm_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_mlm_flax.py) script, I'm encountering an error/warning:
```
INFO:absl:Starting the local TPU driver.
INFO:absl:Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
INFO:absl:Unable to initialize backend 'gpu': Not found: Could not find registered platform with name: "cuda". Available platform names are: TPU Interpreter Host
```
=> I am now unsure whether the code actually runs on TPU or instead on CPU.
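One quick way to check, independent of the training script (a minimal sketch):
```python
import jax

print(jax.devices())
# Expected on a working TPU VM: a list of TpuDevice entries.
# If this prints CPU devices instead, the script will silently train on CPU.
```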
## To reproduce
The problem can be easily reproduced by:
1. sshing into a TPU, *e.g.* `patrick-test` (Flax, JAX, & Transformers should already be installed)
If one goes into `patrick-test`, the libraries are already installed; on a newly created TPU VM, one can follow [these](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries) steps to install the relevant libraries.
2. Going to home folder
```
cd ~/
```
3. creating a new dir:
```
mkdir test && cd test
```
4. cloning a dummy repo into it
```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/patrickvonplaten/norwegian-roberta-als
```
5. Linking the `run_mlm_flax.py` script
```
ln -s $(realpath ~/transformers/examples/flax/language-modeling/run_mlm_flax.py) ./
```
6. Running the following command (which should show the above warning/error again):
```
./run_mlm_flax.py \
--output_dir="norwegian-roberta-als" \
--model_type="roberta" \
--config_name="norwegian-roberta-als" \
--tokenizer_name="norwegian-roberta-als" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_als" \
--max_seq_length="128" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--learning_rate="3e-4" \
--overwrite_output_dir \
--num_train_epochs="3"
```
=>
You should see a console print that says:
```
[10:15:48] - INFO - absl - Starting the local TPU driver.
[10:15:48] - INFO - absl - Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
[10:15:48] - INFO - absl - Unable to initialize backend 'gpu': Not found: Could not find registered platform with name: "cuda". Available platform names are: TPU Host Interpreter
```
## Expected behavior
I think this warning / error should not be displayed and the TPU should be correctly configured.
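For completeness, a sketch of the workaround reported in the comments above (this assumes the cause is the torch/XLA code path being picked up on a VM that also has torch preinstalled):
```bash
# Prevent the Trainer utilities from treating this as a torch/XLA run,
# then confirm JAX really owns the TPU before launching training.
export USE_TORCH=False
python3 -c "import jax; print(jax.devices())"   # expect a list of TpuDevice entries
# then launch the training command from the reproduction steps above
```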
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12434/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/12434/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12433/comments | https://api.github.com/repos/huggingface/transformers/issues/12433/events | https://github.com/huggingface/transformers/pull/12433 | 933,502,288 | MDExOlB1bGxSZXF1ZXN0NjgwNjk3OTc3 | 12,433 | Added to talks section | {
"login": "suzana-ilic",
"id": 27798583,
"node_id": "MDQ6VXNlcjI3Nzk4NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/27798583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzana-ilic",
"html_url": "https://github.com/suzana-ilic",
"followers_url": "https://api.github.com/users/suzana-ilic/followers",
"following_url": "https://api.github.com/users/suzana-ilic/following{/other_user}",
"gists_url": "https://api.github.com/users/suzana-ilic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzana-ilic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzana-ilic/subscriptions",
"organizations_url": "https://api.github.com/users/suzana-ilic/orgs",
"repos_url": "https://api.github.com/users/suzana-ilic/repos",
"events_url": "https://api.github.com/users/suzana-ilic/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzana-ilic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | Added one more confirmed speaker, zoom links and gcal event links | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12433",
"html_url": "https://github.com/huggingface/transformers/pull/12433",
"diff_url": "https://github.com/huggingface/transformers/pull/12433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12433.patch",
"merged_at": 1625051651000
} |
https://api.github.com/repos/huggingface/transformers/issues/12432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12432/comments | https://api.github.com/repos/huggingface/transformers/issues/12432/events | https://github.com/huggingface/transformers/pull/12432 | 933,498,457 | MDExOlB1bGxSZXF1ZXN0NjgwNjk0ODI0 | 12,432 | fix typo in mt5 configuration docstring | {
"login": "fcakyon",
"id": 34196005,
"node_id": "MDQ6VXNlcjM0MTk2MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fcakyon",
"html_url": "https://github.com/fcakyon",
"followers_url": "https://api.github.com/users/fcakyon/followers",
"following_url": "https://api.github.com/users/fcakyon/following{/other_user}",
"gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions",
"organizations_url": "https://api.github.com/users/fcakyon/orgs",
"repos_url": "https://api.github.com/users/fcakyon/repos",
"events_url": "https://api.github.com/users/fcakyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/fcakyon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you!"
] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | The default vocab_size value is written incorrectly in the docstring. This PR updates the mT5 configuration docstring. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12432/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12432",
"html_url": "https://github.com/huggingface/transformers/pull/12432",
"diff_url": "https://github.com/huggingface/transformers/pull/12432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12432.patch",
"merged_at": 1625063046000
} |
https://api.github.com/repos/huggingface/transformers/issues/12431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12431/comments | https://api.github.com/repos/huggingface/transformers/issues/12431/events | https://github.com/huggingface/transformers/issues/12431 | 933,457,919 | MDU6SXNzdWU5MzM0NTc5MTk= | 12,431 | how to continue pre-train custom data | {
"login": "moseshu",
"id": 23112888,
"node_id": "MDQ6VXNlcjIzMTEyODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/23112888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moseshu",
"html_url": "https://github.com/moseshu",
"followers_url": "https://api.github.com/users/moseshu/followers",
"following_url": "https://api.github.com/users/moseshu/following{/other_user}",
"gists_url": "https://api.github.com/users/moseshu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moseshu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moseshu/subscriptions",
"organizations_url": "https://api.github.com/users/moseshu/orgs",
"repos_url": "https://api.github.com/users/moseshu/repos",
"events_url": "https://api.github.com/users/moseshu/events{/privacy}",
"received_events_url": "https://api.github.com/users/moseshu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,625 | 1,628 | 1,628 | NONE | null | 1. Problem
I want to pretrain on my own data and to reset the number of encoder layers.
Could you offer an interface that loads custom data (JSON or other formats) for
pretraining, with the masking (MLM), NSP, and positional embeddings handled inside the interface?
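In the meantime, a minimal sketch of continued MLM pretraining on a custom JSON file with the existing building blocks (the file name, the `text` column, and the base checkpoint are assumptions; NSP is not covered here and would need `BertForPreTraining` plus a sentence-pair construction step):
```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Assumed layout: one JSON record per line, each with a "text" field.
raw = load_dataset("json", data_files={"train": "my_corpus.jsonl"})
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")       # continue from a checkpoint
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)  # handles the masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-continued", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```
Changing the number of layers would mean editing the config (e.g. `num_hidden_layers`) and initializing from the config instead of the checkpoint, which is then training from scratch rather than continued pretraining.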
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12431/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12430/comments | https://api.github.com/repos/huggingface/transformers/issues/12430/events | https://github.com/huggingface/transformers/issues/12430 | 933,446,596 | MDU6SXNzdWU5MzM0NDY1OTY= | 12,430 | Distilling zero-shot classification: Assertion `srcIndex < srcSelectDimSize` failed. | {
"login": "dbs700",
"id": 25131392,
"node_id": "MDQ6VXNlcjI1MTMxMzky",
"avatar_url": "https://avatars.githubusercontent.com/u/25131392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dbs700",
"html_url": "https://github.com/dbs700",
"followers_url": "https://api.github.com/users/dbs700/followers",
"following_url": "https://api.github.com/users/dbs700/following{/other_user}",
"gists_url": "https://api.github.com/users/dbs700/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dbs700/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dbs700/subscriptions",
"organizations_url": "https://api.github.com/users/dbs700/orgs",
"repos_url": "https://api.github.com/users/dbs700/repos",
"events_url": "https://api.github.com/users/dbs700/events{/privacy}",
"received_events_url": "https://api.github.com/users/dbs700/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Found a solution by reducing the input size of the text, and limiting it by the model's max_position_embeddings. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,625 | 1,629 | 1,629 | NONE | null | ## Environment info
tokenizers 0.10.2
torch 1.8.1+cu111
transformers 4.5.1
datasets 1.6.1
IPython 7.19.0
jupyter_client 6.1.7
jupyter_core 4.6.3
jupyterlab 2.2.6
notebook 6.1.4
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.17763-SP0
## Issue
I'm trying to train a distill_classifier on a custom dataset with 40.000 rows of text, 10 labels.
```
!python ./distill_classifier.py \
--overwrite_output_dir \
--data_file email.txt \
--class_names_file class_names.txt \
--hypothesis_template "This text is about {}." \
--student_name_or_path distilbert-base-uncased \
--output_dir ./distilbert-base-uncased-student-large
```
At first, I got an error regarding max_length; I changed the script by adding truncation=True to the tokenizer, and it resulted in another error. The strange thing is that I'm able to train it on a small custom dataset (~200 rows of text data, 10 labels), but with bigger data it doesn't work so well. The output:
```
[INFO|configuration_utils.py:491] 2021-06-30 10:00:54,363 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at C:\Users\d/.cache\huggingface\transformers\fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:527] 2021-06-30 10:00:54,364 >> Model config RobertaConfig {
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
06/30/2021 10:00:53 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 3distributed training: False, 16-bits training: False
06/30/2021 10:00:53 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments(output_dir='./distilbert-base-uncased-notino-student-large', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=128, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs\\Jun30_10-00-53_dcvmdwhanl03', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=0, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./distilbert-base-uncased-notino-student-large', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name='length', report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, mp_parameters='')
06/30/2021 10:00:53 - INFO - __main__ - Generating predictions from zero-shot teacher model
06/30/2021 10:03:10 - INFO - __main__ - Initializing student model
06/30/2021 10:03:58 - INFO - __main__ - Training student model on teacher predictions
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|modeling_utils.py:1052] 2021-06-30 10:00:54,790 >> loading weights file https://huggingface.co/roberta-large-mnli/resolve/main/pytorch_model.bin from cache at C:\Users\o/.cache\huggingface\transformers\63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0
[WARNING|modeling_utils.py:1159] 2021-06-30 10:01:01,810 >> Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[INFO|modeling_utils.py:1176] 2021-06-30 10:01:01,810 >> All the weights of RobertaForSequenceClassification were initialized from the model checkpoint at roberta-large-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForSequenceClassification for predictions without further training.
[INFO|configuration_utils.py:491] 2021-06-30 10:01:04,545 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at C:\Users\o/.cache\huggingface\transformers\fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:527] 2021-06-30 10:01:04,546 >> Model config RobertaConfig {
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/vocab.json from cache at C:\Users\o/.cache\huggingface\transformers\64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/merges.txt from cache at C:\Users\o/.cache\huggingface\transformers\425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer.json from cache at C:\Users\o/.cache\huggingface\transformers\d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer_config.json from cache at None
0%| | 0/147 [00:00<?, ?it/s]
...
100%|##########| 147/147 [02:03<00:00, 1.19it/s]
[INFO|configuration_utils.py:491] 2021-06-30 10:03:10,735 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at C:\Users\o/.cache\huggingface\transformers\23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|configuration_utils.py:527] 2021-06-30 10:03:10,736 >> Model config DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8",
"9": "LABEL_9",
"10": "LABEL_10"
},
"initializer_range": 0.02,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_10": 10,
"LABEL_2": 2,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8,
"LABEL_9": 9
},
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.5.1",
"vocab_size": 30522
}
[INFO|modeling_utils.py:1052] 2021-06-30 10:03:11,160 >> loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at C:\Users\o/.cache\huggingface\transformers\9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a
[WARNING|modeling_utils.py:1159] 2021-06-30 10:03:12,327 >> Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1170] 2021-06-30 10:03:12,328 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[INFO|configuration_utils.py:491] 2021-06-30 10:03:12,750 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at C:\Users\o/.cache\huggingface\transformers\23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|configuration_utils.py:527] 2021-06-30 10:03:12,750 >> Model config DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.5.1",
"vocab_size": 30522
}
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at C:\Users\o/.cache\huggingface\transformers\0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at C:\Users\o/.cache\huggingface\transformers\75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at C:\Users\o/.cache\huggingface\transformers\8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
0%| | 0/1282 [00:00<?, ?ex/s][WARNING|tokenization_utils_base.py:3136] 2021-06-30 10:03:14,926 >> Token indices sequence length is longer than the specified maximum sequence length for this model (527 > 512). Running this sequence through the model will result in indexing errors
20%|#9 | 250/1282 [00:00<00:00, 2475.26ex/s]
38%|###7 | 484/1282 [00:00<00:00, 2433.06ex/s]
57%|#####6 | 729/1282 [00:00<00:00, 2438.11ex/s]
78%|#######7 | 999/1282 [00:00<00:00, 2504.20ex/s]
94%|#########4| 1206/1282 [00:00<00:00, 2355.94ex/s]
100%|##########| 1282/1282 [00:00<00:00, 2387.33ex/s]
[INFO|trainer.py:490] 2021-06-30 10:03:58,139 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text.
[INFO|trainer.py:1013] 2021-06-30 10:03:58,433 >> ***** Running training *****
[INFO|trainer.py:1014] 2021-06-30 10:03:58,439 >> Num examples = 1282
[INFO|trainer.py:1015] 2021-06-30 10:03:58,444 >> Num Epochs = 1
[INFO|trainer.py:1016] 2021-06-30 10:03:58,449 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1017] 2021-06-30 10:03:58,455 >> Total train batch size (w. parallel, distributed & accumulation) = 96
[INFO|trainer.py:1018] 2021-06-30 10:03:58,461 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1019] 2021-06-30 10:03:58,466 >> Total optimization steps = 14
[INFO|integrations.py:586] 2021-06-30 10:03:59,225 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: dbs700 (use `wandb login --relogin` to force relogin)
wandb: wandb version 0.10.33 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.10.30
wandb: Syncing run ./distilbert-base-uncased-notino-student-large
wandb: View project at https://wandb.ai/dbs700/huggingface
wandb: View run at https://wandb.ai/dbs700/huggingface/runs/3aegl7qi
wandb: Run data is saved locally in C:\Users\o\wandb\run-20210630_100419-3aegl7qi
wandb: Run `wandb offline` to turn off syncing.
0%| | 0/14 [00:00<?, ?it/s]C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```
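For reference, a minimal sketch of the kind of change that avoids the indexing error: clipping every tokenized input to the model's maximum length. Where exactly this call lives inside `distill_classifier.py` is an assumption, and the example texts/hypotheses are placeholders:
```python
from transformers import AutoTokenizer

teacher_tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
student_tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")

texts = ["a very long document ..."]                         # placeholder data
hypotheses = ["This text is about shipping."] * len(texts)   # placeholder hypotheses

# Teacher (zero-shot NLI) inputs: keep the hypothesis intact, clip the document.
teacher_inputs = teacher_tok(texts, hypotheses, truncation="only_first",
                             max_length=teacher_tok.model_max_length,
                             padding=True, return_tensors="pt")

# Student inputs: plain truncation to the student's maximum length.
student_inputs = student_tok(texts, truncation=True,
                             max_length=student_tok.model_max_length,
                             padding=True, return_tensors="pt")
```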
## Who can help
@LysandreJik
@sgugger
@joeddav | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12430/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12429/comments | https://api.github.com/repos/huggingface/transformers/issues/12429/events | https://github.com/huggingface/transformers/pull/12429 | 933,410,355 | MDExOlB1bGxSZXF1ZXN0NjgwNjIwNDU5 | 12,429 | Add default bos_token and eos_token for tokenizer of deberta_v2 | {
"login": "hjptriplebee",
"id": 22477665,
"node_id": "MDQ6VXNlcjIyNDc3NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22477665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hjptriplebee",
"html_url": "https://github.com/hjptriplebee",
"followers_url": "https://api.github.com/users/hjptriplebee/followers",
"following_url": "https://api.github.com/users/hjptriplebee/following{/other_user}",
"gists_url": "https://api.github.com/users/hjptriplebee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hjptriplebee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hjptriplebee/subscriptions",
"organizations_url": "https://api.github.com/users/hjptriplebee/orgs",
"repos_url": "https://api.github.com/users/hjptriplebee/repos",
"events_url": "https://api.github.com/users/hjptriplebee/events{/privacy}",
"received_events_url": "https://api.github.com/users/hjptriplebee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Add default bos_token and eos_token for tokenizer of deberta_v2
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@n1t0, @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12429/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12429/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12429",
"html_url": "https://github.com/huggingface/transformers/pull/12429",
"diff_url": "https://github.com/huggingface/transformers/pull/12429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12429.patch",
"merged_at": 1625054638000
} |
https://api.github.com/repos/huggingface/transformers/issues/12428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12428/comments | https://api.github.com/repos/huggingface/transformers/issues/12428/events | https://github.com/huggingface/transformers/issues/12428 | 933,371,817 | MDU6SXNzdWU5MzMzNzE4MTc= | 12,428 | [DeBerta V2] The vocab size of DeBerta V2 is incorrect | {
"login": "hjptriplebee",
"id": 22477665,
"node_id": "MDQ6VXNlcjIyNDc3NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22477665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hjptriplebee",
"html_url": "https://github.com/hjptriplebee",
"followers_url": "https://api.github.com/users/hjptriplebee/followers",
"following_url": "https://api.github.com/users/hjptriplebee/following{/other_user}",
"gists_url": "https://api.github.com/users/hjptriplebee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hjptriplebee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hjptriplebee/subscriptions",
"organizations_url": "https://api.github.com/users/hjptriplebee/orgs",
"repos_url": "https://api.github.com/users/hjptriplebee/repos",
"events_url": "https://api.github.com/users/hjptriplebee/events{/privacy}",
"received_events_url": "https://api.github.com/users/hjptriplebee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @BigBird01 can chime in :)",
"@LysandreJik I emailed to the author. He leave some space to add customized tokens for downstream tasks and this is by design.",
"v2/v3: 128000 + 100 (special tokens, unused tokens, e.g. reservation for PostOCR customized tokens) ---> 128100 in total\r\n\r\nhttps://huggingface.co/transformers/v4.9.2/model_doc/deberta_v2.html"
] | 1,625 | 1,680 | 1,625 | CONTRIBUTOR | null | ### Who can help
@LysandreJik
## Information
I am using DeBERTa V2. The documentation and the tokenizer's vocab show that its vocab size is 128K, but the vocab size in its [configuration](https://huggingface.co/microsoft/deberta-v2-xlarge/resolve/main/config.json) is 128100, which is 100 larger.

This will make the embedding layer not work as expected.
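A quick way to reproduce the mismatch (a sketch; the printed numbers are what I would expect based on the screenshot above):
```python
from transformers import AutoConfig, AutoTokenizer

cfg = AutoConfig.from_pretrained("microsoft/deberta-v2-xlarge")
tok = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")

print(cfg.vocab_size)  # 128100 (from config.json)
print(len(tok))        # ~128000 tokens in the SentencePiece vocab
```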
## Expected behavior
Maybe we should modify the vocab size in the configuration?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12428/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12427/comments | https://api.github.com/repos/huggingface/transformers/issues/12427/events | https://github.com/huggingface/transformers/issues/12427 | 933,363,076 | MDU6SXNzdWU5MzMzNjMwNzY= | 12,427 | [DeepSpeed] Convert from fp16 to fp32 issue zero_to_fp32.py | {
"login": "srikar2097",
"id": 391500,
"node_id": "MDQ6VXNlcjM5MTUwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/391500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srikar2097",
"html_url": "https://github.com/srikar2097",
"followers_url": "https://api.github.com/users/srikar2097/followers",
"following_url": "https://api.github.com/users/srikar2097/following{/other_user}",
"gists_url": "https://api.github.com/users/srikar2097/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srikar2097/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srikar2097/subscriptions",
"organizations_url": "https://api.github.com/users/srikar2097/orgs",
"repos_url": "https://api.github.com/users/srikar2097/repos",
"events_url": "https://api.github.com/users/srikar2097/events{/privacy}",
"received_events_url": "https://api.github.com/users/srikar2097/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for this excellent report, @srikar2097 \r\n\r\nThis is fascinating!\r\n\r\nWhere is `zero_pp_rank_5_mp_rank_00_optim_states.pt`? I presume this was an 8-gpu run, so we are missing one rank.\r\n\r\nCould you re-run again and see if it was a fluke? Clearly one process failed to save its optimizer states.\r\n",
"@stas00 When I looked at what got saved in the previous check-point (previous to what I shared in my original report), this is what I see:\r\n\r\n```\r\nll ./checkpoint-50000/global_step50001/\r\ntotal 34G\r\n-rw-rw-r-- 1 ubuntu ubuntu 5.4G Jun 27 01:27 mp_rank_00_model_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_3_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_1_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_7_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_4_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_6_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_2_mp_rank_00_optim_states.pt\r\n```\r\nAs you can see here file not present is \"rank_5\". I went back to earlier checkpoints and consistently `zero_pp_rank_5_mp_rank_00_optim_states.pt` is missing. I did monitor GPU usage during the training and nothing stuck me as odd. \r\n\r\nLet me report back with how the re-run goes.",
"First please file an issue with Deepspeed, since it's Deepspeed that saves these files. https://github.com/microsoft/DeepSpeed/issues\r\n\r\nAre you running on all 8 gpus? can you validate in nvidia-smi? It should be the case since it reports:\r\n```\r\nDetected checkpoint of type zero stage 2, world_size: 8\r\n```\r\nThen I'd do debug logging (obviously switching to much more frequent save_steps, so it only takes one minute to log). I'd either:\r\n\r\n* change this log into print so it's printed on each gpu\r\nhttps://github.com/microsoft/DeepSpeed/blob/a029239812e15cf35334514449ed3127b915780a/deepspeed/runtime/engine.py#L1989\r\n\r\n* or if you use `transformers` master and the HF trainer you can set ` --log_level info --log_level_replica info` and it'll log this info on each gpu w/o you needing to touch the deepspeed code.\r\n\r\nFor example with the above setting, on 2 gpus I get:\r\n```\r\n[2021-06-30 12:43:06,647] [INFO] [engine.py:1990:_save_zero_checkpoint] zero checkpoint saved output_dir/checkpoint-2/global_step2/zero_pp_rank_1_mp_rank_00_optim_states.pt\r\n[2021-06-30 12:43:07,320] [INFO] [engine.py:1990:_save_zero_checkpoint] zero checkpoint saved output_dir/checkpoint-2/global_step2/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n```\r\n\r\nSo we know both files got saved.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,625 | 1,628 | 1,628 | NONE | null | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-4.4.0-1128-aws-x86_64-with-debian-stretch-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: using deepspeed v 0.4.0
### Who can help
@stas00 thanks for opening new doors by showing how to train large transformer models.
## Information
Model I am using (Bert, XLNet ...): t5-3b
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I trained a t5-3b model on a single A100-40GB host with DeepSpeed, using tips from https://github.com/huggingface/transformers/issues/9996 and the Hugging Face docs. After a checkpoint was saved, I used the provided `zero_to_fp32.py`.
Steps to reproduce the behavior:
1. Using deepspeed **zero2 config** to train a **t5-3b** model.
2. Tried converting the deepspeed saved fp16 checkpoint (checkpoint-60000) to fp32
3. I went into the checkpoint-60000 dir and ran the provided command `python zero_to_fp32.py global_step60001 pytorch_model_fp32.bin`; this is based on the DeepSpeed documentation.
4. However, I get the crash shown below.
```python
python zero_to_fp32.py global_step60001 pytorch_model_fp32.bin
Processing zero checkpoint 'global_step60001'
Detected checkpoint of type zero stage 2, world_size: 8
Traceback (most recent call last):
File "zero_to_fp32.py", line 170, in <module>
convert_zero_chkpt_to_fp32_consolid_state_dict(args.checkpoint_dir, args.output_file)
File "zero_to_fp32.py", line 128, in convert_zero_chkpt_to_fp32_consolid_state_dict
unpartitioned_numel).view(shape)
RuntimeError: start (2499259392) + length (16777216) exceeds dimension size (2499738752).
```
Contents of the `global_step60001` folder:
```
total 34G
-rw-rw-r-- 1 ubuntu ubuntu 5.4G Jun 28 06:15 mp_rank_00_model_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_4_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_2_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_7_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_3_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_6_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_0_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_1_mp_rank_00_optim_states.pt
```
Oddly, I see one file not present ("rank_5", per the listing above); I am assuming each GPU saves its own optimizer state. But I have not modified any code to cause this issue. Please help!
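A quick sanity check before running the converter (a sketch; it assumes the world size of 8 reported in the output above):
```python
import glob

# Expect one optimizer-state shard per rank, i.e. 8 files for world_size 8.
shards = sorted(glob.glob("global_step60001/zero_pp_rank_*_mp_rank_00_optim_states.pt"))
print(len(shards), shards)
```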
## Expected behavior
Since I was running the provided `zero_to_fp32.py` script, I would expect it to create the `fp32` model binary, which does not happen due to the cryptic crash.
Happy to provide more information.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12427/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12426/comments | https://api.github.com/repos/huggingface/transformers/issues/12426/events | https://github.com/huggingface/transformers/issues/12426 | 933,311,987 | MDU6SXNzdWU5MzMzMTE5ODc= | 12,426 | [roberta] lm_head.decoder save/load needs fixing | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, the key should be added to the keys to ignore on save, and expected missing keys.\r\n\r\nThanks for your investigation!",
"What if someone sets `config.tie_word_embeddings=False`. Should the save/expected keys be dynamically adjusted in `__init__`? \r\n\r\nor do it the other way around - have `lm_head.decoder.weight` on the list and instead remove it in `__init__` if `config.tie_word_embeddings=False` as the former is the most likely behavior.",
"Yes, I think the second proposition makes a lot of sense! Great catch.\r\n\r\nWe'll need to check, but I wouldn't be surprised if other models have their lm head on the list - and nothing set in the `__init__` to prevent the save/expected keys from interacting with that layer. If so, it's not a high priority issue as no one has brought it up - but it is an issue nonetheless that potentially affects several models."
] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | ```
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
print('decoder.weight' in dict(model.lm_head.named_parameters()).keys())
print('dense.weight' in dict(model.lm_head.named_parameters()).keys())
print('lm_head.decoder.weight' in dict(model.named_parameters()).keys())
print('lm_head.dense.weight' in dict(model.named_parameters()).keys())
```
gives:
```
True
True
False
True
```
So if we query `lm_head` we can see `lm_head.decoder.weight`, however it's not visible to the whole model via `parameters()` (named or not).
The problem comes from `tie_weights`:
```
output_embeddings = self.get_output_embeddings()
if output_embeddings is not None and self.config.tie_word_embeddings:
self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())
```
which essentially does:
```
lm_head.decoder.weight = embeddings.word_embeddings.weight
```
which disconnects it from the module list.
This is inconsistent behavior.
But the main issue is that `lm_head.decoder.weight` is saved in the `save_pretrained` and then is expected to be there on `torch.load` but since it's tied frameworks like deepspeed won't save it.
So if `config.tie_word_embeddings` is `True`, it shouldn't save that key and not expect to load it.
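If it helps, here is a rough sketch of the conditional handling I have in mind (illustrative only, not the actual modeling code; the attribute names simply follow the existing `_keys_to_ignore_on_save` / `_keys_to_ignore_on_load_missing` convention and the class is hypothetical):
```python
# sketch only: make handling of the tied decoder weight conditional on config.tie_word_embeddings
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel


class SketchRobertaForMaskedLM(RobertaPreTrainedModel):
    # default assumption: embeddings are tied, so the decoder weight is neither
    # saved nor expected back when loading
    _keys_to_ignore_on_save = [r"lm_head.decoder.weight"]
    _keys_to_ignore_on_load_missing = [r"lm_head.decoder.weight"]

    def __init__(self, config):
        super().__init__(config)
        if not config.tie_word_embeddings:
            # untied: the weight is an independent parameter again, so save it and expect it on load
            self._keys_to_ignore_on_save = []
            self._keys_to_ignore_on_load_missing = []
```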
Please correct me if I have missed something obvious.
@LysandreJik, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12426/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12425/comments | https://api.github.com/repos/huggingface/transformers/issues/12425/events | https://github.com/huggingface/transformers/issues/12425 | 933,218,888 | MDU6SXNzdWU5MzMyMTg4ODg= | 12,425 | Loading custom model | {
"login": "saksham-s",
"id": 15651802,
"node_id": "MDQ6VXNlcjE1NjUxODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/15651802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saksham-s",
"html_url": "https://github.com/saksham-s",
"followers_url": "https://api.github.com/users/saksham-s/followers",
"following_url": "https://api.github.com/users/saksham-s/following{/other_user}",
"gists_url": "https://api.github.com/users/saksham-s/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saksham-s/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saksham-s/subscriptions",
"organizations_url": "https://api.github.com/users/saksham-s/orgs",
"repos_url": "https://api.github.com/users/saksham-s/repos",
"events_url": "https://api.github.com/users/saksham-s/events{/privacy}",
"received_events_url": "https://api.github.com/users/saksham-s/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Same. I run into several problems. I figured that I would need to call something like `CustomModel.from_pretrained(\"PATH_TO_CHECKPOINT\")` to load the model. But that did not work for me either. If I find a solution I will give an update. "
] | 1,625 | 1,631 | 1,628 | NONE | null | I had changed the definition of a token classification model by adding another output head. Now when I try to load the model using AutoModelForTokenClassification, it does not load the weights of the modified final layer that I added. Is there another class I can use to load this custom model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12425/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12424/comments | https://api.github.com/repos/huggingface/transformers/issues/12424/events | https://github.com/huggingface/transformers/pull/12424 | 933,135,832 | MDExOlB1bGxSZXF1ZXN0NjgwMzkwNDUx | 12,424 | Fix default bool in argparser | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you want to pull in the test snippet from the issue to make sure it doesn't happen again? That test doesn't actually parse any arguments at the moment, so it relies on the argparser being configured right in the test, which is more error prone than just parsing simple things and checking them."
] | 1,625 | 1,625 | 1,625 | COLLABORATOR | null | # What does this PR do?
As outlined in #12423, in a dataclass with no default for a bool parameter, the bool ended up defaulting to `True` when not passed, which is the opposite of the wanted behavior.
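For illustration, roughly the behaviour this fix targets (hypothetical dataclass mirroring the issue):
```python
from dataclasses import dataclass

from transformers import HfArgumentParser


@dataclass
class Flags:
    flag: bool  # deliberately no default


parser = HfArgumentParser(Flags)
(cfg,) = parser.parse_args_into_dataclasses(["--flag", "False"], look_for_args_file=False)
assert cfg.flag is False
(cfg,) = parser.parse_args_into_dataclasses([], look_for_args_file=False)
assert cfg.flag is False  # with this fix, an unset bool with no default stays off
```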
Fixes #12423 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12424/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12424",
"html_url": "https://github.com/huggingface/transformers/pull/12424",
"diff_url": "https://github.com/huggingface/transformers/pull/12424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12424.patch",
"merged_at": 1625054225000
} |
https://api.github.com/repos/huggingface/transformers/issues/12423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12423/comments | https://api.github.com/repos/huggingface/transformers/issues/12423/events | https://github.com/huggingface/transformers/issues/12423 | 933,127,509 | MDU6SXNzdWU5MzMxMjc1MDk= | 12,423 | HfArgumentParser defaults booleans to on | {
"login": "Craigacp",
"id": 729696,
"node_id": "MDQ6VXNlcjcyOTY5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/729696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Craigacp",
"html_url": "https://github.com/Craigacp",
"followers_url": "https://api.github.com/users/Craigacp/followers",
"following_url": "https://api.github.com/users/Craigacp/following{/other_user}",
"gists_url": "https://api.github.com/users/Craigacp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Craigacp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Craigacp/subscriptions",
"organizations_url": "https://api.github.com/users/Craigacp/orgs",
"repos_url": "https://api.github.com/users/Craigacp/repos",
"events_url": "https://api.github.com/users/Craigacp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Craigacp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you agree this is a bug in the parsing logic then we'd be happy to fix it and send a PR.",
"Yes, it does look like a bug. Fix is quite simple, I can make a PR.",
"Ok thanks. That saves me a job later this week.",
"If you want to have a look at the PR mentioned above and check it does give the expected behavior, that would be great!",
"Ok, I'll check it tomorrow against our internal use case to make sure it fixes that too."
] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.9.0.dev0 (initially discovered on 4.8.1)
- Platform: macOS-11.4-x86_64-i386-64bit
- Python version: 3.9.5
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help
@sgugger
## Information
When `HfArgumentParser` is used on a dataclass with a bool field that has no default, it turns the bool on unless the field is explicitly supplied with `--<field_name> False` or a similar "false-y" value. Since the field has no default, I would expect it to be false unless `--<field_name>` or `--<field_name> True` is supplied. This is a behaviour change from `v3.0.0`, where booleans were parsed correctly; we hit this issue while looking at upgrading.
## To reproduce
Steps to reproduce the behavior:
1. Define a dataclass with a boolean field
2. Supply a list of arguments which does not include that field name
3. The field is turned on.
Appending this snippet to the bottom of [`test_basic` at line 110](https://github.com/huggingface/transformers/blob/master/tests/test_hf_argparser.py#L110) in `test_hf_argparser.py` fails the test.
```python
args = ["--foo", "1", "--baz", "quux", "--bar", "0.5"]
example, = parser.parse_args_into_dataclasses(args, look_for_args_file=False)
self.assertFalse(example.flag)
```
Extending `args` with `["--flag","False"]` recovers the expected behaviour.
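For reference, a self-contained version of the reproduction looks roughly like this (the dataclass mirrors the argument names used above; the exact definition in the test file may differ):
```python
from dataclasses import dataclass

from transformers import HfArgumentParser


@dataclass
class BasicExample:
    foo: int
    bar: float
    baz: str
    flag: bool  # deliberately no default


parser = HfArgumentParser(BasicExample)
args = ["--foo", "1", "--baz", "quux", "--bar", "0.5"]
(example,) = parser.parse_args_into_dataclasses(args, look_for_args_file=False)
print(example.flag)  # prints True on 4.8.1 even though --flag was never passed
```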
## Expected behavior
The boolean should be set to false if the argument is not passed in. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12423/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12422/comments | https://api.github.com/repos/huggingface/transformers/issues/12422/events | https://github.com/huggingface/transformers/pull/12422 | 933,112,365 | MDExOlB1bGxSZXF1ZXN0NjgwMzcwNjMy | 12,422 | [modelcard] fix | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,625 | 1,625 | 1,625 | CONTRIBUTOR | null | This PR fixes an incorrect attribute; some tests are probably needed?
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12422",
"html_url": "https://github.com/huggingface/transformers/pull/12422",
"diff_url": "https://github.com/huggingface/transformers/pull/12422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12422.patch",
"merged_at": 1625003943000
} |
https://api.github.com/repos/huggingface/transformers/issues/12421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12421/comments | https://api.github.com/repos/huggingface/transformers/issues/12421/events | https://github.com/huggingface/transformers/pull/12421 | 933,021,541 | MDExOlB1bGxSZXF1ZXN0NjgwMjkyODUz | 12,421 | Add option to save on each training node | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,625 | 1,625 | COLLABORATOR | null | # What does this PR do?
There is currently a problem when using `load_best_model_at_end=True` on a training run with multiple nodes: the model is only saved on the main process (so on the machine with rank 0), and machines with other ranks can't see the saved model (unless the system uses some kind of shared storage). This PR adds a flag to enable saving on each node for that situation, and avoids the hard failure when the model to reload is not found. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12421/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12421",
"html_url": "https://github.com/huggingface/transformers/pull/12421",
"diff_url": "https://github.com/huggingface/transformers/pull/12421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12421.patch",
"merged_at": 1625035307000
} |
https://api.github.com/repos/huggingface/transformers/issues/12420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12420/comments | https://api.github.com/repos/huggingface/transformers/issues/12420/events | https://github.com/huggingface/transformers/pull/12420 | 932,930,367 | MDExOlB1bGxSZXF1ZXN0NjgwMjE1MjY1 | 12,420 | Easily train a new fast tokenizer from a given one - tackle the special tokens format (str or AddedToken) | {
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
This PR is a sub-PR of the feature developed in PR #12361.
In the `train_new_from_iterator` method, the user can indicate, via the `special_tokens_map` argument, that they want to change the wording of a special token.
In terms of behavior, we expect the special tokens of the resulting tokenizer to behave like the special tokens of the initial tokenizer. In other words, if in the initial tokenizer the special token linked to the `mask_token` was an `AddedToken` with `lstrip=True`, then this setting must be kept in the newly trained tokenizer even if the user indicates in the `special_tokens_map` argument that the wording changes, for example from `[MASK]` to `<mask>`.
This PR implements this behavior and tests it.
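A rough usage sketch of the expected behaviour (checkpoint name, corpus and vocabulary size are illustrative):
```python
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # fast tokenizer, mask token "[MASK]"
training_corpus = (text for text in ["first example sentence", "second example sentence"])

# Rename the mask token while training a new tokenizer; any AddedToken options of the
# original mask token (e.g. lstrip) are expected to carry over to the new wording.
new_tokenizer = old_tokenizer.train_new_from_iterator(
    training_corpus, vocab_size=1000, special_tokens_map={"mask_token": "<mask>"}
)
assert new_tokenizer.mask_token == "<mask>"
```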
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12420/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12420",
"html_url": "https://github.com/huggingface/transformers/pull/12420",
"diff_url": "https://github.com/huggingface/transformers/pull/12420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12420.patch",
"merged_at": 1624991483000
} |
https://api.github.com/repos/huggingface/transformers/issues/12419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12419/comments | https://api.github.com/repos/huggingface/transformers/issues/12419/events | https://github.com/huggingface/transformers/pull/12419 | 932,926,109 | MDExOlB1bGxSZXF1ZXN0NjgwMjExNTk4 | 12,419 | [JAX/Flax readme] add philosophy doc | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot Patrick for the awesome images :)\r\n\r\n"
] | 1,624 | 1,625 | 1,625 | MEMBER | null | # What does this PR do?
Adds a section about Flax's design philosophy in Transformers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12419",
"html_url": "https://github.com/huggingface/transformers/pull/12419",
"diff_url": "https://github.com/huggingface/transformers/pull/12419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12419.patch",
"merged_at": 1625069412000
} |
https://api.github.com/repos/huggingface/transformers/issues/12418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12418/comments | https://api.github.com/repos/huggingface/transformers/issues/12418/events | https://github.com/huggingface/transformers/issues/12418 | 932,864,894 | MDU6SXNzdWU5MzI4NjQ4OTQ= | 12,418 | DeepSpeed gets stuck when training | {
"login": "SamsTheGreatest",
"id": 41646769,
"node_id": "MDQ6VXNlcjQxNjQ2NzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/41646769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamsTheGreatest",
"html_url": "https://github.com/SamsTheGreatest",
"followers_url": "https://api.github.com/users/SamsTheGreatest/followers",
"following_url": "https://api.github.com/users/SamsTheGreatest/following{/other_user}",
"gists_url": "https://api.github.com/users/SamsTheGreatest/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamsTheGreatest/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamsTheGreatest/subscriptions",
"organizations_url": "https://api.github.com/users/SamsTheGreatest/orgs",
"repos_url": "https://api.github.com/users/SamsTheGreatest/repos",
"events_url": "https://api.github.com/users/SamsTheGreatest/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamsTheGreatest/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I added your changes to the original and I am not able to reproduce the hanging with \"EleutherAI/gpt-neo-2.7B\" as it is in the original.\r\n\r\nI'm on transformers master, but I don't think it makes any difference.\r\n\r\nIf you want me to try anything else please fork https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/, apply whatever changes you need and share the link to your fork.\r\n\r\nTo debug hanging do:\r\n```\r\npip install py-spy\r\nsudo py-spy dump --PID pid_of_the_hanging_process\r\n```\r\nand share the backtraces.\r\n\r\n\r\nUnrelated, if you could make a PR to https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/ with the new ds_config.json it'd help others.",
"Thanks @stas00,\r\n\r\nI installed `transformers `with `pip`. \r\n\r\nCreated a simple example and packed everything into a repo along with all the requirements. Attaching the link to the repo here [https://github.com/SamsTheGreatest/gpt-neo-with-deepspeed.git](https://github.com/SamsTheGreatest/gpt-neo-with-deepspeed.git). I have put other relevant info is in the README. Hopefully, it will help to shine some light on this.\r\n\r\nUnfortunately, I don't have sudo access. Maybe there is another way to backtrace it? If I could have interrupted the kernel in Jupiter, it would show me some traceback, however in this case, when I start the `Trainer`, I can't even interrupt the kernel anymore.",
"That's a wonderful way to do it, @SamsTheGreatest - thank you!\r\n\r\nOK, so I run your fork and it's running just fine. i.e. it started training - I didn't wait for it to finish.\r\n\r\nwrt, debug\r\n\r\n1. try `py-spy` w/o `sudo` if your system has ptrace set to 0\r\n``` \r\ncat /proc/sys/kernel/yama/ptrace_scope\r\n```\r\nyou don't need `sudo` to attach to the process.\r\n\r\n2. if it's >0, then used `faulthandler`\r\n\r\nadd this to your code:\r\n```\r\nimport faulthandler\r\nfaulthandler.dump_traceback_later(20, repeat=True)\r\n```\r\n\r\nand when you run it, it will dump the bt for each thread every 20 sec.\r\n\r\n(I haven't tried it in the notebook, but it should probably work just fine)\r\n",
"Thanks @stas00, that's very detailed!\r\n\r\n `cat /proc/sys/kernel/yama/ptrace_scope` yields `1` so ill do it with `faulthandler`. \r\n\r\nAccidentally found out that when removing DeepSpeed option from trainer, it still gets stuck. Removing\r\n```\r\n# os.environ['MASTER_ADDR'] = 'localhost'\r\n# os.environ['MASTER_PORT'] = '9994'\r\n# os.environ['RANK'] = \"0\"\r\n# os.environ['LOCAL_RANK'] = \"0\"\r\n# os.environ['WORLD_SIZE'] = \"1\"\r\n```\r\nstarts training as expected again. I also tried letting the settings to be discovered via `mpi4py`, as you wrote in the original post, it says `mpi4py` needs to be installed (can't install as I need `sudo` .....again). Could it be all due to the fact that I'm running things not on my own machine directly but using `kubeflow` notebook server?\r\n\r\nI have dumped the traceback files from all 3 experiments into the same repo. `FP16` is on during all of them. `No settings` means that `os.environ` is commented out. I have also labeled the start of training with `\\n\\nNow training\\n\\n`.\r\n\r\nThanks again",
"You don't need `sudo` to install `mpi4py` - this is just `pip install mpi4py` \r\n\r\nPerhaps you're the first one to run deepspeed on kubeflow, by looking at the traces seems like it has some distributed issues there \r\n\r\nThank you for making the traces. It seems to be stuck at:\r\n```\r\n File \"/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 1080 in broadcast\r\n```\r\n\r\nIt might be something specific to the their jupyter setup? If I understand correctly kubeflow is notebook only, right?\r\n\r\nCan you run deepspeed from the command line? e.g. as in this example? https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb\r\n\r\nAll the `os.environ` code is that we are emulating a distributed launcher in the notebook. (instead of runing `torch.distributed.launch` or the `deepspeed` launcher.)\r\n\r\nAlso try a different port?\r\n\r\nA different address? Perhaps `127.0.0.1` or find its IP address?\r\n\r\nIt's very possible that the distributed network gets stuck because of either of these 2 as it can't network.\r\n\r\nDeepspeed requires a fully distributed setup even with just one gpu, since it wasn't really designed for that kind of situation in mind (But perhaps it could).",
"Hi @stas00,\r\n\r\nSorry for the long wait. Tried other IP, but all yield Permission errors and such.. The correct IP seems to be localhost or **IP of the Kubernetes Pod**. This are the only options I have tried that don't yield errors, however the script still hangs at the same spot.\r\n\r\n[The notebook you referenced](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb \r\n), hangs at the same spot unfortunately. \r\n\r\n```python\r\nDownloading: 5.40kB [00:00, 3.13MB/s] \r\n\r\nUsing amp fp16 backend \r\n\r\n[2021-07-05 08:20:38,917] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.4.2, git-hash=unknown, git-branch=unknown \r\n\r\n[2021-07-05 08:20:43,129] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1 \r\n\r\n^CKilling subprocess 4452 \r\n\r\nMain process received SIGINT, exiting \r\n\r\nTraceback (most recent call last): \r\n\r\n File \"/home/jovyan/anaconda3/envs/esemala/bin/deepspeed\", line 6, in <module> \r\n\r\n main() \r\n\r\n File \"/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/launcher/runner.py\", line 362, in main \r\n\r\n result.wait() \r\n\r\n File \"/home/jovyan/anaconda3/envs/esemala/lib/python3.7/subprocess.py\", line 1019, in wait \r\n\r\n return self._wait(timeout=timeout) \r\n\r\n File \"/home/jovyan/anaconda3/envs/esemala/lib/python3.7/subprocess.py\", line 1653, in _wait \r\n\r\n (pid, sts) = self._try_wait(0) \r\n\r\n File \"/home/jovyan/anaconda3/envs/esemala/lib/python3.7/subprocess.py\", line 1611, in _try_wait \r\n\r\n (pid, sts) = os.waitpid(self.pid, wait_flags) \r\n\r\nKeyboardInterrupt \r\n\r\n(esemala) tf-docker ~/transformers >\r\n```\r\n(Had to keyboard-interrupt it)\r\n\r\nI have installed transformers and deepspeed as suggested in the notebook.\r\n\r\nPS: quick suggestion: in the last cell, when running the example, one might consider changing `rm -r output_dir` to `rm -rf output_dir` so that we don't get an error if the directory does not exist.\r\n\r\n\r\nCould we investigate this a little further? Maybe there is something wrong with the mismatch of cuda and cuda-toolkit installed? `nvcc -V` yields `10.1`, however the latest pytorch is installed as for `11.1`. Trying to follow [this tutorial](https://github.com/mallorbc/GPT_Neo_fine-tuning_notebook/blob/main/GPT_Neo_Fine-tune.ipynb) ,now, instead of installing OPs for Deepspeed just in time, I treid `DS_BUILD_OPS=1 pip install .`, however it says \r\n```python\r\nException: Installed CUDA version 10.1 does not match the version torch was compiled with 11.1, unable to compile cuda/cpp extensions without a matching cuda version.\r\n```",
"So the issue in this one is in launching a pytorch subprocess here.\r\n\r\nIs there a way I could have a direct access to the same environment? \r\n\r\n> PS: quick suggestion: in the last cell, when running the example, one might consider changing rm -r output_dir to rm -rf output_dir so that we don't get an error if the directory does not exist.\r\n\r\nThat's a great suggestion, @SamsTheGreatest - done!\r\n\r\n> Exception: Installed CUDA version 10.1 does not match the version torch was compiled with 11.1, unable to compile cuda/cpp extensions without a matching cuda version.\r\n\r\nYou need to install pytorch built with cuda 10 for that. As of this writing this is done with:\r\n```\r\npip install torch torchvision torchaudio\r\n```\r\n\r\nNormally find the right command here: https://pytorch.org/get-started/locally/\r\n\r\nDS will handle minor version mismatch no problem.",
"@stas00, Unfortunately, I am not authorized to do that.. but I can provide you with the exact docker image I am using. Here is a link: [https://github.com/kubeflow/kubeflow/tree/v1.2.0/components/tensorflow-notebook-image](https://github.com/kubeflow/kubeflow/tree/v1.2.0/components/tensorflow-notebook-image)\r\n\r\nI tried installing torch for 10.1, process still hangs at \r\n\r\n> File \"/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py\", line 1080 in broadcast\r\n\r\njust as before. \r\n\r\nNow, I had to rebuild the docker container as `sudo` password wasn't set. I am now root, so I installed `conda 11.1.1` for linux. All versions are now matching and I managed to build all OPs for deepspeed except `async_io` (I assume I don't need it atm..) using `DS_BUILD_OPS=1 pip install .`. So.. now ds_report shows that all OPs are installed and all cuda versions are matching. \r\n\r\nStill hangs at the same spot...\r\n\r\nReading though some issues, could it be that its due to the `nccl` usage? Is there a trivial way to set backend to `gloo` within the notebook I shared with you @stas00?",
"I'm not succeeding at building that Docker image. If I use `build_image.sh` it hangs, if I try to `docker build .` it fails with some deps missing. Do you have a ready docker image I could pull?\r\n\r\nSince kubeflow is run in a docker image most likely the issue has something to do with its setup/configuration. \r\n\r\n> Reading though some issues, could it be that its due to the nccl usage? Is there a trivial way to set backend to gloo within the notebook I shared with you @stas00?\r\n\r\nIt's very possible. I haven't run into this myself, so I trust your research.\r\n\r\ngloo doesn't provide the same functionality as nccl, but it looks that Deepspeed docs say it should work.\r\n\r\nOK, what if you do: `deepspeed.init_distributed(\"gloo\")` here? instead of `deepspeed.init_distributed()`\r\n\r\nhttps://github.com/huggingface/transformers/blob/d7e156bd1ae2467e9ea1dbc44f31da0ed2296aee/src/transformers/training_args.py#L812\r\n\r\nI found this issue https://github.com/microsoft/DeepSpeed/issues/1030 where a user was able to use the gloo backend with Deepspeed.\r\n",
"@stas00 consulted internally again and tried using \"gloo\" as you specified. Colleagues said they could not manage to run `nccl` on `kubeflow` either. Basically cloned the transformers repo and changed the training_args as you specified. \r\n\r\nChanged model for trainer like so too:\r\n```python\r\ntrainer = tr.Trainer(model=model.requires_grad_(False), \r\n args=training_args, ..... \r\n```\r\n\r\n**Now, with `gloo` code runs a little further!!**\r\n\r\n```python\r\n[2021-07-08 15:28:56,767] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.4.3+c9fee82, git-hash=c9fee82, git-branch=master\r\n[2021-07-08 15:28:56,775] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1\r\n[2021-07-08 15:28:56,891] [INFO] [engine.py:177:__init__] DeepSpeed Flops Profiler Enabled: False\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-18-ccb66750b859> in <module>\r\n 10 # Start training process!\r\n 11 \r\n---> 12 trainer.train()\r\n 13 trainer.save_model(save_dir)\r\n 14 tokenizer.save_pretrained(save_dir+'/tokenizer/')\r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)\r\n 1122 if args.deepspeed:\r\n 1123 deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(\r\n-> 1124 self, num_training_steps=max_steps, resume_from_checkpoint=resume_from_checkpoint\r\n 1125 )\r\n 1126 self.model = deepspeed_engine.module\r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/transformers/deepspeed.py in deepspeed_init(trainer, num_training_steps, resume_from_checkpoint)\r\n 369 config_params=config,\r\n 370 optimizer=optimizer,\r\n--> 371 lr_scheduler=lr_scheduler,\r\n 372 )\r\n 373 \r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/__init__.py in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params)\r\n 134 collate_fn=collate_fn,\r\n 135 config=config,\r\n--> 136 config_params=config_params)\r\n 137 else:\r\n 138 assert mpu is None, \"mpu must be None with pipeline parallelism\"\r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/runtime/engine.py in __init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params, dont_change_device)\r\n 189 self.lr_scheduler = None\r\n 190 if model_parameters or optimizer:\r\n--> 191 self._configure_optimizer(optimizer, model_parameters)\r\n 192 self._configure_lr_scheduler(lr_scheduler)\r\n 193 self._report_progress(0)\r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/runtime/engine.py in _configure_optimizer(self, client_optimizer, model_parameters)\r\n 701 logger.info('Using client Optimizer as basic optimizer')\r\n 702 else:\r\n--> 703 basic_optimizer = self._configure_basic_optimizer(model_parameters)\r\n 704 if self.global_rank == 0:\r\n 705 logger.info(\r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/runtime/engine.py in _configure_basic_optimizer(self, model_parameters)\r\n 772 optimizer = DeepSpeedCPUAdam(model_parameters,\r\n 773 **optimizer_parameters,\r\n--> 774 adamw_mode=effective_adam_w_mode)\r\n 775 else:\r\n 776 from deepspeed.ops.adam import 
FusedAdam\r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/ops/adam/cpu_adam.py in __init__(self, model_params, lr, bias_correction, betas, eps, weight_decay, amsgrad, adamw_mode)\r\n 72 bias_correction=bias_correction,\r\n 73 amsgrad=amsgrad)\r\n---> 74 super(DeepSpeedCPUAdam, self).__init__(model_params, default_args)\r\n 75 \r\n 76 self.opt_id = DeepSpeedCPUAdam.optimizer_id\r\n\r\n~/anaconda3/envs/esemala/lib/python3.7/site-packages/torch/optim/optimizer.py in __init__(self, params, defaults)\r\n 47 param_groups = list(params)\r\n 48 if len(param_groups) == 0:\r\n---> 49 raise ValueError(\"optimizer got an empty parameter list\")\r\n 50 if not isinstance(param_groups[0], dict):\r\n 51 param_groups = [{'params': param_groups}]\r\n\r\nValueError: optimizer got an empty parameter list\r\n```\r\n\r\nTrying to battle this value error now, is it because `AdamW` was used and now its `DeepSpeedCPUAdam`? Shall I be concerned that CPU is being used?\r\n\r\nWe are using multi-node with single GPU in each cluster, so those issue could be arising from such architecture, but I'm not sure. \r\n\r\nI will respond on your request for the Docker image a little later once I get it sorted out.\r\n\r\nThanks again\r\n",
"Now, concerning the Docker image. We used the same docker image as one I shared, but at the end used `USER root` instead of `jovyan`. \r\n\r\nalso used those commands for this. Sorry I didn't share this earlier, was not the one involved with images...\r\n\r\n```python\r\npython build_image.py --tf_version=1.15.2 --platform=gpu tf_notebook\r\n\r\npip install --upgrade pip\r\n\r\npython3 -m pip install -r tensorflow-notebook-image/requirements.txt\r\n```\r\n\r\nIf it helps I will try building the image and pushing it to docker hub myself, with all necessary requirements (on top of what I gave you I just installed necessary version of torch, compatible with cuda 10.1, huggingface transformers and deepspeed). But I would likely need some time for this...till next week or so",
"@SamsTheGreatest, glad to see you made some progress!\r\n\r\nNot sure why you needed to turn gradients off - that surely won't work as the optimizer now has no params to optimize, which is probably the reason why you had that most recent failure.\r\n\r\n-------------------\r\n\r\nAs we are progressing with the diagnosis of OP, it's becoming clear now that this issue has little to do with `transformers` (other than having a hardcoded `nccl` backend) and we should probably try to sort it out on the DeepSpeed Issues-side of things. Once sorted out we can then adjust the HF Trainer to do the right thing as `deepspeed` needs it.\r\n\r\nCould you please open a new issue at https://github.com/microsoft/DeepSpeed/issues and I suppose the topic should be something along the lines of: using deepspeed in env where nccl doesn't work\r\n\r\nAnd then specific sub-issues:\r\n\r\n1. make deepspeed work on kubeflow - `nccl`-backend hangs - your OP report\r\n2. make deepspeed work with the 'gloo' backend - your last gloo-specific report https://github.com/huggingface/transformers/issues/12418#issuecomment-876545975 \r\n\r\nor perhaps these should be 2 separate issues? I trust your judgment.\r\n\r\nAnd from there let's see what the Deepspeed developers need, i.e. whether they will want the image or they already know what to do.",
"Thanks, @stas00! Yes it seems reasonable, I will reply shortly to this in a little more detail. Also, discovered one more thing. Remember I mentioned this, \r\n\r\n> Accidentally found out that when removing DeepSpeed option from trainer, it still gets stuck.\r\n\r\nWhen trying the same but also changing `nccl` to `gloo` in `training_args.py`, gets everything unstuck aswell!\r\n\r\n```python\r\ntorch.distributed.init_process_group(backend=\"gloo\")\r\n device = torch.device(\"cuda\", self.local_rank)\r\n self._n_gpu = 1\r\n```\r\n\r\nCould we conclude that for some reason `nccl` doesn't work on with the current hardware setup? Could there be a particular reason for that?",
"Great to know that this is not deepspeed specific then - thank you for the experiments, @SamsTheGreatest \r\n\r\nI'd say make a short repro script like:\r\n```\r\necho 'import torch; torch.distributed.init_process_group(backend=\"nccl\")' > run\r\npython -m torch.distributed.launch --nproc_per_node=2 run\r\n```\r\nand if it hangs file an issue at pytorch? Hopefully someone on their team has dealt with kubeflow.\r\n\r\nIt probably has to do with how it builds Docker with regards to pytorch and cuda tools, or the interface to the gpu cards. \r\n\r\nFor example, what happens if you install the normal pytorch on that kubeflow instance after it was built? That would tests whether the issue is with how the pre-built pytorch was created while building the kubeflow image.\r\n\r\n",
"@stas00 \r\n\r\n> Not sure why you needed to turn gradients off - that surely won't work as the optimizer now has no params to optimize, which is probably the reason why you had that most recent failure.\r\n\r\nyes turning on gradients doesn't make any sense. I was attempting to battle the issue with using 'gloo' backend that you referred to... not sure how to fix it https://github.com/microsoft/DeepSpeed/issues/1030\r\n\r\n",
"Also, have a look at when to use which backend notes here: https://pytorch.org/docs/stable/distributed.html\r\n\r\nScroll down to \"Which backend to use?\"\r\n\r\nDo any of these ring a bell?\r\n\r\n--------------\r\n\r\nAnd also these may aid debugging the NCCL issues:\r\n```\r\n export NCCL_DEBUG=INFO\r\n export NCCL_DEBUG_SUBSYS=ALL\r\n```\r\n\r\nFinally, you can attach to a hanging process with `strace` (or start it under strace) and see where it is hanging on the libc-level.",
"> @stas00\r\n> \r\n> > Not sure why you needed to turn gradients off - that surely won't work as the optimizer now has no params to optimize, which is probably the reason why you had that most recent failure.\r\n> \r\n> yes turning on gradients doesn't make any sense. I was attempting to battle the issue with using 'gloo' backend that you referred to... not sure how to fix it [microsoft/DeepSpeed#1030](https://github.com/microsoft/DeepSpeed/issues/1030)\r\n\r\nOpen a new issue there?",
"@SamsTheGreatest trying to get caught up on this thread but are you able to run NCCL without deepspeed? Even if we can get the gloo backend working I suspect the performance would not be ideal. Can you try a simple all-reduce test in your environment using NCCL? \r\n\r\nWe often run this gist on our systems to test basic NCCL functionality: https://gist.github.com/jeffra/b5e80466b4c86be00ea3b6f130fb7a36",
"> Can you try a simple all-reduce test in your environment using NCCL?\r\n\r\nSo a simple test could be something like:\r\n```\r\n# test.py\r\nimport torch.distributed as dist\r\nimport argparse\r\nimport torch\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument(\"--local_rank\", type=int)\r\nargs = parser.parse_args()\r\ntorch.cuda.set_device(args.local_rank)\r\ndevice = torch.device(\"cuda\", local_rank)\r\n\r\ndist.init_process_group(\"nccl\")\r\ndist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM)\r\n```\r\n\r\n```\r\n# to run\r\npython -m torch.distributed.launch --nproc_per_node=2 test.py\r\n```\r\n\r\nadjust the number of gpus above - probably just 1 in your case. You have only 1 gpu, correct?\r\n\r\n**Edit** I see you reported earlier 1 gpu per node, \r\n\r\n> We are using multi-node with single GPU in each cluster, so those issue could be arising from such architecture, but I'm not sure.\r\n\r\nso then you need to adapt the above to include the `--nnode=` as well.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\nI'm having the same issue when trying to reproduce the Academic-Budget-Bert code. \r\nI've run the provided test.py code and encountered the same behavior\r\n\r\n```\r\n# test.py\r\nimport torch.distributed as dist\r\nimport argparse\r\nimport torch\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument(\"--local_rank\", type=int)\r\nargs = parser.parse_args()\r\ntorch.cuda.set_device(args.local_rank)\r\ndevice = torch.device(\"cuda\", args.local_rank)\r\n\r\ndist.init_process_group(\"nccl\")\r\ndist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM)\r\n```\r\n_________________________________________________\r\n\r\n```\r\n@^CTraceback (most recent call last):\r\n File \"/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/site-packages/torch/distributed/launch.py\", line 260, in <module>\r\n main()\r\n File \"/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/site-packages/torch/distributed/launch.py\", line 253, in main\r\n process.wait()\r\n File \"/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/subprocess.py\", line 1189, in wait\r\n return self._wait(timeout=timeout)\r\n File \"/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/subprocess.py\", line 1917, in _wait\r\n (pid, sts) = self._try_wait(0)\r\n File \"/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/subprocess.py\", line 1875, in _try_wait\r\n (pid, sts) = os.waitpid(self.pid, wait_flags)\r\nKeyboardInterrupt\r\n\r\n```\r\n\r\nSo, if anyone has a workaround, that would be great.\r\n\r\nBest,\r\nDjamé\r\n\r\n",
"I think you could try this solution:\r\n`rm -rf ~/.cache/torch_extensions/`\r\n\r\nref: https://github.com/huggingface/transformers/issues/12715",
"Is this already solved? I also have this problem when training inside pod.",
"Creating a new pod has solved this issue for me a couple of times.",
"try `export NCCL_P2P_DISABLE=1`, it works for me.",
"It may work but at what cost?\r\n\r\n> The NCCL_P2P_DISABLE variable disables the peer to peer (P2P) transport, which uses CUDA direct access between GPUs, using NVLink or PCI.\r\n\r\nYou will lose on performance greatly.",
"> It may work but at what cost?\r\n> \r\n> > The NCCL_P2P_DISABLE variable disables the peer to peer (P2P) transport, which uses CUDA direct access between GPUs, using NVLink or PCI.\r\n> \r\n> You will lose on performance greatly.\r\n\r\nI am using Zero stage 2 for training on a single host with multi-GPUs, the performance scaleup is ok for me.\r\nIn my case, the training task is GPU-computation-bound, not GPU-communication-bound.",
"If you don't care for your training to finish faster then your approach definitely works.\r\n\r\nIt's not about whether it's comms-bound or gpu-bound, it's about the wasted time on comms. Please see the diagram at https://github.com/stas00/ml-engineering/tree/master/network#single-node-training to have a better understanding that comms aren't instant.\r\n\r\nI was just flagging to future readers that this is not the right solution for many users. Instead they need to figure out what's wrong with their network setup and enjoy the fast P2P comms and faster training time.\r\n",
"> If you don't care for your training to finish faster then your approach definitely works.\r\n> \r\n> It's not about whether it's comms-bound or gpu-bound, it's about the wasted time on comms. Please see the diagram at https://github.com/stas00/ml-engineering/tree/master/network#single-node-training to have a better understanding that comms aren't instant.\r\n> \r\n> I was just flagging to future readers that this is not the right solution for many users. Instead they need to figure out what's wrong with their network setup and enjoy the fast P2P comms and faster training time.\r\n\r\nAgree with your points. Thank you for sharing.",
"> torch_extensions\r\n\r\nrunning on the server which is shared by many users, will this deletion affect other users?"
] | 1,624 | 1,708 | 1,629 | NONE | null | ## Environment info
- `transformers` version: 4.8.1
- Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: single gpu
### Who can help
@stas00
## Information
Trying to replicate [this](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo_xl_deepspeed.py), I am using a 125M GPT Neo model and fine-tuning it with the Trainer. Training arguments include a DeepSpeed option. The Trainer gets stuck with:
```
[2021-06-29 14:29:44,747] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.4.1, git-hash=unknown, git-branch=unknown
[2021-06-29 14:29:44,757] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1
```
ds_report gives:
```
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
sparse_attn ............ [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
[WARNING] async_io requires the libraries: ['libaio-dev'] but are missing. Can be fixed by: `apt install libaio-dev`.
async_io ............... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch']
torch version .................... 1.9.0
torch cuda version ............... 11.1
nvcc version ..................... 10.1
deepspeed install path ........... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed']
deepspeed info ................... 0.4.1, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.9, cuda 11.1
```
Is there a way to debug this?
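For what it's worth, a minimal standard-library-only sketch of one way to see where it hangs:
```python
import faulthandler

# print the traceback of every thread to stderr every 20 seconds while training hangs
faulthandler.dump_traceback_later(20, repeat=True)
```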
## To Replicate
I modified the [original code](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo_xl_deepspeed.py) slightly to remove the errors:
```python
training_args = tr.TrainingArguments(output_dir=save_dir, num_train_epochs=5, logging_steps=300, save_steps=300,
per_device_train_batch_size=1, per_device_eval_batch_size=1,warmup_steps=50,
learning_rate=0.001,adam_epsilon=1e-06,fp16=True,
weight_decay=0.01, logging_dir=f'{save_dir}/logs', deepspeed='./ds_config.json')
```
and ds_config.json is now:
```json
{
"fp16": {
"enabled": true,
"min_loss_scale": 1,
"opt_level": "O3"
},
"zero_optimization": {
"stage": 3,
"cpu_offload": true,
"cpu_offload_params" : true,
"contiguous_gradients": true,
"overlap_comm": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [
0.9,
0.999
],
"eps": 1e-6
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.001,
"warmup_num_steps": 50
}
},
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"steps_per_print":1
}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12418/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12417/comments | https://api.github.com/repos/huggingface/transformers/issues/12417/events | https://github.com/huggingface/transformers/pull/12417 | 932,806,357 | MDExOlB1bGxSZXF1ZXN0NjgwMTA4MTk2 | 12,417 | Raises an error when BertTokenizer is initialized from BertJapaneseTokenizer | {
"login": "europeanplaice",
"id": 38364983,
"node_id": "MDQ6VXNlcjM4MzY0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38364983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/europeanplaice",
"html_url": "https://github.com/europeanplaice",
"followers_url": "https://api.github.com/users/europeanplaice/followers",
"following_url": "https://api.github.com/users/europeanplaice/following{/other_user}",
"gists_url": "https://api.github.com/users/europeanplaice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/europeanplaice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/europeanplaice/subscriptions",
"organizations_url": "https://api.github.com/users/europeanplaice/orgs",
"repos_url": "https://api.github.com/users/europeanplaice/repos",
"events_url": "https://api.github.com/users/europeanplaice/events{/privacy}",
"received_events_url": "https://api.github.com/users/europeanplaice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12416
This PR makes `BertTokenizer` raise an error if it is initialized from a `BertJapaneseTokenizer` pretrained tokenizer.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12417",
"html_url": "https://github.com/huggingface/transformers/pull/12417",
"diff_url": "https://github.com/huggingface/transformers/pull/12417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12417.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12416/comments | https://api.github.com/repos/huggingface/transformers/issues/12416/events | https://github.com/huggingface/transformers/issues/12416 | 932,791,544 | MDU6SXNzdWU5MzI3OTE1NDQ= | 12,416 | BertTokenizer with BertJapaneseTokenizer pretrained model generates unintended tokenization. | {
"login": "europeanplaice",
"id": 38364983,
"node_id": "MDQ6VXNlcjM4MzY0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38364983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/europeanplaice",
"html_url": "https://github.com/europeanplaice",
"followers_url": "https://api.github.com/users/europeanplaice/followers",
"following_url": "https://api.github.com/users/europeanplaice/following{/other_user}",
"gists_url": "https://api.github.com/users/europeanplaice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/europeanplaice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/europeanplaice/subscriptions",
"organizations_url": "https://api.github.com/users/europeanplaice/orgs",
"repos_url": "https://api.github.com/users/europeanplaice/repos",
"events_url": "https://api.github.com/users/europeanplaice/events{/privacy}",
"received_events_url": "https://api.github.com/users/europeanplaice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you're raising a good issue that tokenizers will not tell you when they're instantiated from a checkpoint that doesn't have the same tokenizer architecture.\r\n\r\nHowever, I think this should be resolved for all tokenizers rather than for a single one, probably by checking the `tokenizer_class` inside the `config.json` and `tokenizer_config.json`. cc @SaulLu @sgugger ",
"> However, I think this should be resolved for all tokenizers rather than for a single one, probably by checking the `tokenizer_class` inside the `config.json` and `tokenizer_config.json`.\r\n\r\nI agree. `AutoTokenizer` can choose a tokenizer automatically, so until it is solved, I think that recommending a user uses `AutoTokenizer` is a better way to prevent the silent error. \r\n\r\n",
"Thank you very much for reporting this problem @europeanplaice :+1:. Indeed, I share your opinion, it would be better if a warning was logged if ever the class tokenizer used to load a pretrained tokenizer is not the same type.\r\n\r\nI also agree with @LysandreJik, it should be possible to find this information in `config.json` and/or` tokenizer_config.json` and this would allow to have a logged warning for all types of tokenizers. \r\n\r\n@europeanplaice, would you like to work on this? If you want, I could of course help you. Or if you don't have the time/want to, I can take over your PR in the next days and adapt it to this new approach. What do you think? :blush: ",
"@SaulLu Thank you for your offer. I want to try to tackle this problem.\r\nI plan to add something like below\r\nhttps://github.com/huggingface/transformers/blob/122d7dc34fd0e397a08b8a584a632fc57d3fd5d0/src/transformers/models/auto/tokenization_auto.py#L527-L551\r\n\r\nto `from_pretrained` in `PreTrainedTokenizerBase (tokenization_utils_base.py)` to make sure that we can check whether a user is trying to use different tokenizers between `cls` and `config.json or tokenizer_config.json`'s class before a tokenizer returns.\r\nIf this detected conflicts between them, a warning would be logged, or an error would occur.\r\n\r\nI want my PR to be in line with your overall plan, so I hope to get your opinion about this comment.\r\n",
"Thank you very much for offering to take care of this issue! \r\n\r\nFrom my point of view, what you described above sounds really great! :+1: ",
"I opened a new pull request about this issue. However, there is a point I couldn't overcome.\r\nIf `config.json` and/or `tokenizer_config.json` don't have information about the tokenizer's class, it's impossible to specify which model is correct.\r\n\r\nIn `AutoTokenizer`, it seems that TOKENIZER_MAPPING is used in this pattern, so I first intended to import `AutoTokenizer` in tokenization_utils_base.py, but it was a circular import. 😂\r\n\r\n\r\n "
] | 1,624 | 1,626 | 1,626 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Windows-10-10.0.19043-SP0
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
## Information
`BertTokenizer` initialized from a `BertJapaneseTokenizer` pretrained model produces unintended tokenization without any warning.
## To reproduce
Steps to reproduce the behavior:
Run
```python
EXAMPLE_BERT_JAPANESE_ID = "cl-tohoku/bert-base-japanese"
tokenizer = BertTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID)
print(tokenizer.tokenize("今日はいい天気ですね"))
```
## Expected behavior
```python
not_correct = BertTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID)
correct = BertJapaneseTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID)
print(not_correct.tokenize("今日はいい天気ですね"))
print(correct.tokenize("今日はいい天気ですね"))
```
Because the two tokenizers were made from the same pretrained model, the output should have been
```
['今日', 'は', 'いい', '天気', 'です', 'ね']
['今日', 'は', 'いい', '天気', 'です', 'ね']
```
or `BertTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID)` should have raised an error.
However, the actual result was
```
['今', '日', 'はい', '##い', '天', '気', 'です', '##ね']
['今日', 'は', 'いい', '天気', 'です', 'ね']
```
and no error or warning raised.
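For what it's worth, a possible interim workaround is to let `AutoTokenizer` pick the tokenizer class declared in the checkpoint's config (this assumes the Japanese tokenizer dependencies such as `fugashi` and `ipadic` are installed):
```python
from transformers import AutoTokenizer

# AutoTokenizer reads the tokenizer class from the checkpoint config,
# so it should return a BertJapaneseTokenizer here instead of a plain BertTokenizer.
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
print(type(tokenizer).__name__)
print(tokenizer.tokenize("今日はいい天気ですね"))
```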
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12415/comments | https://api.github.com/repos/huggingface/transformers/issues/12415/events | https://github.com/huggingface/transformers/pull/12415 | 932,777,819 | MDExOlB1bGxSZXF1ZXN0NjgwMDgyODgw | 12,415 | Added talks | {
"login": "suzana-ilic",
"id": 27798583,
"node_id": "MDQ6VXNlcjI3Nzk4NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/27798583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzana-ilic",
"html_url": "https://github.com/suzana-ilic",
"followers_url": "https://api.github.com/users/suzana-ilic/followers",
"following_url": "https://api.github.com/users/suzana-ilic/following{/other_user}",
"gists_url": "https://api.github.com/users/suzana-ilic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzana-ilic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzana-ilic/subscriptions",
"organizations_url": "https://api.github.com/users/suzana-ilic/orgs",
"repos_url": "https://api.github.com/users/suzana-ilic/repos",
"events_url": "https://api.github.com/users/suzana-ilic/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzana-ilic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome looks great!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | Added info on talks and speakers from our doc, will add remaining speaker info tomorrow (there's still a bit to be confirmed). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12415/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12415",
"html_url": "https://github.com/huggingface/transformers/pull/12415",
"diff_url": "https://github.com/huggingface/transformers/pull/12415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12415.patch",
"merged_at": 1624978877000
} |
https://api.github.com/repos/huggingface/transformers/issues/12414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12414/comments | https://api.github.com/repos/huggingface/transformers/issues/12414/events | https://github.com/huggingface/transformers/issues/12414 | 932,761,141 | MDU6SXNzdWU5MzI3NjExNDE= | 12,414 | Streaming mode in training examples | {
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @gsarti,\r\n\r\nThat's a great issue! We will provide at least one streaming example until Friday :-)",
"For future watchers, @patrickvonplaten is working on this in #12470! Thanks!",
"Note that it's actually `datasets` 1.9.0 that will probably be released on Monday that will feature Streaming.\r\nFor now it's still available on the `master` branch only !"
] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null | # 🚀 Feature request
Version 1.8.0 of `datasets` introduced [streaming mode](https://huggingface.co/docs/datasets/master/dataset_streaming.html). It would be very useful to include this mode in the training examples, either with a parameter `--stream_dataset` or an ad-hoc training script if radical changes are required to make it work.
## Motivation
Besides showcasing the potential of the new streaming feature, this would be very useful in the context of pre-training scripts (e.g., the new [`run_t5_mlm_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py)) where large datasets like OSCAR and C4 are commonly leveraged. It would be especially useful to have it for the Flax Community Event!
## Your contribution
I can submit a PR and work to integrate the feature in some of the Flax examples.
Would love to hear the opinion of @patrickvonplaten and @patil-suraj on whether this is relatively easy to pull off!
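For illustration, a minimal sketch of what streaming usage could look like, assuming a `datasets` version that supports the `streaming=True` flag (how exactly this would be wired into the Flax scripts is what this issue is asking about):
```python
from datasets import load_dataset

# Streaming yields examples lazily, so the full corpus never has to be downloaded or fit in memory.
dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

for i, example in enumerate(dataset):
    print(example["text"][:80])  # tokenize / group texts on the fly here
    if i == 2:
        break
```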
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12414/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/12414/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12413/comments | https://api.github.com/repos/huggingface/transformers/issues/12413/events | https://github.com/huggingface/transformers/issues/12413 | 932,698,898 | MDU6SXNzdWU5MzI2OTg4OTg= | 12,413 | Benchmark Colab does not work | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue is related directly to Google Colab environment. I ran same code on my local machine (CPU only) and whole benchmark has passed.\r\n\r\n```{python}\r\nfrom transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments\r\nargs = TensorFlowBenchmarkArguments(models=[\"bert-base-uncased\"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])\r\nbenchmark = TensorFlowBenchmark(args)\r\n\r\nresults = benchmark.run()\r\nprint(results)\r\n```\r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n bert-base-uncased 8 8 0.139 \r\n bert-base-uncased 8 32 0.369 \r\n bert-base-uncased 8 128 1.319 \r\n bert-base-uncased 8 512 5.523 \r\n--------------------------------------------------------------------------------\r\n\r\n==================== INFERENCE - MEMORY - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Memory in MB \r\n--------------------------------------------------------------------------------\r\n bert-base-uncased 8 8 1089 \r\n bert-base-uncased 8 32 1212 \r\n bert-base-uncased 8 128 1535 \r\n bert-base-uncased 8 512 1956 \r\n--------------------------------------------------------------------------------\r\n```",
"It seems that colab is killing your process, which may be due to a lack of resources. Does it still happen if you use a tiny model on small batch sizes & sequence lengths?",
"I have found out that there is a problem with the multiprocess benchmark on Colab. I added **multi_process = False** in args and the benchmark has passed.\r\n```\r\nargs = TensorFlowBenchmarkArguments(models=[\"bert-base-uncased\"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512],\r\n multi_process = False)\r\n```\r\n\r\nThe default value for the multi_process flag is True. I don't know why Colab is killing those processes. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | CONTRIBUTOR | null | https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/tensorflow/benchmarks.ipynb
(https://huggingface.co/transformers/benchmarks.html?highlight=benchmark)
Running the above Colab on a CPU crashes with:
```
1 / 1
Process killed. Error in Process
Process killed. Error in Process
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-df0caab4d791> in <module>()
----> 1 results = benchmark.run()
2 print(results)
/usr/local/lib/python3.7/dist-packages/transformers/benchmark/benchmark_utils.py in run(self)
705 if self.args.inference:
706 if self.args.memory:
--> 707 memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
708 inference_result_memory[model_name]["result"][batch_size][sequence_length] = memory
709 if self.args.speed:
ValueError: too many values to unpack (expected 2)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12413/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12412/comments | https://api.github.com/repos/huggingface/transformers/issues/12412/events | https://github.com/huggingface/transformers/pull/12412 | 932,554,342 | MDExOlB1bGxSZXF1ZXN0Njc5ODgzMDA1 | 12,412 | fix ids_to_tokens naming error in tokenizer of deberta v2 | {
"login": "hjptriplebee",
"id": 22477665,
"node_id": "MDQ6VXNlcjIyNDc3NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22477665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hjptriplebee",
"html_url": "https://github.com/hjptriplebee",
"followers_url": "https://api.github.com/users/hjptriplebee/followers",
"following_url": "https://api.github.com/users/hjptriplebee/following{/other_user}",
"gists_url": "https://api.github.com/users/hjptriplebee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hjptriplebee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hjptriplebee/subscriptions",
"organizations_url": "https://api.github.com/users/hjptriplebee/orgs",
"repos_url": "https://api.github.com/users/hjptriplebee/repos",
"events_url": "https://api.github.com/users/hjptriplebee/events{/privacy}",
"received_events_url": "https://api.github.com/users/hjptriplebee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
"ids_to_tokens" is named as "id_to_tokens" in tokenizer of deberta v2, which may cause an exception when "convert_ids_to_tokens" is called. So fix ids_to_tokens naming error in tokenizer of deberta v2.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12412/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12412",
"html_url": "https://github.com/huggingface/transformers/pull/12412",
"diff_url": "https://github.com/huggingface/transformers/pull/12412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12412.patch",
"merged_at": 1624968936000
} |
https://api.github.com/repos/huggingface/transformers/issues/12411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12411/comments | https://api.github.com/repos/huggingface/transformers/issues/12411/events | https://github.com/huggingface/transformers/issues/12411 | 932,552,566 | MDU6SXNzdWU5MzI1NTI1NjY= | 12,411 | 🌟 New model addition: FNet | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Somebody is already working on this, see #12335 ",
"Thanks @NielsRogge , weird that I didn't see it when I searched.",
"@cccntu I believe what you want for the JAX/Flax community week is a Flax model. It seems unlikely that I will finish the PR in the next week. Maybe, you can start working on the Flax model parallely?\r\n\r\nOr, we can discuss over slack and then try to finish both.\r\n\r\n@patil-suraj @patrickvonplaten wdyt? Is it easier to go from PyTorch to Flax? Or it doesn't matter at all? In case PT is needed, I am willing to spend my time next week on this and try to finish it.\r\n\r\n",
"@gchhablani Yes! I would love to add the Flax part. \r\n@patil-suraj @patrickvonplaten I have a few questions before I proceed: \r\n* There is no license in the original repo, should I email the authors for permission for code and weights? \r\n* How much of the original model code should I modify, other than wrapping it in huggingface/transformers classes?\r\nShould we refactor it for better weight alignment with pytorch code e.t.c?\r\n\r\nThanks!\r\n\r\n",
"Great @cccntu! Let's discuss over Slack."
] | 1,624 | 1,624 | null | CONTRIBUTOR | null | # 🌟 New model addition: FNet
FNet is a highly efficient Transformer-like encoder architecture, wherein the self-attention sublayers have been wholly replaced by standard, unparameterized Fourier Transforms.
I would like to help adding this!
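For context, the mixing sublayer described above amounts to a 2D discrete Fourier transform over the sequence and hidden dimensions, keeping only the real part. A rough PyTorch sketch (not the official implementation):
```python
import torch

def fourier_mixing(hidden_states: torch.Tensor) -> torch.Tensor:
    # hidden_states: (batch, seq_len, hidden); FFT over the hidden dim, then the sequence dim, keep the real part.
    return torch.fft.fft(torch.fft.fft(hidden_states, dim=-1), dim=-2).real

x = torch.randn(2, 8, 16)
print(fourier_mixing(x).shape)  # torch.Size([2, 8, 16])
```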
## Open source status
* [x] the model implementation is available: https://github.com/google-research/google-research/tree/master/f_net
* [x] the model weights are available: https://github.com/google-research/google-research/tree/master/f_net
* [x] who are the authors: (@ilyaeck @santiontanon) (Not sure, googled the authors' name + github, sorry if it's incorrect) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12411/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12410/comments | https://api.github.com/repos/huggingface/transformers/issues/12410/events | https://github.com/huggingface/transformers/issues/12410 | 932,510,878 | MDU6SXNzdWU5MzI1MTA4Nzg= | 12,410 | New Model: Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | {
"login": "neel04",
"id": 11617870,
"node_id": "MDQ6VXNlcjExNjE3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neel04",
"html_url": "https://github.com/neel04",
"followers_url": "https://api.github.com/users/neel04/followers",
"following_url": "https://api.github.com/users/neel04/following{/other_user}",
"gists_url": "https://api.github.com/users/neel04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neel04/subscriptions",
"organizations_url": "https://api.github.com/users/neel04/orgs",
"repos_url": "https://api.github.com/users/neel04/repos",
"events_url": "https://api.github.com/users/neel04/events{/privacy}",
"received_events_url": "https://api.github.com/users/neel04/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Code is out now:\r\n\r\nhttps://github.com/google-research/google-research/tree/master/charformer (please note the different url - compared to paper url)",
"An unofficial PyTorch implementation for Charformer https://github.com/lucidrains/charformer-pytorch",
"Thanks for the great work!\r\nWill charformer be supported in the near future?\r\n",
"Still not supported yet"
] | 1,624 | 1,664 | null | NONE | null | # 🌟 New model addition
## Model description
arXiv = https://arxiv.org/pdf/2106.12672.pdf (pre-print; under review)
In this paper, they introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. More importantly, they introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level.
> Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par and sometimes outperforming subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28%-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end.
## Open source status
* [ ] the model implementation is available: [Implementation and weights](https://github.com/google-research/google-research/charformer)
* [ ] the model weights are available: [to be released soon here](https://github.com/google-research/google-research/charformer)
* [x] who are the authors: Yi Tay, Vinh Q. Tran, Sebastian Ruder*, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler (Google; `*` DeepMind)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12410/reactions",
"total_count": 12,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12410/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12409/comments | https://api.github.com/repos/huggingface/transformers/issues/12409/events | https://github.com/huggingface/transformers/pull/12409 | 932,369,504 | MDExOlB1bGxSZXF1ZXN0Njc5NzIyOTQx | 12,409 | [Flax] Example scripts - correct weight decay | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
This PR corrects the weight decay in most flax examples.
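For context, the usual fix here is to mask bias and LayerNorm parameters out of the weight decay when constructing the optimizer — a rough sketch of that pattern with `optax` (illustrative, not necessarily the exact diff in this PR):
```python
import optax
from flax import traverse_util

def decay_mask_fn(params):
    # Apply weight decay everywhere except biases and LayerNorm scale parameters.
    flat_params = traverse_util.flatten_dict(params)
    flat_mask = {
        path: path[-1] != "bias" and path[-2:] != ("LayerNorm", "scale")
        for path in flat_params
    }
    return traverse_util.unflatten_dict(flat_mask)

# The mask callable is evaluated against the parameter tree when the optimizer is initialized.
optimizer = optax.adamw(learning_rate=3e-4, b1=0.9, b2=0.999, weight_decay=0.01, mask=decay_mask_fn)
```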
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12409",
"html_url": "https://github.com/huggingface/transformers/pull/12409",
"diff_url": "https://github.com/huggingface/transformers/pull/12409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12409.patch",
"merged_at": 1624964468000
} |
https://api.github.com/repos/huggingface/transformers/issues/12408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12408/comments | https://api.github.com/repos/huggingface/transformers/issues/12408/events | https://github.com/huggingface/transformers/issues/12408 | 932,237,455 | MDU6SXNzdWU5MzIyMzc0NTU= | 12,408 | Wrong logical operation | {
"login": "alberto-hong",
"id": 26590050,
"node_id": "MDQ6VXNlcjI2NTkwMDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/26590050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alberto-hong",
"html_url": "https://github.com/alberto-hong",
"followers_url": "https://api.github.com/users/alberto-hong/followers",
"following_url": "https://api.github.com/users/alberto-hong/following{/other_user}",
"gists_url": "https://api.github.com/users/alberto-hong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alberto-hong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alberto-hong/subscriptions",
"organizations_url": "https://api.github.com/users/alberto-hong/orgs",
"repos_url": "https://api.github.com/users/alberto-hong/repos",
"events_url": "https://api.github.com/users/alberto-hong/events{/privacy}",
"received_events_url": "https://api.github.com/users/alberto-hong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If it was an 'or' operation, then with `raw_datasets = [\"validation\"]`, the second part of the statement would return `True` which would raise an error"
] | 1,624 | 1,625 | 1,625 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12408/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/12407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12407/comments | https://api.github.com/repos/huggingface/transformers/issues/12407/events | https://github.com/huggingface/transformers/pull/12407 | 932,207,936 | MDExOlB1bGxSZXF1ZXN0Njc5NTg3NDAx | 12,407 | Validation split added: custom data files @sgugger, @patil-suraj | {
"login": "Souvic",
"id": 11485233,
"node_id": "MDQ6VXNlcjExNDg1MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/11485233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Souvic",
"html_url": "https://github.com/Souvic",
"followers_url": "https://api.github.com/users/Souvic/followers",
"following_url": "https://api.github.com/users/Souvic/following{/other_user}",
"gists_url": "https://api.github.com/users/Souvic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Souvic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Souvic/subscriptions",
"organizations_url": "https://api.github.com/users/Souvic/orgs",
"repos_url": "https://api.github.com/users/Souvic/repos",
"events_url": "https://api.github.com/users/Souvic/events{/privacy}",
"received_events_url": "https://api.github.com/users/Souvic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger \r\n@patil-suraj \r\n",
"@sgugger\r\n@patil-suraj\r\nMade the necessary changes.. please see and comment.",
"Hopefully done now!",
"All good now, thanks!"
] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null |
# What does this PR do?
Validation split added in case of no validation file and loading custom data for TensorFlow examples run_mlm.py file
Fixes # (issue)
Issue #12406 fixed.
Docs on language modeling TensorFlow updated.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12407",
"html_url": "https://github.com/huggingface/transformers/pull/12407",
"diff_url": "https://github.com/huggingface/transformers/pull/12407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12407.patch",
"merged_at": 1625160162000
} |
https://api.github.com/repos/huggingface/transformers/issues/12406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12406/comments | https://api.github.com/repos/huggingface/transformers/issues/12406/events | https://github.com/huggingface/transformers/issues/12406 | 932,202,354 | MDU6SXNzdWU5MzIyMDIzNTQ= | 12,406 | MLM training fails with no validation file | {
"login": "Souvic",
"id": 11485233,
"node_id": "MDQ6VXNlcjExNDg1MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/11485233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Souvic",
"html_url": "https://github.com/Souvic",
"followers_url": "https://api.github.com/users/Souvic/followers",
"following_url": "https://api.github.com/users/Souvic/following{/other_user}",
"gists_url": "https://api.github.com/users/Souvic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Souvic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Souvic/subscriptions",
"organizations_url": "https://api.github.com/users/Souvic/orgs",
"repos_url": "https://api.github.com/users/Souvic/repos",
"events_url": "https://api.github.com/users/Souvic/events{/privacy}",
"received_events_url": "https://api.github.com/users/Souvic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Log:\r\nGrouping texts in chunks of 512: 100% 87/87 [00:26<00:00, 3.25ba/s]\r\nTraceback (most recent call last):\r\n File \"./transformers/examples/tensorflow/language-modeling/run_mlm.py\", line 604, in <module>\r\n main()\r\n File \"./transformers/examples/tensorflow/language-modeling/run_mlm.py\", line 493, in main\r\n eval_dataset = tokenized_datasets[\"validation\"]\r\n File \"/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py\", line 37, in __getitem__\r\n return super().__getitem__(k)\r\nKeyError: 'validation'",
"Fixed."
] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
## Information
Model I am using (Bert, XLNet ...): distilbert-base-cased
The problem arises when using the official example scripts (details below).
The task I am working on is MLM fine-tuning.
## To reproduce
Steps to reproduce the behavior:
1. Run the TensorFlow language-modeling example on a custom training file only:
2. `python3 ./transformers/examples/tensorflow/language-modeling/run_mlm.py --model_name_or_path distilbert-base-cased --output_dir ./g --train_file "customdata.txt"`
3. The script fails with an error saying that no validation file is present (see the expected invocation sketched below).
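For reference, a hedged sketch of the invocation I would expect to work once a split is derived automatically — this assumes the TensorFlow script exposes the same `--validation_split_percentage` argument as the PyTorch `run_mlm.py` example:
```bash
python3 ./transformers/examples/tensorflow/language-modeling/run_mlm.py \
  --model_name_or_path distilbert-base-cased \
  --train_file "customdata.txt" \
  --validation_split_percentage 5 \
  --output_dir ./g
```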
## Expected behavior
The script should use the `validation_split_percentage` parameter to split the training set into training and evaluation samples. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12406/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12405/comments | https://api.github.com/repos/huggingface/transformers/issues/12405/events | https://github.com/huggingface/transformers/pull/12405 | 932,091,212 | MDExOlB1bGxSZXF1ZXN0Njc5NDg5MDg0 | 12,405 | Fix for the issue of device-id getting hardcoded for token_type-ids during Tracing for iBert | {
"login": "HamidShojanazeri",
"id": 9162336,
"node_id": "MDQ6VXNlcjkxNjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9162336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamidShojanazeri",
"html_url": "https://github.com/HamidShojanazeri",
"followers_url": "https://api.github.com/users/HamidShojanazeri/followers",
"following_url": "https://api.github.com/users/HamidShojanazeri/following{/other_user}",
"gists_url": "https://api.github.com/users/HamidShojanazeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamidShojanazeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamidShojanazeri/subscriptions",
"organizations_url": "https://api.github.com/users/HamidShojanazeri/orgs",
"repos_url": "https://api.github.com/users/HamidShojanazeri/repos",
"events_url": "https://api.github.com/users/HamidShojanazeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamidShojanazeri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for taking so long to get back to it, this one really fell through the cracks. Would you mind implementing a test for this like it was done with other models, for example in https://github.com/huggingface/transformers/pull/13350?\r\n\r\nThanks a lot!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,635 | 1,635 | CONTRIBUTOR | null | # What does this PR do?
This PR is part of a series of PRs that follow PR #11252 and applies similar changes to I-BERT.
Fixes issue #5664.
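For context, a rough sketch of the general pattern used across this series of PRs (illustrative toy module, not the exact diff here): the fallback `token_type_ids` tensor is registered as a buffer so that it follows the module's device and no device id gets baked into the trace:
```python
import torch
from torch import nn

class ToyEmbeddings(nn.Module):
    def __init__(self, max_len: int = 512):
        super().__init__()
        # A registered buffer moves with the module on .to(device),
        # so tracing does not capture a hard-coded device id for it.
        self.register_buffer("token_type_ids", torch.zeros((1, max_len), dtype=torch.long))

    def forward(self, input_ids: torch.Tensor, token_type_ids: torch.Tensor = None) -> torch.Tensor:
        batch_size, seq_length = input_ids.shape
        if token_type_ids is None:
            token_type_ids = self.token_type_ids[:, :seq_length].expand(batch_size, seq_length)
        return token_type_ids

module = ToyEmbeddings()
print(module(torch.ones(2, 4, dtype=torch.long)).shape)  # torch.Size([2, 4])
```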
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12405",
"html_url": "https://github.com/huggingface/transformers/pull/12405",
"diff_url": "https://github.com/huggingface/transformers/pull/12405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12405.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12404/comments | https://api.github.com/repos/huggingface/transformers/issues/12404/events | https://github.com/huggingface/transformers/issues/12404 | 931,994,007 | MDU6SXNzdWU5MzE5OTQwMDc= | 12,404 | [FLAX] Core dump using example code | {
"login": "peregilk",
"id": 9079808,
"node_id": "MDQ6VXNlcjkwNzk4MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peregilk",
"html_url": "https://github.com/peregilk",
"followers_url": "https://api.github.com/users/peregilk/followers",
"following_url": "https://api.github.com/users/peregilk/following{/other_user}",
"gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peregilk/subscriptions",
"organizations_url": "https://api.github.com/users/peregilk/orgs",
"repos_url": "https://api.github.com/users/peregilk/repos",
"events_url": "https://api.github.com/users/peregilk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peregilk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I got the same error, I temporarily fixed it with this [patch](https://katb.in/meow2590). \r\n",
"Thanks a lot. I was actually also able solve this issue by using flax from the git.\r\ngit clone https://github.com/google/flax.git\r\npip install --user -e flax\r\n",
"Thanks, I will try it too",
"Hey @Wikidepia,\r\n\r\nThanks a lot for your error report. I'm not really sure what is causing the error here. Are you using a TPU VMv3-8? It's important to make sure that jax/flax is installed correctly as explained here: https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm",
"Also could you maybe follow the guide here: https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries to make sure everything is correctly installed? The code should work as is - please let me know if you continue getting a core dump error.",
"Exactly. That turned out to be the issue. Was a bit confused because installing flax with pip install gives me flax version 0.3.4. Installing from git still gives version 0.3.4, but now it works. Thanks a lot.",
"> Hey @Wikidepia,\r\n> \r\n> Thanks a lot for your error report. I'm not really sure what is causing the error here. Are you using a TPU VMv3-8? It's important to make sure that jax/flax is installed correctly as explained here: https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm\r\n\r\nYes I'm using TPU VM, I might need to update transformers or just create new TPU\r\nThanks for your help :D ",
"Also pinging @avital @marcvanzee here in case they have seen something similar before - I don't think it's necessary to clone the flax repo to make it work on TPU VM no? Think one can just `pip install ...` it",
"I see you have updated the guide with additional info. Getting the TPU VMs up and running was a bit back and forth. I will do a fresh install of this in a couple of days, and check if I can reproduce this. Thanks a lot @patrickvonplaten for your response.",
"@patrickvonplaten. I have now reinstalled from scratch following the guides, including this https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm. The error I got was not caused by installing Flax from pip instead from source. My mistake. \r\n\r\nThe core dump was caused by not running \"pip install --upgrade clu\". Sorry for the confusion. ",
"No worries, thanks for documenting everything here! I'm sure it'll be helpful for others :-)",
"Hey @patrickvonplaten , I'm in the TRC program so I could also test some of scripts for TPU VM (before the projects starts on July, 7).",
"@stefan-it this would be great, actually. Could you check whether you can set up the libraries correctly according to:\r\nunshuffled_deduplicated_no\r\nand then run these steps: https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling#masked-language-modeling where instead of running it on **norwegian (no)** (which would take to long) could you run it on **Alemannic (als)**? So simply replacing all occurrences of `unshuffled_deduplicated_no` with `unshuffled_deduplicated_als` ?",
"Hi @patrickvonplaten ,\r\n\r\nI created a virtual environment (`venv') and followed the installation instructions from [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries).\r\n\r\nHere are my obserations:\r\n\r\nAfter installing `jax` there's a strange wheel output:\r\n\r\n```bash\r\nBuilding wheels for collected packages: jax \r\n Building wheel for jax (setup.py) ... error \r\n ERROR: Command errored out with exit status 1: \r\n command: /home/stefan/dev/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/tmp/pip-install-gl33kipc/jax/setup.py'\"'\"'; __file__='\"'\"'/tmp/pip-install-gl33kipc/ja\r\nx/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel\r\n -d /tmp/pip-wheel-itpzq66t cwd: /tmp/pip-install-gl33kipc/jax/ \r\n Complete output (6 lines): usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] \r\n or: setup.py --help [cmd1 cmd2 ...] \r\n or: setup.py --help-commands \r\n or: setup.py cmd --help \r\n \r\n error: invalid command 'bdist_wheel' \r\n ---------------------------------------- \r\n ERROR: Failed building wheel for jax Running setup.py clean for jax \r\nFailed to build jax \r\nInstalling collected packages: six, absl-py, numpy, opt-einsum, scipy, flatbuffers, jaxlib, libtpu-nightly, jax \r\n Running setup.py install for jax ... done \r\n```\r\n\r\nHowever, `jax` is installed but:\r\n\r\n```\r\nIn [1]: import jax\r\n/home/stefan/dev/lib/python3.8/site-packages/jax/__init__.py:27: UserWarning: cloud_tpu_init failed: ModuleNotFoundError(\"No module named 'requests'\")\r\n This a JAX bug; please report an issue at https://github.com/google/jax/issues\r\n _warn(f\"cloud_tpu_init failed: {repr(exc)}\\n This a JAX bug; please report \"\r\n```\r\n\r\nSo there's something wrong with dependency management from `jax`, I manually installed `requests` and it is working.\r\n\r\nThen I could run the tokenizer script, which was perfectly working. For the `run_mlm_flax.py` I just found this message:\r\n\r\n```bash\r\nTraceback (most recent call last): \r\n File \"./run_mlm_flax.py\", line 319, in <module> f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\" \r\n File \"/home/stefan/transformers/src/transformers/file_utils.py\", line 1641, in wrapper \r\n raise ImportError(f\"Method `{func.__name__}` requires PyTorch.\")\r\nImportError: Method `device` requires PyTorch.\r\n\r\n```\r\n\r\nOk, I did not install PyTorch, and this method is only used in a `logger.info` command, maybe we can write a small logic around it to not use a PyTorch-specific function. I did comment it out, so training could start.\r\n\r\nFor Alemannic, the following error is thrown after first epoch:\r\n\r\n```bash\r\n[18:40:31] - INFO - absl - Starting the local TPU driver. \r\n[18:40:31] - INFO - absl - Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local:// [18:40:31] - INFO - absl - Unable to initialize backend 'gpu': Not found: Could not find registered platform with name: \"cuda\". Available platform names are: TPU Interpreter Host \r\n \r\n[18:40:38] - INFO - absl - A polynomial schedule was set with a non-positive `transition_steps` value; this results in a constant schedule with value `init_value`. 
\r\n/home/stefan/dev/lib/python3.8/site-packages/jax/lib/xla_bridge.py:382: UserWarning: jax.host_count has been renamed to jax.process_count. This alias will eventually be removed; please upd\r\nate your code. \r\n warnings.warn( \r\n/home/stefan/dev/lib/python3.8/site-packages/jax/lib/xla_bridge.py:369: UserWarning: jax.host_id has been renamed to jax.process_index. This alias will eventually be removed; please update\r\n your code. warnings.warn( \r\nEpoch ... (1/18): 0%| | 0/18 [00:00<?, ?it/s]\r\nTraining...: 0%| | 0/4 [00:00<?, ?it/s]\r\nTraining...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [01:13<00:00, 18.38s/it]\r\nEpoch... (1/18 | Loss: [11.003486 11.003486 11.003486 11.003486 11.003486 11.003486 11.003486 \r\n 11.003486], Learning Rate: [9.0000685e-07 9.0000685e-07 9.0000685e-07 9.0000685e-07 9.0000685e-07 \r\n 9.0000685e-07 9.0000685e-07 9.0000685e-07]) \r\nEpoch ... (1/18): 0%| | 0/18 [01:15<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"/home/stefan/dev/lib/python3.8/site-packages/numpy/lib/shape_base.py\", line 867, in split len(indices_or_sections) \r\nTypeError: object of type 'int' has no len() \r\n During handling of the above exception, another exception occurred: \r\n \r\nTraceback (most recent call last): \r\n File \"./run_mlm_flax.py\", line 622, in <module> \r\n eval_batch_idx = generate_batch_splits(eval_samples_idx, eval_batch_size) \r\n File \"./run_mlm_flax.py\", line 268, in generate_batch_splits \r\n batch_idx = np.split(samples_idx, sections_split) \r\n File \"<__array_function__ internals>\", line 5, in split File \"/home/stefan/dev/lib/python3.8/site-packages/numpy/lib/shape_base.py\", line 871, in split \r\n if N % sections: \r\nZeroDivisionError: integer division or modulo by zero\r\n```\r\n\r\nMaybe the training corpus is just too small.\r\n\r\nI'm currently training a model for Amharic, and training is running (2 epochs and 1 evaluation phase) :)",
"Just a question - maybe it is documented already, but how should we deal with the limited hard disk space? \r\n\r\nI've seen discussions on the Alpha VM TPU Google channel, where it was suggested to use [gcsfuse](https://github.com/GoogleCloudPlatform/gcsfuse), but I've just seen your thread in the Google channel right now, so let's wait :)\r\n\r\n",
"@stefan-it did you install JAX as follows: \r\n\r\n```\r\npip install \"jax[tpu]>=0.2.16\" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html\r\n```\r\n\r\nor just via installing `transformers` from source? ",
"Regarding limited disk space - we are currenty working on a solution :-)",
"@patrickvonplaten yeah, I was using the `pip install` command as above (so before installing transformers library).\r\n\r\nIt seems that disk can be attached during creating of the VM, but not after creation, unfortunately, example [here](https://cloud.google.com/sdk/gcloud/reference/alpha/compute/tpus/tpu-vm/create#--data-disk).",
"I also got this core dump on TPUv3-8 and TPUv2-8 VMs. I'll try some of the proposed fixes tomorrow and post an update. \r\n@patil-suraj ",
"Alemanic is also a super small dataset so if your batch size is too large it might actually be bigger than the number of examples in the eval set",
"The alemanic `als` script really is just a dummy dataset and should be run with a small batch size (2 per device) for testing",
"Hi @stefan-it, for limited disk space I found a work around which don't need using any gsutil.\r\nSince TPU-VM has huge RAM (335gb), I mount part of it as a partition, and set HF_HOME to this mount partition **before** running any tokenizers (which cache the preprocessed dataset into $HF_HOME). For more specific:\r\n\r\n```bash\r\nmkdir $HOME/hfcache\r\nsudo mount -t tmpfs -o size=128000m tmpfs $HOME/hfcache # mount 125Gb RAM as disk\r\nexport HF_HOME=/home/lethanh/hfcache\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,629 | 1,629 | CONTRIBUTOR | null |
## Environment info
- `transformers` version: 4.8.1
- `flax` version: 0.3.4
- `python` version: 3.8.5
## Who can help
@patrickvonplaten
## Models:
FLAX - RoBERTa MLM
## Information
Following the official guides for creating VMs and TPUs:
https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm
Following this guide for training RoBERTa on the Norwegian OSCAR training set.
https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling
I am unable to run the run_mlm_flax.py script without getting a core dump. The same happens with the run_clm_flax.py script.
## Error message
```
tcmalloc: large alloc 435677134848 bytes == (nil) @ 0x7f61ae7be680 0x7f61ae7deff4 0x7f61ae2d5309 0x7f61ae2d6fb9 0x7f61ae2d7056 0x7f5e637fd659 0x7f5e59233a09 0x7f61ae9b2b8a 0x7f61ae9b2c91 0x7f61ae711915 0x7f61ae9b70bf 0x7f61ae7118b8 0x7f61ae9b65fa 0x7f61ae58634c 0x7f61ae7118b8 0x7f61ae711983 0x7f61ae586b59 0x7f61ae5863da 0x67299f 0x682dcb 0x684321 0x5c3cb0 0x5f257d 0x56fcb6 0x56822a 0x5f6033 0x56ef97 0x5f5e56 0x56a136 0x5f5e56 0x569f5e
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
https://symbolize.stripped_domain/r/?trace=7f61ae5f418b,7f61ae5f420f&map=
*** SIGABRT received by PID 8576 (TID 8576) on cpu 95 from PID 8576; stack trace: ***
PC: @ 0x7f61ae5f418b (unknown) raise
@ 0x7f5f7fb581e0 976 (unknown)
@ 0x7f61ae5f4210 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f61ae5f418b,7f5f7fb581df,7f61ae5f420f&map=ca1b7ab241ee28147b3d590cadb5dc1b:7f5f72e59000-7f5f7fe8bb20
E0628 20:40:48.745220 8576 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked.
E0628 20:40:48.745291 8576 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start.
E0628 20:40:48.745305 8576 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0628 20:40:48.745322 8576 coredump_hook.cc:447] RAW: Sending fingerprint to remote end.
E0628 20:40:48.745346 8576 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E0628 20:40:48.745362 8576 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E0628 20:40:48.745366 8576 coredump_hook.cc:525] RAW: Discarding core.
E0628 20:40:48.749975 8576 process_state.cc:771] RAW: Raising signal 6 with default behavior
Aborted (core dumped)
```
## To reproduce
Follow the guide.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12404/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12403/comments | https://api.github.com/repos/huggingface/transformers/issues/12403/events | https://github.com/huggingface/transformers/issues/12403 | 931,875,378 | MDU6SXNzdWU5MzE4NzUzNzg= | 12,403 | [Deepspeed][initialization] pegasus: unable to load/init the weights | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thank you for the report, @sajastu \r\n\r\nCould you please adjust the command line in your report so that it uses some small public dataset and not custom files which we don't have?\r\n\r\nThen I will sort it out.\r\n\r\nThank you.\r\n",
"Sure thing! @stas00 \r\n\r\nPlease let me modify the script, and then test so that it runs flawlessly. I'll give you an update shortly!",
"I was able to reproduce the problem with:\r\n```\r\nexport BS=16; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 \\\r\nexamples/pytorch/summarization/run_summarization.py --model_name_or_path \\\r\ngoogle/pegasus-cnn_dailymail --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing \\\r\n0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 \\\r\n--max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size \\\r\n$BS --predict_with_generate --sortish_sampler --dataset_name cnn_dailymail --dataset_config \"3.0.0\" \\\r\n--val_max_target_length 128 --warmup_steps 50 --max_train_samples 50 --max_eval_samples 50 \\\r\n--deepspeed tests/deepspeed/ds_config_zero3.json\r\n```\r\nSo nothing else needs to be done by your side.\r\n",
"so the quick fix is:\r\n```\r\n--- a/src/transformers/models/pegasus/modeling_pegasus.py\r\n+++ b/src/transformers/models/pegasus/modeling_pegasus.py\r\n@@ -26,6 +26,7 @@ from torch import nn\r\n from torch.nn import CrossEntropyLoss\r\n\r\n from ...activations import ACT2FN\r\n+from ...deepspeed import is_deepspeed_zero3_enabled\r\n from ...file_utils import (\r\n add_end_docstrings,\r\n add_start_docstrings,\r\n@@ -109,7 +110,13 @@ class PegasusSinusoidalPositionalEmbedding(nn.Embedding):\r\n\r\n def __init__(self, num_positions: int, embedding_dim: int, padding_idx: Optional[int] = None):\r\n super().__init__(num_positions, embedding_dim)\r\n- self.weight = self._init_weight(self.weight)\r\n+ if is_deepspeed_zero3_enabled():\r\n+ import deepspeed\r\n+ with deepspeed.zero.GatheredParameters(self.weight, modifier_rank=0):\r\n+ self.weight = self._init_weight(self.weight)\r\n+ else:\r\n+ self.weight = self._init_weight(self.weight)\r\n+\r\n\r\n @staticmethod\r\n def _init_weight(out: nn.Parameter):\r\n```\r\n\r\nLet me know if you can handle the diff.\r\n\r\nI will work on a normal PR and test. Ideally should think of something that requires less code changes, but it will do the right thing for now.",
"@stas00 Thanks. It works perfectly now! ",
"thank you for validating that it works for you.\r\n\r\nI'm trying to have this solved on the deepspeed side, so that all our models will work w/o needing to change each one of them separately. so I will keep you posted on the progress.",
"If you want to try the fix on the deepspeed side, instead of the workaround on transformers side, you can try this branch:\r\nhttps://github.com/microsoft/DeepSpeed/pull/1202\r\n",
"https://github.com/microsoft/DeepSpeed/pull/1202 has been merged, so if you use the master version of deepspeed, you no longer need the workaround I shared with you.\r\n\r\nI will close this, but if you still encounter any problems please feel free to re-open."
] | 1,624 | 1,625 | 1,625 | NONE | null | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Ubuntu
- Python version: 3.8
- PyTorch version (GPU?): Y
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y
_- Deepspeed version: deepspeed 0.4.1 (installed with pip)_
@stas00,
## Information
I'm trying to fine-tune the pegasus-large model using deepspeed with multiple GPUs. It seems that deepspeed is unable to initialize the weights at the beginning. When I remove deepspeed, the weights are initialized properly. I suspect this is a bug related to the deepspeed integration. Details are given below.
The command:
```
deepspeed --num_gpus=8 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path google/pegasus-large \
--do_train \
--do_eval \
--do_predict \
--output_dir /home/code-base/user_space/saved_models/pegasus/reddit-xsum-1024-tuned/ \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=4 \
--learning_rate 3e-5 \
--weight_decay 0.01 \
--adam_beta2 0.98 \
--num_train_epochs 10 \
--overwrite_output_dir \
--predict_with_generate \
--evaluation_strategy steps --eval_steps 1000 --save_steps 1000 --warmup_steps 10000 \
--text_column document \
--summary_column summary \
--train_file $DS_BASE_DIR_P/train.json \
--validation_file $DS_BASE_DIR_P/validation.json \
--test_file $DS_BASE_DIR_P/test.json \
--deepspeed ds_config.json
```
Error message:
```
...
Traceback (most recent call last):
File "examples/pytorch/summarization/run_summarization.py", line 617, in <module>
main()
File "examples/pytorch/summarization/run_summarization.py", line 355, in main
model = AutoModelForSeq2SeqLM.from_pretrained(
File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/auto/auto_factory.py", line 395, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/modeling_utils.py", line 1176, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper
f(module, *args, **kwargs)
File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 1209, in __init__
self.model = PegasusModel(config)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper
f(module, *args, **kwargs)
File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 1082, in __init__
self.encoder = PegasusEncoder(config, self.shared)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper
f(module, *args, **kwargs)
File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 652, in __init__
self.embed_positions = PegasusSinusoidalPositionalEmbedding(
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper
f(module, *args, **kwargs)
File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 114, in __init__
self.weight = self._init_weight(self.weight)
File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 122, in _init_weight
n_pos, dim = out.shape
ValueError: not enough values to unpack (expected 2, got 1)
Killing subprocess 3351
Killing subprocess 3352
Killing subprocess 3353
Killing subprocess 3354
Killing subprocess 3355
Killing subprocess 3356
Killing subprocess 3357
Killing subprocess 3358
...
```
- `ds_config.json` is Zero3 copied from the repository.
- I checked `out` (the weight tensor passed to `_init_weight`): with `deepspeed` its shape is `[1]` and it only contains a 1-d tensor with value 1. However, in a single-GPU environment, the shape is `[1024, 1024]` and it contains floating-point numbers (i.e., much like embeddings). See the sketch below for context.
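For context, here is a sketch of the workaround shared in the comments of this issue. The code below is an excerpt meant to live inside `PegasusSinusoidalPositionalEmbedding.__init__` in `modeling_pegasus.py`, not a standalone script: under ZeRO stage 3 the weight is partitioned across ranks, which is why `out.shape` is `[1]`, and it has to be gathered before `_init_weight` can read the full shape.
```py
from transformers.deepspeed import is_deepspeed_zero3_enabled

if is_deepspeed_zero3_enabled():
    import deepspeed

    # gather the partitioned parameter so its full [n_pos, dim] shape is visible
    with deepspeed.zero.GatheredParameters(self.weight, modifier_rank=0):
        self.weight = self._init_weight(self.weight)
else:
    self.weight = self._init_weight(self.weight)
```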
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below) --reddit_tifu_long
## To reproduce
Steps to reproduce the behavior:
1. Running the above command with deepspeed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12403/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12402/comments | https://api.github.com/repos/huggingface/transformers/issues/12402/events | https://github.com/huggingface/transformers/pull/12402 | 931,866,171 | MDExOlB1bGxSZXF1ZXN0Njc5MjkyODg5 | 12,402 | [Flax][WIP] added Flax Pegasus Models | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @bhadreshpsavani thanks for the PR, let us know if you want to continue working on this :) ",
"Sure @patil-suraj,\nI want to continue on it but I will need some help from you I think!\nShall I contact you slack If I need any guidance from you?\nI will update this PR today with latest changes!",
"Sounds good, happy to help :)",
"Given that we have a cookie-cutter example now, it might be worth actually starting from scratch using the cookie-cutter that is based on BART - will probably be more efficient",
"Hi @patrickvonplaten \nI will start from scratch! That would be better.\nI create another PR, that will be fine right?",
"This one is being handled by above mentioned PR"
] | 1,624 | 1,631 | 1,631 | CONTRIBUTOR | null | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patil-suraj @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12402",
"html_url": "https://github.com/huggingface/transformers/pull/12402",
"diff_url": "https://github.com/huggingface/transformers/pull/12402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12402.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12401/comments | https://api.github.com/repos/huggingface/transformers/issues/12401/events | https://github.com/huggingface/transformers/pull/12401 | 931,847,800 | MDExOlB1bGxSZXF1ZXN0Njc5Mjc3MjE1 | 12,401 | [Deepspeed] match the trainer log level | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR sets the trainer log level for Deepspeed, so the whole application runs on the same log level.
Once https://github.com/microsoft/DeepSpeed/pull/1190 is merged, running the trainer with `--log_level error --log_level_replica error` under deepspeed is absolutely silent; it just gives you the training results.
Well, minus the lame `tensorflow` info logs, which refuse to be respectful of the ecosphere,
and pt-1.9.0's distributed launch, which also has the wrong default log level, but that will be fixed in 1.9.1.
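As an illustration (not part of this PR's diff; the keyword names below simply mirror the CLI flags mentioned above), the same levels can also be set programmatically:
```py
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output_dir",
    log_level="error",          # log level for the main process
    log_level_replica="error",  # log level for replica processes
)
```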
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12401",
"html_url": "https://github.com/huggingface/transformers/pull/12401",
"diff_url": "https://github.com/huggingface/transformers/pull/12401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12401.patch",
"merged_at": 1624905805000
} |
https://api.github.com/repos/huggingface/transformers/issues/12400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12400/comments | https://api.github.com/repos/huggingface/transformers/issues/12400/events | https://github.com/huggingface/transformers/pull/12400 | 931,767,260 | MDExOlB1bGxSZXF1ZXN0Njc5MjA5MzM0 | 12,400 | [WIP] train tokenizer like test | {
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | Suggestion to test the special tokens mapping in the common tests.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12400",
"html_url": "https://github.com/huggingface/transformers/pull/12400",
"diff_url": "https://github.com/huggingface/transformers/pull/12400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12400.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12399/comments | https://api.github.com/repos/huggingface/transformers/issues/12399/events | https://github.com/huggingface/transformers/issues/12399 | 931,719,422 | MDU6SXNzdWU5MzE3MTk0MjI= | 12,399 | Reference postprocess_qa_predictions score method | {
"login": "LozanoAlvarezb",
"id": 76513765,
"node_id": "MDQ6VXNlcjc2NTEzNzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/76513765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LozanoAlvarezb",
"html_url": "https://github.com/LozanoAlvarezb",
"followers_url": "https://api.github.com/users/LozanoAlvarezb/followers",
"following_url": "https://api.github.com/users/LozanoAlvarezb/following{/other_user}",
"gists_url": "https://api.github.com/users/LozanoAlvarezb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LozanoAlvarezb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LozanoAlvarezb/subscriptions",
"organizations_url": "https://api.github.com/users/LozanoAlvarezb/orgs",
"repos_url": "https://api.github.com/users/LozanoAlvarezb/repos",
"events_url": "https://api.github.com/users/LozanoAlvarezb/events{/privacy}",
"received_events_url": "https://api.github.com/users/LozanoAlvarezb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | To the best of my knowledge, the literature on question answering computes the probability of a span by first computing, independently, the probability of each token being the start token or the end token, and then multiplying those probabilities. In contrast, the [postprocess_qa_predictions](https://github.com/huggingface/transformers/blob/57461ac0b4e4f7349c2437fcf8d4115014d6ceda/examples/pytorch/question-answering/utils_qa.py#L31) function computes the probability of an answer by first summing the start and end logits, ranking the sums, and applying the softmax over the top n_best scores.
Is there any reference in the literature supporting this?
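For concreteness, here is a small sketch (added for illustration; the logits and span choices are made up, not taken from the script) of the two scoring rules being contrasted:
```py
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# hypothetical logits for a 3-token context
start_logits = np.array([2.0, 0.5, -1.0])
end_logits = np.array([1.5, -0.5, 0.0])

# (a) the formulation usually described in the literature:
# independent start/end probabilities, multiplied per span
p_start, p_end = softmax(start_logits), softmax(end_logits)
prob_span_0_1 = p_start[0] * p_end[1]

# (b) what the example script does, as described above: rank candidate spans
# by start_logit + end_logit, then softmax only over the n_best summed scores
candidate_spans = [(0, 1), (0, 2), (1, 2)]
summed_scores = np.array([start_logits[i] + end_logits[j] for i, j in candidate_spans])
probs_over_n_best = softmax(summed_scores)
```
Note that `log p_start + log p_end` equals `start_logit + end_logit` up to a per-example constant, so ranking by the summed logits yields the same ordering as multiplying the independent probabilities; the two approaches differ mainly in how the final scores are normalized (over the `n_best` candidates rather than over all spans).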
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12399/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12398/comments | https://api.github.com/repos/huggingface/transformers/issues/12398/events | https://github.com/huggingface/transformers/pull/12398 | 931,684,685 | MDExOlB1bGxSZXF1ZXN0Njc5MTQwOTE4 | 12,398 | [Flax community event] Add more description to readme | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for the feedback!"
] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds information and tips for team work.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12398/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12398",
"html_url": "https://github.com/huggingface/transformers/pull/12398",
"diff_url": "https://github.com/huggingface/transformers/pull/12398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12398.patch",
"merged_at": 1624897122000
} |
https://api.github.com/repos/huggingface/transformers/issues/12397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12397/comments | https://api.github.com/repos/huggingface/transformers/issues/12397/events | https://github.com/huggingface/transformers/pull/12397 | 931,658,707 | MDExOlB1bGxSZXF1ZXN0Njc5MTE5MjQ1 | 12,397 | [RoFormer] Fix some issues | {
"login": "JunnYu",
"id": 50394665,
"node_id": "MDQ6VXNlcjUwMzk0NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/50394665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JunnYu",
"html_url": "https://github.com/JunnYu",
"followers_url": "https://api.github.com/users/JunnYu/followers",
"following_url": "https://api.github.com/users/JunnYu/following{/other_user}",
"gists_url": "https://api.github.com/users/JunnYu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JunnYu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JunnYu/subscriptions",
"organizations_url": "https://api.github.com/users/JunnYu/orgs",
"repos_url": "https://api.github.com/users/JunnYu/repos",
"events_url": "https://api.github.com/users/JunnYu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JunnYu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok thanks, I just will let Lysandre review the `try:except` block, that's the only thing remaining.",
"> This looks good, but why did you change `rjieba` to `jieba`? Is the latter better?\r\n\r\n@LysandreJik\r\nI want to use `Hosted inference API` in https://huggingface.co. \r\n\r\n\r\nI found `CpmTokenizer` and `XLMTokenizer` use jieba.\r\nhttps://github.com/huggingface/transformers/blob/fb41f9f50c37aba0eced055323ba17e4203f7d57/src/transformers/models/cpm/tokenization_cpm.py#L31\r\nhttps://github.com/huggingface/transformers/blob/b24ead87e1be6bce17e4ec5c953b6d028e4b3af7/src/transformers/models/xlm/tokenization_xlm.py#L530",
"cc @Narsil ",
"@JunnYu Should work now: \r\n\r\nhttps://huggingface.co/junnyu/roformer_chinese_base?text=%E7%94%9F%E6%B4%BB%E7%9A%84%E7%9C%9F%E8%B0%9B%E6%98%AF+%5BMASK%5D%E3%80%82\r\n\r\nWe do update the API regularly with dependencies, `rjieba`was added pretty recently. \r\nCheers ! ",
"@Narsil thank you!",
"@LysandreJik \r\n(1) Now I use `rust jieba` . \r\n(2) This pr https://github.com/huggingface/transformers/pull/12361 add `test_training_new_tokenizer` and `test_training_new_tokenizer_with_special_tokens_change`. \r\nI found `test_tokenization_roformer.py` can't pass these two tests. Because `roformer tokenizer` has a custom PreTokenizer. \r\nhttps://github.com/huggingface/transformers/blob/e2c1dd09667af5a535689c371b4658c36681131f/src/transformers/convert_slow_tokenizer.py#L318"
] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
- add RoFormerTokenizerFast into AutoTokenizer
- fix typo in roformer docs
- Fix #12000 and make onnx export happy
- update RoFormerConfig embedding_size
- use jieba instead of rjieba so that we can enjoy the "Hosted inference API" on huggingface.co
- fix #12244 and make test_alignement pass
- update roformer ARCHIVE_MAP
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12397/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12397",
"html_url": "https://github.com/huggingface/transformers/pull/12397",
"diff_url": "https://github.com/huggingface/transformers/pull/12397.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12397.patch",
"merged_at": 1625556717000
} |
https://api.github.com/repos/huggingface/transformers/issues/12396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12396/comments | https://api.github.com/repos/huggingface/transformers/issues/12396/events | https://github.com/huggingface/transformers/issues/12396 | 931,652,860 | MDU6SXNzdWU5MzE2NTI4NjA= | 12,396 | getting error with BertForMaskedLM | {
"login": "gems047",
"id": 10912073,
"node_id": "MDQ6VXNlcjEwOTEyMDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/10912073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gems047",
"html_url": "https://github.com/gems047",
"followers_url": "https://api.github.com/users/gems047/followers",
"following_url": "https://api.github.com/users/gems047/following{/other_user}",
"gists_url": "https://api.github.com/users/gems047/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gems047/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gems047/subscriptions",
"organizations_url": "https://api.github.com/users/gems047/orgs",
"repos_url": "https://api.github.com/users/gems047/repos",
"events_url": "https://api.github.com/users/gems047/events{/privacy}",
"received_events_url": "https://api.github.com/users/gems047/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, do you have a reproducible code example that showcases what's in your variables such as `input_embeddings`? Thank you!",
"Did not mention parameters like input_embeddings, however input looks like below:\r\n\r\n\r\n\r\ntensor([[[-0.0129, -0.0099, 0.2570, ..., -0.1841, 0.3609, 0.2851],\r\n [-0.1268, 0.0133, 0.2927, ..., -0.1167, 0.4605, -0.1288],\r\n [-0.9993, 0.4705, 0.5119, ..., -0.9274, 0.5529, -0.5890],\r\n ...,\r\n [-0.4415, 0.0786, 0.3132, ..., -0.2137, -0.0387, 0.2496],\r\n [ 0.2243, 0.2535, 0.2158, ..., -0.0974, -0.1830, 0.1292],\r\n [ 0.0239, -0.2080, 0.4332, ..., -0.2069, 0.0078, 0.2262]],\r\n\r\n [[-0.2665, 0.1647, 0.4427, ..., 0.0847, 0.4180, 0.7866],\r\n [ 0.1999, -0.3408, 0.4952, ..., -0.3468, 0.4271, -0.5220],\r\n [-0.6198, 0.1422, 0.5547, ..., -0.1745, -0.0165, -0.4338],\r\n ...,\r\n [-0.0902, 0.1044, 0.2038, ..., -0.0335, 0.4127, 0.2904],\r\n [-0.0747, 0.0279, 0.2409, ..., -0.0989, 0.0915, 0.0109],\r\n [ 0.2910, 0.1765, 0.3457, ..., 0.0559, 0.0067, -0.0191]],\r\n\r\n [[-0.0183, -0.0937, 0.6092, ..., -0.4594, 0.2707, 0.1108],\r\n [ 0.5192, -0.0532, 0.4865, ..., 0.1216, 0.0658, 0.5460],\r\n [-0.0984, -0.1430, 0.3035, ..., -0.0563, 0.3445, -0.2272],\r\n ...,\r\n [ 0.1298, -0.1624, 0.1905, ..., 0.0979, -0.0197, -0.3143],\r\n [-0.3790, 0.0682, 0.0601, ..., 0.0266, -0.1095, -0.2442],\r\n [-0.0352, -0.0526, 0.1690, ..., 0.0723, 0.1064, -0.2718]],\r\n\r\n ...,\r\n\r\n [[-0.3095, -0.3042, 0.2681, ..., -0.1081, -0.0650, 0.3146],\r\n [ 0.3054, 0.0550, 0.1716, ..., -0.1492, -0.0201, -0.1543],\r\n [-0.4458, 0.0661, 0.2862, ..., -0.2693, 0.3367, 0.0015],\r\n ...,\r\n [-0.3311, -0.0961, 0.2018, ..., 0.0840, -0.1578, 0.3397],\r\n [-0.0362, 0.0713, 0.4921, ..., 0.0881, 0.0501, -0.1048],\r\n [-0.0793, -0.1054, 0.1489, ..., -0.0762, -0.0039, -0.0471]],\r\n\r\n [[ 0.2076, -0.4345, 0.0533, ..., -0.0296, -0.1365, -0.1304],\r\n [ 0.5159, 0.3230, 0.6001, ..., -0.4266, -0.1751, -0.6830],\r\n [ 0.2633, -0.0747, 0.6887, ..., -0.5294, 0.4353, -0.3712],\r\n ...,\r\n [ 0.3373, 0.2944, 0.3050, ..., -0.0972, -0.1798, -0.2998],\r\n [-0.3282, 0.1189, 0.3962, ..., -0.2579, -0.2661, -0.0275],\r\n [-0.0706, 0.0654, 0.6177, ..., -0.1825, 0.0214, -0.1656]],\r\n\r\n [[ 0.0669, -0.3896, -0.0204, ..., -0.2962, -0.3721, -0.0138],\r\n [ 0.4778, 0.1336, 0.5360, ..., -0.0931, -0.3350, -0.3153],\r\n [ 0.3600, -0.2580, 0.1261, ..., 0.0296, -0.0979, 0.1038],\r\n ...,\r\n [ 0.0821, 0.0034, 0.2967, ..., -0.1719, -0.2646, -0.1868],\r\n [-0.0868, 0.4321, 0.0466, ..., 0.2056, -0.4406, -0.1953],\r\n [-0.1131, -0.1266, 0.1438, ..., -0.3065, -0.2185, 0.1069]]],\r\n device='cuda:1', grad_fn=<SliceBackward>)\r\n\r\nWill check how to put reproducible code example",
"weirdly it seems that your input embeddings are treated as input IDs, which should not happen. Can you let me know the result of `transforemrs-cli env` in your environment?",
"I get below result for transformers-cli env :\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.1.1\r\n- Platform: Linux-4.15.0-107-generic-x86_64-with-debian-stretch-sid\r\n- Python version: 3.7.4\r\n- PyTorch version (GPU?): 1.9.0+cu102 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n",
"Ah, does you error come from `x2 = self.x1(sequence_output)` ? What are you trying to do here, are you passing the BERT output as input IDs to a BERT Masked LM model?",
"Changed this line x2 = self.x1(sequence_output) to\r\n\r\nx2 = self.x1(inputs_embeds=sequence_output) as suggested above.\r\n\r\nIt works now , Thankyou",
"Can I change this code \r\n\r\n```py\r\nself.bert = BertModel.from_pretrained(\"bert-base-uncased\")\r\nsequence_output = self.bert(\r\n inputs_embeds=input_embeddings,\r\n position_ids=position_ids,\r\n token_type_ids=token_type_ids,\r\n attention_mask=attention_mask,\r\n)[0][:, max_ids:, :]\r\n```\r\n\r\nas below:\r\n\r\n```py\r\nself.bert = BertModel.from_pretrained(\"bert-base-uncased\")\r\nsequence_output = self.bert(\r\n inputs_embeds= position_embeddings,\r\n #position_ids=position_ids,\r\n token_type_ids=token_type_ids,\r\n attention_mask=attention_mask,\r\n)[0][:, max_ids:, :]\r\n```\r\n\r\nie assign position_embeddings to inputs_embeds\r\nor like this `input_embeds = input_embeddings + position_embeddings`",
"No, the inputs embedding are only for the input IDs. Position embeddings will be added to that variable in the embedding layer.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | config = BertConfig.from_pretrained("bert-base-uncased")
self.bert = BertModel.from_pretrained("bert-base-uncased")
# run the base BERT model on pre-computed input embeddings
sequence_output = self.bert(
    inputs_embeds=input_embeddings,
    position_ids=position_ids,
    token_type_ids=token_type_ids,
    attention_mask=attention_mask,
)[0][:, max_ids:, :]
self.x1 = BertForMaskedLM(config)
# the failing call: sequence_output is a float tensor but is passed positionally as input_ids
x2 = self.x1(sequence_output)
While running the above code, I get the error below at the last line (`x2 = self.x1(sequence_output)`). I am unable to relate the error to the code sequence or understand why it occurs. Is there any issue with `BertForMaskedLM`?
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "<ipython-input-9-d980e37a9621>", line 90, in forward
x_scores = self.x_head(sequence_output).to(self.device)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1185, in forward
return_dict=return_dict,
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 862, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 198, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/functional.py", line 2043, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
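The resolution reached in the comments above boils down to passing the encoder output through `inputs_embeds` rather than as token ids; the sketch below shows only the single changed call and reuses the reporter's variable names, which are assumed from the snippet above.

```python
# Hidden states are float tensors, so feed them as embeddings, not as input_ids (sketch).
x2 = self.x1(inputs_embeds=sequence_output)
```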
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12395/comments | https://api.github.com/repos/huggingface/transformers/issues/12395/events | https://github.com/huggingface/transformers/pull/12395 | 931,624,626 | MDExOlB1bGxSZXF1ZXN0Njc5MDkxMzEy | 12,395 | Minor fixes in original RAG training script | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
1. Made a minor fix to the original RAG fine-tuning script so that it can train on distributed GPU architectures (multiple nodes).
2. Corrected a typo in callbacks_rag.py
Who can review?
@lhoestq @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12395/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12395",
"html_url": "https://github.com/huggingface/transformers/pull/12395",
"diff_url": "https://github.com/huggingface/transformers/pull/12395.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12395.patch",
"merged_at": 1624970388000
} |
https://api.github.com/repos/huggingface/transformers/issues/12394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12394/comments | https://api.github.com/repos/huggingface/transformers/issues/12394/events | https://github.com/huggingface/transformers/pull/12394 | 931,585,591 | MDExOlB1bGxSZXF1ZXN0Njc5MDU4OTI2 | 12,394 | Remove the need for `einsum` in Albert's attention computation | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We did some with Lysandre, but I can rerun some more checks before we merge, just to be sure we are not breaking anything 👍🏻 "
] | 1,624 | 1,624 | 1,624 | MEMBER | null | This change makes it easier to optimize the model with export libraries such as ONNX and/or TensorRT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12394",
"html_url": "https://github.com/huggingface/transformers/pull/12394",
"diff_url": "https://github.com/huggingface/transformers/pull/12394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12394.patch",
"merged_at": 1624897806000
} |
https://api.github.com/repos/huggingface/transformers/issues/12393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12393/comments | https://api.github.com/repos/huggingface/transformers/issues/12393/events | https://github.com/huggingface/transformers/pull/12393 | 931,487,293 | MDExOlB1bGxSZXF1ZXN0Njc4OTc1ODU3 | 12,393 | [example/flax] add summarization readme | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good, maybe also add a `requirements.txt` file? "
] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
This PR adds a readme with instructions for the summarization example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12393/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12393",
"html_url": "https://github.com/huggingface/transformers/pull/12393",
"diff_url": "https://github.com/huggingface/transformers/pull/12393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12393.patch",
"merged_at": 1624955553000
} |
https://api.github.com/repos/huggingface/transformers/issues/12392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12392/comments | https://api.github.com/repos/huggingface/transformers/issues/12392/events | https://github.com/huggingface/transformers/pull/12392 | 931,448,273 | MDExOlB1bGxSZXF1ZXN0Njc4OTQyNjk3 | 12,392 | GLM Model implementation [WIP] | {
"login": "spatil6",
"id": 6419011,
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spatil6",
"html_url": "https://github.com/spatil6",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"repos_url": "https://api.github.com/users/spatil6/repos",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"I can review this! Ping me when you are ready.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Gently pinging @spatil6 :) ",
"> Gently pinging @spatil6 :)\r\n\r\nYeah, i'm on it. will have update by next week,"
] | 1,624 | 1,648 | null | CONTRIBUTOR | null | #11377
Started implementation of GLM model.
@patil-suraj
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12392/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12392/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12392",
"html_url": "https://github.com/huggingface/transformers/pull/12392",
"diff_url": "https://github.com/huggingface/transformers/pull/12392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12392.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12391/comments | https://api.github.com/repos/huggingface/transformers/issues/12391/events | https://github.com/huggingface/transformers/pull/12391 | 931,445,886 | MDExOlB1bGxSZXF1ZXN0Njc4OTQwNjE4 | 12,391 | [Flax] Adapt flax examples to include `push_to_hub` | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj, I think we should also add a `README.md` for the summarization example (this will probs be used a lot during the sprint)."
] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
This PR adapts all Flax examples to automatically push trained checkpoints to the hub | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12391/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12391",
"html_url": "https://github.com/huggingface/transformers/pull/12391",
"diff_url": "https://github.com/huggingface/transformers/pull/12391.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12391.patch",
"merged_at": 1624904615000
} |
https://api.github.com/repos/huggingface/transformers/issues/12390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12390/comments | https://api.github.com/repos/huggingface/transformers/issues/12390/events | https://github.com/huggingface/transformers/issues/12390 | 931,374,013 | MDU6SXNzdWU5MzEzNzQwMTM= | 12,390 | `fill-mask` pipeline provides `<mask>` token among predictions | {
"login": "rspreafico-absci",
"id": 83304116,
"node_id": "MDQ6VXNlcjgzMzA0MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/83304116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rspreafico-absci",
"html_url": "https://github.com/rspreafico-absci",
"followers_url": "https://api.github.com/users/rspreafico-absci/followers",
"following_url": "https://api.github.com/users/rspreafico-absci/following{/other_user}",
"gists_url": "https://api.github.com/users/rspreafico-absci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rspreafico-absci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rspreafico-absci/subscriptions",
"organizations_url": "https://api.github.com/users/rspreafico-absci/orgs",
"repos_url": "https://api.github.com/users/rspreafico-absci/repos",
"events_url": "https://api.github.com/users/rspreafico-absci/events{/privacy}",
"received_events_url": "https://api.github.com/users/rspreafico-absci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | ## Environment info
- `transformers` version: 4.8.1
- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: see below
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
```
from transformers import RobertaTokenizerFast, RobertaForMaskedLM, pipeline
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer,
)
resp = fill_mask("My <mask> is Roberto.", top_k=len(tokenizer.get_vocab()))
[x for x in resp if x['token'] == tokenizer.mask_token_id]
```
## Expected behavior
Because the job of the `fill-mask` pipeline is to fill the `<mask>` special token, the expectation is that `<mask>` itself is not part of the possible predictions.
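A client-side workaround, sketched here as an assumption rather than taken from the pipeline API itself, is to request one spare candidate and drop the mask token id from the returned list; it reuses `fill_mask` and `tokenizer` from the snippet above.

```python
# Ask for one extra prediction, then filter out the <mask> token id (sketch).
top_k = 10
resp = fill_mask("My <mask> is Roberto.", top_k=top_k + 1)
filtered = [x for x in resp if x["token"] != tokenizer.mask_token_id][:top_k]
for pred in filtered:
    print(pred["token_str"], pred["score"])
```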
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12390/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12389/comments | https://api.github.com/repos/huggingface/transformers/issues/12389/events | https://github.com/huggingface/transformers/issues/12389 | 931,359,002 | MDU6SXNzdWU5MzEzNTkwMDI= | 12,389 | GPT2-large for sequence classification default num_labels differs from the default for GPT2-small and GPT2-medium | {
"login": "matthewfranglen",
"id": 122936,
"node_id": "MDQ6VXNlcjEyMjkzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/122936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewfranglen",
"html_url": "https://github.com/matthewfranglen",
"followers_url": "https://api.github.com/users/matthewfranglen/followers",
"following_url": "https://api.github.com/users/matthewfranglen/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewfranglen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewfranglen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewfranglen/subscriptions",
"organizations_url": "https://api.github.com/users/matthewfranglen/orgs",
"repos_url": "https://api.github.com/users/matthewfranglen/repos",
"events_url": "https://api.github.com/users/matthewfranglen/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewfranglen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-config.json this still has `_num_labels` of 1 where https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json lacks the entry and so inherits the default value.",
"I would argue that you should always manually specify the number of labels that you wish for when loading a pretrained model with no sequence classification head - the `gpt2-large` configuration shouldn't have a default number of labels set to 1, however.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-config.json this still has `_num_labels` of 1 where https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json lacks the entry and so inherits the default value.",
"This is fixed for both `gpt2-large` and `gpt2-xl`"
] | 1,624 | 1,631 | 1,631 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
## Information
When creating an `AutoModelForSequenceClassification` using `from_pretrained` if you pass in `gpt2` as the model name then you receive a classifier with two targets (`model.config.num_labels` = 2). If you instead pass in `gpt2-large` as the model name then you receive a regressor with one target (`model.config.num_labels` = 1).
Model I am using: GPT-2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: The Stanford Sentiment Treebank
* [ ] my own task or dataset: (give details below)
(I found this issue when working on sst2 but it is not particularly relevant to the issue).
## To reproduce
Steps to reproduce the behavior:
1. Run this code:
```python
from transformers import AutoModelForSequenceClassification
gpt2_small_features = AutoModelForSequenceClassification.from_pretrained("gpt2").score.out_features
gpt2_large_features = AutoModelForSequenceClassification.from_pretrained("gpt2-large").score.out_features
print([gpt2_small_features, gpt2_large_features])
```
This prints `[2, 1]`.
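A workaround in line with the maintainer's suggestion in the replies above is to pass `num_labels` explicitly, so the head size does not depend on the checkpoint's config defaults; this is a sketch, not part of the original report.

```python
from transformers import AutoModelForSequenceClassification

# Request a 2-label classification head explicitly so gpt2 and gpt2-large behave the same.
model = AutoModelForSequenceClassification.from_pretrained("gpt2-large", num_labels=2)
print(model.score.out_features)  # 2
```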
## Expected behavior
`num_labels` should have a consistent default across different versions of gpt2. The source code for PretrainedConfig suggests that this should be 2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12389/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12388/comments | https://api.github.com/repos/huggingface/transformers/issues/12388/events | https://github.com/huggingface/transformers/pull/12388 | 931,259,780 | MDExOlB1bGxSZXF1ZXN0Njc4NzgxODE1 | 12,388 | Onnx export v2 fixes | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | Tiny fixes. To see just the fixes, check the first commit's changes.
The second commit addresses code quality issues.
Remaining TODOs before merging:
- Add a test for all supported architectures
- Write a small Usage doc | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12388/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12388",
"html_url": "https://github.com/huggingface/transformers/pull/12388",
"diff_url": "https://github.com/huggingface/transformers/pull/12388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12388.patch",
"merged_at": 1624865998000
} |
https://api.github.com/repos/huggingface/transformers/issues/12387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12387/comments | https://api.github.com/repos/huggingface/transformers/issues/12387/events | https://github.com/huggingface/transformers/issues/12387 | 931,166,505 | MDU6SXNzdWU5MzExNjY1MDU= | 12,387 | Connot correctly fine-tune Bert for generation | {
"login": "wyu97",
"id": 47213568,
"node_id": "MDQ6VXNlcjQ3MjEzNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/47213568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wyu97",
"html_url": "https://github.com/wyu97",
"followers_url": "https://api.github.com/users/wyu97/followers",
"following_url": "https://api.github.com/users/wyu97/following{/other_user}",
"gists_url": "https://api.github.com/users/wyu97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wyu97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wyu97/subscriptions",
"organizations_url": "https://api.github.com/users/wyu97/orgs",
"repos_url": "https://api.github.com/users/wyu97/repos",
"events_url": "https://api.github.com/users/wyu97/events{/privacy}",
"received_events_url": "https://api.github.com/users/wyu97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi Stas, Suraj, Sylvain (@stas00 @patil-suraj @sgugger),\r\n\r\nWould you please to give some helps on using the `finetune_trainer.py` [file](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/finetune_trainer.py) for BertGen?\r\n\r\nThank you very much!\r\n\r\n",
"It looks like @patrickvonplaten ported this model, https://huggingface.co/transformers/model_doc/bertgeneration.html\r\nso he is probably the best person to ask.\r\n",
"Yes, I followed that document and changed the example script [transformers/examples/legacy/seq2seq/finetune_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/finetune_trainer.py). However, it did not work.",
"@patrickvonplaten Hi Patrick, do you have any script for training BertGen model?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey@wyu97,\r\n\r\nCould you try to follow this blog post and the accompanying google colab to fine-tune a Beer for generation Seq2Seq model? \r\n\r\nhttps://huggingface.co/blog/warm-starting-encoder-decoder#warm-starting-encoder-decoder-models-with-%F0%9F%A4%97transformers-practice",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,631 | 1,631 | NONE | null | # 📚 Migration
## Information
<!-- Important information -->
Model I am using Bert for generation (a Seq2Seq style):
The problem arises when using:
* The official example scripts from : [transformers/examples/legacy/seq2seq/finetune_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/finetune_trainer.py)
```
config = AutoConfig.from_pretrained(model_args.config_name)
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_args.model_name_or_path)
```
The above is the official code to fine-tune a text generation model such as BART (i.e., just specify model_name_or_path as facebook/BART-base). **I am trying to use BERTGEN instead of BART or T5.** So, I modified the above code into the following one.
* My own modified scripts:
```
from transformers import BertConfig, BertTokenizer, BertGenerationEncoder, BertGenerationDecoder, EncoderDecoderModel

config = BertConfig.from_pretrained("bert-large-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102)
decoder = BertGenerationDecoder.from_pretrained("bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```
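One detail the snippet above leaves out, and which the warm-starting blog post linked in the replies above covers, is wiring up a few generation-related config fields before seq2seq fine-tuning; the values below assume the standard BERT vocabulary and are illustrative rather than taken from the original report.

```python
# Minimal extra config usually needed before fine-tuning an EncoderDecoderModel (sketch).
model.config.decoder_start_token_id = tokenizer.cls_token_id  # 101 for bert-large-uncased
model.config.eos_token_id = tokenizer.sep_token_id            # 102
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
```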
The tasks I am working on is:
* An official task: Gigaword, CNN/DailyMail
## Details
Since the examples in the github repo do not contain `Bert for generation`, I adapted the code from the documents [here](https://huggingface.co/transformers/model_doc/bertgeneration.html). The only modification is shown above. However, the model cannot reach the performance reported in their [paper](https://arxiv.org/pdf/1907.12461.pdf), and falls short of it by a large margin. I suspect the model is not being trained correctly.
**I am wondering whether there is example code for using Bert for generation. Thank you very much for your help in advance.**
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Python version: 3.6
- PyTorch version (GPU?): 1.8
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): 4.7.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12386/comments | https://api.github.com/repos/huggingface/transformers/issues/12386/events | https://github.com/huggingface/transformers/issues/12386 | 931,152,457 | MDU6SXNzdWU5MzExNTI0NTc= | 12,386 | A model or a config like 'transformer_iwslt_de_en' for machine translation | {
"login": "logoutAgain",
"id": 23735761,
"node_id": "MDQ6VXNlcjIzNzM1NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/23735761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/logoutAgain",
"html_url": "https://github.com/logoutAgain",
"followers_url": "https://api.github.com/users/logoutAgain/followers",
"following_url": "https://api.github.com/users/logoutAgain/following{/other_user}",
"gists_url": "https://api.github.com/users/logoutAgain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/logoutAgain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/logoutAgain/subscriptions",
"organizations_url": "https://api.github.com/users/logoutAgain/orgs",
"repos_url": "https://api.github.com/users/logoutAgain/repos",
"events_url": "https://api.github.com/users/logoutAgain/events{/privacy}",
"received_events_url": "https://api.github.com/users/logoutAgain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"I am also eager for a transformer base model to train from scratch with HuggingFace ",
"In huggingface transformers it's called [FSMT](https://huggingface.co/docs/transformers/model_doc/fsmt), short for FairSeq Machine Translation.",
"I have the same need, looking for a transformer base model.\r\nWill try FSMT.\r\nThanks!"
] | 1,624 | 1,669 | null | NONE | null | # 🌟 New model addition
Does huggingface have some models like `transformer_iwslt_de_en` or `transformer_wmt_en_de` in fairseq for machine translation?
I plan to write a model for machine translation on huggingface. It would be great to be able to compare directly with the baseline model on huggingface.
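As noted in the replies above, the closest existing counterpart in transformers is FSMT, the port of fairseq's machine translation models; a minimal usage sketch with one of the published WMT19 checkpoints looks like this.

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

# facebook/wmt19-en-de is one of the published FSMT checkpoints on the hub.
tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")
model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")

inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```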
@patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12386/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12385/comments | https://api.github.com/repos/huggingface/transformers/issues/12385/events | https://github.com/huggingface/transformers/pull/12385 | 931,034,425 | MDExOlB1bGxSZXF1ZXN0Njc4NTkyMDAw | 12,385 | Rework LongFormer to make it compatible with ONNX | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"I'm fine with this rework as long as all the slow tests are passing :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey github actions, I need to find some time to continue my work here ! :D "
] | 1,624 | 1,651 | 1,651 | MEMBER | null | - [x] Remove implicit `bool` -> `int` conversion while padding attention_mask with `False` value.
- [x] Remove call to `einsum` where `matmul` + `transpose` can be used (makes optimizations easier for ONNX)
- [ ] Diagonal matrix computation try to avoid ScatterND with negative steps / index
- [ ] Use `torch.div(a, b, rounding_mode='trunc')` instead of `floor_divide`
- [ ] Check [function](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py#L783) reporting `torch.Tensor` return type which doesn't return anything | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12385/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12385",
"html_url": "https://github.com/huggingface/transformers/pull/12385",
"diff_url": "https://github.com/huggingface/transformers/pull/12385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12385.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12384/comments | https://api.github.com/repos/huggingface/transformers/issues/12384/events | https://github.com/huggingface/transformers/issues/12384 | 930,941,962 | MDU6SXNzdWU5MzA5NDE5NjI= | 12,384 | Request: New LM Adapted checkpoints for T5 | {
"login": "Xirider",
"id": 37597043,
"node_id": "MDQ6VXNlcjM3NTk3MDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/37597043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xirider",
"html_url": "https://github.com/Xirider",
"followers_url": "https://api.github.com/users/Xirider/followers",
"following_url": "https://api.github.com/users/Xirider/following{/other_user}",
"gists_url": "https://api.github.com/users/Xirider/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xirider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xirider/subscriptions",
"organizations_url": "https://api.github.com/users/Xirider/orgs",
"repos_url": "https://api.github.com/users/Xirider/repos",
"events_url": "https://api.github.com/users/Xirider/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xirider/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,624 | 1,624 | null | NONE | null | # 🌟 New LM Adapted checkpoints for T5
## Description
Google released a new set of checkpoints for T5 v1.1. here:
https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511
Especially interesting for most people will be the checkpoints below, as finetuning T5 with a prompt or using T5 for conditional text generation is quite common and these checkpoints promise better performance. The default T5 v1.1 checkpoints have never seen sequences without sentinel tokens.
### LM-Adapted: t5.1.1.lm100k (copied from the readme)
These "LM adapted" models are initialized from t5.1.1 (above) and train for an
additional 100K steps on the LM objective discussed in the [T5 paper][paper].
This adaptation improves the ability of the model to be used for [prompt
tuning](https://arxiv.org/abs/2104.08691).
* **t5.1.1.lm100k.small** (~77 million parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.small](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.small/)
* **t5.1.1.lm100k.base** (~250 million parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.base](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.base/)
* **t5.1.1.lm100k.large** (~800 million parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.large](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.large/)
* **t5.1.1.lm100k.xl** (~3 billion parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.xl](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.xl/)
* **t5.1.1.lm100k.xxl** (~11 billion parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.xxl](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.xxl/)
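For reference, once such a checkpoint is converted to the transformers format it loads like any other T5 v1.1 model; the hub id below is an assumption used purely for illustration, not a checkpoint named in this request.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Hypothetical converted checkpoint id; substitute whatever name the conversion is published under.
name = "google/t5-base-lm-adapt"
tokenizer = T5TokenizerFast.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)
```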
## Open source status
* [x] the model implementation is available: t5 v1.1. with geglu
* [x] the model weights are available: see links above
* [x] who are the authors: Brian Lester, Rami Al-Rfou, Noah Constant
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12384/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12383/comments | https://api.github.com/repos/huggingface/transformers/issues/12383/events | https://github.com/huggingface/transformers/issues/12383 | 930,940,725 | MDU6SXNzdWU5MzA5NDA3MjU= | 12,383 | Size of tensors not matching even though using tweets (all same length) | {
"login": "batmanscode",
"id": 29989939,
"node_id": "MDQ6VXNlcjI5OTg5OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/29989939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/batmanscode",
"html_url": "https://github.com/batmanscode",
"followers_url": "https://api.github.com/users/batmanscode/followers",
"following_url": "https://api.github.com/users/batmanscode/following{/other_user}",
"gists_url": "https://api.github.com/users/batmanscode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/batmanscode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/batmanscode/subscriptions",
"organizations_url": "https://api.github.com/users/batmanscode/orgs",
"repos_url": "https://api.github.com/users/batmanscode/repos",
"events_url": "https://api.github.com/users/batmanscode/events{/privacy}",
"received_events_url": "https://api.github.com/users/batmanscode/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you try changing this line:\r\n```\r\nencoded_input = tokenizer(text, return_tensors='pt')\r\n```\r\nto\r\n```\r\nencoded_input = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=512)\r\n```\r\n?\r\n\r\nIt will pad/truncate all sequences to 512 tokens. Feel free to adapt to the maximum size within the batch or to a smaller max length (512 is the maximum sequence length for that model)",
"That worked! Thank you @LysandreJik, much appreciated 😊\r\n\r\nBtw how did you get that the max sequence length for that model is 512? I thought it was 514 based on the error message.\r\n\r\nI checked some tweets that were close to the character limit and their `num_tokens` (in encoded_input) was ~60. This is far less than the error message. Do you know why/how there could be such a difference?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | # What I'm doing
Trying to estimate the emotion of tweets using https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion
# The error
`RuntimeError: The expanded size of the tensor (601) must match the existing size (514) at non-singleton dimension 1. Target sizes: [1, 601]. Tensor sizes: [1, 514]`
Does anyone know what the problem might be? I tried `truncating=True` as well.
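The fix suggested in the reply above amounts to the tokenizer call below; 512 reflects the usable sequence length of the RoBERTa-based checkpoint, and `text` stands for a tweet or list of tweets that is assumed to be defined elsewhere.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
encoded_input = tokenizer(
    text,                 # a tweet or a list of tweets (assumed defined elsewhere)
    return_tensors="pt",
    padding=True,
    truncation=True,      # note: the kwarg is `truncation`, not `truncating`
    max_length=512,
)
```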
# Code and data to reproduce
### View only
https://deepnote.com/project/Code-to-reproduce-error-RuntimeError-The-expanded-size-of-the-tensor-601-must-match-the-existing-size-514-at-non-singleton-dimension-1-ZfBSsqbKQ7-XWrir593tKQ/%2Fnotebook.ipynb
### Interactive (can run and/or make changes)
https://deepnote.com/project/Interactive-Code-to-reproduce-error-RuntimeError-The-expanded-size-of-the-tensor-601-must-match-the-existing-size-514-at-non-singleton-dimension-1-Duplicate-qJTy9jxRTPWhXhwytQjU4Q/%2Fnotebook.ipynb
### Environment
"Deepnote projects run in containers on Debian Buster with Python 3.7" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12382/comments | https://api.github.com/repos/huggingface/transformers/issues/12382/events | https://github.com/huggingface/transformers/issues/12382 | 930,881,328 | MDU6SXNzdWU5MzA4ODEzMjg= | 12,382 | About a error in retrain a xlm model | {
"login": "zxk19981227",
"id": 40291547,
"node_id": "MDQ6VXNlcjQwMjkxNTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/40291547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zxk19981227",
"html_url": "https://github.com/zxk19981227",
"followers_url": "https://api.github.com/users/zxk19981227/followers",
"following_url": "https://api.github.com/users/zxk19981227/following{/other_user}",
"gists_url": "https://api.github.com/users/zxk19981227/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zxk19981227/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zxk19981227/subscriptions",
"organizations_url": "https://api.github.com/users/zxk19981227/orgs",
"repos_url": "https://api.github.com/users/zxk19981227/repos",
"events_url": "https://api.github.com/users/zxk19981227/events{/privacy}",
"received_events_url": "https://api.github.com/users/zxk19981227/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | When I train an XLM model with the following code, an error occurs saying that 'label' is an unexpected argument for forward. Is there any way to solve it?
```python
# -*- coding: UTF-8 -*-
from transformers import XLMTokenizer, XLMModel, Trainer
from datasets import load_dataset, Dataset
from transformers import LineByLineTextDataset, TrainingArguments
from transformers.data.data_collator import DataCollatorForLanguageModeling
model = XLMModel.from_pretrained('xlm-mlm-tlm-xnli15-1024')
tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-tlm-xnli15-1024')
# train_datasets=load_dataset('text',data_files={'train':'./tmp/xxx.train.txt','valitation':'./tmp/all_val_data.txt'})
# Map the datasets we just loaded through the tokenizer to get input_ids, i.e., what is actually fed into the model.
# def tokenize_function(examples):
# # Remove empty lines
# examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
# return tokenizer(
# examples["text"],
#         padding="max_length", # pad to max_length
#         truncation=True, # truncate
#         max_length=256, # set the sentence length
# # We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
# # receives the `special_tokens_mask`.
# return_special_tokens_mask=True,
# )
# Get the training and validation sets
# train_dataset = tokenized_datasets["train"]
# eval_dataset = tokenized_datasets["validation"]
model.resize_token_embeddings(len(tokenizer))
train_dataset = LineByLineTextDataset(tokenizer=tokenizer,
                                      file_path='',
block_size=512)
datacollector = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
# train_method=Trainer(model=model,data_collator=datacollector,train_dataset=train_dataset)
training_args = TrainingArguments(output_dir='./outputs/', overwrite_output_dir=True, num_train_epochs=20,
learning_rate=6e-5,
per_device_train_batch_size=128, save_total_limit=10) # save_steps=10000
trainer = Trainer(
model=model, args=training_args, data_collator=datacollector, train_dataset=train_dataset)
trainer.train()
trainer.save_model('./outputs/')
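# Note (an assumption, not part of the original report): XLMModel is the bare
# encoder, and its forward() does not accept the `labels` key that
# DataCollatorForLanguageModeling adds to each batch, which would explain the error.
# A model with an LM head does accept it, e.g.:
#   from transformers import XLMWithLMHeadModel
#   model = XLMWithLMHeadModel.from_pretrained('xlm-mlm-tlm-xnli15-1024')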
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12382/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12381/comments | https://api.github.com/repos/huggingface/transformers/issues/12381/events | https://github.com/huggingface/transformers/issues/12381 | 930,878,254 | MDU6SXNzdWU5MzA4NzgyNTQ= | 12,381 | A fast tokenizer for BertJapaneseTokenizer | {
"login": "dkawahara",
"id": 13016548,
"node_id": "MDQ6VXNlcjEzMDE2NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/13016548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkawahara",
"html_url": "https://github.com/dkawahara",
"followers_url": "https://api.github.com/users/dkawahara/followers",
"following_url": "https://api.github.com/users/dkawahara/following{/other_user}",
"gists_url": "https://api.github.com/users/dkawahara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkawahara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkawahara/subscriptions",
"organizations_url": "https://api.github.com/users/dkawahara/orgs",
"repos_url": "https://api.github.com/users/dkawahara/repos",
"events_url": "https://api.github.com/users/dkawahara/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkawahara/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hi @dkawahara I've just written tentative `BertJapaneseTokenizerFast`:\r\n\r\n```\r\nfrom transformers import BertJapaneseTokenizer\r\nclass BertJapaneseTokenizerFast(BertJapaneseTokenizer):\r\n def __call__(self,text,text_pair=None,return_offsets_mapping=False,**kwargs):\r\n v=super().__call__(text=text,text_pair=text_pair,return_offsets_mapping=False,**kwargs)\r\n if return_offsets_mapping:\r\n import tokenizations\r\n if type(text)==str:\r\n z=zip([v[\"input_ids\"]],[text],[text_pair] if text_pair else [\"\"])\r\n else:\r\n z=zip(v[\"input_ids\"],text,text_pair if text_pair else [\"\"]*len(text))\r\n w=[]\r\n for a,b,c in z:\r\n a2b,b2a=tokenizations.get_alignments(self.convert_ids_to_tokens(a),b+c)\r\n x=[]\r\n for i,t in enumerate(a2b):\r\n if t==[]:\r\n s=(0,0)\r\n if a[i]==self.unk_token_id:\r\n j=[[-1]]+[t for t in a2b[0:i] if t>[]]\r\n k=[t for t in a2b[i+1:] if t>[]]+[[len(b+c)]]\r\n s=(j[-1][-1]+1,k[0][0])\r\n elif t[-1]<len(b):\r\n s=(t[0],t[-1]+1)\r\n else:\r\n s=(t[0]-len(b),t[-1]-len(b)+1)\r\n x.append(s)\r\n w.append(list(x))\r\n v[\"offset_mapping\"]=w[0] if type(text)==str else w\r\n return v\r\n```\r\n\r\nBut it requires [pytokenizations](https://github.com/explosion/tokenizations) module, and in fact it's not fast. See detail in [my diary](https://srad.jp/~yasuoka/journal/651897/) written in Japanese, and in next I will try to implement `BatchEncoding.encodings`",
"@KoichiYasuoka Thank you very much for providing this work around. Without \"return_offsets_mapping\" option, it was always a pain in Japanese token classification tasks.\r\n\r\nI would like to point out a little bug when processing text containing consecutive [UNK] tokens.\r\ne.g.,\r\n```\r\ntext = \"𠮟られても平気なの☺ ☺☺\"\r\ntokenizer=BertJapaneseTokenizerFast.from_pretrained(\"cl-tohoku/bert-base-japanese\")\r\nd=tokenizer(text,return_offsets_mapping=True)\r\nfor offset in d['offset_mapping']:\r\n print((offset[0], offset[1]), text[offset[0]:offset[1]])\r\n\r\n```\r\nwould print out results like below\r\n```\r\n(0, 0) \r\n(0, 1) 𠮟\r\n(1, 3) られ\r\n(3, 4) て\r\n(4, 5) も\r\n(5, 6) 平\r\n(6, 7) 気\r\n(7, 8) な\r\n(8, 9) の\r\n(9, 13) ☺ ☺☺\r\n(9, 13) ☺ ☺☺\r\n(0, 0) \r\n```\r\n\r\nI still can't figure out any solutions to improve the mapping approach for each [UNK] token. \r\nI am just wondering if you have any ideas on this issue. Many thanks.\r\n",
"Hi @Ezekiel25c17 I've just written `BertMecabTokenizerFast`:\r\n\r\n```\r\nfrom transformers import BertTokenizerFast\r\nfrom transformers.models.bert_japanese.tokenization_bert_japanese import MecabTokenizer\r\nclass MecabPreTokenizer(MecabTokenizer):\r\n def mecab_split(self,i,normalized_string):\r\n t=str(normalized_string)\r\n z=[]\r\n e=0\r\n for c in self.tokenize(t):\r\n s=t.find(c,e)\r\n if s<0:\r\n z.append((0,0))\r\n else:\r\n e=s+len(c)\r\n z.append((s,e))\r\n return [normalized_string[s:e] for s,e in z if e>0]\r\n def pre_tokenize(self,pretok):\r\n pretok.split(self.mecab_split)\r\nclass BertMecabTokenizerFast(BertTokenizerFast):\r\n def __init__(self,vocab_file,**kwargs):\r\n from tokenizers.pre_tokenizers import PreTokenizer,BertPreTokenizer,Sequence\r\n super().__init__(vocab_file=vocab_file,**kwargs)\r\n d=kwargs[\"mecab_kwargs\"] if \"mecab_kwargs\" in kwargs else {\"mecab_dic\":\"ipadic\"}\r\n self._tokenizer.pre_tokenizer=Sequence([PreTokenizer.custom(MecabPreTokenizer(**d)),BertPreTokenizer()])\r\n```\r\n\r\nderived from `MecabPreTokenizer` of [deberta-base-japanese-juman-ud-goeswith](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-juman-ud-goeswith/blob/main/ud.py). Does it work well?",
"Hi @KoichiYasuoka Thank you for responding so quickly. This worked as a charm!",
"I've re-written `BertMecabTokenizer` to disable `do_lower_case` and `tokenize_chinese_chars`:\r\n\r\n```\r\nfrom transformers import BertTokenizerFast\r\nfrom transformers.models.bert_japanese.tokenization_bert_japanese import MecabTokenizer\r\nclass MecabPreTokenizer(MecabTokenizer):\r\n def mecab_split(self,i,normalized_string):\r\n t=str(normalized_string)\r\n e=0\r\n z=[]\r\n for c in self.tokenize(t):\r\n s=t.find(c,e)\r\n e=e if s<0 else s+len(c)\r\n z.append((0,0) if s<0 else (s,e))\r\n return [normalized_string[s:e] for s,e in z if e>0]\r\n def pre_tokenize(self,pretok):\r\n pretok.split(self.mecab_split)\r\nclass BertMecabTokenizerFast(BertTokenizerFast):\r\n def __init__(self,vocab_file,do_lower_case=False,tokenize_chinese_chars=False,**kwargs):\r\n from tokenizers.pre_tokenizers import PreTokenizer,BertPreTokenizer,Sequence\r\n super().__init__(vocab_file=vocab_file,do_lower_case=do_lower_case,tokenize_chinese_chars=tokenize_chinese_chars,**kwargs)\r\n d=kwargs[\"mecab_kwargs\"] if \"mecab_kwargs\" in kwargs else {\"mecab_dic\":\"ipadic\"}\r\n self._tokenizer.pre_tokenizer=Sequence([PreTokenizer.custom(MecabPreTokenizer(**d)),BertPreTokenizer()])\r\n```\r\n\r\nand now `BertMecabTokenizerFast` tokenizes \"平気\" into \"平\" and \"##気\". See detail in [my diary](https://srad.jp/~yasuoka/journal/660181/) written in Japanese."
] | 1,624 | 1,676 | null | NONE | null | We would like a fast tokenizer for BertJapaneseTokenizer. This is because the current token classification model (run_ner.py) requires using the fast tokenizer but BertJapaneseTokenizer does not have it. Because of this, we cannot do token classification for Japanese using cl-tohoku's BERT models.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12381/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12381/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12380/comments | https://api.github.com/repos/huggingface/transformers/issues/12380/events | https://github.com/huggingface/transformers/issues/12380 | 930,836,069 | MDU6SXNzdWU5MzA4MzYwNjk= | 12,380 | Module version identification problem | {
"login": "gaowanliang",
"id": 41822468,
"node_id": "MDQ6VXNlcjQxODIyNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/41822468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaowanliang",
"html_url": "https://github.com/gaowanliang",
"followers_url": "https://api.github.com/users/gaowanliang/followers",
"following_url": "https://api.github.com/users/gaowanliang/following{/other_user}",
"gists_url": "https://api.github.com/users/gaowanliang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaowanliang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaowanliang/subscriptions",
"organizations_url": "https://api.github.com/users/gaowanliang/orgs",
"repos_url": "https://api.github.com/users/gaowanliang/repos",
"events_url": "https://api.github.com/users/gaowanliang/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaowanliang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! What command did you run to get the first error?",
"I just executed `python xxx.py` from the command, which references this package, and the specific project is [this](https://github.com/yangjianxin1/GPT2-chitchat)\r\n\r\n\r\n\r\nDetailed local error message (which contains some of my debugging data)\r\n\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"interact.py\", line 1, in <module>\r\n import transformers\r\n File \"C:\\Users\\gaowanliang\\Miniconda3\\lib\\site-packages\\transformers\\__init__.py\", line 43, in <module>\r\n from . import dependency_versions_check\r\n File \"C:\\Users\\gaowanliang\\Miniconda3\\lib\\site-packages\\transformers\\dependency_versions_check.py\", line 41, in <module>\r\n require_version_core(deps[pkg])\r\n File \"C:\\Users\\gaowanliang\\Miniconda3\\lib\\site-packages\\transformers\\utils\\versions.py\", line 125, in require_version_core\r\n return require_version(requirement, hint)\r\n File \"C:\\Users\\gaowanliang\\Miniconda3\\lib\\site-packages\\transformers\\utils\\versions.py\", line 119, in require_version\r\n _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n File \"C:\\Users\\gaowanliang\\Miniconda3\\lib\\site-packages\\transformers\\utils\\versions.py\", line 50, in _compare_versions\r\n f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\nImportError: tqdm>=4.27 is required for a normal functioning of this module, but found tqdm==4.26.0.\r\nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I have a similar issue after upgrading to transformers 4.9.1:\r\n\r\n```\r\n>>> import transformers\r\n>>> transformers.__version__\r\n'4.9.1'\r\n>>> from transformers.utils.versions import require_version\r\n>>> require_version(\"torch>=1.5.0\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/utils/versions.py\", line 114, in require_version\r\n _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n File \"/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/utils/versions.py\", line 50, in _compare_versions\r\n f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\nImportError: torch>=1.5.0 is required for a normal functioning of this module, but found torch==1.2.0.\r\n>>> import torch\r\n>>> torch.__version__\r\n'1.7.1'\r\n```\r\n\r\n\r\nOn an Ubuntu 20.04 system with Python 3.7.6 using miniconda.\r\n\r\nNot sure why the wrong torch version is detected. \r\n\r\nThe issue happens when I want to train something with the AdamW optimizer:\r\n File \"/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/optimization.py\", line 300",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,630 | 1,630 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.1
- Platform: Windows 10 x64
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: GTX1060
## Module version identification problem caused by "importlib_metadata"

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12379/comments | https://api.github.com/repos/huggingface/transformers/issues/12379/events | https://github.com/huggingface/transformers/issues/12379 | 930,821,042 | MDU6SXNzdWU5MzA4MjEwNDI= | 12,379 | Tracking variables other than loss during training | {
"login": "umgupta",
"id": 4678394,
"node_id": "MDQ6VXNlcjQ2NzgzOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4678394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/umgupta",
"html_url": "https://github.com/umgupta",
"followers_url": "https://api.github.com/users/umgupta/followers",
"following_url": "https://api.github.com/users/umgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/umgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/umgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/umgupta/subscriptions",
"organizations_url": "https://api.github.com/users/umgupta/orgs",
"repos_url": "https://api.github.com/users/umgupta/repos",
"events_url": "https://api.github.com/users/umgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/umgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | # 🚀 Feature request
Allow to track other variables during training with the [trainer](https://huggingface.co/transformers/main_classes/trainer.html).
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
Often during training one wants to track variables other than just the loss. For example, the loss may be consisting of two different components and the user may want to track the two separately. As of now, the trainer can only track loss. It would be great if a user could simply pass the list of keys of auxiliary losses that they may want to track.
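For reference, a workaround that is already possible today is to subclass `Trainer` and log the extra values manually. A minimal sketch follows; the `aux_loss` output field is an assumption, so substitute whatever your model actually returns:

```python
from transformers import Trainer


class MultiLossTrainer(Trainer):
    """Sketch of a Trainer that also logs an auxiliary loss component."""

    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(**inputs)
        loss = outputs.loss
        # "aux_loss" is a hypothetical output attribute; adapt it to your model's outputs.
        aux_loss = getattr(outputs, "aux_loss", None)
        if aux_loss is not None:
            # Note: this logs at every step; gate it on self.state.global_step if too chatty.
            self.log({"aux_loss": aux_loss.detach().float().item()})
        return (loss, outputs) if return_outputs else loss
```

This keeps the default training loop intact and only adds an extra scalar to the existing logging stream (TensorBoard, W&B, etc.), but a built-in way to declare the keys to track would be much cleaner.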
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I am happy to discuss and contribute code for this.
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12379/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12378/comments | https://api.github.com/repos/huggingface/transformers/issues/12378/events | https://github.com/huggingface/transformers/issues/12378 | 930,799,292 | MDU6SXNzdWU5MzA3OTkyOTI= | 12,378 | TypeError: new(): invalid data type 'numpy.str_' | {
"login": "pn12",
"id": 64300791,
"node_id": "MDQ6VXNlcjY0MzAwNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/64300791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pn12",
"html_url": "https://github.com/pn12",
"followers_url": "https://api.github.com/users/pn12/followers",
"following_url": "https://api.github.com/users/pn12/following{/other_user}",
"gists_url": "https://api.github.com/users/pn12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pn12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pn12/subscriptions",
"organizations_url": "https://api.github.com/users/pn12/orgs",
"repos_url": "https://api.github.com/users/pn12/repos",
"events_url": "https://api.github.com/users/pn12/events{/privacy}",
"received_events_url": "https://api.github.com/users/pn12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @pn12 - the issue is likely that you have columns that are strings (not encoded). Make sure to drop all columns that aren't encoded and pass the new object to the model.",
"I have the same error, did you solve this problem? @pn12 "
] | 1,624 | 1,694 | 1,628 | NONE | null | Facing the below error while running
```
# Setting up training
trainer = Seq2SeqTrainer(
model=model,
args=args,
train_dataset=tokenized_datasets['train'],
eval_dataset=tokenized_datasets['validation'],
)
```
on Kaggle Notebooks; while the same code runs fine in Colab Notebooks.
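(For what it is worth, the traceback below points at raw string columns surviving tokenization, which the default collation then tries to turn into tensors. A hedged sketch of dropping them, where the kept column names are assumptions, would be:)

```python
# Keep only the encoded columns the model expects; leftover string columns are what
# torch.tensor() chokes on. The kept names below are assumptions - adjust as needed.
keep = {"input_ids", "attention_mask", "labels"}
drop = [c for c in tokenized_datasets["train"].column_names if c not in keep]
tokenized_datasets = tokenized_datasets.remove_columns(drop)
```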
Below is the error log.
> ```
> ---------------------------------------------------------------------------
> TypeError Traceback (most recent call last)
> <ipython-input-23-e9826d90c0df> in <module>
> 1 # This will take around 20-25 minutes
> ----> 2 trainer.train()
>
> /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
> 1032 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
> 1033
> -> 1034 for step, inputs in enumerate(epoch_iterator):
> 1035
> 1036 # Skip past any already trained steps if resuming training
>
> /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
> 433 if self._sampler_iter is None:
> 434 self._reset()
> --> 435 data = self._next_data()
> 436 self._num_yielded += 1
> 437 if self._dataset_kind == _DatasetKind.Iterable and \
>
> /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
> 473 def _next_data(self):
> 474 index = self._next_index() # may raise StopIteration
> --> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
> 476 if self._pin_memory:
> 477 data = _utils.pin_memory.pin_memory(data)
>
> /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
> 42 def fetch(self, possibly_batched_index):
> 43 if self.auto_collation:
> ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
> 45 else:
> 46 data = self.dataset[possibly_batched_index]
>
> /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
> 42 def fetch(self, possibly_batched_index):
> 43 if self.auto_collation:
> ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
> 45 else:
> 46 data = self.dataset[possibly_batched_index]
>
> /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
> 1482 format_columns=self._format_columns,
> 1483 output_all_columns=self._output_all_columns,
> -> 1484 format_kwargs=self._format_kwargs,
> 1485 )
> 1486
>
> /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
> 1471 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
> 1472 formatted_output = format_table(
> -> 1473 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
> 1474 )
> 1475 return formatted_output
>
> /opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
> 417 else:
> 418 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns)
> --> 419 formatted_output = formatter(pa_table_to_format, query_type=query_type)
> 420 if output_all_columns:
> 421 if isinstance(formatted_output, MutableMapping):
>
> /opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
> 189 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
> 190 if query_type == "row":
> --> 191 return self.format_row(pa_table)
> 192 elif query_type == "column":
> 193 return self.format_column(pa_table)
>
> /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in format_row(self, pa_table)
> 57 def format_row(self, pa_table: pa.Table) -> dict:
> 58 row = self.numpy_arrow_extractor().extract_row(pa_table)
> ---> 59 return self.recursive_tensorize(row)
> 60
> 61 def format_column(self, pa_table: pa.Table) -> "torch.Tensor":
>
> /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in recursive_tensorize(self, data_struct)
> 53
> 54 def recursive_tensorize(self, data_struct: dict):
> ---> 55 return map_nested(self._recursive_tensorize, data_struct, map_list=False)
> 56
> 57 def format_row(self, pa_table: pa.Table) -> dict:
>
> /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
> 202 if num_proc <= 1 or len(iterable) <= num_proc:
> 203 mapped = [
> --> 204 _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
> 205 ]
> 206 else:
>
> /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
> 202 if num_proc <= 1 or len(iterable) <= num_proc:
> 203 mapped = [
> --> 204 _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
> 205 ]
> 206 else:
>
> /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
> 140 # Singleton first to spare some computation
> 141 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
> --> 142 return function(data_struct)
> 143
> 144 # Reduce logging to keep things readable in multiprocessing with tqdm
>
> /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in _recursive_tensorize(self, data_struct)
> 50 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
> 51 return [self.recursive_tensorize(substruct) for substruct in data_struct]
> ---> 52 return self._tensorize(data_struct)
> 53
> 54 def recursive_tensorize(self, data_struct: dict):
>
> /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in _tensorize(self, value)
> 42 default_dtype = {"dtype": torch.float32}
> 43
> ---> 44 return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
> 45
> 46 def _recursive_tensorize(self, data_struct: dict):
>
> TypeError: new(): invalid data type 'numpy.str_'
> ``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12378/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12377/comments | https://api.github.com/repos/huggingface/transformers/issues/12377/events | https://github.com/huggingface/transformers/issues/12377 | 930,738,947 | MDU6SXNzdWU5MzA3Mzg5NDc= | 12,377 | conversion wav2vec2 model from fairseq to huggingface | {
"login": "shiva1393",
"id": 48354704,
"node_id": "MDQ6VXNlcjQ4MzU0NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/48354704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shiva1393",
"html_url": "https://github.com/shiva1393",
"followers_url": "https://api.github.com/users/shiva1393/followers",
"following_url": "https://api.github.com/users/shiva1393/following{/other_user}",
"gists_url": "https://api.github.com/users/shiva1393/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shiva1393/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shiva1393/subscriptions",
"organizations_url": "https://api.github.com/users/shiva1393/orgs",
"repos_url": "https://api.github.com/users/shiva1393/repos",
"events_url": "https://api.github.com/users/shiva1393/events{/privacy}",
"received_events_url": "https://api.github.com/users/shiva1393/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | # 📚 Migration
Hi,
I trained a wav2vec2 model in fairseq on my own dataset. Now I need to fine-tune this pretrained fairseq wav2vec2 model. To train it as a Hugging Face model, the following files are required:
1. config.json
2. preprocessor_config.json
3. pytorch_model.bin
4. special_tokens_map.json
5. tokenizer_config.json
6. vocab.json
To get these files, I used the following commands for the wav2vec2 base model:
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt -O ./wav2vec_small_960h.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small_960h.pt --dict_path dict.ltr.txt
The commands above work fine for the wav2vec_small_960h.pt model and generate config.json, preprocessor_config.json, pytorch_model.bin, special_tokens_map.json, tokenizer_config.json and vocab.json.
But I trained my own model using the following command:
fairseq-hydra-train \
task.data=/path/to/data \
--config-dir /path/to/fairseq-py/examples/wav2vec/config/pretraining \
--config-name wav2vec2_base_librispeech
This produced checkpoint_mine.pt.
Then I ran:
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./checkpoint_mine.pt --dict_path dict.ltr.txt
and I'm getting the following error:
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 255, in <module>
convert_wav2vec2_checkpoint(
File "env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 240, in convert_wav2vec2_checkpoint
recursively_load_weights(model, hf_wav2vec, not is_finetuned)
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 134, in recursively_load_weights
set_recursively(hf_model, mapped_key, value, name, weight_type)
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 71, in set_recursively
hf_pointer = getattr(hf_pointer, attribute)
File "env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
Can anyone please suggest what I need to change, and explain why wav2vec_small_960h.pt and checkpoint_mine.pt behave differently? I saw previous discussions but did not find a proper solution.
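For what it is worth, wav2vec_small_960h.pt is a fine-tuned (CTC) checkpoint, while checkpoint_mine.pt comes from pretraining only and has no fine-tuned output head, so converting it as if it were fine-tuned fails on the missing attributes. If your copy of the conversion script exposes a flag for pretrained-only checkpoints (check `--help`; the `--not_finetuned` name below is an assumption), the call would look roughly like:

```
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
    --pytorch_dump_folder_path ./outputs \
    --checkpoint_path ./checkpoint_mine.pt \
    --not_finetuned
```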
Thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12376/comments | https://api.github.com/repos/huggingface/transformers/issues/12376/events | https://github.com/huggingface/transformers/issues/12376 | 930,705,291 | MDU6SXNzdWU5MzA3MDUyOTE= | 12,376 | Issue in layer-drop implementation in TensorFlow models in graph mode | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Confirmed that the issue is reproducible at my end, we're investigating!",
"On investigation, I'm pretty sure the issue is caused by the way we're doing layerdrop: https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_tf_bart.py#L755-L772\r\n\r\nThis code is correct for eager execution, but I suspect in graph mode that this leads to the creation of new variables and graph edges whenever a layer is skipped for the first time. I can see some workarounds, but unfortunately no perfect ones - this seems like a fundamental limitation of the way graph mode works in TF.\r\n\r\nYou're welcome to investigate and try to find a solution if you like, but we're probably just going to explicitly disable layer drop in graph mode for now.",
"Yeah sure, I am also looking for some solution on this. Will keep you updated (or will make a PR) if I get some solution.",
"Cool! We'll hold off on disabling it for now - if you find a solution, let us know, and don't panic if it turns out to be impossible - just say so and we'll close this issue and disable layerdrop in graph mode instead. Thanks for your help!",
"@Rocketknight1,\r\n\r\nI think I got a solution to this:\r\n```python\r\n# we will define this in the end of __init__(...)\r\nself.step_0 = True\r\n\r\n# then we will replace layer-drop condition with this:\r\nif (not self.step_0) and inputs[\"training\"] and (dropout_probability < self.layerdrop): # skip the layer\r\n continue\r\n\r\n# in the end of layer (just before return), we will do this\r\nself.step_0 = False\r\n```\r\n\r\nCode works without any error after adding above stuff with `layer-drop > 0`. Checkout this for complete code: https://github.com/vasudevgupta7/transformers/commit/acf69cea945ebe97293621ba8730a7d988f2c2aa\r\n\r\n\r\n@Rocketknight1, do think it's correct?? Like I am not sure but if graph is built with all the layers in the first step then will `continue` in the next steps work???\r\n\r\nThanks!",
"Hi, firstly I'm extremely sorry for the slow response! I was working on another project and had to drop my Github issues for a while.\r\n\r\nI'm not sure this works, though - I *think* the value of `self.step_0` will just be treated as a constant at compilation time. As a result, this code won't cause errors, but it will never skip any layers either. Can you test it with a high value for the layerdrop probability and see if you get different answers when you run the same batch multiple times?",
"@Rocketknight1 I will test it the way you suggested. Thanks for your reply!!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,629 | 1,629 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.8.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@Rocketknight1
## Information
Model I am using: TFBartForConditionalGeneration
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import TFBartForConditionalGeneration, BartConfig
# keeping layerdrop to be very high value for demonstration error
model = TFBartForConditionalGeneration(BartConfig(encoder_layerdrop=0.5))
import tensorflow as tf
import numpy as np
array = np.random.randint(1, 300, size=(4, 256))
dataset = tf.constant(array, dtype=tf.int32)
# following cell works perfectly when `tf.function(...)` is removed
@tf.function
def train_step(tensor):
return model(tensor, training=True)
from tqdm.auto import tqdm
for tensor in tqdm(dataset, total=len(dataset)):
tensor = tf.expand_dims(tensor, 0)
output = train_step(tensor)
```
You can checkout this [small Colab notebook](https://colab.research.google.com/drive/1ACfyQcSUtv0pQGD_mZEvhjEO0tOzS2zR?usp=sharing) also for reproducing the error.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
ValueError: in user code:
<ipython-input-5-ca2e97b30313>:4 train_step *
return model(tensor, training=True)
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:1393 call *
outputs = self.model(
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:1125 call *
inputs["encoder_outputs"] = self.encoder(
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:764 call *
hidden_states, attn = encoder_layer(
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:305 call *
hidden_states, self_attn_weights, _ = self.self_attn(
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:178 call *
query_states = self.q_proj(hidden_states) * self.scaling
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:1023 __call__ **
self._maybe_build(inputs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:2625 _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/core.py:1198 build
trainable=True)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:655 add_weight
caching_device=caching_device)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py:815 _add_variable_with_custom_getter
**kwargs_for_getter)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py:139 make_variable
shape=variable_shape if variable_shape else None)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:260 __call__
return cls._variable_v1_call(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:221 _variable_v1_call
shape=shape)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:67 getter
return captured_getter(captured_previous, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:769 invalid_creator_scope
"tf.function-decorated function tried to create "
ValueError: tf.function-decorated function tried to create variables on non-first call.
```
Side note: I have checked the same thing for TFWav2Vec2, and the same issue happens there. So possibly all TF models that use layer-drop need to be fixed.
## Expected behavior
layer drop should work perfectly in graph mode.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12375/comments | https://api.github.com/repos/huggingface/transformers/issues/12375/events | https://github.com/huggingface/transformers/issues/12375 | 930,703,620 | MDU6SXNzdWU5MzA3MDM2MjA= | 12,375 | model.generate occurs error: generation_beam_search | {
"login": "Albert-Ma",
"id": 7343136,
"node_id": "MDQ6VXNlcjczNDMxMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7343136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Albert-Ma",
"html_url": "https://github.com/Albert-Ma",
"followers_url": "https://api.github.com/users/Albert-Ma/followers",
"following_url": "https://api.github.com/users/Albert-Ma/following{/other_user}",
"gists_url": "https://api.github.com/users/Albert-Ma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Albert-Ma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Albert-Ma/subscriptions",
"organizations_url": "https://api.github.com/users/Albert-Ma/orgs",
"repos_url": "https://api.github.com/users/Albert-Ma/repos",
"events_url": "https://api.github.com/users/Albert-Ma/events{/privacy}",
"received_events_url": "https://api.github.com/users/Albert-Ma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I meet the same problem, have you solved it? @Albert-Ma ",
"@qshi95 - could you provide the full command to reproduce the error?",
"I find the reason causing this error is that the size of input ids is out of range. Sorry for disturb."
] | 1,624 | 1,635 | 1,628 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-3.10.0-1127.18.2.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): bart-base
The problem arises when using:
```
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,3,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,1,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [29,0,0], thread: [0,2,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,0,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,2,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,1,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,3,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,2,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,0,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [28,0,0], thread: [0,1,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [22,0,0], thread: [0,1,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [56,0,0], thread: [0,0,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
/opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [44,0,0], thread: [0,3,0] Ass
ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
....... similar as above
Traceback (most recent call last): [56/1924]
File "run_eval.py", line 172, in <module>
run_generate(verbose=True)
File "run_eval.py", line 133, in run_generate
runtime_metrics = generate_summaries_or_translations(
File "run_eval.py", line 67, in generate_summaries_or_translations
summaries = model.generate(
File "/home/xxx/anaconda3/envs/xxx/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/data/xxx/transformers/src/transformers/generation_utils.py", line 986, in generate
return self.beam_sample(
File "/data/xxx/transformers/src/transformers/generation_utils.py", line 1894, in beam_sample
beam_outputs = beam_scorer.process(
File "/data/xxx/transformers/src/transformers/generation_beam_search.py", line 218, in process
if self._done[batch_idx]:
RuntimeError: CUDA error: device-side assert triggered
```
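(Side note on debugging: device-side asserts like this surface at a later, unrelated call such as `beam_scorer.process`. Re-running with blocking kernel launches, or once on CPU, usually reveals the real indexing error, typically out-of-range token ids. A sketch, reusing the same arguments as in "To reproduce" below:)

```
CUDA_LAUNCH_BLOCKING=1 python run_eval.py ...   # same arguments as in the command below
# or pass --device cpu once to get the underlying Python-side error message
```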
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: seq2seq
## To reproduce
Steps to reproduce the behavior:
```
python run_eval.py \
--model_name ${MODEL_DIR} \
--input_path $DATA_DIR/val.src \
--save_path $DATA_DIR/xxx.txt \
--task summarization \
--device cuda:0 \
--bs 50 \
--min_length 2 \
--max_length 32 \
--do_sample True \
--top_k 10 \
--num_return_sequences 5
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12375/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12374/comments | https://api.github.com/repos/huggingface/transformers/issues/12374/events | https://github.com/huggingface/transformers/issues/12374 | 930,683,943 | MDU6SXNzdWU5MzA2ODM5NDM= | 12,374 | ImportError: cannot import name 'BertEncoder' from 'transformers' | {
"login": "Tikquuss",
"id": 49171640,
"node_id": "MDQ6VXNlcjQ5MTcxNjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/49171640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tikquuss",
"html_url": "https://github.com/Tikquuss",
"followers_url": "https://api.github.com/users/Tikquuss/followers",
"following_url": "https://api.github.com/users/Tikquuss/following{/other_user}",
"gists_url": "https://api.github.com/users/Tikquuss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tikquuss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tikquuss/subscriptions",
"organizations_url": "https://api.github.com/users/Tikquuss/orgs",
"repos_url": "https://api.github.com/users/Tikquuss/repos",
"events_url": "https://api.github.com/users/Tikquuss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tikquuss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! I'm failing to find a version where this worked, going back to version 1.0.0. If you have it handy, could you point me to the version that had it?\r\n\r\nIf you want to import `BertEncoder` you can do it as such:\r\n```\r\nfrom transformers.models.bert.modeling_bert import BertEncoder\r\n```"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | ```
from transformers import BertEncoder
```
A week ago this import was working normally, but this morning I ran my code and got this error.
```
ImportError: cannot import name 'BertEncoder' from 'transformers' (unknown location)
```
How can I import BertEncoder? Note that ```'BertEncoder' in dir(transformers)``` is `False`.
########################################
- `transformers` version: 4.8.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12374/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12373/comments | https://api.github.com/repos/huggingface/transformers/issues/12373/events | https://github.com/huggingface/transformers/pull/12373 | 930,681,899 | MDExOlB1bGxSZXF1ZXN0Njc4MzI2NTkx | 12,373 | Added .lower() method to label | {
"login": "Himanshunitrr",
"id": 53482681,
"node_id": "MDQ6VXNlcjUzNDgyNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/53482681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Himanshunitrr",
"html_url": "https://github.com/Himanshunitrr",
"followers_url": "https://api.github.com/users/Himanshunitrr/followers",
"following_url": "https://api.github.com/users/Himanshunitrr/following{/other_user}",
"gists_url": "https://api.github.com/users/Himanshunitrr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Himanshunitrr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Himanshunitrr/subscriptions",
"organizations_url": "https://api.github.com/users/Himanshunitrr/orgs",
"repos_url": "https://api.github.com/users/Himanshunitrr/repos",
"events_url": "https://api.github.com/users/Himanshunitrr/events{/privacy}",
"received_events_url": "https://api.github.com/users/Himanshunitrr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the proposal. It's a good thought, but I actually am not sure this is what we want to do. While it may seem intuitively to make sense that different cases shouldn't affect the scores, I don't think we want to unilaterally send everything to lower case.\r\n\r\nOne reason is that casing actually can provide an important signal. For example, capitalizing the A in \"Apple\" might be useful for the model to determine whether you mean the fruit or the tech giant. I think in general, it would be best to leave this to the user to decide how they want to pass the candidate labels' casing.",
"Ok"
] | 1,624 | 1,624 | 1,624 | NONE | null | Same labels with different cases (like "Hindi", "hindi", "hIndi") can be passed and the predicted scores vary a lot. So if we add .lower() method we can solve that.
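A small illustration of the behaviour described above (this is a sketch; the default model download and the exact scores will vary):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default NLI model
text = "This film was shot entirely in Mumbai."
# The same label written in different casings can receive noticeably different scores.
for label in ["Hindi", "hindi", "hIndi"]:
    result = classifier(text, candidate_labels=[label])
    print(label, round(result["scores"][0], 4))
```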
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12373/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12373",
"html_url": "https://github.com/huggingface/transformers/pull/12373",
"diff_url": "https://github.com/huggingface/transformers/pull/12373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12373.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12372/comments | https://api.github.com/repos/huggingface/transformers/issues/12372/events | https://github.com/huggingface/transformers/issues/12372 | 930,674,839 | MDU6SXNzdWU5MzA2NzQ4Mzk= | 12,372 | Wav2vec2 Dataset | {
"login": "gouthamanwavicle",
"id": 82436706,
"node_id": "MDQ6VXNlcjgyNDM2NzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/82436706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gouthamanwavicle",
"html_url": "https://github.com/gouthamanwavicle",
"followers_url": "https://api.github.com/users/gouthamanwavicle/followers",
"following_url": "https://api.github.com/users/gouthamanwavicle/following{/other_user}",
"gists_url": "https://api.github.com/users/gouthamanwavicle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gouthamanwavicle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gouthamanwavicle/subscriptions",
"organizations_url": "https://api.github.com/users/gouthamanwavicle/orgs",
"repos_url": "https://api.github.com/users/gouthamanwavicle/repos",
"events_url": "https://api.github.com/users/gouthamanwavicle/events{/privacy}",
"received_events_url": "https://api.github.com/users/gouthamanwavicle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null |
I have been trying the code described in the huggingface blog https://huggingface.co/blog/fine-tune-wav2vec2-english.
In the blog, viewing a random text sample from the dataset looks as shown below:

For me, running the same code shows the following (for the same sentence):

Could you help me understand what is causing this issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12372/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12371/comments | https://api.github.com/repos/huggingface/transformers/issues/12371/events | https://github.com/huggingface/transformers/pull/12371 | 930,669,936 | MDExOlB1bGxSZXF1ZXN0Njc4MzE3NTkw | 12,371 | [Documentation] Warn that DataCollatorForWholeWordMask is limited to BertTokenizer-like tokenizers | {
"login": "ionicsolutions",
"id": 32523967,
"node_id": "MDQ6VXNlcjMyNTIzOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32523967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ionicsolutions",
"html_url": "https://github.com/ionicsolutions",
"followers_url": "https://api.github.com/users/ionicsolutions/followers",
"following_url": "https://api.github.com/users/ionicsolutions/following{/other_user}",
"gists_url": "https://api.github.com/users/ionicsolutions/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ionicsolutions/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ionicsolutions/subscriptions",
"organizations_url": "https://api.github.com/users/ionicsolutions/orgs",
"repos_url": "https://api.github.com/users/ionicsolutions/repos",
"events_url": "https://api.github.com/users/ionicsolutions/events{/privacy}",
"received_events_url": "https://api.github.com/users/ionicsolutions/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Currently, the `DataCollatorForWholeWordMasking` added with #7925 only works for one specific family of tokenizers, but the documentation does not mention this nor is the user warned when using this data collator with an incompatible tokenizer. Since the data collator will run with all tokenizers, just not produce the desired output, this is very misleading for users.
This PR adds a note to the documentation and a warning that is issued when a user attempts to create the whole word mask with a (presumably) incompatible tokenizer.
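A rough sketch of the kind of check being added (the exact condition and message here are assumptions, not the literal diff):

```python
import warnings

from transformers import BertTokenizer, BertTokenizerFast


def warn_if_unsupported(tokenizer):
    # The whole-word heuristic relies on the "##" continuation prefix produced by
    # BERT-style WordPiece tokenizers, so other tokenizers may silently give wrong masks.
    if not isinstance(tokenizer, (BertTokenizer, BertTokenizerFast)):
        warnings.warn(
            "DataCollatorForWholeWordMask is only tested with BertTokenizer-like "
            "tokenizers; other tokenizers may produce incorrect whole-word masks."
        )
```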
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11768
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12371/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12371",
"html_url": "https://github.com/huggingface/transformers/pull/12371",
"diff_url": "https://github.com/huggingface/transformers/pull/12371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12371.patch",
"merged_at": 1624880396000
} |
https://api.github.com/repos/huggingface/transformers/issues/12370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12370/comments | https://api.github.com/repos/huggingface/transformers/issues/12370/events | https://github.com/huggingface/transformers/pull/12370 | 930,663,070 | MDExOlB1bGxSZXF1ZXN0Njc4MzEyMjc1 | 12,370 | [WIP] DataCollatorForTextInfilling | {
"login": "ionicsolutions",
"id": 32523967,
"node_id": "MDQ6VXNlcjMyNTIzOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32523967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ionicsolutions",
"html_url": "https://github.com/ionicsolutions",
"followers_url": "https://api.github.com/users/ionicsolutions/followers",
"following_url": "https://api.github.com/users/ionicsolutions/following{/other_user}",
"gists_url": "https://api.github.com/users/ionicsolutions/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ionicsolutions/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ionicsolutions/subscriptions",
"organizations_url": "https://api.github.com/users/ionicsolutions/orgs",
"repos_url": "https://api.github.com/users/ionicsolutions/repos",
"events_url": "https://api.github.com/users/ionicsolutions/events{/privacy}",
"received_events_url": "https://api.github.com/users/ionicsolutions/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"It's still on my agenda to brush this up",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This is a wonderful effort. any update on this? also if you can add TF call that would be great.",
"@salrowili Sadly, I didn't find time for it. I'm also not sure whether this still fits with the library, there might have been some updates to the data collators in the meantime.\r\n\r\nI'm still interested in working on this but realistically I won't have time to do that unless I need it for an ongoing project. Would be up for a collaboration?",
"@ionicsolutions Thanks for replying back. What about BartForConditionalGeneration? is it enough to train BART from scratch like in this example https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_mlm_flax.py#L241 . However, as you can see it uses FlaxDataCollatorForLanguageModeling which i am not sure if it uses text in filling task?\r\nmaybe you can check this repo also https://github.com/cosmoquester/transformers-bart-pretrain . He already implemented text in filling task but with tensorflow dataset. However, this repo does not work with HF >4.11 because of some logit issue. Maybe you can contact the author of this repo and asks his permission to use his function and collaboration if he is willing to do. He is probably better than me in pushing this project forward. However, what I can help is that I can test any function you develop in this project in scale (e.g. pre-training it on BART-large from scratch) and see how it will perform and share colab example with research community. What i like about BART over T5 is the inference time and memory usage during fine-tuning and it also can achieve SOTA on SQuAD and GLUE in addition to generative tasks (e.g. summarization) so i think this project is much needed from research community.",
"@salrowili I'm also interested in infilling generation and was wondering if you've made any progress? I see your last post was three weeks ago, so I'm wondering if maybe you found an alternative approach?",
"@jbmaxwell I try out BART implementation of FLAX, XLA with TPU and Keras BART @ https://github.com/cosmoquester/transformers-bart-pretrain . Keras BART is my best model among those and hence that why i was looking for textinfliing. I think also the implementation of BART is not optimal with the hugging face library, especially for BART large. I am also working with fairseq now and torch xla and I think this will be the best among all variety that I tried out. I suggest for you ask for TPU access from google https://sites.research.google/trc/ and try out fairseq xla with BART but fix the dynamic shape by using pre-defined input shape in my frok https://github.com/salrowili/fairseq. You can see latest commits to see what changes I made. with TPUv3-8 and BART will get a speed of ~100k wps but you need to keep the log interval 10 and num_bucket=5. I run BART on my 3090 and it gives me a speed of 30K wps. 100k wps translate to ~20K steps/day which is slow compared to BERT with TF (~125K stepts/day) with batch size of 256 and max. seq. length of 512. which means it will take you around one month to finish 500K steps with BART (:\r\nIf you find an alternative solution or you are willing to improve BART implementation with text filling and JAX, TF it would be good if you share your solution as i share mine (:",
"I hadn't seen this before—thanks or the link! \r\nI'll give it a try. \r\nI'm working with compact, non-natural language inputs and small datasets (for now), and generally reduce model sizes significantly from the stock versions, so I'm not too worried about training resources. Faster is better, of course, but not a deal-breaker for me."
] | 1,624 | 1,650 | 1,630 | CONTRIBUTOR | null | # What does this PR do?
A DataCollator for the BART "Text Infilling" pre-training task.
The implementation borrows ideas from `fairseq`'s more complex [DenoisingDataset](https://github.com/pytorch/fairseq/blob/1bba712622b8ae4efb3eb793a8a40da386fe11d0/fairseq/data/denoising_dataset.py).
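For context, here is a heavily simplified sketch of the text infilling noising itself (span lengths drawn from a Poisson distribution, each span collapsed into a single mask token). The real `DenoisingDataset` also handles whole-word boundaries, length-0 insertions, special tokens and sentence permutation, all omitted here:
```python
import numpy as np


def text_infilling(input_ids, mask_token_id, mask_ratio=0.3, poisson_lambda=3.0, seed=None):
    """Replace random spans of tokens with a single mask token (illustration only)."""
    rng = np.random.default_rng(seed)
    ids = list(input_ids)
    budget = int(round(len(ids) * mask_ratio))  # rough number of tokens left to mask
    out, i = [], 0
    while i < len(ids):
        if budget > 0 and rng.random() < mask_ratio:
            span = max(1, int(rng.poisson(poisson_lambda)))
            span = min(span, budget, len(ids) - i)
            out.append(mask_token_id)  # the whole span becomes one mask token
            budget -= span
            i += span
        else:
            out.append(ids[i])
            i += 1
    return out


# e.g. text_infilling([10, 11, 12, 13, 14, 15], mask_token_id=4, seed=0)
# might return something like [10, 4, 13, 14, 15]
```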
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #5428
(Addresses #5096)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12370/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12370",
"html_url": "https://github.com/huggingface/transformers/pull/12370",
"diff_url": "https://github.com/huggingface/transformers/pull/12370.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12370.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12369/comments | https://api.github.com/repos/huggingface/transformers/issues/12369/events | https://github.com/huggingface/transformers/issues/12369 | 930,647,919 | MDU6SXNzdWU5MzA2NDc5MTk= | 12,369 | [Trainer.py] when --load_best_model_at_end is set in Distributed Training | {
"login": "logoutAgain",
"id": 23735761,
"node_id": "MDQ6VXNlcjIzNzM1NzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/23735761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/logoutAgain",
"html_url": "https://github.com/logoutAgain",
"followers_url": "https://api.github.com/users/logoutAgain/followers",
"following_url": "https://api.github.com/users/logoutAgain/following{/other_user}",
"gists_url": "https://api.github.com/users/logoutAgain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/logoutAgain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/logoutAgain/subscriptions",
"organizations_url": "https://api.github.com/users/logoutAgain/orgs",
"repos_url": "https://api.github.com/users/logoutAgain/repos",
"events_url": "https://api.github.com/users/logoutAgain/events{/privacy}",
"received_events_url": "https://api.github.com/users/logoutAgain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you try again on the latest version? This bug has normally been fixed (see [this comment](https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/src/transformers/trainer.py#L1351) and the four line below).",
"> Could you try again on the latest version? This bug has normally been fixed (see [this comment](https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/src/transformers/trainer.py#L1351) and the four line below).\r\n\r\nYes, it looks good. I download the source code to the project, so I can't update it in time. I'm sorry to take up your time。"
] | 1,624 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4, not very sure
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): ////
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed training in a single node with multi-gpus
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> @sgugger
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
trainer.py
The tasks I am working on is:
my own task or dataset
## To reproduce
I'm not sure if it's a bug or I misunderstood. So I am here for help.
Steps to reproduce the behavior:
1. set `--load_best_model_at_end` and use distributed training mode with multi-gpus.
2. When the best model appears in the last step, the main process needs to save the model in `self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)` after training. However, the subprocess may run `self.model = self.model.from_pretrained(self.state.best_model_checkpoint)` before the best model is completely saved.
3. So the subprocess will raise `OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory <best model dir>......`
Evaluate:
1. I printed the time after `self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)` and the time before `self.model = self.model.from_pretrained(self.state.best_model_checkpoint)`. The result seems to confirm my guess.
2. I added `dist.barrier()` after `self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)` and the error no longer appears.
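A minimal sketch of the workaround from point 2 (this is the band-aid I tried, not necessarily how it should be fixed inside `trainer.py` itself):
```python
import torch.distributed as dist


def wait_for_everyone():
    # All ranks block here, so no rank can start loading the best checkpoint
    # before the rank that writes it has finished saving.
    if dist.is_available() and dist.is_initialized():
        dist.barrier()


# Roughly, inside Trainer.train() (placement is illustrative):
#   self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
#   wait_for_everyone()
#   ...
#   self.model = self.model.from_pretrained(self.state.best_model_checkpoint)
```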
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12369/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12368/comments | https://api.github.com/repos/huggingface/transformers/issues/12368/events | https://github.com/huggingface/transformers/pull/12368 | 930,642,639 | MDExOlB1bGxSZXF1ZXN0Njc4Mjk2NTUw | 12,368 | [Examples] Replace `print` statement with `logger.info` in QA example utils | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Earlier in `utils_qa.py`, the post-processing path used by `run_qa_beam_search.py` was using `print()` to show where the prediction files are saved, while the path used by `run_qa.py` uses `logger.info()`, which seems more appropriate.
This PR replaces `print()` with `logger.info()`.
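The change itself is essentially the following (the path is only an illustrative example):
```python
import logging

logger = logging.getLogger(__name__)

prediction_file = "/tmp/debug_squad/predict_predictions.json"  # illustrative path

# before: print(f"Saving predictions to {prediction_file}.")
logger.info(f"Saving predictions to {prediction_file}.")  # after: respects the configured log level
```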
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Discussed in [Issue](https://github.com/huggingface/transformers/issues/12363)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12368/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12368",
"html_url": "https://github.com/huggingface/transformers/pull/12368",
"diff_url": "https://github.com/huggingface/transformers/pull/12368.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12368.patch",
"merged_at": 1624725085000
} |
https://api.github.com/repos/huggingface/transformers/issues/12367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12367/comments | https://api.github.com/repos/huggingface/transformers/issues/12367/events | https://github.com/huggingface/transformers/pull/12367 | 930,632,450 | MDExOlB1bGxSZXF1ZXN0Njc4Mjg4ODU3 | 12,367 | [Examples] Added context manager to datasets map | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12363
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12367/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12367",
"html_url": "https://github.com/huggingface/transformers/pull/12367",
"diff_url": "https://github.com/huggingface/transformers/pull/12367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12367.patch",
"merged_at": 1624896840000
} |
https://api.github.com/repos/huggingface/transformers/issues/12366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12366/comments | https://api.github.com/repos/huggingface/transformers/issues/12366/events | https://github.com/huggingface/transformers/issues/12366 | 930,628,407 | MDU6SXNzdWU5MzA2Mjg0MDc= | 12,366 | Tokens Jumbling | {
"login": "PhenomenalOnee",
"id": 47495143,
"node_id": "MDQ6VXNlcjQ3NDk1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/47495143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhenomenalOnee",
"html_url": "https://github.com/PhenomenalOnee",
"followers_url": "https://api.github.com/users/PhenomenalOnee/followers",
"following_url": "https://api.github.com/users/PhenomenalOnee/following{/other_user}",
"gists_url": "https://api.github.com/users/PhenomenalOnee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhenomenalOnee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhenomenalOnee/subscriptions",
"organizations_url": "https://api.github.com/users/PhenomenalOnee/orgs",
"repos_url": "https://api.github.com/users/PhenomenalOnee/repos",
"events_url": "https://api.github.com/users/PhenomenalOnee/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhenomenalOnee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Which model are you using exactly? `bert-base-uncased` is not a `sentence-transformers` model",
"i used \"bert-base-uncased\" architecture from here: https://www.sbert.net/docs/training/overview.html to train for sentence embedding tak. I modified the number of layers and hidden size. \r\nAlso as shown in above image, i also tested \"bert-base-nli-cls-token\" (from https://huggingface.co/sentence-transformers/bert-base-nli-cls-token) and also tested (https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens). But all shows same issue\r\n",
"Pinging @nreimers ",
"Hi @PhenomenalOnee \r\nthe two models you linked are deprecated, I recommend to use the paraphrase v2 models. \r\n\r\nBut they will show the same behavior. That is how it is. The models don't check if a sentence makes sense or is grammatically correct. They try to infer the semantics of the sentence and they try to be robust for spelling mistakes, grammatical errors, word shuffleing etc.\r\n\r\nIf this is undesired for your task, you must create training data that teaches the network that these examples should not be close in the vector space.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | NONE | null | I have trained a "bert-base-uncased" model (with lesser layers) on my dataset with my tokenizer for sentence similarity task. The final sentence embedding is formed using mean pooling strategy. Now during inference if my tokens for sentence1 is [t1,t2,t3,t4,t5] and for senetence2, I randomly shuffle these tokens example [t3,t1,t2,t5,t4], the score is really high, but sentence2 does't make any sense.
I tested the pretrained bert-base-uncased model and found out same problem, as shown below
Enter text1:a man is riding a horse
Enter text2:a riding man is a horse
Tokens1: [101, 1037, 2158, 2003, 5559, 1037, 3586, 102]
Tokens2: [101, 1037, 5559, 2158, 2003, 1037, 3586, 102]
Similarity: 0.703365683555603
Enter text1:A boy can throw a stone up to a maximum height
Enter text2:A stone up to a boy can maximum throw a height
Tokens1: [101, 1037, 2879, 2064, 5466, 1037, 2962, 2039, 2000, 1037, 4555, 4578, 102]
Tokens2: [101, 1037, 2962, 2039, 2000, 1037, 2879, 2064, 4555, 5466, 1037, 4578, 102]
Similarity: 0.9277969598770142

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12365/comments | https://api.github.com/repos/huggingface/transformers/issues/12365/events | https://github.com/huggingface/transformers/pull/12365 | 930,605,355 | MDExOlB1bGxSZXF1ZXN0Njc4MjY3NDk0 | 12,365 | [Examples] Update Example Template for `--log_level` feature | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12295
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Discussed on this [PR](https://github.com/huggingface/transformers/pull/12359)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12365/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12365",
"html_url": "https://github.com/huggingface/transformers/pull/12365",
"diff_url": "https://github.com/huggingface/transformers/pull/12365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12365.patch",
"merged_at": 1624679430000
} |
https://api.github.com/repos/huggingface/transformers/issues/12364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12364/comments | https://api.github.com/repos/huggingface/transformers/issues/12364/events | https://github.com/huggingface/transformers/pull/12364 | 930,563,883 | MDExOlB1bGxSZXF1ZXN0Njc4MjMzNDcw | 12,364 | [CI] add dependency table sync verification | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"So now when the table is out of sync it fails with:\r\n\r\n\r\n\r\nLet me know if this is good and I will re-test with when it's in sync."
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | Sometimes version is being modified in `setup.py` but not updated in the autogenerated versions file `src/transformers/dependency_versions_table.py` and then all devs start getting this uncommitted change in their clone on `make fixup/style`. This PR adds a new make target to do the checking and adds it to the `check_code_quality` CI job.
Expecting it to fail in this PR initially as I didn't sync the table.
TODO: backout the setup.py version changes before merging.
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12364/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12364",
"html_url": "https://github.com/huggingface/transformers/pull/12364",
"diff_url": "https://github.com/huggingface/transformers/pull/12364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12364.patch",
"merged_at": 1624895759000
} |
https://api.github.com/repos/huggingface/transformers/issues/12363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12363/comments | https://api.github.com/repos/huggingface/transformers/issues/12363/events | https://github.com/huggingface/transformers/issues/12363 | 930,539,443 | MDU6SXNzdWU5MzA1Mzk0NDM= | 12,363 | [examples] add `main_process_first` context manager to datasets map calls | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Can I take this?\r\nSince it will not take much time for me",
"Yes, thank you, @bhadreshpsavani ",
"Hi @stas00 and @sgugger,\r\nIn the earlier PR, I wanted to ask one thing\r\nin the below code,\r\nhttps://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/examples/pytorch/question-answering/utils_qa.py#L416-L425\r\nShall we use `logger.info()` instead `print()` like we did in below code\r\nhttps://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/examples/pytorch/question-answering/utils_qa.py#L228-L237\r\nor is it intensionally written like this?\r\n\r\nBecause of this when we run the `run_qa_beam_search.py` script we get the below kind of prints for the train, eval, and test stage even when we pass `--log_level error`\r\n```\r\nSaving predictions to /tmp/debug_squad/predict_predictions.json. | 0/5 [00:00<?, ?it/s]\r\nSaving nbest_preds to /tmp/debug_squad/predict_nbest_predictions.json.\r\nSaving null_odds to /tmp/debug_squad/predict_null_odds.json.\r\n```",
"good catch, @bhadreshpsavani! `logger.info()` please as you suggested.\r\n\r\nPlease feel free to make a separate PR if you don't want to mix this with this particular change.\r\n",
"Hi @stas00 and @sgugger,\r\nThere is a minor thing,\r\nat this line \r\nhttps://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/examples/pytorch/text-classification/run_glue.py#L529\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nwe are getting\r\n```\r\nexamples/pytorch/text-classification/run_glue.py:530: FutureWarning: remove_columns_ is deprecated and will be removed in the next major version of datasets. Use Dataset.remove_columns instead.\r\n predict_dataset.remove_columns_(\"label\")\r\n```\r\nfix is,\r\n```python\r\npredict_dataset.remove_columns(\"label\") \r\n```\r\nshall we change it?\r\n\r\nit is also present at below line\r\nhttps://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py#L506",
"yes, except you now need to assign the return value since this is no longer an inplace edit. Therefore in both places it'll be now be:\r\n```\r\nx = x.remove_columns(\"label\")\r\n```\r\nwith the right x of course.\r\n\r\nthank you for fixing it.\r\n\r\nreference: https://huggingface.co/docs/datasets/processing.html#removing-one-or-several-columns-remove-columns\r\n",
"I have committed changes in the open PR for the fix of this warning!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | We need to replay this addition that has been modelled in `run_translation.py` in https://github.com/huggingface/transformers/pull/12351 to all other pytorch examples
The actual changes for the model example are:
https://github.com/huggingface/transformers/pull/12351/files#diff-09777f56cee1060a535a72ce99a6c96cdb7f330c8cc3f9dcca442b3f7768237a
(just `run_translation.py`)
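Concretely, after the change each `map` call ends up wrapped like this (excerpt-style sketch based on `run_translation.py`; the `map` kwargs vary from script to script):
```python
with training_args.main_process_first(desc="train dataset map pre-processing"):
    train_dataset = train_dataset.map(
        preprocess_function,
        batched=True,
        num_proc=data_args.preprocessing_num_workers,
        remove_columns=column_names,
        load_from_cache_file=not data_args.overwrite_cache,
        desc="Running tokenizer on train dataset",
    )
```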
Here is a time-saver:
```
find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(train_dataset = train_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="train dataset map pre-processing"):\n$p$t] } }' {} \;
find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(eval_dataset = eval_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="validation dataset map pre-processing"):\n$p$t] } }' {} \;
find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(predict_dataset = predict_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="prediction dataset map pre-processing"):\n$p$t] } }' {} \;
git checkout examples/pytorch/translation/run_translation.py
make fixup
```
I noticed that other scripts may have additional `datasets.map` calls, which also get automatically rewritten by the scripts above, so please review the changes to see if the `desc` needs to be modified. We want the context manager on all of these calls, but it's possible that the perl rewrite scripts didn't catch some.
- This template also needs the same change:
`templates/adding_a_new_example_script/\{\{cookiecutter.directory_name\}\}/run_\{\{cookiecutter.example_shortcut\}\}.py`
You can do it via perl, manually, or whatever other way works for you.
And please validate that the scripts still work, by either running:
```
RUN_SLOW=1 pytest examples/pytorch/test_examples.py
```
or running each script manually as explained in its corresponding `README.md` file.
This issue is open to all and should be very simple to complete; the main effort is the validation.
And thank you for your contribution!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12363/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12362/comments | https://api.github.com/repos/huggingface/transformers/issues/12362/events | https://github.com/huggingface/transformers/pull/12362 | 930,493,099 | MDExOlB1bGxSZXF1ZXN0Njc4MTcyMDUw | 12,362 | fixed multiplechoice tokenization | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for your PR but the documentation is correct: when tokenizing several pairs of sentences, the tokenizer API takes the list of first sentences, then the list of second sentences. Here the first sentences are the prompt (twice) and the second sentences are the choices.",
"@sgugger that doesn't seem to be correct:\r\n```python\r\nfrom transformers import BertTokenizer, BertForMultipleChoice\r\nimport torch\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertForMultipleChoice.from_pretrained('bert-base-uncased')\r\n\r\nprompt = \"In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced.\"\r\nchoice0 = \"It is eaten with a fork and a knife.\"\r\nchoice1 = \"It is eaten while held in the hand.\"\r\nlabels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1\r\n\r\nencoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='pt', padding=True, return_attention_mask=False, return_token_type_ids=False)\r\nprint(tokenizer.decode(encoding.input_ids[0]))\r\nprint(tokenizer.decode(encoding.input_ids[1]))\r\n```\r\nOutput:\r\n```\r\n[CLS] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP]\r\n[CLS] it is eaten with a fork and a knife. [SEP] it is eaten while held in the hand. [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]\r\n```",
"An alternative fix is to:\r\n```python\r\nencoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True, return_attention_mask=False, return_token_type_ids=False)\r\nprint(tokenizer.decode(encoding.input_ids[0]))\r\nprint(tokenizer.decode(encoding.input_ids[1]))\r\n```\r\nOutput:\r\n```\r\n[CLS] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP] it is eaten with a fork and a knife. [SEP]\r\n[CLS] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP] it is eaten while held in the hand. [SEP] [PAD]\r\n```\r\nThe problem is you are currently not giving the tokenizer two lists. You are only giving him one list (first sentence).",
"Yes your second fix is the good one! I missed the extra pair of brackets.",
"@sgugger: pushed!",
"Thanks a lot!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
The model would have seen two sequences:
1. [CLS]prompt[SEP]prompt[SEP]
2. [CLS]choice0[SEP]choice1[SEP]
That is not correct as we want a contextualized embedding of prompt and choice.
This PR fixes the documentation
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12362/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12362",
"html_url": "https://github.com/huggingface/transformers/pull/12362",
"diff_url": "https://github.com/huggingface/transformers/pull/12362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12362.patch",
"merged_at": 1624657268000
} |
https://api.github.com/repos/huggingface/transformers/issues/12361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12361/comments | https://api.github.com/repos/huggingface/transformers/issues/12361/events | https://github.com/huggingface/transformers/pull/12361 | 930,479,416 | MDExOlB1bGxSZXF1ZXN0Njc4MTYwMjM3 | 12,361 | Easily train a new fast tokenizer from a given one | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Failure is spurious, so merging!"
] | 1,624 | 1,624 | 1,624 | COLLABORATOR | null | # What does this PR do?
This PR does two different things at the same time:
- it makes it possible to instantiate a subclass of `PreTrainedTokenizerFast` with just the tokenizer object, by making arguments like vocab or merges optional (only done for three models here, but this can be completed if the design is accepted)
- adds a method to train a new fast tokenizer from an existing one, using the same normalizer, pre-tokenizers and post-processors.
With this done, one can do:
```
from transformers import AutoTokenizer
checkpoint = "bert-base-cased" # or any checkpoint that has a fast tokenizer.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
assert tokenizer.is_fast, "This only works for fast tokenizers."
# Should be a generator of list of texts.
training_corpus = [
["This is the first sentence.", "This is the second one."],
["This sentence (contains #) over symbols and numbers 12 3.", "But not this one."],
]
new_tokenizer = tokenizer.train_new_from_iterator(training_corpus, vocab_size=25000)
```
The new tokenizer can then be used, saved, pushed to the hub. It has the same type as `tokenizer`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12361/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12361/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12361",
"html_url": "https://github.com/huggingface/transformers/pull/12361",
"diff_url": "https://github.com/huggingface/transformers/pull/12361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12361.patch",
"merged_at": 1624993208000
} |
https://api.github.com/repos/huggingface/transformers/issues/12360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12360/comments | https://api.github.com/repos/huggingface/transformers/issues/12360/events | https://github.com/huggingface/transformers/pull/12360 | 930,459,642 | MDExOlB1bGxSZXF1ZXN0Njc4MTQ0MDE3 | 12,360 | [examples] remove extra white space from log format | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR removes the extraneous triple white space from log format in all examples.
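In other words, the format string passed to `logging.basicConfig` in the examples contained a run of three spaces; a sketch of the cleaned-up call (the exact position of the extra spaces and the surrounding kwargs may differ per script):
```python
import logging
import sys

# before (sketch): format="%(asctime)s - %(levelname)s - %(name)s -   %(message)s"
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[logging.StreamHandler(sys.stdout)],
)
```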
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12360/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12360",
"html_url": "https://github.com/huggingface/transformers/pull/12360",
"diff_url": "https://github.com/huggingface/transformers/pull/12360.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12360.patch",
"merged_at": 1624652416000
} |
https://api.github.com/repos/huggingface/transformers/issues/12359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12359/comments | https://api.github.com/repos/huggingface/transformers/issues/12359/events | https://github.com/huggingface/transformers/pull/12359 | 930,429,416 | MDExOlB1bGxSZXF1ZXN0Njc4MTIwNTYz | 12,359 | [Examples] Replicates the new --log_level feature to all trainer-based pytorch | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also as mentioned earlier, let's add:\r\n```\r\n transformers.utils.logging.enable_default_handler()\r\n transformers.utils.logging.enable_explicit_format()\r\n```\r\neverywhere they are missing - I see there are quite a few places.\r\n\r\nThank you!",
"@bhadreshpsavani, we forgot to take care of the template `templates/adding_a_new_example_script/\\{\\{cookiecutter.directory_name\\}\\}/run_\\{\\{cookiecutter.example_shortcut\\}\\}.py`\r\nif you don't mind adding this change there too in another PR.\r\n \r\nThank you!\r\n",
"Sure I will add it,\nYa, i totally forgot!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12295
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Notes:
- The required changes are added for all the trainer-based examples except `run_generation.py` since it seems very different.
- Please let me know if any modifications are needed.
## Who can review?
@stas00 @sgugger
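For reference, a minimal sketch of the per-script logging setup this PR replicates across the examples. It assumes a `transformers` version that already includes the `--log_level` feature (so `TrainingArguments.get_process_log_level()` exists); the `output_dir` and `log_level` values are placeholders:

```python
import logging
import sys

import datasets
import transformers
from transformers import TrainingArguments

logger = logging.getLogger(__name__)

# Placeholder arguments; real scripts get these from HfArgumentParser.
training_args = TrainingArguments(output_dir="out", log_level="info")

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[logging.StreamHandler(sys.stdout)],
)
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
```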
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12359",
"html_url": "https://github.com/huggingface/transformers/pull/12359",
"diff_url": "https://github.com/huggingface/transformers/pull/12359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12359.patch",
"merged_at": 1624658322000
} |
https://api.github.com/repos/huggingface/transformers/issues/12358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12358/comments | https://api.github.com/repos/huggingface/transformers/issues/12358/events | https://github.com/huggingface/transformers/pull/12358 | 930,398,176 | MDExOlB1bGxSZXF1ZXN0Njc4MDk2MTc5 | 12,358 | Tensorflow LM examples | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | CLM and MLM examples for Tensorflow - despite the TF docs' insistence, I think we can use the dataset-generator methods here to stream data too large for memory to a multi-GPU or TPU setup! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12358/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12358",
"html_url": "https://github.com/huggingface/transformers/pull/12358",
"diff_url": "https://github.com/huggingface/transformers/pull/12358.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12358.patch",
"merged_at": 1624905104000
} |
https://api.github.com/repos/huggingface/transformers/issues/12357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12357/comments | https://api.github.com/repos/huggingface/transformers/issues/12357/events | https://github.com/huggingface/transformers/pull/12357 | 930,331,079 | MDExOlB1bGxSZXF1ZXN0Njc4MDQzNjA2 | 12,357 | Replace NotebookProgressReporter by ProgressReporter in Ray Tune run | {
"login": "krfricke",
"id": 14904111,
"node_id": "MDQ6VXNlcjE0OTA0MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krfricke",
"html_url": "https://github.com/krfricke",
"followers_url": "https://api.github.com/users/krfricke/followers",
"following_url": "https://api.github.com/users/krfricke/following{/other_user}",
"gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krfricke/subscriptions",
"organizations_url": "https://api.github.com/users/krfricke/orgs",
"repos_url": "https://api.github.com/users/krfricke/repos",
"events_url": "https://api.github.com/users/krfricke/events{/privacy}",
"received_events_url": "https://api.github.com/users/krfricke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Has this fix been packaged with the latest release? I see that the fix was merged into the master branch 18 days ago and the latest release (v4.8.2) was 13 days ago but then I don't see the issue mentioned in the patch release notes. \r\n\r\nI assume it hasn't since I am still getting the same output."
] | 1,624 | 1,626 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
This PR replaces the local trainer's NotebookProgressReporter callback with a ProgressReporter. Generally we cannot guarantee correct display of IPython renderables in remote processes (e.g. because they get redirected into files), so it's better to replace these with text-based reporters.
Otherwise, these are produced: `<IPython.core.display.HTML object>`
See also https://github.com/ray-project/ray/issues/16197
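A minimal sketch of the swap (illustrative only; `trainer` stands for the `Trainer` instance built inside the Ray Tune trainable, and the two callback classes are the ones `transformers` ships, see the actual diff for the exact change):

```python
from transformers.trainer_callback import ProgressCallback
from transformers.utils.notebook import NotebookProgressCallback

# In the remote trainable, fall back to the plain-text progress callback,
# since notebook HTML renderables do not display correctly there.
if trainer.pop_callback(NotebookProgressCallback):
    trainer.add_callback(ProgressCallback)
```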
<!-- Remove if not applicable -->
Fixes https://github.com/ray-project/ray/issues/16197
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12357/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12357",
"html_url": "https://github.com/huggingface/transformers/pull/12357",
"diff_url": "https://github.com/huggingface/transformers/pull/12357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12357.patch",
"merged_at": 1624644725000
} |
https://api.github.com/repos/huggingface/transformers/issues/12356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12356/comments | https://api.github.com/repos/huggingface/transformers/issues/12356/events | https://github.com/huggingface/transformers/pull/12356 | 930,097,684 | MDExOlB1bGxSZXF1ZXN0Njc3ODQ5MTE0 | 12,356 | Fixed a typo in readme | {
"login": "MichalPitr",
"id": 21157924,
"node_id": "MDQ6VXNlcjIxMTU3OTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/21157924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichalPitr",
"html_url": "https://github.com/MichalPitr",
"followers_url": "https://api.github.com/users/MichalPitr/followers",
"following_url": "https://api.github.com/users/MichalPitr/following{/other_user}",
"gists_url": "https://api.github.com/users/MichalPitr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichalPitr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichalPitr/subscriptions",
"organizations_url": "https://api.github.com/users/MichalPitr/orgs",
"repos_url": "https://api.github.com/users/MichalPitr/repos",
"events_url": "https://api.github.com/users/MichalPitr/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichalPitr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Fixes a simple typo in the readme file from "pr" to "or".
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12356",
"html_url": "https://github.com/huggingface/transformers/pull/12356",
"diff_url": "https://github.com/huggingface/transformers/pull/12356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12356.patch",
"merged_at": 1624621770000
} |
https://api.github.com/repos/huggingface/transformers/issues/12355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12355/comments | https://api.github.com/repos/huggingface/transformers/issues/12355/events | https://github.com/huggingface/transformers/pull/12355 | 930,076,143 | MDExOlB1bGxSZXF1ZXN0Njc3ODMwNTYx | 12,355 | [Flax] Add T5 pretraining script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just wanted to ask one last question about the tokenizer for norwegian. :slightly_smiling_face: \r\n\r\nIt seems to me that the tokenizer vocabularies of the \"original\" T5 models have `extras_ids` (100 for `T5-small`). It seems to me that in the proposed version of the tokenizer for norwegian, no extras_ids are introduced and I'm not sure where the [`extras_ids`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5_fast.py#L112) argument would be redefined when initializing `T5TokenizerFast`.",
"> I just wanted to ask one last question about the tokenizer for norwegian. \r\n> \r\n> It seems to me that the tokenizer vocabularies of the \"original\" T5 models have `extras_ids` (100 for `T5-small`). It seems to me that in the proposed version of the tokenizer for norwegian, no extras_ids are introduced and I'm not sure where the [`extras_ids`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5_fast.py#L112) argument would be redefined when initializing `T5TokenizerFast`.\r\n\r\nThose additional tokens will be added when the tokenizer is loaded with `T5TokenizerFast` as follows:\r\n\r\n```python\r\nfrom transformers import T5TokenizerFast\r\ntokenizer = T5TokenizerFast.from_pretrained(\"patrickvonplaten/t5-small-norwegian\")\r\n```\r\n\r\nWhen you print out the tokenzier:\r\n\r\n```python\r\nprint(tokenizer)\r\n```\r\n\r\nyou should see that the extra ids have been added :-) It is done automatically [here](https://github.com/huggingface/transformers/blob/e27707488911a4bae5936a1bdad0cfdb2018cebd/src/transformers/models/t5/tokenization_t5_fast.py#L117)\r\n\r\n",
"@patrickvonplaten , run_t5_mlm_flax.py uses different lr schedule than paper. Any specific reason for that?",
"Hey @danshirron! \r\n\r\nUsually, it shouldn't make a big difference. Original T5 model was trained using an inverse square root scheduler. From my experiments, linear scheduler happens to be slightly more robust for faster convergence. \r\n\r\nEither way, there are various inverse square root scheduler implementations (e.g., [optax](https://github.com/formermagic/git-t5/blob/main/git_t5/core/schedulers.py#L124) or [pytorch](https://github.com/pytorch/fairseq/blob/master/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py))."
] | 1,624 | 1,628 | 1,624 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds T5 pretraining in Flax. Thanks to @craffel a lot of the preprocessing code was copied from .
Tokenizer training code was largely copied from @SaulLu .
Also took this PR as a chance to integrate the new `push_to_hub` functionality that includes tensorboard logs to test out the new tensorboard functionality (cc @sgugger @LysandreJik @julien-c).
The tensorboard logs aren't correctly displayed though :-/- an example can be seen [here](https://huggingface.co/patrickvonplaten/dummy-t5-test).
Code is working, and the model seems to train. Will test it on a full training run over the weekend!
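Regarding the learning-rate question raised in the comments above, here is a minimal sketch of the T5-style inverse square root schedule expressed for optax; the hyper-parameter values and the choice of Adafactor are placeholders, not what this script uses:

```python
import jax.numpy as jnp
import optax

def inverse_sqrt_schedule(warmup_steps: int = 10_000):
    # lr(step) = 1 / sqrt(max(step, warmup_steps)): constant during warmup,
    # then decaying with the inverse square root of the step count.
    def schedule(step):
        return 1.0 / jnp.sqrt(jnp.maximum(step, warmup_steps))
    return schedule

optimizer = optax.adafactor(learning_rate=inverse_sqrt_schedule())
```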
## Who can review?
@LysandreJik @sgugger - would be great if you could check the README.md and the `push_to_hub=True` logic / process to see if the workflow fits
@SaulLu - would be great if you could take a look at the tokenizer code, since it's 99% copied from yours :-) (it seems to work well)
@patil-suraj @sgugger - would be awesome if you could make a more general review to see if code is written according to examples and you are fine with having a rather model-specific training script in the general examples. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12355/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12355",
"html_url": "https://github.com/huggingface/transformers/pull/12355",
"diff_url": "https://github.com/huggingface/transformers/pull/12355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12355.patch",
"merged_at": 1624907489000
} |
https://api.github.com/repos/huggingface/transformers/issues/12354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12354/comments | https://api.github.com/repos/huggingface/transformers/issues/12354/events | https://github.com/huggingface/transformers/issues/12354 | 929,949,418 | MDU6SXNzdWU5Mjk5NDk0MTg= | 12,354 | Input structure has type class tuple while shallow structure has type class transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput | {
"login": "seahrh",
"id": 4428622,
"node_id": "MDQ6VXNlcjQ0Mjg2MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4428622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seahrh",
"html_url": "https://github.com/seahrh",
"followers_url": "https://api.github.com/users/seahrh/followers",
"following_url": "https://api.github.com/users/seahrh/following{/other_user}",
"gists_url": "https://api.github.com/users/seahrh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seahrh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seahrh/subscriptions",
"organizations_url": "https://api.github.com/users/seahrh/orgs",
"repos_url": "https://api.github.com/users/seahrh/repos",
"events_url": "https://api.github.com/users/seahrh/events{/privacy}",
"received_events_url": "https://api.github.com/users/seahrh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, our Tensorflow examples are in flux right now, because we're in the process of updating them and generally trying to replace TFTrainer with native Keras code. The examples on that page may be outdated, but you can see an up-to-date example of using TF for QA here: https://github.com/huggingface/transformers/blob/master/examples/tensorflow/question-answering/run_qa.py\r\n\r\nThat said, thank you for the report - I'll make a point of taking a look at that page at some point and ensuring our examples there are also up-to-date!",
"Thank you! Studying the code in the link helped me to set up a minimal working example. We should probably close this issue after the TF example docs have been updated. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
- transformers version: 4.5.1
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- tensorflow: @Rocketknight1
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Official tensorflow example: https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Squad2
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Executed the official example in [my notebook](https://github.com/seahrh/kaggle-coleridge-initiative/blob/8087c6189d8e96679ec9a60816910ad86ed20480/hf_tf_squad2_finetune_example.ipynb) but encountered the following error on `model.fit`:
```
TypeError: The two structures don't have the same sequence type. Input structure has type <class 'tuple'>, while shallow structure has type <class 'transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput'>.
```
In cell 11, I made sure to `model.distilbert.return_dict = False` (if using 🤗 Transformers >3.02, make sure outputs are tuples) and had mapped the `start_positions` and `end_positions` as tuples in cell 9 (Keras will expect a tuple when dealing with labels).
I noted the following warning emitted by `model.fit`. If `return_dict` is always set to True at training time, then wouldn't this conflict with the "labels as tuple" requirement?
```
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
```
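For completeness, a sketch of the Keras-native pattern the linked `run_qa.py` example moved to in more recent `transformers` versions: the labels stay inside the input dict and no explicit loss is passed to `compile()`, so the model's internal QA loss is used. The checkpoint name and the fabricated batch below are placeholders:

```python
import numpy as np
import tensorflow as tf
from transformers import TFAutoModelForQuestionAnswering

model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

# Tiny fabricated batch purely to show the expected input format; real features
# come from the tokenizer exactly as in run_qa.py.
features = {
    "input_ids": np.random.randint(1, 1000, size=(8, 64)),
    "attention_mask": np.ones((8, 64), dtype=np.int64),
    "start_positions": np.random.randint(0, 64, size=(8,)),
    "end_positions": np.random.randint(0, 64, size=(8,)),
}
ds = tf.data.Dataset.from_tensor_slices(features).batch(4)

model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))  # no loss: internal loss is used
model.fit(ds, epochs=1)
```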
## Expected behavior
Official example is working.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12354/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12353/comments | https://api.github.com/repos/huggingface/transformers/issues/12353/events | https://github.com/huggingface/transformers/issues/12353 | 929,877,089 | MDU6SXNzdWU5Mjk4NzcwODk= | 12,353 | ERROR: Failed building wheel for tokenizers | {
"login": "JaheimLee",
"id": 18062264,
"node_id": "MDQ6VXNlcjE4MDYyMjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18062264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JaheimLee",
"html_url": "https://github.com/JaheimLee",
"followers_url": "https://api.github.com/users/JaheimLee/followers",
"following_url": "https://api.github.com/users/JaheimLee/following{/other_user}",
"gists_url": "https://api.github.com/users/JaheimLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JaheimLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JaheimLee/subscriptions",
"organizations_url": "https://api.github.com/users/JaheimLee/orgs",
"repos_url": "https://api.github.com/users/JaheimLee/repos",
"events_url": "https://api.github.com/users/JaheimLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/JaheimLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you open an issue on `tokenizers` instead? https://github.com/huggingface/tokenizers"
] | 1,624 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.1
- Platform: Macbook pro M1 16G
- Python version: 3.8.10
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.5.0
### Who can help
@LysandreJik
When I install transformers on my Mac, this error occurs:
`ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12353/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12352/comments | https://api.github.com/repos/huggingface/transformers/issues/12352/events | https://github.com/huggingface/transformers/pull/12352 | 929,801,495 | MDExOlB1bGxSZXF1ZXN0Njc3NTk5NDIz | 12,352 | [WIP][FIX] Prevent output some config files when using fast tokenizer | {
"login": "europeanplaice",
"id": 38364983,
"node_id": "MDQ6VXNlcjM4MzY0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38364983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/europeanplaice",
"html_url": "https://github.com/europeanplaice",
"followers_url": "https://api.github.com/users/europeanplaice/followers",
"following_url": "https://api.github.com/users/europeanplaice/following{/other_user}",
"gists_url": "https://api.github.com/users/europeanplaice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/europeanplaice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/europeanplaice/subscriptions",
"organizations_url": "https://api.github.com/users/europeanplaice/orgs",
"repos_url": "https://api.github.com/users/europeanplaice/repos",
"events_url": "https://api.github.com/users/europeanplaice/events{/privacy}",
"received_events_url": "https://api.github.com/users/europeanplaice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Does this resolve #12308?",
"Hello!\r\nSorry if I may have been unclear.\r\nAssuming the difference between `spm.SentencePieceTrainer` and `ReformerTokenizerFast` is anticipated, this PR tentatively resolve the difference between `ReformerTokenizerFast` and `AutoTokenizer.from_pretrained('test')`.\r\n\r\nOutput is below.\r\n```\r\n sentencepiece 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']\r\n transformers 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']\r\n AutoTokenizer 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #12308
One candidate cause of this issue is a clash between the configuration files written for the fast tokenizer and those written for the slow tokenizer.
With a fast tokenizer, `tokenizer.json` carries everything, so `special_tokens_map.json` and `tokenizer_config.json` are not needed.
When a fast tokenizer is used, this PR avoids writing those two files.
This is a quick fix; I have not yet written tests or checked the effects of this change on other code.
I'd like to receive your reviews and criticism.
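A sketch of the round-trip check behind this change (file and directory names are hypothetical; `spiece.model` stands for a SentencePiece model trained separately, as in the linked issue):

```python
from transformers import AutoTokenizer, ReformerTokenizerFast

tok = ReformerTokenizerFast("spiece.model", additional_special_tokens=["<mask>"])
tok.save_pretrained("test")
reloaded = AutoTokenizer.from_pretrained("test")

text = "Lo<mask>m Ipsum"
print(tok.tokenize(text))
print(reloaded.tokenize(text))  # with this fix, the two outputs should agree
```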
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12352",
"html_url": "https://github.com/huggingface/transformers/pull/12352",
"diff_url": "https://github.com/huggingface/transformers/pull/12352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12352.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12351/comments | https://api.github.com/repos/huggingface/transformers/issues/12351/events | https://github.com/huggingface/transformers/pull/12351 | 929,788,166 | MDExOlB1bGxSZXF1ZXN0Njc3NTg4MzA2 | 12,351 | [trainer] add main_process_first context manager | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"So I won't lose it a magic perl to rewrite all other examples:\r\n\r\n```\r\nfind examples -type f -exec perl -0777 -pi -e 's|^(\\s+)(train_dataset = train_dataset.map\\(.*?\\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc=\"train dataset map pre-processing\"):\\n$p$t] } }' {} \\;\r\n```"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR
- [x] implements `main_process_first` context manager as discussed in https://github.com/huggingface/transformers/issues/12345
This makes the `datasets` pre-processing much faster in a distributed environment, since only the main process does the work instead of every replica repeating it at once (see the usage sketch after this list).
- [x] modifies `run_translation.py` example to use it as a model.
- [x] starts using `log.debug` - now that we have the new shiny `--log_level` trainer arg, many overly verbose `log.info` calls can be switched to `log.debug`, and the user can run with `--log_level debug` to generate a lot more information when debugging or when filing a bug report.
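Usage sketch referenced above (a fragment in the style of the example scripts; `training_args`, `train_dataset` and `preprocess_function` are the usual objects those scripts define):

```python
with training_args.main_process_first(desc="train dataset map pre-processing"):
    train_dataset = train_dataset.map(preprocess_function, batched=True)
```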
**Question: not sure what should be done on multi-node setups, since one may or may not use a shared filesystem.**
TODO:
- once merged replicate to other examples
**Kudos for the cool function name goes to @sgugger**
Fixes: https://github.com/huggingface/transformers/issues/12345
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12351/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12351/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12351",
"html_url": "https://github.com/huggingface/transformers/pull/12351",
"diff_url": "https://github.com/huggingface/transformers/pull/12351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12351.patch",
"merged_at": 1624658283000
} |
https://api.github.com/repos/huggingface/transformers/issues/12350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12350/comments | https://api.github.com/repos/huggingface/transformers/issues/12350/events | https://github.com/huggingface/transformers/pull/12350 | 929,710,134 | MDExOlB1bGxSZXF1ZXN0Njc3NTIxMDUx | 12,350 | Fix exception in prediction loop occurring for certain batch sizes | {
"login": "jglaser",
"id": 1899768,
"node_id": "MDQ6VXNlcjE4OTk3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1899768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jglaser",
"html_url": "https://github.com/jglaser",
"followers_url": "https://api.github.com/users/jglaser/followers",
"following_url": "https://api.github.com/users/jglaser/following{/other_user}",
"gists_url": "https://api.github.com/users/jglaser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jglaser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jglaser/subscriptions",
"organizations_url": "https://api.github.com/users/jglaser/orgs",
"repos_url": "https://api.github.com/users/jglaser/repos",
"events_url": "https://api.github.com/users/jglaser/events{/privacy}",
"received_events_url": "https://api.github.com/users/jglaser/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks, I can't push to your branch, so could you push an empty commit since circleCI decided to not run on your PR?\r\n```\r\ngit commit --allow-empty -m \"Trigger CI\"\r\n```\r\nand then a push.",
"Ok, there are lots of failures that seem related to the PyTorch release. Could you rebase on master (I really want to make sure this does not break any example before merging)? Thank you.",
"> Ok, there are lots of failures that seem related to the PyTorch release. Could you rebase on master (I really want to make sure this does not break any example before merging)? Thank you.\r\n\r\nDone, I must have based this off from a fork that was a few weeks old....",
"I'll be off now for the weekend, but the box \"Allow edits by maintainers\" is checked, so feel free to adapt as necessary.",
"We're all good, thanks for walking through this with me!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12349
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12350/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12350",
"html_url": "https://github.com/huggingface/transformers/pull/12350",
"diff_url": "https://github.com/huggingface/transformers/pull/12350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12350.patch",
"merged_at": 1624632915000
} |
https://api.github.com/repos/huggingface/transformers/issues/12349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12349/comments | https://api.github.com/repos/huggingface/transformers/issues/12349/events | https://github.com/huggingface/transformers/issues/12349 | 929,709,015 | MDU6SXNzdWU5Mjk3MDkwMTU= | 12,349 | Prediction fails for certain batch sizes | {
"login": "jglaser",
"id": 1899768,
"node_id": "MDQ6VXNlcjE4OTk3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1899768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jglaser",
"html_url": "https://github.com/jglaser",
"followers_url": "https://api.github.com/users/jglaser/followers",
"following_url": "https://api.github.com/users/jglaser/following{/other_user}",
"gists_url": "https://api.github.com/users/jglaser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jglaser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jglaser/subscriptions",
"organizations_url": "https://api.github.com/users/jglaser/orgs",
"repos_url": "https://api.github.com/users/jglaser/repos",
"events_url": "https://api.github.com/users/jglaser/events{/privacy}",
"received_events_url": "https://api.github.com/users/jglaser/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: IBM open-ce / ppcle64
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 NVIDIA V100
- Tensorflow version (GPU?): N/A
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed (but only 1 rank per process group)
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Custom implementation combining two BERT models with a multilayer perceptron that performs a regression on the last hidden layer outputs
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
a regression on pairs of protein sequence and molecule SMILES strings, plus binding affinity
## To reproduce
Steps to reproduce the behavior:
1. Call `trainer.predict()` on the encoded inputs
2. If the number of inputs is 1+`per_device_eval_batch_size`, the last batch has just one member and is treated as a scalar
3. the prediction loop fails with
```
File "../affinity_pred/infer_mpi.py", line 189, in main
df_pred = predict(df)
File "../affinity_pred/infer_mpi.py", line 180, in predict
out = trainer.predict(x)
File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer.py", line 2065, in predict
output = eval_loop(
File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
logits = self._nested_gather(logits)
File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather
tensors = distributed_concat(tensors)
File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 158, in distributed_concat
concat = torch.cat(output_tensors, dim=0)
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
```
**Example**:
last two batches (`per_device_eval_batch_size` is 32)
```
[tensor([ 0.5737, -0.1372, -0.5283, -0.0641, -0.0641, 0.6353, 0.1035, -0.4148,
0.2314, -0.3879, -0.4431, -0.3931, -0.2642, 0.5039, -0.4187, 0.0679,
0.0679, -0.3167, -0.4783, -0.6724, -0.6724, -0.3257, 0.4922, 0.4922,
-0.4189, -0.3652, -0.4468, -0.2358, -0.3696, 0.1646, -0.2004, -1.0234],
device='cuda:0', dtype=torch.float16)]
[tensor(0.7144, device='cuda:0', dtype=torch.float16)]
RuntimeError('zero-dimensional tensor (at position 0) cannot be concatenated')
```
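A minimal, self-contained sketch of the shape of the fix (illustrative only; the actual change lives in the trainer's gathering utilities):

```python
import torch

def atleast_1d(t: torch.Tensor) -> torch.Tensor:
    """Promote 0-d tensors to 1-d so torch.cat over gathered logits cannot fail."""
    return t if t.dim() >= 1 else t.unsqueeze(0)

gathered = [torch.tensor([0.5737, -0.1372]), torch.tensor(0.7144)]  # last batch has one member
print(torch.cat([atleast_1d(t) for t in gathered], dim=0))  # tensor([ 0.5737, -0.1372,  0.7144])
```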
## Expected behavior
No exception | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12349/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12348/comments | https://api.github.com/repos/huggingface/transformers/issues/12348/events | https://github.com/huggingface/transformers/issues/12348 | 929,695,277 | MDU6SXNzdWU5Mjk2OTUyNzc= | 12,348 | Generate text until condition | {
"login": "StellaAthena",
"id": 15899312,
"node_id": "MDQ6VXNlcjE1ODk5MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StellaAthena",
"html_url": "https://github.com/StellaAthena",
"followers_url": "https://api.github.com/users/StellaAthena/followers",
"following_url": "https://api.github.com/users/StellaAthena/following{/other_user}",
"gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions",
"organizations_url": "https://api.github.com/users/StellaAthena/orgs",
"repos_url": "https://api.github.com/users/StellaAthena/repos",
"events_url": "https://api.github.com/users/StellaAthena/events{/privacy}",
"received_events_url": "https://api.github.com/users/StellaAthena/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Maybe of interest to @patrickvonplaten @patil-suraj ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Related PR: https://github.com/huggingface/transformers/pull/12219\r\n\r\n"
] | 1,624 | 1,631 | null | CONTRIBUTOR | null | Is there a simple way to have the model generate text until a condition is met? I'm interested in data memorization and want to prompt the model with some tokens from the training data and then have it generate text until it makes a mistake (aka deviates from the training data). The naive approach with a while loop has significant overhead, and I was wondering if there was something smarter I can be doing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12348/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/12348/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12347/comments | https://api.github.com/repos/huggingface/transformers/issues/12347/events | https://github.com/huggingface/transformers/issues/12347 | 929,674,491 | MDU6SXNzdWU5Mjk2NzQ0OTE= | 12,347 | TypeError: __init__() got an unexpected keyword argument 'report_to' | {
"login": "TatProg",
"id": 43710369,
"node_id": "MDQ6VXNlcjQzNzEwMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/43710369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TatProg",
"html_url": "https://github.com/TatProg",
"followers_url": "https://api.github.com/users/TatProg/followers",
"following_url": "https://api.github.com/users/TatProg/following{/other_user}",
"gists_url": "https://api.github.com/users/TatProg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TatProg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TatProg/subscriptions",
"organizations_url": "https://api.github.com/users/TatProg/orgs",
"repos_url": "https://api.github.com/users/TatProg/repos",
"events_url": "https://api.github.com/users/TatProg/events{/privacy}",
"received_events_url": "https://api.github.com/users/TatProg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Do you mind sharing a reproducible code example in colab? I ran the following on the bare `run_mlm.py` script:\r\n```\r\npython run_mlm.py --output_dir=here --report_to=tensorboard --dataset_name=wikitext --dataset_config_name=wikitext-2-raw-v1 --model_name_or_path=bert-base-cased --do_train\r\n```\r\nwhich works without issue. Maybe @sgugger has more insights.\r\n\r\nNice username and Pycharm project :)",
"@LysandreJik \r\n\r\nI found problem: I used `report_to=` in `TrainingArguments` without `run_name=`\r\n\r\nEven if I add `--run_name=` in command `python3 run_mlm.py --report_to wandb --run_name new_run_name` it didn't work. I think this might be fixed quite easy: Throw exception, if `report_to=` is 'initialized' in `TrainingArguments` without `run_name=`\r\n\r\n\r\nMy solution. Also tested not only with 'wandb', but with 'tensorboard' and both of them works very well. (All imports are above)\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir='My_lovely_mlm_model',\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n do_eval=True,\r\n per_device_train_batch_size=100,\r\n per_device_eval_batch_size=50,\r\n evaluation_strategy='steps',\r\n logging_steps=10_000,\r\n eval_steps=None,\r\n prediction_loss_only=True,\r\n num_train_epochs=50,\r\n save_steps=10_000,\r\n save_total_limit=10,\r\n\r\n report_to='wandb'\r\n run_name=\"new_run\" #ADDED THIS.\r\n\r\n)\r\n```\r\n\r\nP.S. If I want to continue training my model from checkpoint, i should change `model = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased')` to `model = AutoModelForMaskedLM.from_pretrained('project/pretrained_model_name/checkpoint-190000')`right?\r\n\r\n",
"If you use the Trainer, just use `resume_from_checkpoint=path_to_checkpoint` when calling `trainer.train`."
] | 1,624 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-135-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, 8 Tesla V100
- Using distributed or parallel set-up in script?: training_args.parallel_mode = ParallelMode.NOT_DISTRIBUTED
### Who can help
@sgugger @LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using BERT - 'bert-base-multilingual-cased'
The problem arises when using:
* [x] the official example scripts.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Masked Language Modelling
## To reproduce
Steps to reproduce the behavior:
0. Install wandb and tensorboard via Command Line
```
pip install wandb
wandb login
WANDB_PROJECT=mlm_project
pip install tf-nightly
pip install tensorboard
```
1. Initialize training_args
```python
training_args = TrainingArguments(
output_dir='My_lovely_mlm_model',
overwrite_output_dir=True,
do_train=True,
do_eval=True,
per_device_train_batch_size=100,
per_device_eval_batch_size=50,
evaluation_strategy='steps',
logging_steps=10_000,
eval_steps=None,
prediction_loss_only=True,
num_train_epochs=50,
save_steps=10_000,
save_total_limit=10,
report_to=['wandb', 'tensorboard']
)
```
2. Run the script with the Run button in PyCharm or from the console with `python run_mlm.py --report_to wandb --run_name new_run_name`
3. Enjoy error message
```
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/LysandreJik_is_the_best/Documents/PyCharmProjects/in_sgugger_we_trust/mlm/run_mlm.py", line 21, in <module>
from arguments_for_mlm import model_data_args
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
File "/Users/LysandreJik_is_the_best/Documents/PyCharmProjects/in_sgugger_we_trust/mlm/arguments_for_mlm.py", line 299, in <module>
report_to=['wandb', 'tensorboard']
TypeError: __init__() got an unexpected keyword argument 'report_to'
```
Also, I tried removing 'tensorboard' or 'wandb', but caught the same error again and again.
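For the checkpoint-resumption question answered in the comments above, a minimal sketch (the checkpoint path is hypothetical, and `trainer` is the `Trainer` built from the arguments shown in step 1):

```python
trainer.train(resume_from_checkpoint="My_lovely_mlm_model/checkpoint-190000")
```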
## Expected behavior
The script should run without this error and a wandb folder should be created. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12347/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12346/comments | https://api.github.com/repos/huggingface/transformers/issues/12346/events | https://github.com/huggingface/transformers/issues/12346 | 929,664,782 | MDU6SXNzdWU5Mjk2NjQ3ODI= | 12,346 | All evaluation processes overload one GPU, when other 7 are available. While Training process fine and is distributed across all 8 cards | {
"login": "TatProg",
"id": 43710369,
"node_id": "MDQ6VXNlcjQzNzEwMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/43710369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TatProg",
"html_url": "https://github.com/TatProg",
"followers_url": "https://api.github.com/users/TatProg/followers",
"following_url": "https://api.github.com/users/TatProg/following{/other_user}",
"gists_url": "https://api.github.com/users/TatProg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TatProg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TatProg/subscriptions",
"organizations_url": "https://api.github.com/users/TatProg/orgs",
"repos_url": "https://api.github.com/users/TatProg/repos",
"events_url": "https://api.github.com/users/TatProg/events{/privacy}",
"received_events_url": "https://api.github.com/users/TatProg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, I think you are using `nn.DataParallel` and not `nn.DistributedDataParallel` and hence 1 GPU is taking more memory. In case of `torch.nn.DistributedDataParallel`, `training_args.parallel_mode` will be `ParallelModel.Distributed`.\r\n\r\nIn order to use `nn.DistributedDataParallel`, launch with this CMD: `python3 -m torch.distributed.launch --nproc_per_node=8 <your-script>.py`",
"@vasudevgupta7 Thank you for interesting idea. Sadly, it doesn't work well. May I ask you to inspect my error messages on pastebin? \r\n\r\nFirst one for `python3 -m torch.distributed.launch --nproc_per_node=8 run_mlm_copy.py > log_BERTtugan_pretrained_5.txt &`\r\nhttps://pastebin.com/MLkR8MQp\r\n\r\nSecond one for `python3 -m torch.distributed.launch --nproc_per_node=8 run_mlm_copy.py &`\r\nhttps://pastebin.com/UeVg0YpV",
"### UPD:\r\nFirst of all, I remove all arguments for evaluation:\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir='My_lovely_mlm_model',\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n per_device_train_batch_size=150,\r\n logging_steps=10_000,\r\n prediction_loss_only=True,\r\n num_train_epochs=50,\r\n save_steps=10_000,\r\n save_total_limit=10\r\n)\r\n\r\n```\r\nAnd all worked fine, but this **not a solution**\r\n\r\nSecondly, when I run script on all 8 GPUs, I caught an Error (error message below). But when I choose only 7 with `export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6`, everything works fine.\r\n\r\n\r\nHow can I solve these problems? \r\n\r\nCode in \"for_github\" folder in https://github.com/TatProg/bertugan_sample/tree/master/mlm/for_github\r\n\r\n```bash\r\nRuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=8, worker_count=16, timeout=0:30:00)\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 5977) of binary: /usr/bin/python3\r\nERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 2/3 attempts left; will restart worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvousing worker group\r\nINFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:\r\n restart_count=2\r\n master_addr=127.0.0.1\r\n master_port=29500\r\n group_rank=0\r\n group_world_size=1\r\n local_ranks=[0, 1, 2, 3, 4, 5, 6, 7]\r\n role_ranks=[0, 1, 2, 3, 4, 5, 6, 7]\r\n global_ranks=[0, 1, 2, 3, 4, 5, 6, 7]\r\n role_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]\r\n global_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]\r\n```",
"Hey, not sure why that's happening. May be someone else can help.",
"Your first pastebin shows that the 8 processes are not properly initialized and joined, so this is more of a PyTorch error. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-135-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, 8 Tesla V100
- Using distributed or parallel set-up in script?: training_args.parallel_mode = ParallelMode.NOT_DISTRIBUTED
### Who can help
@sgugger @LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using: BERT ('bert-base-multilingual-cased')
The problem arises when using:
* [ ] the official example scripts.
The task I am working on is:
* [ ] an official GLUE/SQuAD task: Masked Language Modelling
## To reproduce
0. Enter
`nvidia-smi`
`export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7`
<img width="557" alt="Снимок экрана 2021-06-24 в 18 35 25" src="https://user-images.githubusercontent.com/43710369/123291660-f9b22600-d51a-11eb-8972-0c2cbdfc1503.png">
1. Initialize training_args
```python
training_args = TrainingArguments(
output_dir='My_lovely_mlm_model',
overwrite_output_dir=True,
do_train=True,
do_eval=True,
per_device_train_batch_size=100,
per_device_eval_batch_size=50,
evaluation_strategy='steps',
logging_steps=10_000,
eval_steps=None,
prediction_loss_only=True,
learning_rate=5e-5,
weight_decay=0,
adam_epsilon=1e-8,
max_grad_norm=1.0,
num_train_epochs=50,
save_steps=10_000,
save_total_limit=10
)
```
2. Catch RuntimeError: CUDA out of memory
```
RuntimeError: CUDA out of memory.
Tried to allocate 17.81 GiB (GPU 0; 31.72 GiB total capacity;
10.49 GiB already allocated; 3.57 GiB free;
26.52 GiB reserved in total by PyTorch)
```
3. Change batch size and catch another error
`per_device_train_batch_size = 80`
`per_device_eval_batch_size = 4`
```
RuntimeError: CUDA out of memory.
Tried to allocate 14.25 GiB (GPU 0; 31.72 GiB total capacity;
8.96 GiB already allocated; 8.80 GiB free;
21.29 GiB reserved in total by PyTorch)
```
4. Change batch size again, and only then does training start
`per_device_train_batch_size = 64`
`per_device_eval_batch_size = 4`
<img width="560" alt="Снимок экрана 2021-06-25 в 1 24 39" src="https://user-images.githubusercontent.com/43710369/123340021-42d29c00-d554-11eb-9849-9aa682ef97e3.png">
**As you can see, the first GPU in the list is loaded with 30.95 GB while the other seven use only 10.5 GB each. I have no idea how to fix this with the default Transformers functions (or the PyTorch library).**
Before each run I cleared the cache with `torch.cuda.empty_cache()`.
Also, this might be helpful:
```python
print(training_args.n_gpu)
print(training_args.parallel_mode)
print(training_args.train_batch_size)
print(training_args.eval_batch_size)
print(training_args.device)
```
Got
```
8
ParallelMode.NOT_DISTRIBUTED
512
32
cuda:0
```
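For comparison, a hedged sketch of what the same diagnostics would be expected to print per process if the script were started with the launcher suggested in the first comment (`python3 -m torch.distributed.launch --nproc_per_node=8 run_mlm_copy.py`); the values in the comments are expectations based on how `TrainingArguments` derives them, not output from an actual run:
```python
# Under torch.distributed.launch each process owns a single GPU, so per process:
print(training_args.n_gpu)             # 1 (one GPU per process)
print(training_args.parallel_mode)     # ParallelMode.DISTRIBUTED
print(training_args.train_batch_size)  # 64 (the per-device size, no silent x8 multiplier)
print(training_args.eval_batch_size)   # 4
print(training_args.device)            # cuda:<local_rank> of this process
```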
## Expected behavior
Equal memory distribution across all 8 GPUs, and a normal training and evaluation process.
P.S. Also, resuming training from a checkpoint (`AutoModelWithLMHead.from_pretrained(path_to_checkpoint_model)`) causes an exception. Should I write another issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12346/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12345/comments | https://api.github.com/repos/huggingface/transformers/issues/12345/events | https://github.com/huggingface/transformers/issues/12345 | 929,566,485 | MDU6SXNzdWU5Mjk1NjY0ODU= | 12,345 | [examples] [distributed] process datasets.map only on main process in | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # 🚀 Feature request
Switch the examples to run `datasets.map` only once, on the main process, and have the other processes wait, which would make pre-processing much faster. This is based on: https://huggingface.co/docs/datasets/processing.html?highlight=map#mapping-in-a-distributed-setting
@sgugger suggested adding a new context manager, `training_args.main_process_first()`, to make this simple.
context: https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or-fairscale/7229/ (although the title is misleading; the issue the user experienced occurs with any distributed framework).
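A minimal sketch of how the proposed context manager could be used in the examples (the `desc` label and the surrounding names, `raw_datasets`, `tokenize_function`, `data_args`, are assumptions modelled on the run_mlm example; the manager itself was still only a suggestion at this point):
```python
# Only the main process of each node runs the expensive datasets.map; the other ranks
# wait on a barrier inside the context manager and then pick up the cached result.
with training_args.main_process_first(desc="dataset map pre-processing"):
    tokenized_datasets = raw_datasets.map(
        tokenize_function,
        batched=True,
        num_proc=data_args.preprocessing_num_workers,
    )
```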
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12345/timeline | completed | null | null |