Dataset schema:

| Column | Type | Observed range / cardinality |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | — |
| labels | list | — |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | — |
| assignees | list | — |
| comments | sequence | — |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | — |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | — |
https://api.github.com/repos/huggingface/transformers/issues/11438
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11438/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11438/comments
https://api.github.com/repos/huggingface/transformers/issues/11438/events
https://github.com/huggingface/transformers/pull/11438
867,255,476
MDExOlB1bGxSZXF1ZXN0NjIyOTUzMTIz
11,438
[docs] fix invalid class name
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
This PR fixes the misnamed `TrainerArgument`. The CI failures are unrelated - this can be safely merged. @sgugger
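For context, a minimal sketch of the corrected reference, assuming the invalid name was meant to be `transformers.TrainingArguments` (the PR diff is not shown here):

```python
# Hedged sketch: `TrainingArguments` is the real class; `TrainerArgument` does not exist.
from transformers import TrainingArguments

# `output_dir` is the only required argument; the path below is illustrative.
args = TrainingArguments(output_dir="./outputs")
print(type(args).__name__)  # -> TrainingArguments
```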
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11438/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11438", "html_url": "https://github.com/huggingface/transformers/pull/11438", "diff_url": "https://github.com/huggingface/transformers/pull/11438.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11438.patch", "merged_at": 1619451452000 }
https://api.github.com/repos/huggingface/transformers/issues/11437
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11437/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11437/comments
https://api.github.com/repos/huggingface/transformers/issues/11437/events
https://github.com/huggingface/transformers/pull/11437
867,255,238
MDExOlB1bGxSZXF1ZXN0NjIyOTUyOTI2
11,437
[Makefile] make sure to test against the local checkout
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
Currently some scripts in `Makefile` run against the pre-installed `transformers` rather than the checkout it's supposed to test. This PR fixes that by setting `PYTHONPATH="src"`. I had to fix that as I was getting at the end of `make fixup`:

```
python utils/check_repo.py
2021-04-25 21:54:53.850434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Checking all models are properly tested.
Traceback (most recent call last):
  File "utils/check_repo.py", line 481, in <module>
    check_repo_quality()
  File "utils/check_repo.py", line 473, in check_repo_quality
    check_all_models_are_tested()
  File "utils/check_repo.py", line 233, in check_all_models_are_tested
    modules = get_model_modules()
  File "utils/check_repo.py", line 147, in get_model_modules
    modeling_module = getattr(model_module, submodule)
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/file_utils.py", line 1666, in __getattr__
    value = self._get_module(name)
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/albert/__init__.py", line 120, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/albert/modeling_tf_albert.py", line 43, in <module>
    from ...modeling_tf_utils import (
  File "src/transformers/modeling_tf_utils.py", line 32, in <module>
    from .file_utils import (
ImportError: cannot import name 'PushToHubMixin' from 'transformers.file_utils' (/mnt/nvme1/code/huggingface/transformers-master/src/transformers/file_utils.py)
```

The errors are from the pre-installed `transformers` and not the clone I'm working on. The CI failures are unrelated - this can be safely merged. @sgugger
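A quick way to see why `PYTHONPATH="src"` fixes this: Python imports `transformers` from the first matching entry on `sys.path`, so prepending the checkout's `src/` directory shadows any pip-installed copy. A minimal sketch, assuming it is run from the repository root (the subprocess invocation is illustrative, not part of the PR):

```python
import os
import subprocess
import sys

# With PYTHONPATH="src", the local checkout wins over site-packages.
env = dict(os.environ, PYTHONPATH="src")
subprocess.run(
    [sys.executable, "-c", "import transformers; print(transformers.__file__)"],
    env=env,
    check=True,
)  # should print .../src/transformers/__init__.py when run from the repo root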
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11437/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11437", "html_url": "https://github.com/huggingface/transformers/pull/11437", "diff_url": "https://github.com/huggingface/transformers/pull/11437.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11437.patch", "merged_at": 1619451763000 }
https://api.github.com/repos/huggingface/transformers/issues/11436
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11436/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11436/comments
https://api.github.com/repos/huggingface/transformers/issues/11436/events
https://github.com/huggingface/transformers/issues/11436
867,168,368
MDU6SXNzdWU4NjcxNjgzNjg=
11,436
Gradient explosion problem
{ "login": "JiTingyu", "id": 67445472, "node_id": "MDQ6VXNlcjY3NDQ1NDcy", "avatar_url": "https://avatars.githubusercontent.com/u/67445472?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JiTingyu", "html_url": "https://github.com/JiTingyu", "followers_url": "https://api.github.com/users/JiTingyu/followers", "following_url": "https://api.github.com/users/JiTingyu/following{/other_user}", "gists_url": "https://api.github.com/users/JiTingyu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JiTingyu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JiTingyu/subscriptions", "organizations_url": "https://api.github.com/users/JiTingyu/orgs", "repos_url": "https://api.github.com/users/JiTingyu/repos", "events_url": "https://api.github.com/users/JiTingyu/events{/privacy}", "received_events_url": "https://api.github.com/users/JiTingyu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
I was fine-tuning BERT with apex enabled when I got the error "No module named 'amp_C'", and the gradients also explode. What is the cause, and how can I fix it? The error is as follows:

```
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled                : True
opt_level              : O1
cast_model_type        : None
patch_torch_functions  : True
keep_batchnorm_fp32    : None
master_weights         : None
loss_scale             : dynamic
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",)
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
```
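The warning in the log already points at the likely cause: apex was built without its CUDA/C++ extensions, so the fused `amp_C` kernels are missing and a slower Python fallback is used; a few "Gradient overflow. Skipping step" lines at the start of fp16 training are normal dynamic loss-scale calibration rather than true divergence. One alternative, sketched below under the assumption that PyTorch >= 1.6 and a CUDA device are available, is to drop apex entirely and use native `torch.cuda.amp`:

```python
import torch

model = torch.nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # dynamic loss scaling, like apex O1

for _ in range(3):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # fp16 casts around ops, like apex O1
        loss = model(torch.randn(4, 10, device="cuda")).sum()
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # the step is skipped automatically on overflow
    scaler.update()
```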
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11436/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11435
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11435/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11435/comments
https://api.github.com/repos/huggingface/transformers/issues/11435/events
https://github.com/huggingface/transformers/issues/11435
867,136,132
MDU6SXNzdWU4NjcxMzYxMzI=
11,435
Convert GPT2 from TensorFlow to PyTorch
{ "login": "7AM7", "id": 24973739, "node_id": "MDQ6VXNlcjI0OTczNzM5", "avatar_url": "https://avatars.githubusercontent.com/u/24973739?v=4", "gravatar_id": "", "url": "https://api.github.com/users/7AM7", "html_url": "https://github.com/7AM7", "followers_url": "https://api.github.com/users/7AM7/followers", "following_url": "https://api.github.com/users/7AM7/following{/other_user}", "gists_url": "https://api.github.com/users/7AM7/gists{/gist_id}", "starred_url": "https://api.github.com/users/7AM7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/7AM7/subscriptions", "organizations_url": "https://api.github.com/users/7AM7/orgs", "repos_url": "https://api.github.com/users/7AM7/repos", "events_url": "https://api.github.com/users/7AM7/events{/privacy}", "received_events_url": "https://api.github.com/users/7AM7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @7AM7 \r\n\r\nI think this is because there is `_step` in TF checkpoint, which should be ignored when loading the weights.\r\n\r\nfor this you should write your own conversion script. You could take and modify this function\r\nhttps://github.com/huggingface/transformers/blob/30f065890e77f2917895b175b9a1df503b89e202/src/transformers/models/gpt2/modeling_gpt2.py#L68\r\n\r\nadding some check like this here would solve this\r\n```python\r\nfor name, shape in init_vars:\r\n if \"_step\" not in name name:\r\n```\r\n", "i modified this function def load_tf_weights_in_gpt2 and ignored \"_step\" Like if name!=\"_step\" or if \"_step\" not in name And still same error\r\n\r\n", "in that case, you could check what extra variables are there in the `names` and then remove those from `names` and `arrays`. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info

- `transformers` version: 4.5.1
- PyTorch version (GPU?): 1.8.1+cu101

Command:

```
!python3 /content/transformers/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py \
  --gpt2_checkpoint_path=/content/drive/MyDrive/tensorflowCheckpoints/model.ckpt-50000 \
  --pytorch_dump_folder_path=/content/drive/MyDrive/convertpyorch/torch_model-500gpt2.bin \
  --gpt2_config_file=/content/drive/MyDrive/tensorflowCheckpoints/config2.json
```

Error:

```
Traceback (most recent call last):
  File "/content/transformers/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 68, in <module>
    convert_gpt2_checkpoint_to_pytorch(args.gpt2_checkpoint_path, args.gpt2_config_file, args.pytorch_dump_folder_path)
  File "/content/transformers/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 39, in convert_gpt2_checkpoint_to_pytorch
    load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 109, in load_tf_weights_in_gpt2
    pointer = getattr(pointer, scope_names[0])
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 948, in __getattr__
    type(self).__name__, name))
AttributeError: 'GPT2Model' object has no attribute '_step'
```
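The maintainer reply in the comments above suggests skipping the `_step` variable before mapping TF weights onto the PyTorch model. A hedged sketch of that filtering; the skip markers beyond `_step` (optimizer state such as `adam` or `global_step`) are assumptions, not taken from the thread:

```python
import tensorflow as tf

def list_model_variables(ckpt_path: str):
    """Return checkpoint variables with assumed optimizer state filtered out."""
    init_vars = tf.train.list_variables(ckpt_path)
    skip_markers = ("_step", "adam", "global_step")  # assumed non-model names
    return [
        (name, shape)
        for name, shape in init_vars
        if not any(marker in name.lower() for marker in skip_markers)
    ]
```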
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11435/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11434
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11434/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11434/comments
https://api.github.com/repos/huggingface/transformers/issues/11434/events
https://github.com/huggingface/transformers/pull/11434
867,118,239
MDExOlB1bGxSZXF1ZXN0NjIyODQzMTky
11,434
Updating checkpoint for GPT2ForSequenceClassification #11334
{ "login": "abiolaTresor", "id": 48957493, "node_id": "MDQ6VXNlcjQ4OTU3NDkz", "avatar_url": "https://avatars.githubusercontent.com/u/48957493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abiolaTresor", "html_url": "https://github.com/abiolaTresor", "followers_url": "https://api.github.com/users/abiolaTresor/followers", "following_url": "https://api.github.com/users/abiolaTresor/following{/other_user}", "gists_url": "https://api.github.com/users/abiolaTresor/gists{/gist_id}", "starred_url": "https://api.github.com/users/abiolaTresor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abiolaTresor/subscriptions", "organizations_url": "https://api.github.com/users/abiolaTresor/orgs", "repos_url": "https://api.github.com/users/abiolaTresor/repos", "events_url": "https://api.github.com/users/abiolaTresor/events{/privacy}", "received_events_url": "https://api.github.com/users/abiolaTresor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do?

This PR fixes the checkpoint for GPT2ForSequenceClassification. It sets it from `microsoft/dialogrpt` to `microsoft/DialogRPT-updown`.

Fixes #11334

The identifier `microsoft/dialogrpt` is incorrect. When used, the weights of the linear layer on top are initialized differently at each execution, which gives different prediction results for the same inputs. The checkpoint `microsoft/DialogRPT-updown` fixes that issue since it offers a pretrained classification head.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? Yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Yes: https://github.com/huggingface/transformers/issues/11334
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? No

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

Hello @LysandreJik
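A short usage sketch of the corrected checkpoint (the model id is taken from this PR; with the pretrained classification head, repeated runs should now give identical logits for the same input):

```python
from transformers import GPT2ForSequenceClassification, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")

inputs = tokenizer("Does this reply deserve an upvote?", return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```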
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11434/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11434/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11434", "html_url": "https://github.com/huggingface/transformers/pull/11434", "diff_url": "https://github.com/huggingface/transformers/pull/11434.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11434.patch", "merged_at": 1619413131000 }
https://api.github.com/repos/huggingface/transformers/issues/11433
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11433/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11433/comments
https://api.github.com/repos/huggingface/transformers/issues/11433/events
https://github.com/huggingface/transformers/issues/11433
867,075,662
MDU6SXNzdWU4NjcwNzU2NjI=
11,433
TensorFlow version is not able to pick up the trained model from a local directory in an air-gapped system
{ "login": "hiteshkum123", "id": 77171610, "node_id": "MDQ6VXNlcjc3MTcxNjEw", "avatar_url": "https://avatars.githubusercontent.com/u/77171610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hiteshkum123", "html_url": "https://github.com/hiteshkum123", "followers_url": "https://api.github.com/users/hiteshkum123/followers", "following_url": "https://api.github.com/users/hiteshkum123/following{/other_user}", "gists_url": "https://api.github.com/users/hiteshkum123/gists{/gist_id}", "starred_url": "https://api.github.com/users/hiteshkum123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hiteshkum123/subscriptions", "organizations_url": "https://api.github.com/users/hiteshkum123/orgs", "repos_url": "https://api.github.com/users/hiteshkum123/repos", "events_url": "https://api.github.com/users/hiteshkum123/events{/privacy}", "received_events_url": "https://api.github.com/users/hiteshkum123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Did you install TensorFlow in your environment? You might need a more recent TensorFlow version if so.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
Hi, I have trained the TFBertForSequenceClassification model and I have to deploy the trained model on an air-gapped server.

Code:

```python
from transformers import BertTokenizer, TFBertForSequenceClassification
from transformers import InputExample, InputFeatures

model1 = TFBertForSequenceClassification.from_pretrained(local_path)
tokenizer1 = BertTokenizer.from_pretrained(local_path)
```

Error:

```
ImportError: cannot import name 'TFBertForSequenceClassification' from 'transformers' (unknown location)
```

The same code works if I am using the PyTorch version (`BertForSequenceClassification`).
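A quick diagnostic sketch, assuming the root cause is that `transformers` cannot detect a usable TensorFlow install on the air-gapped machine (the TF classes are only exported when TensorFlow is found):

```python
from transformers import is_tf_available, is_torch_available

# If this prints False, TFBertForSequenceClassification will not be importable,
# while the PyTorch classes still work whenever torch is installed.
print("TensorFlow detected:", is_tf_available())
print("PyTorch detected:", is_torch_available())
```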
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11433/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11433/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11432
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11432/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11432/comments
https://api.github.com/repos/huggingface/transformers/issues/11432/events
https://github.com/huggingface/transformers/pull/11432
867,068,910
MDExOlB1bGxSZXF1ZXN0NjIyODA3MTUz
11,432
Typo fixes
{ "login": "LSinev", "id": 12072891, "node_id": "MDQ6VXNlcjEyMDcyODkx", "avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LSinev", "html_url": "https://github.com/LSinev", "followers_url": "https://api.github.com/users/LSinev/followers", "following_url": "https://api.github.com/users/LSinev/following{/other_user}", "gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}", "starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LSinev/subscriptions", "organizations_url": "https://api.github.com/users/LSinev/orgs", "repos_url": "https://api.github.com/users/LSinev/repos", "events_url": "https://api.github.com/users/LSinev/events{/privacy}", "received_events_url": "https://api.github.com/users/LSinev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do?

Fix some typos in docs, comments, logging/errors.

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

@sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11432/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11432", "html_url": "https://github.com/huggingface/transformers/pull/11432", "diff_url": "https://github.com/huggingface/transformers/pull/11432.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11432.patch", "merged_at": 1619442865000 }
https://api.github.com/repos/huggingface/transformers/issues/11431
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11431/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11431/comments
https://api.github.com/repos/huggingface/transformers/issues/11431/events
https://github.com/huggingface/transformers/pull/11431
867,012,943
MDExOlB1bGxSZXF1ZXN0NjIyNzY3MDA0
11,431
Accepts BatchEncoding in LengthGroupedSampler
{ "login": "tma15", "id": 481227, "node_id": "MDQ6VXNlcjQ4MTIyNw==", "avatar_url": "https://avatars.githubusercontent.com/u/481227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tma15", "html_url": "https://github.com/tma15", "followers_url": "https://api.github.com/users/tma15/followers", "following_url": "https://api.github.com/users/tma15/following{/other_user}", "gists_url": "https://api.github.com/users/tma15/gists{/gist_id}", "starred_url": "https://api.github.com/users/tma15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tma15/subscriptions", "organizations_url": "https://api.github.com/users/tma15/orgs", "repos_url": "https://api.github.com/users/tma15/repos", "events_url": "https://api.github.com/users/tma15/events{/privacy}", "received_events_url": "https://api.github.com/users/tma15/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do?

Expands `LengthGroupedSampler` to accept a `BatchEncoding`-based `Dataset` by automatically inferring the lengths of its elements, just as it already does for a `dict`-based `Dataset`. Because a `BatchEncoding` can be seen as a special type of dictionary in Python, it is useful for `LengthGroupedSampler` to accept it.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
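A minimal sketch of the length inference this PR generalizes: a `BatchEncoding` exposes `"input_ids"` just like a plain `dict`, so per-example lengths can be computed the same way (the model name below is only an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer(["short", "a slightly longer example sentence"])  # a BatchEncoding

# This works identically for a plain dict of features.
lengths = [len(ids) for ids in encoding["input_ids"]]
print(lengths)  # e.g. [3, 7], including the [CLS] and [SEP] special tokens
```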
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11431/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11431", "html_url": "https://github.com/huggingface/transformers/pull/11431", "diff_url": "https://github.com/huggingface/transformers/pull/11431.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11431.patch", "merged_at": 1619785666000 }
https://api.github.com/repos/huggingface/transformers/issues/11430
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11430/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11430/comments
https://api.github.com/repos/huggingface/transformers/issues/11430/events
https://github.com/huggingface/transformers/pull/11430
867,008,712
MDExOlB1bGxSZXF1ZXN0NjIyNzY0MTMy
11,430
Fix `sp_model_kwargs` param missing at unpickle in `XLMRobertaTokenizer`
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This PR ist ready for merging from my point of view." ]
1,619
1,619
1,619
CONTRIBUTOR
null
Fix for #11429.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11430/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11430", "html_url": "https://github.com/huggingface/transformers/pull/11430", "diff_url": "https://github.com/huggingface/transformers/pull/11430.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11430.patch", "merged_at": 1619768699000 }
https://api.github.com/repos/huggingface/transformers/issues/11429
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11429/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11429/comments
https://api.github.com/repos/huggingface/transformers/issues/11429/events
https://github.com/huggingface/transformers/issues/11429
867,007,451
MDU6SXNzdWU4NjcwMDc0NTE=
11,429
`sp_model_kwargs` param missing at unpickle in `XLMRobertaTokenizer`
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "fix pr at #11430", "Closed by #11430, thanks @PhilipMay!" ]
1,619
1,619
1,619
CONTRIBUTOR
null
When `XLMRobertaTokenizer` is unpickled, the `sp_model_kwargs` attribute is not set. See: https://github.com/huggingface/transformers/blob/35cd8eed887891bee60194a95adc35b884f68f55/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L178

PS: I will provide a fix.
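A hedged sketch of the usual fix pattern for this class of bug; the class below is a hypothetical stand-in mirroring the common SentencePiece tokenizer pickling idiom, not the actual patch in #11430:

```python
import sentencepiece as spm

class SentencePieceTokenizerSketch:
    """Hypothetical stand-in showing the __setstate__ default."""

    def __getstate__(self):
        state = self.__dict__.copy()
        state["sp_model"] = None  # the C++ processor object is not picklable
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        # Pickles created before `sp_model_kwargs` existed lack the attribute,
        # so default it before rebuilding the processor.
        if not hasattr(self, "sp_model_kwargs"):
            self.sp_model_kwargs = {}
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(self.vocab_file)
```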
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11429/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11428
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11428/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11428/comments
https://api.github.com/repos/huggingface/transformers/issues/11428/events
https://github.com/huggingface/transformers/issues/11428
866,994,628
MDU6SXNzdWU4NjY5OTQ2Mjg=
11,428
RoBERTa: ValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4.
{ "login": "PremalMatalia", "id": 42915124, "node_id": "MDQ6VXNlcjQyOTE1MTI0", "avatar_url": "https://avatars.githubusercontent.com/u/42915124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PremalMatalia", "html_url": "https://github.com/PremalMatalia", "followers_url": "https://api.github.com/users/PremalMatalia/followers", "following_url": "https://api.github.com/users/PremalMatalia/following{/other_user}", "gists_url": "https://api.github.com/users/PremalMatalia/gists{/gist_id}", "starred_url": "https://api.github.com/users/PremalMatalia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PremalMatalia/subscriptions", "organizations_url": "https://api.github.com/users/PremalMatalia/orgs", "repos_url": "https://api.github.com/users/PremalMatalia/repos", "events_url": "https://api.github.com/users/PremalMatalia/events{/privacy}", "received_events_url": "https://api.github.com/users/PremalMatalia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can someone look into it? as I am passing the same data structure as mentioned in above error i n trainer.train(). PFB.\r\n\r\n<_AssertCardinalityDataset shapes: ({input_ids: (None,), attention_mask: (None,), feature_index: (), qas_id: ()}, {start_positions: (), end_positions: (), cls_index: (), p_mask: (None,), is_impossible: ()}), types: ({input_ids: tf.int32, attention_mask: tf.int32, feature_index: tf.int64, qas_id: tf.string}, {start_positions: tf.int64, end_positions: tf.int64, cls_index: tf.int64, p_mask: tf.int32, is_impossible: tf.int32})>", "Hi @PremalMatalia, thank you for opening an issue. Pinging @Rocketknight1 as the TensorFlow developer.\r\n\r\nPlease be aware that we're in the process of deprecating the `TFTrainer` and that we will not be maintaining it anymore as it doesn't offer features that cannot be handled by Keras directly. We're in the process of moving examples to Keras, and @Rocketknight1 has already started with the text classification example [here](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_text_classification.py).\r\n\r\nQA is sure to follow.", "Thanks @LysandreJik for your response. Good to know that TFTrainer is being deprecated so I can focus on some other ways to fine-tune.\r\n\r\nAny other references to follow at this moment for question answering of SQuAD? ", "Hi! I'll take a look, but the error is quite convoluted. Can you link me to any examples you're following for this? If our code examples aren't working we definitely want to fix that.", "@Rocketknight1 ...I was following below run_tf_squad.py file for fine tuning.\r\n\r\nhttps://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/examples/question-answering/run_tf_squad.py\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info

- `transformers` version: 4.5.1
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?): NA
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

- @LysandreJik for the RoBERTa issue
- @sgugger for the trainer (tf_trainer) issue

Models: RoBERTa: @LysandreJik
Library: /transformers/trainer_tf.py: @sgugger

## Information

Model I am using: RoBERTa (roberta-base), for a SQuAD 2.0 question-answering fine-tuning exercise.

The tasks I am working on: I am using functions from the official transformer scripts only, with the official SQuAD 2.0 dataset:

[RoBERTa_transformer_fine_tune_pre_trained_model_tftpu.zip](https://github.com/huggingface/transformers/files/6371875/RoBERTa_transformer_fine_tune_pre_trained_model_tftpu.zip)

## To reproduce

Steps to reproduce the behavior: run the attached script with a GPU on Google Colab; it fails when `trainer.train()` runs with the SQuAD 2.0 TF dataset.

Error message:

```
InvalidArgumentError: TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'start_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [0, 1779, 222, 12674, 1755, 386, 1959, 1406, 116, 2, 2, 12674, 12695, 272, 354, 6591, 10690, 1634, 12, 43732, 48229, 5605, 43621, 16948, 49066, 267, 35423, 10659, 282, 1090, 35423, 10278, 73, 19417, 12, 975, 2191, 12, 28357, 43, 36, 5400, 772, 204, 6, 14130, 43, 16, 41, 470, 3250, 6, 2214, 9408, 6, 638, 3436, 8, 3390, 4, 8912, 8, 1179, 11, 2499, 6, 1184, 6, 79, 3744, 11, 1337, 6970, 8, 7950, 9150, 25, 10, 920, 6, 8, 1458, 7, 9444, 11, 5, 628, 4525, 29, 25, 483, 3250, 9, 248, 947, 387, 1816, 12, 13839, 23313, 18, 7442, 4, 1554, 4628, 30, 69, 1150, 6, 4101, 16152, 10690, 1634, 6, 5, 333, 1059, 65, 9, 5, 232, 18, 275, 12, 11393, 1816, 1134, 9, 70, 86, 4, 2667, 25224, 794, 5, 800, 9, 12674, 12695, 18, 2453, 2642, 6, 34880, 9412, 11, 3437, 36, 35153, 238, 61, 2885, 69, 25, 10, 5540, 3025, 3612, 6, 2208, 292, 12727, 4229, 8, 3520, 5, 18919, 6003, 727, 346, 12, 1264, 7695, 22, 347, 36616, 11, 3437, 113, 8, 22, 30047, 5637, 845, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,...

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 897, in generator_py_func
    flattened_values = nest.flatten_up_to(output_types, values)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 396, in flatten_up_to
    assert_shallow_structure(shallow_tree, input_tree)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 324, in assert_shallow_structure
    check_types=check_types)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 311, in assert_shallow_structure
    % (len(input_tree), len(shallow_tree)))
ValueError: The two structures don't have the same sequence length. Input structure has length 5, while shallow structure has length 4.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/script_ops.py", line 249, in __call__
    ret = func(*args)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py", line 620, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 905, in generator_py_func
    sys.exc_info()[2])
  File "/usr/local/lib/python3.7/dist-packages/six.py", line 702, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 897, in generator_py_func
    flattened_values = nest.flatten_up_to(output_types, values)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 396, in flatten_up_to
    assert_shallow_structure(shallow_tree, input_tree)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 324, in assert_shallow_structure
    check_types=check_types)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/nest.py", line 311, in assert_shallow_structure
    % (len(input_tree), len(shallow_tree)))
TypeError: `generator` yielded an element that did not match the expected structure. The expected structure was ({'input_ids': tf.int32, 'attention_mask': tf.int32, 'feature_index': tf.int64, 'qas_id': tf.string}, {'start_positions': tf.int64, 'end_positions': tf.int64, 'cls_index': tf.int64, 'p_mask': tf.int32, 'is_impossible': tf.int32}), but the yielded element was ({'input_ids': [0, 1779, 222, 12674, 1755, 386, 1959, 1406, 116, 2, 2, 12674, 12695, 272, 354, 6591, 10690, 1634, 12, 43732, 48229, 5605, 43621, 16948, 49066, 267, 35423, 10659, 282, 1090, 35423, 10278, 73, 19417, 12, 975, 2191, 12, 28357, 43, 36, 5400, 772, 204, 6, 14130, 43, 16, 41, 470, 3250, 6, 2214, 9408, 6, 638, 3436, 8, 3390, 4, 8912, 8, 1179, 11, 2499, 6, 1184, 6, 79, 3744, 11, 1337, 6970, 8, 7950, 9150, 25, 10, 920, 6, 8, 1458, 7, 9444, 11, 5, 628, 4525, 29, 25, 483, 3250, 9, 248, 947, 387, 1816, 12, 13839, 23313, 18, 7442, 4, 1554, 4628, 30, 69, 1150, 6, 4101, 16152, 10690, 1634, 6, 5, 333, 1059, 65, 9, 5, 232, 18, 275, 12, 11393, 1816, 1134, 9, 70, 86, 4, 2667, 25224, 794, 5, 800, 9, 12674, 12695, 18, 2453, 2642, 6, 34880, 9412, 11, 3437, 36, 35153, 238, 61, 2885, 69, 25, 10, 5540, 3025, 3612, 6, 2208, 292, 12727, 4229, 8, 3520, 5, 18919, 6003, 727, 346, 12, 1264, 7695, 22, 347, 36616, 11, 3437, 113, 8, 22, 30047, 5637, 845, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,...
```

## Expected behavior

* The `squad_convert_examples_to_features` function should have taken care of which features to pass to the model, based on the pre-trained tokenizer defined.
* If not, then the steps to remove/add the features required for albert model training/fine-tuning tasks should be documented somewhere [as far as I know this is not documented anywhere].
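A hypothetical workaround at the `tf.data` level, assuming the structure mismatch comes from the extra XLNet-specific label keys that `squad_convert_examples_to_features` emits (`cls_index`, `p_mask`, `is_impossible`); the dataset variable name is illustrative:

```python
import tensorflow as tf

def drop_xlnet_keys(features, labels):
    # Keep only the labels a standard QA head consumes.
    keep = ("start_positions", "end_positions")
    return features, {k: v for k, v in labels.items() if k in keep}

# train_dataset = train_dataset.map(drop_xlnet_keys)  # train_dataset: tf.data.Dataset
```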
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11428/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11427
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11427/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11427/comments
https://api.github.com/repos/huggingface/transformers/issues/11427/events
https://github.com/huggingface/transformers/pull/11427
866,983,604
MDExOlB1bGxSZXF1ZXN0NjIyNzQ2MDkx
11,427
Fix link to the TPU launcher script in the pytorch examples
{ "login": "amineabdaoui", "id": 17952908, "node_id": "MDQ6VXNlcjE3OTUyOTA4", "avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amineabdaoui", "html_url": "https://github.com/amineabdaoui", "followers_url": "https://api.github.com/users/amineabdaoui/followers", "following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}", "gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}", "starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions", "organizations_url": "https://api.github.com/users/amineabdaoui/orgs", "repos_url": "https://api.github.com/users/amineabdaoui/repos", "events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}", "received_events_url": "https://api.github.com/users/amineabdaoui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
NONE
null
# What does this PR do?

Hi @sgugger, @patil-suraj,

The link to the TPU launcher script in the pytorch examples is broken.

Thanks

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11427/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11427", "html_url": "https://github.com/huggingface/transformers/pull/11427", "diff_url": "https://github.com/huggingface/transformers/pull/11427.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11427.patch", "merged_at": 1619442523000 }
https://api.github.com/repos/huggingface/transformers/issues/11426
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11426/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11426/comments
https://api.github.com/repos/huggingface/transformers/issues/11426/events
https://github.com/huggingface/transformers/pull/11426
866,978,723
MDExOlB1bGxSZXF1ZXN0NjIyNzQyNTU3
11,426
[Flax] Add Electra models
{ "login": "CoderPat", "id": 11250483, "node_id": "MDQ6VXNlcjExMjUwNDgz", "avatar_url": "https://avatars.githubusercontent.com/u/11250483?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CoderPat", "html_url": "https://github.com/CoderPat", "followers_url": "https://api.github.com/users/CoderPat/followers", "following_url": "https://api.github.com/users/CoderPat/following{/other_user}", "gists_url": "https://api.github.com/users/CoderPat/gists{/gist_id}", "starred_url": "https://api.github.com/users/CoderPat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CoderPat/subscriptions", "organizations_url": "https://api.github.com/users/CoderPat/orgs", "repos_url": "https://api.github.com/users/CoderPat/repos", "events_url": "https://api.github.com/users/CoderPat/events{/privacy}", "received_events_url": "https://api.github.com/users/CoderPat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think I address most comments (the only thing missing is removing the `from_pt` when the flax checkpoint gets upload)! Thanks for the swift feedback", "Hey @CoderPat,\r\n\r\nWe've merged a last design extension for the Flax design today, [here](https://github.com/huggingface/transformers/commit/f748bd424213ca8e76e6ad9ffe2beece2ff2655e) -> could you merge master into your PR one last time and adapt the code to add those extensions (example docstring + model outputs + all_attentions + all_hidden_states) - super sorry for the merge conflict again, but this will be the last one!" ]
1,619
1,620
1,620
CONTRIBUTOR
null
# What does this PR do?

Implements the Flax version of the Electra model and classes for different downstream tasks:

- `FlaxElectraModel`
- `FlaxElectraForMaskedLM`
- `FlaxElectraForPreTraining`
- `FlaxElectraForMultipleChoice`
- `FlaxElectraForQuestionAnswering`
- `FlaxElectraForSequenceClassification`
- `FlaxElectraForTokenClassification`

Most of the code is taken from the FlaxBert code and the PyTorch Electra code, and it also adapts code from the original PR by @chris-tng (credit where it's due, since he started this in #9172). Running the tests (including the slow ones) works, and I have already tested it on my downstream task, where it seems to be working.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

Tagging @patrickvonplaten @sgugger @chris-tng, feel free to tag other people.
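A short usage sketch for the classes added here, assuming the standard `google/electra-small-discriminator` checkpoint; per the comments above, `from_pt=True` may be needed until native Flax weights are uploaded:

```python
from transformers import ElectraTokenizerFast, FlaxElectraModel

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraModel.from_pretrained(
    "google/electra-small-discriminator", from_pt=True  # converts PyTorch weights
)

inputs = tokenizer("Flax Electra says hello", return_tensors="np")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```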
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11426/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11426/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11426", "html_url": "https://github.com/huggingface/transformers/pull/11426", "diff_url": "https://github.com/huggingface/transformers/pull/11426.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11426.patch", "merged_at": 1620154570000 }
https://api.github.com/repos/huggingface/transformers/issues/11425
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11425/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11425/comments
https://api.github.com/repos/huggingface/transformers/issues/11425/events
https://github.com/huggingface/transformers/issues/11425
866,946,095
MDU6SXNzdWU4NjY5NDYwOTU=
11,425
ALBERT: The following keyword arguments are not supported by this model: ['cls_index', 'p_mask', 'is_impossible'].
{ "login": "PremalMatalia", "id": 42915124, "node_id": "MDQ6VXNlcjQyOTE1MTI0", "avatar_url": "https://avatars.githubusercontent.com/u/42915124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PremalMatalia", "html_url": "https://github.com/PremalMatalia", "followers_url": "https://api.github.com/users/PremalMatalia/followers", "following_url": "https://api.github.com/users/PremalMatalia/following{/other_user}", "gists_url": "https://api.github.com/users/PremalMatalia/gists{/gist_id}", "starred_url": "https://api.github.com/users/PremalMatalia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PremalMatalia/subscriptions", "organizations_url": "https://api.github.com/users/PremalMatalia/orgs", "repos_url": "https://api.github.com/users/PremalMatalia/repos", "events_url": "https://api.github.com/users/PremalMatalia/events{/privacy}", "received_events_url": "https://api.github.com/users/PremalMatalia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The function prepares the dataset for the standard models as well the XLNet model that requires more arguments. You will need to drop those columns for an Albert models.\r\n\r\nWe are in the process of reworking the TensorFlow examples, so there should be one clearer example for QA soon!", "Thanks @sgugger ...\r\nI tried to modify `squad_convert_examples_to_features` as in attached file to return without these columns but encountered anther error as below:\r\n[transformer_fine_tune_pre_trained_model_tftpu (1).zip](https://github.com/huggingface/transformers/files/6379976/transformer_fine_tune_pre_trained_model_tftpu.1.zip)\r\n\r\n\r\n\r\n\r\n\r\n##ERROR:\r\nValueError Traceback (most recent call last)\r\n<ipython-input-59-d85abec8ae26> in <module>()\r\n 1 # Training\r\n 2 if training_args.do_train:\r\n----> 3 trainer.train()\r\n 4 trainer.save_model()\r\n 5 tokenizer.save_pretrained(training_args.output_dir)\r\n\r\n10 frames\r\n/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)\r\n 975 except Exception as e: # pylint:disable=broad-except\r\n 976 if hasattr(e, \"ag_error_metadata\"):\r\n--> 977 raise e.ag_error_metadata.to_exception(e)\r\n 978 else:\r\n 979 raise\r\n\r\nValueError: in user code:\r\n\r\n /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:697 distributed_training_steps *\r\n self.args.strategy.run(self.apply_gradients, inputs)\r\n /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:641 apply_gradients *\r\n self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables)))\r\n /usr/local/lib/python3.7/dist-packages/transformers/optimization_tf.py:232 apply_gradients *\r\n return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:604 apply_gradients **\r\n self._create_all_weights(var_list)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:783 _create_all_weights\r\n self._create_slots(var_list)\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/adam.py:127 _create_slots\r\n self.add_slot(var, 'm')\r\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:844 add_slot\r\n .format(strategy, var))\r\n\r\n ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7f12026a45d0>), which is different from the scope used for the original variable (<tf.Variable 'tf_albert_for_question_answering/albert/embeddings/word_embeddings/weight:0' shape=(30000, 128) dtype=float32, numpy=\r\n array([[ 0.01270407, 0.05987824, -0.06812993, ..., -0.01226719,\r\n 0.00817283, 0.00785217],\r\n [ 0.00733008, -0.00101211, -0.01069043, ..., -0.00968418,\r\n -0.0400394 , -0.04233308],\r\n [-0.02059615, 0.007892 , 0.02363562, ..., 0.01533034,\r\n -0.00429517, -0.01246009],\r\n ...,\r\n [ 0.0135935 , 0.00349383, 0.01223597, ..., -0.05456466,\r\n 0.09235671, -0.05717891],\r\n [-0.00492554, -0.05208753, -0.00323149, ..., 0.03003517,\r\n 0.0196551 , 0.06015572],\r\n [ 0.03892251, -0.024089 , -0.01364627, ..., 0.04010094,\r\n 0.05124779, -0.03588157]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you're restoring from a checkpoint outside the scope", "Can someone please help..? 
I am stuck here", "I have managed to remove extra tokens from the dataset but then TFTrainer.train() got stuck in infinite loop without any errors or logs.\r\n\r\nBelow is the link with latest code. Please suggest.\r\nhttps://colab.research.google.com/drive/17Rx2rkiqag6YAz_FnU9HyHYtqHJgpNs0?usp=sharing", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info - `transformers` version: 4.5.1 - Platform: Google Colab - Python version: 3.7 - PyTorch version (GPU?): NA - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help: @LysandreJik for the albert issue @sgugger for the trainer (tf_trainer) issue Models:/transformers/models/albert/modeling_tf_albert.py: @LysandreJik Library: /transformers/trainer_tf.py: @sgugger ## Information Model I am using: albert (albert-base-v2) for a SQuAD 2.0 question-answering fine-tuning exercise. The task I am working on is: - I am using functions from the official transformer scripts with the official SQuAD 2.0 dataset: https://colab.research.google.com/drive/13_ZEQJa_SNMTUh1OkOL1UfkWwObpGY2i?usp=sharing [transformer_fine_tune_pre_trained_model_tftpu.zip](https://github.com/huggingface/transformers/files/6371487/transformer_fine_tune_pre_trained_model_tftpu.zip) ## To reproduce Steps to reproduce the behavior: 1. Run the shared script with a GPU on Google Colab; it will give an error when it tries to run trainer.train() with the SQuAD 2.0 TF dataset. ## Error message: ValueError: in user code: /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:697 distributed_training_steps * self.args.strategy.run(self.apply_gradients, inputs) /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:639 apply_gradients * gradients = self.training_step(features, labels, nb_instances_in_global_batch) /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:622 training_step * per_example_loss, _ = self.run_model(features, labels, True) /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:742 run_model * outputs = self.model(features, training=training, **labels)[:2] /usr/local/lib/python3.7/dist-packages/transformers/models/albert/modeling_tf_albert.py:1341 call * inputs = input_processing( /usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py:351 input_processing * raise ValueError( ValueError: The following keyword arguments are not supported by this model: ['cls_index', 'p_mask', 'is_impossible']. ## Expected behavior - The 'squad_convert_examples_to_features' function should have taken care of which features are passed to the model, based on the pre-trained tokenizer that was defined. - If not, the steps to remove/add the features required for ALBERT training/fine-tuning tasks should be documented somewhere [as far as I know, this is not documented anywhere].
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11425/timeline
completed
null
null
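Following up on the first comment in the thread above (drop the XLNet-only columns for an ALBERT model), a hedged sketch of how that could look. The `(features, labels)` element structure and the `train_dataset` name are assumptions about the user's squad `tf.data.Dataset`.

```python
import tensorflow as tf

# Keys only the XLNet-style QA head understands; ALBERT rejects them.
UNSUPPORTED = ("cls_index", "p_mask", "is_impossible")

def drop_xlnet_columns(features, labels):
    # Strip the unsupported keys from both sides of the element, since the
    # TF trainer forwards both dicts to the model.
    features = {k: v for k, v in features.items() if k not in UNSUPPORTED}
    labels = {k: v for k, v in labels.items() if k not in UNSUPPORTED}
    return features, labels

# `train_dataset` is assumed to be the tf.data.Dataset built by
# squad_convert_examples_to_features(..., return_dataset="tf").
train_dataset = train_dataset.map(drop_xlnet_columns)
```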
https://api.github.com/repos/huggingface/transformers/issues/11424
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11424/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11424/comments
https://api.github.com/repos/huggingface/transformers/issues/11424/events
https://github.com/huggingface/transformers/issues/11424
866,946,073
MDU6SXNzdWU4NjY5NDYwNzM=
11,424
Simple questions about EncoderDecoderModel
{ "login": "qute012", "id": 33983084, "node_id": "MDQ6VXNlcjMzOTgzMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/33983084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qute012", "html_url": "https://github.com/qute012", "followers_url": "https://api.github.com/users/qute012/followers", "following_url": "https://api.github.com/users/qute012/following{/other_user}", "gists_url": "https://api.github.com/users/qute012/gists{/gist_id}", "starred_url": "https://api.github.com/users/qute012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qute012/subscriptions", "organizations_url": "https://api.github.com/users/qute012/orgs", "repos_url": "https://api.github.com/users/qute012/repos", "events_url": "https://api.github.com/users/qute012/events{/privacy}", "received_events_url": "https://api.github.com/users/qute012/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @qute012 \r\n\r\nThe `tie_weights` method ties all the weights of the encoder and decoder including the embedding. For this to work, the encoder and decoder need to be the same model (same class) i.e either BERT2BERT or ROBERTA2ROBERTA2 and with same size.\r\n\r\n> If i want to use different tokenizer between encoder and decoder inputs, does tie function ignore sharing embedding such as strict option?\r\n\r\nno, it does not ignore sharing embedding in that case because as I wrote above it expects both encoder and decoder to be the same model so implicitly assumes that the tokenizer will also be the same.\r\n\r\nBut if that's what you want to do you could manually untie the embeddings or just re-initialize both of them so they won't be shared/tied.", "Thanks to reply @patil-suraj \r\n\r\nFor example, is it right that encoder's embedding weights can be adjusted by decoder's input?\r\n\r\nThen if i want to untie, should i remove or comment out below code manually?\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L183\r\n\r\ndo you have any plan to add parameter that choosing tie function in EncoderDecoderModel class? I know bert2bert is better performance than random decoder's embedding weights, but it requires to extending for experiment newly when each encoder and decoder use different vocabulary. If you okay, i will PR.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
First, thank you for the great work! 1. Does that tie function share the pretrained weights of the encoder's embedding with the decoder's embedding? https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L185 If I want to use a different tokenizer for the encoder and decoder inputs, does the tie function skip sharing the embedding, e.g. via a strict option?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11424/timeline
completed
null
null
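As a hedged illustration of the "manually untie the embeddings or just re-initialize both of them" suggestion in the thread above: give the decoder its own freshly initialized embedding matrix sized for a separate decoder vocabulary. The vocabulary size and initialization std are placeholders.

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Hypothetical size of a different decoder tokenizer's vocabulary.
decoder_vocab_size = 32000

# Resize the decoder's input embeddings (and its internally tied LM head),
# then re-initialize them so nothing is shared with the encoder.
model.decoder.resize_token_embeddings(decoder_vocab_size)
model.decoder.get_input_embeddings().weight.data.normal_(mean=0.0, std=0.02)
```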
https://api.github.com/repos/huggingface/transformers/issues/11423
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11423/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11423/comments
https://api.github.com/repos/huggingface/transformers/issues/11423/events
https://github.com/huggingface/transformers/issues/11423
866,913,366
MDU6SXNzdWU4NjY5MTMzNjY=
11,423
IBert: What would be the possible reason `IntLayerNorm` does not decrease the loss?
{ "login": "kyoungrok0517", "id": 1051900, "node_id": "MDQ6VXNlcjEwNTE5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1051900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoungrok0517", "html_url": "https://github.com/kyoungrok0517", "followers_url": "https://api.github.com/users/kyoungrok0517/followers", "following_url": "https://api.github.com/users/kyoungrok0517/following{/other_user}", "gists_url": "https://api.github.com/users/kyoungrok0517/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoungrok0517/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoungrok0517/subscriptions", "organizations_url": "https://api.github.com/users/kyoungrok0517/orgs", "repos_url": "https://api.github.com/users/kyoungrok0517/repos", "events_url": "https://api.github.com/users/kyoungrok0517/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoungrok0517/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It is likely that the IntLayerNorm layers are not warmed up. IntLayerNorm layer has to adjust its internal parameter (`self.shift`) during the quantization-aware training process. It is the one that prevents overflow (i.e. keeps the internal activation values to be less than 2**32) and is initialized with zero. Here is the relevant part in the code: https://github.com/huggingface/transformers/blob/master/src/transformers/models/ibert/quant_modules.py#L508\r\n\r\nTherefore, if you skip the quantization-aware training process and immediately use the model for inference, those layers may produce some unexpected outcomes. Could this be your case?", "Yeah I'm not using quantization-aware training so that'll be the reason. Thanks for the answer!" ]
1,619
1,619
1,619
NONE
null
### Who can help @kssteven418 ## Information IBert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## Problem Hello. I'm trying to use `IntLayerNorm` in my model. The model without the layer trains properly, but if I add the layer as follows, the model suddenly stops learning (the loss does not change). What could be the reason? ```python def forward(self, x, k: int = None): x, scaling_factor = self.pre_linear_act(x) # QuantAct x, scaling_factor = self.linear(x, scaling_factor) # QuantLinear x, scaling_factor = self.post_linear_act(x, scaling_factor) # QuantAct # normalize if self.normalize: x, scaling_factor = self.layer_norm(x, scaling_factor) # IntLayerNorm x, scaling_factor = self.post_layernorm_act(x, scaling_factor) # QuantAct ``` ## Update I observed that the output tensor of `self.layer_norm` is all zero.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11423/timeline
completed
null
null
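Based on the explanation in the thread above (IntLayerNorm's internal `shift` starts at zero and is only adjusted during quantization-aware training), a hedged sketch of a short QAT warm-up before quantized inference. `model`, `train_loader`, and `loss_fn` are placeholders for the user's own setup.

```python
import torch

# Run a short quantization-aware warm-up: the forward passes in training
# mode let layers like IntLayerNorm calibrate their internal parameters.
model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
for step, (batch, labels) in enumerate(train_loader):
    optimizer.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    optimizer.step()
    if step >= 100:  # a small number of warm-up steps; tune as needed
        break
model.eval()  # only now use the model for quantized inference
```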
https://api.github.com/repos/huggingface/transformers/issues/11422
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11422/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11422/comments
https://api.github.com/repos/huggingface/transformers/issues/11422/events
https://github.com/huggingface/transformers/issues/11422
866,904,892
MDU6SXNzdWU4NjY5MDQ4OTI=
11,422
Transformers 4.1.1 & Tensorflow 2.0, AttributeError: module'tensorflow_core.keras.activations' has no attribute'swish'
{ "login": "kangzhiheng", "id": 30773002, "node_id": "MDQ6VXNlcjMwNzczMDAy", "avatar_url": "https://avatars.githubusercontent.com/u/30773002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kangzhiheng", "html_url": "https://github.com/kangzhiheng", "followers_url": "https://api.github.com/users/kangzhiheng/followers", "following_url": "https://api.github.com/users/kangzhiheng/following{/other_user}", "gists_url": "https://api.github.com/users/kangzhiheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/kangzhiheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kangzhiheng/subscriptions", "organizations_url": "https://api.github.com/users/kangzhiheng/orgs", "repos_url": "https://api.github.com/users/kangzhiheng/repos", "events_url": "https://api.github.com/users/kangzhiheng/events{/privacy}", "received_events_url": "https://api.github.com/users/kangzhiheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The setup was updated super late: we already required tensorflow >= 2.3 for a while when we finally went to it. I don't know which version of Transformers supports tensorflow 2.0 but I would guess it's 3.0 or even below.", "Okay, hope the document can be clearer. Every time I read `README.md`, it says `This repository is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for examples) and TensorFlow 2.0` similar words, I really think it is the specific version 2.0 of tensorflow. However, it is not(tensorflow>=2.3 in `setup.py`). It would be better if the version information is clearer I guess.\r\nHope `transformers` get better and better.", "Yes, this part of the README has not been updated in a while (the PyTorch version is also wrong). Will adjust!", "> Yes, this part of the README has not been updated in a while (the PyTorch version is also wrong). Will adjust!\r\n\r\nThanks 🤗." ]
1,619
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Mac - Python version: 3.6 - PyTorch version (GPU?): No - Tensorflow version (GPU?): 2.0.0 No GPU - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information @Rocketknight1 @sgugger I had to use `tensorflow 2.0` for certain reasons. I checked all released transformers versions and found that before `4.1.1`, tensorflow>=2.0 is required, and after `4.1.1`, tensorflow>=2.3 (in `setup.py`), so `4.1.1` is installed. When I run ``` from transformers import AutoTokenizer, AutoModel ``` it raises ``` AttributeError: module 'tensorflow_core.keras.activations' has no attribute 'swish' ``` So I checked the `keras` documentation of tf2.0 (https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/activations): there is indeed no swish function, but it does exist in `transformers/activations_tf.py` (lines 64~74) in transformers `4.1.1`: ``` ACT2FN = { "gelu": tf.keras.layers.Activation(gelu), "relu": tf.keras.activations.relu, "swish": tf.keras.activations.swish, "silu": tf.keras.activations.swish, "gelu_new": tf.keras.layers.Activation(gelu_new), "mish": tf.keras.layers.Activation(mish), "tanh": tf.keras.activations.tanh, "gelu_fast": tf.keras.layers.Activation(gelu_fast), } ``` ## Expected behavior When I modify `activations_tf.py`, it seems to work... ``` ACT2FN = { "gelu": tf.keras.layers.Activation(gelu), "relu": tf.keras.activations.relu, # "swish": tf.keras.activations.swish, # "silu": tf.keras.activations.swish, "gelu_new": tf.keras.layers.Activation(gelu_new), "mish": tf.keras.layers.Activation(mish), "tanh": tf.keras.activations.tanh, "gelu_fast": tf.keras.layers.Activation(gelu_fast), } ``` or define `swish` and `silu` in `activations_tf.py` like ``` def swish(): xxxx def silu(): xxxx ACT2FN = { "gelu": tf.keras.layers.Activation(gelu), "relu": tf.keras.activations.relu, "swish": tf.keras.layers.Activation(swish), "silu": tf.keras.layers.Activation(swish), "gelu_new": tf.keras.layers.Activation(gelu_new), "mish": tf.keras.layers.Activation(mish), "tanh": tf.keras.activations.tanh, "gelu_fast": tf.keras.layers.Activation(gelu_fast), } ``` I don't know if this counts as a bug. @Rocketknight1 @sgugger. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11422/timeline
completed
null
null
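For anyone pinned to TF 2.0 (which indeed lacks `tf.keras.activations.swish`), a hedged sketch of the workaround suggested in the report above: define swish manually as `x * sigmoid(x)` and map both `swish` and `silu` to it. Only the relevant entries of the mapping are shown.

```python
import tensorflow as tf

def swish(x):
    # swish(x) = x * sigmoid(x); equivalent to silu
    return x * tf.math.sigmoid(x)

# Abbreviated stand-in for the ACT2FN mapping in activations_tf.py.
ACT2FN = {
    "relu": tf.keras.activations.relu,
    "swish": tf.keras.layers.Activation(swish),
    "silu": tf.keras.layers.Activation(swish),
    "tanh": tf.keras.activations.tanh,
}
```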
https://api.github.com/repos/huggingface/transformers/issues/11421
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11421/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11421/comments
https://api.github.com/repos/huggingface/transformers/issues/11421/events
https://github.com/huggingface/transformers/issues/11421
866,868,752
MDU6SXNzdWU4NjY4Njg3NTI=
11,421
Race condition when using --save_total_limit, --load_best_model_at_end and deepspeed zero2+cpu_offload
{ "login": "chitkwan", "id": 22551285, "node_id": "MDQ6VXNlcjIyNTUxMjg1", "avatar_url": "https://avatars.githubusercontent.com/u/22551285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chitkwan", "html_url": "https://github.com/chitkwan", "followers_url": "https://api.github.com/users/chitkwan/followers", "following_url": "https://api.github.com/users/chitkwan/following{/other_user}", "gists_url": "https://api.github.com/users/chitkwan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chitkwan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chitkwan/subscriptions", "organizations_url": "https://api.github.com/users/chitkwan/orgs", "repos_url": "https://api.github.com/users/chitkwan/repos", "events_url": "https://api.github.com/users/chitkwan/events{/privacy}", "received_events_url": "https://api.github.com/users/chitkwan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I believe that is a correct workaround. Would you like to make a PR with it?", "Sure, happy to.", "@chitkwan, are you still inspired to make a PR to fix this? Thank you!", "Oh this has been fixed in #11748 I believe. Sorry I did not reference it in this issue.", "ah yes! \r\n\r\n@chitkwan, could you please validate that the `master` branch with the fix works for you and close this issue if it is so? Thank you!", "Sorry -- this fell off my todo list but thank you for the fix. \r\n\r\nThe original race condition I reported may not be easy to reproduce but I'll give it a go and report back. ", "I reran my failure condition and it no longer fails, so I think this can be closed. Thanks!" ]
1,619
1,624
1,624
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes, AWS p4d.24xlarge - Using distributed or parallel set-up in script?: yes, deepspeed ### Who can help Library: - deepspeed: @stas00 - trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): roberta-large The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce I'm fine-tuning using run_mlm.py. A race condition seems to exist when: 1. you limit the number of checkpoints with `--save_total_limit` 2. you enable `--load_best_model_at_end --metric_for_best_model eval_loss` 3. you use multigpu training with deepspeed zero2 + cpu_offload 4. when the best model happens to be at the head of the list returned by Trainer._sorted_checkpoints() This corner case happens because the checkpoint being deleted is the most recent one due to the swapping logic in `Trainer._sorted_checkpoints()` at https://github.com/huggingface/transformers/blob/bf2e0cf70b68e0d46cdf15a4ece1f5c0a03de084/src/transformers/trainer.py#L1818-L1821 When (by chance) the `best_model_index == 0`, the swapping logic will cause the most recent checkpoint to go to the head of the list. When `Trainer._rotate_checkpoints()` is then called, it starts deleting from the head and consequently deletes the most recent checkpoint. (Aside: this is actually probably another bug in itself -- you would never be able to resume training from the most recent checkpoint.) However, at this point, deepspeed has not finished writing its own global_checkpoint to the current checkpoint directory, causing the following error to be thrown: ``` INFO|trainer.py:1648] 2021-04-25 00:08:06,377 >> Saving model checkpoint to /mnt/experiments/roberta-large-mlm/checkpoint-23000 [INFO|configuration_utils.py:329] 2021-04-25 00:08:06,378 >> Configuration saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/config.json [INFO|modeling_utils.py:831] 2021-04-25 00:08:09,054 >> Model weights saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/pytorch_model.bin [INFO|tokenization_utils_base.py:1901] 2021-04-25 00:08:09,055 >> tokenizer config file saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/tokenizer_config.json [INFO|tokenization_utils_base.py:1907] 2021-04-25 00:08:09,055 >> Special tokens file saved in /mnt/experiments/roberta-large-mlm/checkpoint-23000/special_tokens_map.json [2021-04-25 00:08:09,211] [INFO] [logging.py:60:log_dist] [Rank 0] Saving model checkpoint: /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/mp_rank_00_model_states.pt [2021-04-25 00:08:13,004] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,004] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_0_mp_rank_00_optim_states.pt [INFO|trainer.py:1715] 2021-04-25 00:08:13,012 >> Deleting older checkpoint [/mnt/experiments/roberta-large-mlm/checkpoint-23000] due to args.save_total_limit [2021-04-25 00:08:13,015] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,016] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_5_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,035] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,036] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_4_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,148] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,148] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_1_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,192] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,193] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_7_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,193] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,194] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_2_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,219] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,220] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_6_mp_rank_00_optim_states.pt [2021-04-25 00:08:13,330] [INFO] [engine.py:1717:_copy_recovery_script] creating recovery script /mnt/experiments/roberta-large-mlm/checkpoint-23000/zero_to_fp32.py [2021-04-25 00:08:13,331] [INFO] [engine.py:1730:_save_zero_checkpoint] zero checkpoint saved /mnt/experiments/roberta-large-mlm/checkpoint-23000/global_step23000/zero_pp_rank_3_mp_rank_00_optim_states.pt Traceback (most recent call last): File "run_mlm.py", line 535, in <module> main() File "run_mlm.py", line 482, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1172, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1269, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1346, in _save_checkpoint self._rotate_checkpoints(use_mtime=True, output_dir=run_dir) File "/home/cklin/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1716, in _rotate_checkpoints shutil.rmtree(checkpoint) File "/usr/lib/python3.6/shutil.py", line 490, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/usr/lib/python3.6/shutil.py", line 488, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/mnt/experiments/roberta-large-mlm/checkpoint-23000' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Instead of swapping logic in the lines referenced above, `Trainer._sort_checkpoints()` might instead do ``` checkpoints_sorted.append(checkpoints_sorted[best_model_index]) checkpoints_sorted.remove(checkpoints_sorted[best_model_index]) ``` i.e., just move the best model to the end of the list. I believe this will guarantee that the checkpoints (excluding the best model) will be deleted earliest first. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11421/timeline
completed
null
null
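To make the fix proposed in the report above concrete, a small self-contained sketch of the "move the best checkpoint to the end" logic. The function name is illustrative, not the actual Trainer method.

```python
def sort_checkpoints(checkpoints_sorted, best_model_index):
    # Move the best checkpoint to the tail instead of swapping it with the
    # most recent one, so rotation always deletes oldest-first.
    if best_model_index is not None:
        best = checkpoints_sorted.pop(best_model_index)
        checkpoints_sorted.append(best)
    return checkpoints_sorted

# Corner case from the report: the best checkpoint sits at the head.
print(sort_checkpoints(["ckpt-100", "ckpt-200", "ckpt-300"], 0))
# ['ckpt-200', 'ckpt-300', 'ckpt-100'] -- ckpt-200 (the oldest) is deleted first
```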
https://api.github.com/repos/huggingface/transformers/issues/11420
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11420/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11420/comments
https://api.github.com/repos/huggingface/transformers/issues/11420/events
https://github.com/huggingface/transformers/issues/11420
866,845,185
MDU6SXNzdWU4NjY4NDUxODU=
11,420
[Question] Implementing character based tokenizer
{ "login": "ethen8181", "id": 12273134, "node_id": "MDQ6VXNlcjEyMjczMTM0", "avatar_url": "https://avatars.githubusercontent.com/u/12273134?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ethen8181", "html_url": "https://github.com/ethen8181", "followers_url": "https://api.github.com/users/ethen8181/followers", "following_url": "https://api.github.com/users/ethen8181/following{/other_user}", "gists_url": "https://api.github.com/users/ethen8181/gists{/gist_id}", "starred_url": "https://api.github.com/users/ethen8181/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ethen8181/subscriptions", "organizations_url": "https://api.github.com/users/ethen8181/orgs", "repos_url": "https://api.github.com/users/ethen8181/repos", "events_url": "https://api.github.com/users/ethen8181/events{/privacy}", "received_events_url": "https://api.github.com/users/ethen8181/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
Hi team, what's the recommended approach for implementing a character-based tokenizer? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11420/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11420/timeline
completed
null
null
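One possible answer to the question above, sketched under the assumption that a slow tokenizer subclass is acceptable: implement `_tokenize` as `list(text)` on top of `PreTrainedTokenizer`. Vocabulary persistence (`save_vocabulary`) is deliberately omitted, so treat this as a starting point rather than a complete tokenizer.

```python
from transformers import PreTrainedTokenizer

class CharTokenizer(PreTrainedTokenizer):
    def __init__(self, vocab, **kwargs):
        self._vocab = vocab  # e.g. {"<unk>": 0, "a": 1, "b": 2, ...}
        self._ids_to_tokens = {i: t for t, i in vocab.items()}
        super().__init__(**kwargs)

    @property
    def vocab_size(self):
        return len(self._vocab)

    def get_vocab(self):
        return dict(self._vocab)

    def _tokenize(self, text):
        return list(text)  # one token per character

    def _convert_token_to_id(self, token):
        return self._vocab.get(token, self._vocab.get(self.unk_token, 0))

    def _convert_id_to_token(self, index):
        return self._ids_to_tokens.get(index, self.unk_token)

# Hypothetical usage:
# tok = CharTokenizer(vocab={"<unk>": 0, "a": 1, "b": 2}, unk_token="<unk>")
# tok("ab")  # -> input_ids [1, 2]
```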
https://api.github.com/repos/huggingface/transformers/issues/11419
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11419/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11419/comments
https://api.github.com/repos/huggingface/transformers/issues/11419/events
https://github.com/huggingface/transformers/issues/11419
866,842,340
MDU6SXNzdWU4NjY4NDIzNDA=
11,419
Parameter in `DebertaV2Tokenizer.__init__()` without documentation: `split_by_punct`
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@BigBird01 : could you please help with this?", "I believe this parameter additionally splits the text on the punctuation, as it can be seen from the method it's calling:\r\n\r\nhttps://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L446-L464\r\n\r\nI think this docstring should help out:\r\n\r\nhttps://github.com/huggingface/transformers/blob/afe479adb5474250215438fe27db9dc9dbbbde09/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L500-L512\r\n\r\nTo see it in practice, you can try it with:\r\n\r\n```py\r\n>>> tok = DebertaV2Tokenizer.from_pretrained(\"microsoft/deberta-v2-xlarge\")\r\n>>> tok._tokenizer._run_split_on_punc(\"Hey, how's he doing?\")\r\n['Hey', ',', ' how', \"'\", 's he doing', '?']\r\n```", "Thanks @LysandreJik for the explanation. Yes, it will split input sentences by punctuation then tokenize the segments by SPM tokenizer. We found this can help the performance of **SQuAD** task, but not on other tasks, e.g. MNLI. So we set it to false by default.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,624
1,624
CONTRIBUTOR
null
The `split_by_punct` parameter in `DebertaV2Tokenizer.__init__()` should be documented: https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L89 @BigBird01 : could you please check this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11419/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11418
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11418/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11418/comments
https://api.github.com/repos/huggingface/transformers/issues/11418/events
https://github.com/huggingface/transformers/pull/11418
866,838,034
MDExOlB1bGxSZXF1ZXN0NjIyNjQ5MzY5
11,418
[Deepspeed] ZeRO-Infinity integration plus config revamp
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "> Great job @stas00! I like the solution you picked to be able to start initializing part of deepspeed inside the training arguments. Does it fully solve the chicken and egg problem you add?\r\n\r\nThank you!\r\n\r\nFor the current needs of `zero.Init()`, yes! As long as the user separates creating `TrainingArguments` from creating the `Trainer` and calling `from_pretrained` in between, which is what all examples do. It took quite a lot of trial and error, but I think it's pretty clean now. \r\n\r\nSplitting the configuration processing in 2 stages helped a lot too.\r\n\r\nI hope that me using a weak ref global object is a good solution, since w/o it we would somehow have to make the framework aware of deepspeed in multiple places and somehow pass the config to it - most likely by sticking the DS object into the model object. The neat thing is that it being a weak ref it goes away automatically as soon as the TrainerArguments are `gc`'ed. If down the road we discover a better way nothing prevents us from switching to it.\r\n\r\nI will clean up all the XXX's, I was originally planning to wait for 0.3.16 and include all the fixes there, but last time it took more than 10 days for them to make a new release, so I decided it'd be better for users to be able to use this code already, and will make another PR with extra changes next for deepspeed==0.3.16.\r\n", "The weakref is okay by me. The only other way to achieve this would be to have some \"singleton\" class where all instances share the same state, but the weakref is actually more adapted in this case." ]
1,619
1,619
1,619
CONTRIBUTOR
null
This PR: - [x] integrates ZeRO-Infinity - [x] revamps the configuration process, instead of the confusing to users sometimes-we-override-values, sometimes-we-don't - all values are now explicit unless they are set to `auto`, then and only then the Trainer will set them to the correct or recommended values. - [x] massively revamps the way the configuration is done. now splitting the config parsing into 2 phases - one happening at the very end of `TrainingArguments` and then a weak ref global module var is created which can then be queried by various `transformers` components w/o needing to change any APIs. The global object cleanly goes away when `TrainingArguments` goes out of scope. Users no longer need to make any special calls - just need to ensure the `TrainingArguments` object is created before `model.from_pretrained()` is called (like we do in all examples). Phase 2 happens during `train` where we get a few variables that weren't there during `TrainingArguments`, so the config gets completed here. - [x] ds_config is now passed to `zero.Init` in `from_pretrained` under ZeRO-3 since it now needs several configuration values - this is in preparation for fp32 and other important features. - [x] adds new tests for ZeRO-Inf and configuration. - [x] adds a minor fix in `get_regression_trainer` If you're testing this PR please make sure you install deepspeed master branch: ``` git clone https://github.com/microsoft/DeepSpeed cd DeepSpeed pip install -e . ``` ## Important changes Please note a major change is that now only params that are set to `auto` will get automatically overriden/set to the correct/recommended values, everything else is left as is. This is to avoid the previously confusing behavior of never being quite sure what gets overridden and what not despite the logger telling what it did override. The new behavior is completely unambiguous. See: examples - [zero2](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero2.json) - [zero3](https://github.com/huggingface/transformers/blob/0f221d2cce751182c455295ef2c03a2c1bd3d66b/tests/deepspeed/ds_config_zero3.json) It's ready to release now. 0.3.15 just has a debug print that is loud, fixed in their master. <!-- TODO: The following is probably best saved for the next PR as it'd probably require waiting for deepspeed==0.3.16 - [ ] may be revamp the resume to avoid first loading the model weights. can do it in another PR. PRs waiting to be integrated before this PR can be merged: - [ ] zero.init() ds_config arg - not yet created - [ ] new release is needed 0.3.16 --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11418/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11418", "html_url": "https://github.com/huggingface/transformers/pull/11418", "diff_url": "https://github.com/huggingface/transformers/pull/11418.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11418.patch", "merged_at": 1619458833000 }
https://api.github.com/repos/huggingface/transformers/issues/11417
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11417/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11417/comments
https://api.github.com/repos/huggingface/transformers/issues/11417/events
https://github.com/huggingface/transformers/pull/11417
866,823,012
MDExOlB1bGxSZXF1ZXN0NjIyNjM4NTU1
11,417
Enable option for subword regularization in more tokenizers.
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I found this somehow obscure function argument called `sample` at `AlbertTokenizer`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/albert/tokenization_albert.py#L189\r\n\r\nIt seems to enable subword regularization but with fixed parameters for `nbest_size` and `alpha`.\r\n\r\nhttps://github.com/google/sentencepiece/blob/351600c2971401f4e849147579aa1b5d42f614e1/python/src/sentencepiece/__init__.py#L110-L111\r\n\r\nI would remove that `sample` parameter and replace that with my solution which is more flexible. But that would mean we have a breaking change. As an alternative I could add my solution but keep the `sample` argument. But that would add more complexity to the code.\r\n\r\nWhat do you think? @sgugger @LysandreJik @stefan-it \r\n\r\nPS: Same here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/bert_generation/tokenization_bert_generation.py#L113\r\n\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/big_bird/tokenization_big_bird.py#L143\r\n\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/pegasus/tokenization_pegasus.py#L169\r\n\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/reformer/tokenization_reformer.py#L109\r\n\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/t5/tokenization_t5.py#L237\r\n\r\nhttps://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/src/transformers/models/xlnet/tokenization_xlnet.py#L191", "This argument is not called from anywhere so it's only accessible if users somehow rewrote the tokenize method to pass it along to the private method `_tokenize`. Therefore I think it's fine to do the breaking change and clean up the code using `sample=True`, but let's see what @patrickvonplaten and @LysandreJik think before going forward (note that Lysandre is on vacation until this Wednesday so he'll reply at the end of the week :-) ).", "Yes, removing the `sample` and cleaning up the `_tokenize()` method sounds good to me. As @sgugger said, it is private and nowhere is a `sample` or a `**kwargs` passed to that method.", "> Yes, removing the `sample` and cleaning up the `_tokenize()` method sounds good to me. As @sgugger said, it is private and nowhere is a `sample` or a `**kwargs` passed to that method.\r\n\r\nAgree!", "rebase upstream/master done", "> Yes, LGTM! Thanks a lot.\r\n\r\nHey @LysandreJik - this is not done yet. Please do not merge now. ;-)", "Oh, I was misled! There are indeed a few tokenizers remaining. Thank you for letting me know!", "This is ready to be merged from my point of view.", "Can you take care of the merge conflicts? Will review tomorrow :-)", "> Can you take care of the merge conflicts? Will review tomorrow :-)\r\n\r\n@sgugger All conflicts resolved & green CI\r\n\r\n\r\n", "> Great work on the tests, this is great. The tests could indeed be refactored in a common test if you feel like it.\r\n\r\nI will refactor the tests the next days. 
Shame on me that I criticized the lack of DRY in the tokenizers but did not follow the DRY principle in the tests.", "This is strange:\r\n\r\n`FAILED tests/test_hf_api.py::HfApiEndpointsTest::test_list_repos_objs - reque...`\r\n\r\nSee here: https://app.circleci.com/pipelines/github/huggingface/transformers/23276/workflows/bf1ad505-efdc-4394-8852-a07702b9f5be/jobs/209965/parallel-runs/0/steps/0-108\r\n\r\nWill trigget CI again,,,", "@LysandreJik @sgugger Tests are refactored and DRY now. CI is green again.\r\nIMO ready for merge.\r\n\r\nMaybe you want to investigate the flaky test (see my comment above)." ]
1,619
1,622
1,620
CONTRIBUTOR
null
see https://github.com/huggingface/transformers/pull/11149#pullrequestreview-643686428 ## To-do ### `AlbertTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] remove obscure function argument called `sample` - [x] check - [x] refactor test to follow DRY ### `BarthezTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `BertGenerationTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] remove obscure function argument called `sample` - [x] check - [x] refactor test to follow DRY ### `BigBirdTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] remove obscure function argument called `sample` - [x] check - [x] refactor test to follow DRY ### `CamembertTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `DebertaV2Tokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `M2M100Tokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `MarianTokenizer` - has src and target tokenizer - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `MBart50Tokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `PegasusTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] remove obscure function argument called `sample` - [x] check - [x] refactor test to follow DRY ### `ReformerTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] remove obscure function argument called `sample` - [x] check - [x] refactor test to follow DRY ### `Speech2TextTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `T5Tokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] remove obscure function argument called `sample` - [x] check - [x] refactor test to follow DRY ### `XLMProphetNetTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] check - [x] refactor test to follow DRY - <s>remove obscure function argument called `sample`</s> ### `XLNetTokenizer` - [x] add `sp_model_kwargs` param with test - [x] add pickle support with test - [x] remove obscure function argument called `sample` - [x] check - [x] refactor test to follow DRY ### `XML RoBERTa` - [x] refactor test to follow DRY ### General - [x] check if we changed all tokenizers - [x] add typing - [x] check if tok. is used in other functions - [x] also add changes to XLM RoBERTa tokenizer ### After review - [x] fix type comments with default `None` - [x] possibly remove `test_sentencepiece_skip_back_convert_check`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11417/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11417/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11417", "html_url": "https://github.com/huggingface/transformers/pull/11417", "diff_url": "https://github.com/huggingface/transformers/pull/11417.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11417.patch", "merged_at": 1620888295000 }
https://api.github.com/repos/huggingface/transformers/issues/11416
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11416/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11416/comments
https://api.github.com/repos/huggingface/transformers/issues/11416/events
https://github.com/huggingface/transformers/issues/11416
866,806,712
MDU6SXNzdWU4NjY4MDY3MTI=
11,416
Transformers Pegasus - how do I fine-tune another language?
{ "login": "seregadgl20-oss", "id": 80334862, "node_id": "MDQ6VXNlcjgwMzM0ODYy", "avatar_url": "https://avatars.githubusercontent.com/u/80334862?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seregadgl20-oss", "html_url": "https://github.com/seregadgl20-oss", "followers_url": "https://api.github.com/users/seregadgl20-oss/followers", "following_url": "https://api.github.com/users/seregadgl20-oss/following{/other_user}", "gists_url": "https://api.github.com/users/seregadgl20-oss/gists{/gist_id}", "starred_url": "https://api.github.com/users/seregadgl20-oss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seregadgl20-oss/subscriptions", "organizations_url": "https://api.github.com/users/seregadgl20-oss/orgs", "repos_url": "https://api.github.com/users/seregadgl20-oss/repos", "events_url": "https://api.github.com/users/seregadgl20-oss/events{/privacy}", "received_events_url": "https://api.github.com/users/seregadgl20-oss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @seregadgl20-oss \r\n\r\nIt would be nice if you use the forum (https://discuss.huggingface.co/) to ask such general questions. Issues are for bugs and feature requests. Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
How do I fine-tune the model for another language? Can anyone advise?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11416/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11416/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11415
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11415/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11415/comments
https://api.github.com/repos/huggingface/transformers/issues/11415/events
https://github.com/huggingface/transformers/issues/11415
866,783,083
MDU6SXNzdWU4NjY3ODMwODM=
11,415
Roberta Tokenizer cannot handle inputs with `<mask>` token
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, the tokenizer of RoBERTa is a byte-level BPE tokenizer. It is a subword tokenizer.\r\n\r\nIn your example with `app ` it completes the word, but there also exists full words within it's dictionary, as you can see when the input sequence is appropriate:\r\n\r\n```py\r\n>>> nlp('Hello, how are you <mask> sir?')\r\n[\r\n {'sequence': 'Hello, how are you doing sir?', 'score': 0.6784416437149048, 'token': 608, 'token_str': ' doing'}, \r\n {'sequence': 'Hello, how are you feeling sir?', 'score': 0.08236288279294968, 'token': 2157, 'token_str': ' feeling'}, \r\n {'sequence': 'Hello, how are you, sir?', 'score': 0.06469670683145523, 'token': 6, 'token_str': ','}, \r\n {'sequence': 'Hello, how are you looking sir?', 'score': 0.04527667537331581, 'token': 546, 'token_str': ' looking'}, \r\n {'sequence': 'Hello, how are you going sir?', 'score': 0.02970985323190689, 'token': 164, 'token_str': ' going'}]\r\n```", "@LysandreJik Thanks for your reply! I think this behavior is proper during the pretraining process: roberta completes or predicts the next token when we give the following input `app<mask>`. But I believe this is not expected when we call `fill-mask` pipeline.\r\n\r\nTake `app <mask>` as example, there is a space in `app <mask>`, which means the `app` token is a complete token and roberta should predict the next token. However, after tokenizing, the space vanished. To realize what I said, maybe the only way is to rewrite the predict function to find those token with a space in front of it.\r\n\r\nP.S. add an extra space `app <mask>` do not bring expected results as well.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info - `transformers` version: 4.5.1 - Platform: macOS-11.1-arm64-arm-64bit - Python version: 3.9.1 - PyTorch version (GPU?): 1.8.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ## Description It's a bizarre bug. When I encode a string with a `<mask>` token and decode it immediately, the space in front of the `<mask>` token disappears. ``` >>> from transformers import AutoTokenizer >>> tokenizer=AutoTokenizer.from_pretrained('roberta-base') >>> tokenized_inputs=tokenizer('I <mask> you')['input_ids'] >>> tokenizer.decode(tokenized_inputs) '<s>I<mask> you</s>' >>> ``` This leads to many problems; for example, `pipeline('fill-mask')` cannot provide valid results. ``` >>> from transformers import pipeline >>> nlp=pipeline('fill-mask') >>> nlp.tokenizer PreTrainedTokenizerFast(name_or_path='distilroberta-base', vocab_size=50265, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=False)}) >>> nlp('app <mask>') {'score': 0.09366267919540405, 'sequence': 'appeal', 'token': 18696, 'token_str': 'eal'} ``` It seems that the mask-filling process omits the space, which is not what we expect (we expect the token filled in for the mask to be a whole word rather than a sub-word, since there is a space as a separator). Has anyone else noticed this issue?
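A small diagnostic sketch of the cause, assuming the stock `roberta-base` checkpoint: as the tokenizer repr above shows, the mask token is an `AddedToken` with `lstrip=True`, so it absorbs the preceding space during tokenization and `decode` cannot restore it.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
ids = tokenizer("I <mask> you")["input_ids"]
# The space before <mask> is consumed by the lstrip=True mask token,
# while the space before "you" survives as the Ġ prefix on its token.
print(tokenizer.convert_ids_to_tokens(ids))
# ['<s>', 'I', '<mask>', 'Ġyou', '</s>']
```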
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11415/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11414
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11414/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11414/comments
https://api.github.com/repos/huggingface/transformers/issues/11414/events
https://github.com/huggingface/transformers/issues/11414
866,766,765
MDU6SXNzdWU4NjY3NjY3NjU=
11,414
checkpointing is not still covering all cases
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
NONE
null
## Environment info - `transformers` version: 4.5.0 - Platform: linux - Python version: 3.8 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): 1.8 - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help @sgugger ## Information Hi, some time ago you improved the checkpointing in the huggingface repo, but it still does not cover all cases. Here is one example: let's assume a user wraps a model in a class as below: ``` model = .... // one of huggingface model here # lets wrap it model = intrinsic_dimension_said(model, intrinsic_dim, training_args.output_dir, set()) ``` I include the wrapper class for completeness, but feel free to ignore it; this could be any class: ``` class IntrinsicDimensionLight: def __init__(self, module: nn.Module, intrinsic_dimension: int, output_dir, str_filter: Set[str] = set(), said=False, random_seed=1997): torch.manual_seed(random_seed) np.random.seed(random_seed) self.initial_value_path = os.path.join(output_dir, "initial_value") self.fastfood_params_path = os.path.join(output_dir, "fastfood_params") self.name_base_localname = [] self.initial_value = dict() self.fastfood_params = {} self.said = said self.said_size = len(list(module.named_parameters())) if self.said: assert intrinsic_dimension > self.said_size intrinsic_dimension -= self.said_size self.intrinsic_parameter = nn.Parameter( torch.zeros((intrinsic_dimension)).cpu()) module.register_parameter( "intrinsic_parameter", self.intrinsic_parameter) setattr(module, "intrinsic_parameter", self.intrinsic_parameter) length = 0 for name, param in module.named_parameters(): if param.requires_grad and all([x not in name for x in str_filter]): length += 1 self.initial_value[name] = v0 = ( param.clone().detach().requires_grad_(False).to(self.intrinsic_parameter.device) ) DD = np.prod(v0.size()) self.fastfood_params[name] = fastfood_vars( DD, self.intrinsic_parameter.device) base, localname = module, name while "." 
in localname: prefix, localname = localname.split(".", 1) base = base.__getattr__(prefix) self.name_base_localname.append((name, base, localname)) if "intrinsic_parameter" not in name: param.requires_grad_(False) if said: self.intrinsic_parameter_said = nn.Parameter( torch.ones((length)).cpu()) module.register_parameter( "intrinsic_parameter_said", self.intrinsic_parameter_said) setattr(module, "intrinsic_parameter_said", self.intrinsic_parameter_said) # If this is created before, here we save it and here it loads it. if not self.is_projection_params_saved(): self.save_required_params() self.load_required_params() def is_projection_params_saved(self): return os.path.isfile(self.fastfood_params_path) and\ os.path.isfile(self.initial_value_path) def load_required_params(self): # check and if intrinsic porjection mats exists load them. if self.is_projection_params_saved(): self.fastfood_params = torch.load(self.fastfood_params_path) self.initial_value = torch.load(self.initial_value_path) def save_required_params(self): # Saves the generates projection params. torch.save(self.initial_value, self.initial_value_path) torch.save(self.fastfood_params, self.fastfood_params_path) def move_to(self, x_tuple, target): if isinstance(x_tuple, torch.Tensor): return x_tuple.to(target) a = [] for x in x_tuple: if isinstance(x, torch.Tensor): a.append(x.to(target)) else: a.append(x) return tuple(a) def requires_to(self, x_tuple, target): if isinstance(x_tuple, torch.Tensor): x_tuple.requires_grad_(target) for x in x_tuple: if isinstance(x, torch.Tensor): x.requires_grad_(target) def fastfood_vars_requires_grad_(self, requires_grad): for item in self.fastfood_params.items(): self.requires_to(item, requires_grad) def __call__(self, module, inputs): index = 0 with torch.enable_grad(): for name, base, localname in self.name_base_localname: if localname == "intrinsic_parameter": continue self.initial_value[name] = self.initial_value[name].to( getattr(base, localname)) device_dtype = getattr(base, localname).dtype init_shape = self.initial_value[name].size() DD = np.prod(init_shape) self.fastfood_params[name] = self.move_to( self.fastfood_params[name], module.intrinsic_parameter.device) # Fastfood transform te replace dence P ray = fastfood_torched(module.intrinsic_parameter, DD, self.fastfood_params[name]).view( init_shape ) if self.said: ray = ray * self.intrinsic_parameter_said[index] param = (self.initial_value[name] + ray).to(device_dtype) delattr(base, localname) setattr(base, localname, param) index += 1 @staticmethod def apply(module, intrinsic_dimension, output_dir, str_filter=set(), said=False): for k, hook in module._forward_pre_hooks.items(): if isinstance(hook, IntrinsicDimensionLight) and hook.name == name: raise RuntimeError("Cannot register two intrinsic dimension hooks on " "the same parameter {}".format(name)) fn = IntrinsicDimensionLight( module, intrinsic_dimension, output_dir, str_filter, said) module.register_forward_pre_hook(fn) return fn @staticmethod def apply_with_tensor(module, intrinsic_vector, str_filter=set()): assert isinstance(intrinsic_vector, torch.Tensor) and intrinsic_vector.ndim == 1 for k, hook in module._forward_pre_hooks.items(): if isinstance(hook, IntrinsicDimensionLight) and hook.name == name: raise RuntimeError("Cannot register two intrinsic dimension hooks on " "the same parameter {}".format(name)) fn = IntrinsicDimensionLight( module, intrinsic_vector.size(0), str_filter, False) fn.intrinsic_parameter = intrinsic_vector module.register_forward_pre_hook(fn) return fn 
def intrinsic_dimension(module, intrinsic_dimension, output_dir, str_filter): IntrinsicDimensionLight.apply( module, intrinsic_dimension, output_dir, str_filter, False) return module def intrinsic_dimension_said(module, intrinsic_dimension, output_dir, str_filter): IntrinsicDimensionLight.apply( module, intrinsic_dimension, output_dir, str_filter, True) return module ``` Now, if you look at the model after it is reloaded in the Trainer, it comes back without this wrapper class. Could you also cover the case where a user wraps a model in a class? Thanks. ## Expected behavior The wrapped model should be reloaded correctly when resuming from a checkpoint.
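A minimal, self-contained illustration of why the wrapper is lost (independent of the class above): hooks registered on a module are not part of its `state_dict`, so the save/reload cycle used for checkpoint resuming silently drops them.

```python
import torch.nn as nn

model = nn.Linear(4, 4)
model.register_forward_pre_hook(lambda module, inputs: None)
print(len(model._forward_pre_hooks))  # 1

# Only tensors travel through state_dict; hooks and wrapper attributes do not.
restored = nn.Linear(4, 4)
restored.load_state_dict(model.state_dict())
print(len(restored._forward_pre_hooks))  # 0 -- the hook is gone
```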
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11414/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11413
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11413/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11413/comments
https://api.github.com/repos/huggingface/transformers/issues/11413/events
https://github.com/huggingface/transformers/issues/11413
866,731,325
MDU6SXNzdWU4NjY3MzEzMjU=
11,413
Allow adding custom logits processors in the `generate` method
{ "login": "wadimiusz", "id": 22571281, "node_id": "MDQ6VXNlcjIyNTcxMjgx", "avatar_url": "https://avatars.githubusercontent.com/u/22571281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wadimiusz", "html_url": "https://github.com/wadimiusz", "followers_url": "https://api.github.com/users/wadimiusz/followers", "following_url": "https://api.github.com/users/wadimiusz/following{/other_user}", "gists_url": "https://api.github.com/users/wadimiusz/gists{/gist_id}", "starred_url": "https://api.github.com/users/wadimiusz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wadimiusz/subscriptions", "organizations_url": "https://api.github.com/users/wadimiusz/orgs", "repos_url": "https://api.github.com/users/wadimiusz/repos", "events_url": "https://api.github.com/users/wadimiusz/events{/privacy}", "received_events_url": "https://api.github.com/users/wadimiusz/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "I think I could submit a pull request to this, if I had \r\n1) feedback on the idea (do you think it makes sense to do that?)\r\n2) a little help changing existing tests and/or implementing new tests to reflect the change.\r\n\r\nAlso, maybe one would need the new argument to be `Optional[LogitsProcessor]` instead of `Optional[LogitsProcessorList]`. Because `LogitsProcessotList` is a subclass of `LogitsProcessor`, this would allow adding both a list of logits processors and a single logits processor.\r\n\r\nWhat do you folks think? Would you accept this pull request (after maybe giving me some tips related to the tests)? ", "Hey @wadimiusz,\r\n\r\nSorry to only come back to you now! I think in general, I'm fine with such an extension. The only problem I see is that a user could add a custom logits processor that already exists (*e.g.* a user would create his own `LengthPenaltyLogitsProcessor`) and also pass `length_penalty=...` . But even in this case I guess we could just apply both processors and there shouldn't be a big problem. \r\n\r\n=> So I'm ok with this extension. Interested in hearing your thoughts about this @patil-suraj @Narsil", "I think it's a very nice idea !.\r\n\r\nThe problem you mention @patrickvonplaten I think will be relevant mostly for power users (that want to add a LogitsProcessor) so they should be careful in terms of how they use this tool. I guess we could emphasis this in the documentation for the `generate` function, that the simpler arguments are preferred for non advanced usage.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@wadimiusz Is there any update on this? I think it would be a great addition.", "Hi @ScientiaEtVeritas, the feature seems not hard to implement and I think I already have the code somewhere, but it would require nice and thorough tests that I don't have the time to write right now. If you could help me with the tests, we could submit a pull request together :)", "There used to be a PR that might be used as a starting point:\r\n\r\nhttps://github.com/huggingface/transformers/pull/12219\r\n\r\nThanks if you can work on this ! " ]
1,619
1,637
1,623
NONE
null
# 🚀 Feature request Hello, I'd like to request a new feature in the `generate` method of the `GenerationMixin` class from `generation_utils`. Specifically, I'd like a feature that allows a user to pass custom logits processors by adding a new argument `logit_processors: Optional[LogitsProcessorList] = None` to the `generate` method. ## Motivation I'd like to run generation on a pre-trained model and modify its output logits with a custom function before search or sampling is applied. I think this could be a common use case for controlled natural language generation, because one often wants to implement some simple restrictions on the generated logits. Here is an example of how this could be used: ``` import torch from transformers import GPT2LMHeadModel, GPT2Tokenizer, LogitsProcessor, LogitsProcessorList class MyLogitsProcessor(LogitsProcessor): def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor): return something_useful(scores) model = GPT2LMHeadModel.from_pretrained('gpt2') tokenizer = GPT2Tokenizer.from_pretrained('gpt2') logit_processors = LogitsProcessorList([MyLogitsProcessor()]) input_ids = tokenizer('This dog is cute', return_tensors='pt').input_ids model.generate(input_ids=input_ids, logit_processors=logit_processors) ``` ## Your contribution I have no experience in open source, but I can try to help if you need a hand. I think the general approach to implementing this is the following: 1) Add the `logit_processors: Optional[LogitsProcessorList] = None` argument to the `generate` method, 2) Add the same argument to the `_get_logits_processor` method of GenerationMixin and append the custom logits processors after all the other logits processors are in place. 3) Pass the custom logits processors to every call of `_get_logits_processor` in the `generate` method. What do you think?
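For concreteness, a hedged sketch of what a real processor could look like under this proposal; `BanTokenLogitsProcessor` is made up for illustration, and the `logit_processors` keyword is the requested — not yet existing — argument:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, LogitsProcessor, LogitsProcessorList

class BanTokenLogitsProcessor(LogitsProcessor):
    """Hypothetical example: set one token id to -inf at every generation step."""

    def __init__(self, banned_token_id: int):
        self.banned_token_id = banned_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.banned_token_id] = -float("inf")
        return scores

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
processors = LogitsProcessorList([BanTokenLogitsProcessor(tokenizer.eos_token_id)])
input_ids = tokenizer("This dog is cute", return_tensors="pt").input_ids
# `logit_processors` is the proposed argument from this feature request:
model.generate(input_ids=input_ids, logit_processors=processors)
```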
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11413/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11412
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11412/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11412/comments
https://api.github.com/repos/huggingface/transformers/issues/11412/events
https://github.com/huggingface/transformers/issues/11412
866,680,047
MDU6SXNzdWU4NjY2ODAwNDc=
11,412
Small bug while converting wav2vec2 model trained using fairseq to huggingface
{ "login": "harveenchadha", "id": 30959215, "node_id": "MDQ6VXNlcjMwOTU5MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/30959215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harveenchadha", "html_url": "https://github.com/harveenchadha", "followers_url": "https://api.github.com/users/harveenchadha/followers", "following_url": "https://api.github.com/users/harveenchadha/following{/other_user}", "gists_url": "https://api.github.com/users/harveenchadha/gists{/gist_id}", "starred_url": "https://api.github.com/users/harveenchadha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harveenchadha/subscriptions", "organizations_url": "https://api.github.com/users/harveenchadha/orgs", "repos_url": "https://api.github.com/users/harveenchadha/repos", "events_url": "https://api.github.com/users/harveenchadha/events{/privacy}", "received_events_url": "https://api.github.com/users/harveenchadha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @harveenchadha \r\n\r\nif you pass `skip_special_tokens=True` to `decode` method, it will skip all the special tokens. ", "Hi Suraj,\r\n\r\nThanks! That works. But in the inference service that is deployed to test in browser, how will this change?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,623
1,623
NONE
null
Hi, I was trying to convert a wav2vec2 model trained with fairseq so that it can be used with HuggingFace, but there is a small error at inference time. When I use the code below: ``` import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor # load pretrained model processor = Wav2Vec2Processor.from_pretrained('hf/output') model = Wav2Vec2ForCTC.from_pretrained("hf/output") # load audio audio_input, sample_rate = sf.read('004-M-23_001.wav') # pad input values and return pt tensor input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values # INFERENCE # retrieve logits & take argmax logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) # transcribe transcription = processor.decode(predicted_ids[0]) ``` I get this as the output: ``` <s>ह<s>ॉ<s>ट<s>ल<s> <s>र<s>ॉ<s>य<s>ल<s> <s>ह<s>े<s>र<s>ि<s>ट<s>े<s>ज<s> क<s>े<s> <s>च<s>ी<s>ज<s> <s>क<s>े<s> <s>ए<s>क<s> <s>ब<s>ह<s>ु<s>त<s> <s>अ<s>च्<s>छ<s>ा<s> <s>ह<s>ै<s> <s>क<s>्य<s>ा<s> <s> ``` but I should ideally get this: ``` हॉटल रॉयल हेरिटेज के चीज के एक बहुत अच्छा है क्या ``` I can easily solve this by using: ``` print(transcription.replace('<s>', '')) ``` But if I deploy the model, the inference output will contain ```<s>```, as I cannot change the output of the deployed model. Can you please let me know if I am making a mistake in the conversion process? My vocab.json looks like this: ``` {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3, "|": 4, "0": 5, "1": 6, "2": 7, "3": 8, "4": 9, "5": 10, "6": 11, "7": 12, "8": 13, "9": 14, "ँ": 15, "ं": 16, "ः": 17, "अ": 18, "आ": 19, "इ": 20, "ई": 21, "उ": 22, "ऊ": 23, "ऋ": 24, "ए": 25, "ऐ": 26, "ऑ": 27, "ओ": 28, "औ": 29, "क": 30, "ख": 31, "ग": 32, "घ": 33, "ङ": 34, "च": 35, "छ": 36, "ज": 37, "झ": 38, "ञ": 39, "ट": 40, "ठ": 41, "ड": 42, "ढ": 43, "ण": 44, "त": 45, "थ": 46, "द": 47, "ध": 48, "न": 49, "प": 50, "फ": 51, "ब": 52, "भ": 53, "म": 54, "य": 55, "र": 56, "ल": 57, "व": 58, "श": 59, "ष": 60, "स": 61, "ह": 62, "ा": 63, "ि": 64, "ी": 65, "ु": 66, "ू": 67, "ृ": 68, "ॅ": 69, "े": 70, "ै": 71, "ॉ": 72, "ो": 73, "ौ": 74, "्": 75, "क़": 76, "ख़": 77, "ग़": 78, "ज़": 79, "ड़": 80, "ढ़": 81, "फ़": 82, "य़": 83} ```
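As suggested in the comments above, the built-in fix is `skip_special_tokens=True`; a sketch reusing `processor` and `predicted_ids` from the snippet in this issue:

```python
# decode() forwards kwargs to the tokenizer, which drops <s> and other
# special tokens when skip_special_tokens=True
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
```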
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11412/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11411
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11411/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11411/comments
https://api.github.com/repos/huggingface/transformers/issues/11411/events
https://github.com/huggingface/transformers/issues/11411
866,668,503
MDU6SXNzdWU4NjY2Njg1MDM=
11,411
What do these model parameters mean?
{ "login": "roshan-k-patel", "id": 48667731, "node_id": "MDQ6VXNlcjQ4NjY3NzMx", "avatar_url": "https://avatars.githubusercontent.com/u/48667731?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roshan-k-patel", "html_url": "https://github.com/roshan-k-patel", "followers_url": "https://api.github.com/users/roshan-k-patel/followers", "following_url": "https://api.github.com/users/roshan-k-patel/following{/other_user}", "gists_url": "https://api.github.com/users/roshan-k-patel/gists{/gist_id}", "starred_url": "https://api.github.com/users/roshan-k-patel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roshan-k-patel/subscriptions", "organizations_url": "https://api.github.com/users/roshan-k-patel/orgs", "repos_url": "https://api.github.com/users/roshan-k-patel/repos", "events_url": "https://api.github.com/users/roshan-k-patel/events{/privacy}", "received_events_url": "https://api.github.com/users/roshan-k-patel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
"params_classifier.dense.weight" "params_classifier.dense.bias" "params_classifier.out_proj.weight" "params_classifier.out_proj.bias" Could someone please briefly explain these parameters to me? Using the DeBERTa model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11411/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11410
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11410/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11410/comments
https://api.github.com/repos/huggingface/transformers/issues/11410/events
https://github.com/huggingface/transformers/pull/11410
866,498,614
MDExOlB1bGxSZXF1ZXN0NjIyMzg0NDMy
11,410
wrong parent class in documentation
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "unsubscribe<https://github.com/notifications/unsubscribe-auth/ATPBJ2FNGKBXXXSN6ZH6POLTKIDLZANCNFSM43PQRSGA>.\r\n‏/\r\n0Merged #11410<https://github.com/huggingface/transformers/pull/11410> into master.\r\n\r\n—\r\nYou are receiving this because you are subscribed to this thread.\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/pull/11410#event-4639275602>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ATPBJ2CW5M6UJ4RCYT2YIOLTKIFCTANCNFSM43PQRSGA>.\r\n" ]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? The documentation linked to the parent class `PreTrainedTokenizerFast`, but it should link to the slow tokenizer class instead. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11410/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11410", "html_url": "https://github.com/huggingface/transformers/pull/11410", "diff_url": "https://github.com/huggingface/transformers/pull/11410.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11410.patch", "merged_at": 1619223555000 }
https://api.github.com/repos/huggingface/transformers/issues/11409
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11409/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11409/comments
https://api.github.com/repos/huggingface/transformers/issues/11409/events
https://github.com/huggingface/transformers/issues/11409
866,495,729
MDU6SXNzdWU4NjY0OTU3Mjk=
11,409
How to use GPU when running run_summarization.py
{ "login": "xuyeliu", "id": 31730733, "node_id": "MDQ6VXNlcjMxNzMwNzMz", "avatar_url": "https://avatars.githubusercontent.com/u/31730733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xuyeliu", "html_url": "https://github.com/xuyeliu", "followers_url": "https://api.github.com/users/xuyeliu/followers", "following_url": "https://api.github.com/users/xuyeliu/following{/other_user}", "gists_url": "https://api.github.com/users/xuyeliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/xuyeliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xuyeliu/subscriptions", "organizations_url": "https://api.github.com/users/xuyeliu/orgs", "repos_url": "https://api.github.com/users/xuyeliu/repos", "events_url": "https://api.github.com/users/xuyeliu/events{/privacy}", "received_events_url": "https://api.github.com/users/xuyeliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @liubest \r\n\r\nPlease make sure that your torch installation can detect the GPU. All scripts will run on GPU if it's available.\r\nIf you want to run on multiple GPUs, follow the docs [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch#distributed-training-and-mixed-precision).\r\n\r\nAnd if you just want to use a single GPU, you could select the device by setting `CUDA_VISIBLE_DEVICES=0\" which will select the first GPU.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
When I run run_summarization.py on my computer, which has 2 GPUs, the code does not use CUDA and is very slow. Can anyone tell me how to use Transformers with a GPU? Thank you very much!
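The example scripts use a GPU automatically whenever PyTorch can see one, so a quick sanity check (a minimal sketch) is:

```python
import torch

print(torch.cuda.is_available())  # must be True for the script to run on GPU
print(torch.cuda.device_count())  # should print 2 on this machine

# To pin the run to one GPU (shell command, shown as a comment):
#   CUDA_VISIBLE_DEVICES=0 python run_summarization.py ...
# To use both GPUs with distributed training:
#   python -m torch.distributed.launch --nproc_per_node 2 run_summarization.py ...
```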
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11409/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11408
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11408/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11408/comments
https://api.github.com/repos/huggingface/transformers/issues/11408/events
https://github.com/huggingface/transformers/issues/11408
866,469,888
MDU6SXNzdWU4NjY0Njk4ODg=
11,408
[CI] solving the pytest crashing and hanging CI job
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" }, { "id": 2991663546, "node_id": "MDU6TGFiZWwyOTkxNjYzNTQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Testing", "name": "Testing", "color": "19A601", "default": false, "description": "" } ]
open
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "`/usr/bin/time -v`'s output for `-n3`:\r\n```\r\n Command being timed: \"python -m pytest -n 3 --dist=loadfile -s --make-reports=tests_torch ./tests/\"\r\n User time (seconds): 1507.88\r\n System time (seconds): 56.49\r\n Percent of CPU this job got: 261%\r\n Elapsed (wall clock) time (h:mm:ss or m:ss): 9:59.30\r\n Average shared text size (kbytes): 0\r\n Average unshared data size (kbytes): 0\r\n Average stack size (kbytes): 0\r\n Average total size (kbytes): 0\r\n Maximum resident set size (kbytes): 7038804\r\n Average resident set size (kbytes): 0\r\n Major (requiring I/O) page faults: 1\r\n Minor (reclaiming a frame) page faults: 11559434\r\n Voluntary context switches: 3084008\r\n Involuntary context switches: 171112\r\n Swaps: 0\r\n File system inputs: 16456\r\n File system outputs: 3261440\r\n Socket messages sent: 0\r\n Socket messages received: 0\r\n Signals delivered: 0\r\n Page size (bytes): 4096\r\n Exit status: 0\r\n```\r\nwith `-n 4`:\r\n\r\n```\r\n Command being timed: \"python -m pytest -n 4 --dist=loadfile -s --make-reports=tests_torch ./tests/\"\r\n User time (seconds): 1533.02\r\n System time (seconds): 56.00\r\n Percent of CPU this job got: 306%\r\n Elapsed (wall clock) time (h:mm:ss or m:ss): 8:37.98\r\n Average shared text size (kbytes): 0\r\n Average unshared data size (kbytes): 0\r\n Average stack size (kbytes): 0\r\n Average total size (kbytes): 0\r\n Maximum resident set size (kbytes): 5797344\r\n Average resident set size (kbytes): 0\r\n Major (requiring I/O) page faults: 1231\r\n Minor (reclaiming a frame) page faults: 11090301\r\n Voluntary context switches: 2680563\r\n Involuntary context switches: 433387\r\n Swaps: 0\r\n File system inputs: 277920\r\n File system outputs: 3261200\r\n Socket messages sent: 0\r\n Socket messages received: 0\r\n Signals delivered: 0\r\n Page size (bytes): 4096\r\n Exit status: 0\r\n```\r\n\r\nSo clearly this is not right max rss is smaller for `-n 4` then `-n 3` so it appears not to include `pytest` workers. The online information has very conflicting statements about whether forked processes are accounted for or not.\r\n\r\nSo we can't use this one.\r\n", "Thank you for this very in-depth analysis of the situation. It would probably be helpful to have a visualization of each test and how much memory it takes, it could help in singling out memory outliers; and it could also help to detect whether we actually have a memory leak.", "Yes, this is all a big project. Just little time to do it.\r\n\r\nI think the low-hanging fruit is to use `flake-finder` on some tests and see if the memory grows, to first identify if we have a leak. 
Normally unit-test refuses running the same test more than once.\r\n\r\nhttps://huggingface.co/transformers/testing.html#repeat-tests\r\n\r\nSo may be even an exhaustive search:\r\n\r\nfor each test record mem usage while:\r\n - run test once\r\n - run test 10 times\r\n \r\nI will see if I find some resources to try that.", "This pytest plugins looked promising: https://github.com/CFMTech/pytest-monitor but I can't get it to work.\r\n\r\nAccording to docs you just:\r\n```\r\npip install pytest-monitor\r\n```\r\nand then run `pytest` normally, and it should create a sqlite db with all the data in it, but when I open it I get no test records in it:\r\n\r\n```\r\npytest tests/test_logging.py\r\nsqlite3 .pymon\r\nsqlite> select * from TEST_METRICS;\r\n```\r\n\r\nIt should print the resource stats here, but it doesn't.\r\n\r\nI do get the sessions recorded, but it's not what we want:\r\n```\r\nsqlite> select * from EXECUTION_CONTEXTS;\r\n266c14dea4f9f8a6dae5e46be30e70b3|12|4367.12125|x86_64|Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz|128696|hope|x86_64|64bit|Linux - 5.4.0-70-generic|3.8.8 (default, Feb 24 2021, 21:46:12)\r\n[GCC 7.3.0]\r\nfaa034f4c783dc951159d07212d3a200|12|4300.12775|x86_64|Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz|128696|hope|x86_64|64bit|Linux - 5.4.0-70-generic|3.8.8 (default, Feb 24 2021, 21:46:12)\r\n[GCC 7.3.0]\r\n```\r\n\r\nPerhaps I'm missing something - read through the whole long docs at https://pytest-monitor.readthedocs.io/en/latest/index.html but I don't see that I'm doing anything wrong.\r\n\r\n**edit**: it doesn't work with unittest - too bad it doesn't mention that fact on their website. I created a normal test and then it records data.\r\n", "A little bit at a time I've been trying to work on this issue. At the moment trying to find a reliable way to take the measurements.\r\n\r\nI have thought of at least 3 main ways the leak could be occurring.\r\n\r\n1. leak in some API \r\n2. leak in badly written test that doesn't clean up after itself - so some object is created in the test and somehow it doesn't get destroyed (see also 3) \r\n3. \"functional leak\" as a side-effect of loading extra libraries - say we have 10 tests each loading 10 different libraries - each test will then make `pytest` grow just because it loaded something new - which is a variation on (2) - but how could a test unload the libraries it loaded. It'd be very inefficient practically.\r\n\r\nDetection:\r\n\r\n1) should be easy to detect by re-running the same test and noticing memory grow. My current algorithm - is to run the test once, ignore the memory usage because it could be loading a new module/lib, and run it second time to notice any difference here.\r\n\r\n2 and 3.) these are difficult to make sense of and thus much harder to catch (2) because by just looking at numbers one doesn't know if it was just a new library loaded, or was some object not cleaned up after the test." ]
1,619
1,622
null
CONTRIBUTOR
null
Recently we have had the `run_tests_torch` CI job randomly and frequently failing. We couldn't find any fault with any tests because there is never a traceback, just a hanging `pytest` that sends no output. This is usually a symptom that the process used more resources than it was allowed and was killed - of course, the python interpreter doesn't get a chance to make a peep - so no traceback. e.g. on colab processes get killed in the same way. ## Diagnostics 1. Go to the CI report and "rerun job with SSH"; it then enables SSH and gives you the cmd to access the CI instance. Use the instructions it shows you in `Enable SSH` to ssh to the instance. When done, remember to exit the ssh shells and `Cancel Job`, since otherwise the instance will continue running at $$$. 2. CI doesn't run docker with the `--privileged` flag, so most normal system tools are disabled and it's almost impossible to debug anything. Things like `dmesg` or `/var/sys/log` are not there; you can `sudo`, but you can hardly do anything with it. Ideally in such situations it'd be a good idea to switch from `docker` back to `machine`, where we would have full root access. 3. Resource limit ``` resource_class: xlarge ``` as of this writing gives you 16GB RAM. This is very confusing since when you log into the instance there are 70GB of memory reported in `top`. And if you try to monitor %MEM you get a very misleading low usage. It reports usage out of 70GB, not out of the cgroups memory limit of 16GB. How do we know the real limit: ``` $ cat /sys/fs/cgroup/memory/memory.limit_in_bytes | perl -ne 'print $_ / 2**30' 16 ``` Yup, 16GB 4. Now, it's very difficult to measure how much memory several forked processes use together; you can't use `top` for that. I had 2 consoles open, one with `top` and another running `pytest -n 8`, which I started manually. I noticed that once all 8 processes were around 2-2.5GB RSS, after a while one of the workers crashed. Then I found this handy tool thanks to https://unix.stackexchange.com/a/169129/291728 ``` apt install smem ``` ``` circleci@fc02c746bf66:~$ smem -t PID User Command Swap USS PSS RSS 6 circleci /bin/sh 0 88 88 92 1 circleci /sbin/docker-init -- /bin/s 0 48 123 740 17567 circleci /usr/bin/time -v python -m 0 96 145 1216 17568 circleci tee tests_output.txt 0 140 225 1828 495 circleci /bin/bash -eo pipefail -c w 0 292 526 1692 1511 circleci -bash 0 608 1066 3140 476 circleci -bash 0 620 1079 3148 18170 circleci /usr/bin/python /usr/bin/sm 0 13160 13286 15424 7 circleci /bin/circleci-agent --confi 0 29424 29424 29428 17569 circleci python -m pytest -n 8 --dis 0 151172 163118 254684 17588 circleci /usr/local/bin/python -u -c 0 348860 371932 526452 17594 circleci /usr/local/bin/python -u -c 0 1863416 1887735 2048128 17579 circleci /usr/local/bin/python -u -c 0 2028784 2052674 2210400 17591 circleci /usr/local/bin/python -u -c 0 2031872 2056217 2214712 17574 circleci /usr/local/bin/python -u -c 0 2098124 2122054 2282392 17585 circleci /usr/local/bin/python -u -c 0 2226080 2247464 2401880 17582 circleci /usr/local/bin/python -u -c 0 2226864 2249367 2404832 17597 circleci /usr/local/bin/python -u -c 0 2643552 2665199 2818968 ------------------------------------------------------------------------------- 18 1 0 15663200 15861722 17219156 ``` The PSS column seems to give correct totals, so I did: ``` watch -n 1 'smem -t | tail -1' ``` and indeed, once the total PSS hit ~16GB, pytest crashed. 
The failure we get is intermittent because the tests run in random order and sometimes 4 "fatter" tests run concurrently; at all other times, when it succeeds, we were lucky not to hit the bad combination. I tried to switch to: ``` resource_class: 2xlarge ``` which would give us 32GB, but apparently we aren't allowed to do so and need to ask for special permission from CircleCI admins. 5. What happens to the hanging processes? Clearly `pytest` doesn't recover from the crash. I think it can recover from other failures of its workers, but not when the kernel nukes one of them. When the resource limit gets hit, all but one of the workers were hanging in some strange place: ``` Thread 0x00007f65d91bb700 (most recent call first): File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 400 in read File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 432 in from_io File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 967 in _thread_receiver File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 220 in run File "/home/circleci/.local/lib/python3.7/site-packages/execnet/gateway_base.py", line 285 in _perform_spawn ``` If I look in `top`, all but one of the pytest workers stop working, blocked on the above. I figured that out by adding to `tests/conftest.py`: ``` import faulthandler faulthandler.dump_traceback_later(20, repeat=True) ``` So every 20 secs I was getting traceback reports on where things were hanging... But I'm not 100% sure that's why they are hanging; I would have to spend more time with it if we really want to understand why the other workers stop processing. So please don't take it as the truth, it's just one of the possibilities to check. But since understanding why they can't recover doesn't help our situation, I'm not going to waste time on it. ## Summary 1. we probably have a very small leak that grows over hundreds of tests, as memory usage slowly but consistently goes up 2. 16GB is just enough for our `pytest -n 4` - probably 75% of the time, until we add more tests 3. so we either need to ask for the 2xlarge instance, or use `-n 3` 4. ~probably it'd be a good idea to add~ (see next comment) ``` apt install time ``` and run `pytest` with: ``` /usr/bin/time -v python -m pytest ... ``` which will give us an in-depth resource usage report - so over time we should see whether our test suite consumes more and more resources. @LysandreJik, @sgugger
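As a side note, the `perl` one-liner above can be replicated in Python if anyone wants to script the check (assumes cgroup v1, as on these CircleCI images):

```python
# `top` reports the host's 70GB; the real limit is the cgroup one.
with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
    limit_gib = int(f.read()) / 2**30
print(limit_gib)  # -> 16.0 for the xlarge resource class
```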
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11408/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/11407
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11407/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11407/comments
https://api.github.com/repos/huggingface/transformers/issues/11407/events
https://github.com/huggingface/transformers/pull/11407
866,450,130
MDExOlB1bGxSZXF1ZXN0NjIyMzQ0Mzg4
11,407
Add basic support for FP16 in SageMaker model parallelism
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
COLLABORATOR
null
# What does this PR do? **Note:** This is not full support yet, as SageMaker Model Parallelism does not support gradient clipping, so a user has to change the default of `max_grad_norm` to 0 if they want to use mixed precision. With that caveat, this adds support for mixed precision training in SageMaker Model Parallelism mode. The change has been tested on `run_glue.py` without error (as long as the caveat above is respected). A defensive check is added so that the user gets an obvious error message if they don't change the default value of `max_grad_norm`.
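To make the caveat concrete, here is a minimal sketch of the arguments a user would pass; the output directory name is a placeholder.

```python
from transformers import TrainingArguments

# Sketch: with SageMaker Model Parallelism, gradient clipping is
# unsupported, so fp16 runs must disable it explicitly.
args = TrainingArguments(
    output_dir="out",       # placeholder directory
    fp16=True,
    max_grad_norm=0.0,      # keeping the default of 1.0 trips the new defensive check
)
```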
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11407/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11407", "html_url": "https://github.com/huggingface/transformers/pull/11407", "diff_url": "https://github.com/huggingface/transformers/pull/11407.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11407.patch", "merged_at": 1619441714000 }
https://api.github.com/repos/huggingface/transformers/issues/11406
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11406/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11406/comments
https://api.github.com/repos/huggingface/transformers/issues/11406/events
https://github.com/huggingface/transformers/pull/11406
866,445,997
MDExOlB1bGxSZXF1ZXN0NjIyMzQxMDUz
11,406
Pass along seed to DistributedSampler
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Should also be passed to the constructor for `DistributedLengthGroupedSampler`?", "Good point, added it!" ]
1,619
1,619
1,619
COLLABORATOR
null
# What does this PR do? This PR passes along the seed to `DistributedSampler` otherwise it always uses 0 for setting its RNG. See #11389 for more context. Fixes #11389
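A sketch of the underlying PyTorch behavior (not the `Trainer` code itself); the replica count and rank are placeholders, and `seed` has been a `DistributedSampler` argument since PyTorch 1.6.

```python
import torch
from torch.utils.data import DistributedSampler

# Forward the training seed instead of relying on the sampler's default
# of 0, so shuffling actually differs when the user changes the seed.
dataset = torch.arange(100)  # any sized dataset works
sampler = DistributedSampler(dataset, num_replicas=2, rank=0, seed=42)
sampler.set_epoch(0)  # still required so each epoch reshuffles differently
print(list(sampler)[:5])
```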
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11406/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11406", "html_url": "https://github.com/huggingface/transformers/pull/11406", "diff_url": "https://github.com/huggingface/transformers/pull/11406.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11406.patch", "merged_at": 1619447213000 }
https://api.github.com/repos/huggingface/transformers/issues/11405
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11405/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11405/comments
https://api.github.com/repos/huggingface/transformers/issues/11405/events
https://github.com/huggingface/transformers/pull/11405
866,320,654
MDExOlB1bGxSZXF1ZXN0NjIyMjM3NTA3
11,405
Default to accuracy metric in run_glue_no_trainer
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
COLLABORATOR
null
# What does this PR do? In `run_glue_no_trainer`, the metric is not properly initialized when no task name is passed, this PR fixes that. Fixes #11403
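A sketch of the fallback logic, assuming the script's use of `datasets.load_metric`; the variable names are illustrative.

```python
from datasets import load_metric

# Use the GLUE metric when a task name was given, otherwise default to
# plain accuracy (e.g. when the user supplies custom csv/json files).
task_name = None  # hypothetical: no --task_name passed
metric = load_metric("glue", task_name) if task_name is not None else load_metric("accuracy")
```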
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11405/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11405/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11405", "html_url": "https://github.com/huggingface/transformers/pull/11405", "diff_url": "https://github.com/huggingface/transformers/pull/11405.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11405.patch", "merged_at": 1619203800000 }
https://api.github.com/repos/huggingface/transformers/issues/11404
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11404/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11404/comments
https://api.github.com/repos/huggingface/transformers/issues/11404/events
https://github.com/huggingface/transformers/issues/11404
866,314,907
MDU6SXNzdWU4NjYzMTQ5MDc=
11,404
Documentation 404 error
{ "login": "gwc4github", "id": 3164663, "node_id": "MDQ6VXNlcjMxNjQ2NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3164663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwc4github", "html_url": "https://github.com/gwc4github", "followers_url": "https://api.github.com/users/gwc4github/followers", "following_url": "https://api.github.com/users/gwc4github/following{/other_user}", "gists_url": "https://api.github.com/users/gwc4github/gists{/gist_id}", "starred_url": "https://api.github.com/users/gwc4github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gwc4github/subscriptions", "organizations_url": "https://api.github.com/users/gwc4github/orgs", "repos_url": "https://api.github.com/users/gwc4github/repos", "events_url": "https://api.github.com/users/gwc4github/events{/privacy}", "received_events_url": "https://api.github.com/users/gwc4github/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "thanks alot ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "[https://huggingface.co/transformers/examples.html](https://huggingface.co/transformers/examples.html) now points towards [https://huggingface.co/docs/transformers/main/en/examples](https://huggingface.co/docs/transformers/main/en/examples), which is a 404." ]
1,619
1,659
1,622
NONE
null
For this page: https://huggingface.co/transformers/examples.html several (all?) of the links in the "The Big Table of Tasks" section are getting a 404 error. Ex: https://github.com/huggingface/transformers/tree/master/examples/token-classification
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11404/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11404/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11403
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11403/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11403/comments
https://api.github.com/repos/huggingface/transformers/issues/11403/events
https://github.com/huggingface/transformers/issues/11403
866,290,221
MDU6SXNzdWU4NjYyOTAyMjE=
11,403
metric is uninitialized when csv data is supplied to example/pytorch/text-classification/run_glue_no_trainer.py
{ "login": "daraghhartnett", "id": 31020255, "node_id": "MDQ6VXNlcjMxMDIwMjU1", "avatar_url": "https://avatars.githubusercontent.com/u/31020255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daraghhartnett", "html_url": "https://github.com/daraghhartnett", "followers_url": "https://api.github.com/users/daraghhartnett/followers", "following_url": "https://api.github.com/users/daraghhartnett/following{/other_user}", "gists_url": "https://api.github.com/users/daraghhartnett/gists{/gist_id}", "starred_url": "https://api.github.com/users/daraghhartnett/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daraghhartnett/subscriptions", "organizations_url": "https://api.github.com/users/daraghhartnett/orgs", "repos_url": "https://api.github.com/users/daraghhartnett/repos", "events_url": "https://api.github.com/users/daraghhartnett/events{/privacy}", "received_events_url": "https://api.github.com/users/daraghhartnett/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for flagging. The PR mentioned above will make it default to accuracy. Of course, you're free to change it to whatever you need!", "Glad to be of help!\r\nAh, I did not see that - I only looked in the Issues to see if it had already been reported. I will check the PR's as well the next time.\r\nThanks very much!", "The PR did not exist before you flagged the issue ;-) I opened it to fix it!", "Ah! Excellent! I am using the PR you proposed locally so I am back in business :)" ]
1,619
1,619
1,619
NONE
null
## Environment info - `transformers` version: 4.5.1 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.0 - PyTorch version (GPU?): 1.8.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: distributed - OS type and version: Mac OSX 10.14.6 ### Who can help - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): distilbert-base-uncased The problem arises when using: * [ x] the official example scripts: (give details below) Running the script: transformers/examples/pytorch/text-classification/run_glue_no_trainer.py With parameters: --model_name_or_path distilbert-base-uncased --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --train_file piracy_train.csv --validation_file piracy_validation.csv --output_dir /data/output/distilbert-base-uncased-piracy-no-trainer Yields the error: ` Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 100%|██████████| 2/2 [00:00<00:00, 12.75ba/s] 100%|██████████| 1/1 [00:00<00:00, 45.35ba/s] 04/22/2021 14:47:33 - INFO - __main__ - Sample 598 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 6084, 1997, 16298, 1024, 2006, 5641, 2233, 2418, 2012, 5511, 19961, 11396, 1999, 2597, 2410, 1011, 4720, 2078, 1011, 28714, 1011, 2322, 2063, 2019, 19842, 2988, 2048, 10756, 19801, 2018, 7333, 2176, 8301, 21807, 2008, 5411, 1996, 19842, 2000, 2306, 1015, 5830, 1012, 27120, 4273, 3036, 2136, 3662, 4255, 1998, 8301, 21807, 11672, 1012, 6258, 2003, 3647, 1012, 102], 'labels': 1}. 
04/22/2021 14:47:33 - INFO - __main__ - Sample 65 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 2105, 2260, 2078, 2213, 22064, 1997, 11937, 9148, 1011, 11937, 9148, 2479, 5137, 1012, 2048, 10027, 2152, 1011, 3177, 6242, 5411, 1037, 9625, 6839, 14128, 1012, 3040, 2992, 1996, 8598, 3626, 21900, 1998, 8878, 2811, 28405, 2543, 21290, 2015, 1012, 5137, 3212, 2038, 2042, 11925, 2011, 27527, 2557, 1012, 1996, 10027, 6242, 5411, 2000, 1037, 3292, 1997, 3156, 5563, 2013, 1996, 2911, 1998, 2333, 2185, 1012, 1996, 2911, 7943, 2014, 6019, 2000, 1996, 2279, 3417, 1997, 7688, 1012, 102], 'labels': 1}. 04/22/2021 14:47:33 - INFO - __main__ - Sample 877 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 7387, 1024, 2006, 2570, 2254, 1037, 9625, 6839, 2988, 2108, 2628, 2379, 2597, 5709, 1011, 2321, 2078, 4002, 2509, 1011, 2410, 2063, 3155, 3963, 13221, 2148, 1997, 16738, 1012, 1996, 2911, 2001, 7283, 2628, 2012, 1037, 3292, 1997, 1021, 2661, 1998, 2439, 1996, 10027, 6258, 2044, 1037, 3177, 3623, 1998, 2607, 2689, 1012, 102], 'labels': 1}. 04/22/2021 14:47:33 - INFO - __main__ - ***** Running training ***** 04/22/2021 14:47:33 - INFO - __main__ - Num examples = 1173 04/22/2021 14:47:33 - INFO - __main__ - Num Epochs = 3 04/22/2021 14:47:33 - INFO - __main__ - Instantaneous batch size per device = 32 04/22/2021 14:47:33 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32 04/22/2021 14:47:33 - INFO - __main__ - Gradient Accumulation steps = 1 04/22/2021 14:47:33 - INFO - __main__ - Total optimization steps = 111 33%|███▎ | 37/111 [07:48<14:24, 11.69s/it]Traceback (most recent call last): File "/Users/daraghhartnett/Projects/D3M/neural_text/code/transformers/examples/pytorch/text-classification/run_glue_no_trainer.py", line 441, in <module> main() File "/Users/daraghhartnett/Projects/D3M/neural_text/code/transformers/examples/pytorch/text-classification/run_glue_no_trainer.py", line 406, in main metric.add_batch( UnboundLocalError: local variable 'metric' referenced before assignment 33%|███▎ | 37/111 [07:49<15:39, 12.69s/it] ` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) Simple single sentence text classification ## To reproduce Steps to reproduce the behavior: 1. Pick any csv dataset with a train and validation files and run the transformers/examples/pytorch/text-classification/run_glue_no_trainer.py script using the following parameters: 2. --model_name_or_path distilbert-base-uncased --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --train_file piracy_train.csv --validation_file piracy_validation.csv --output_dir /data/output/distilbert-base-uncased-piracy-no-trainer 3. This will yield an error as the metric variable is not initialized when an optional args.task_name is not specified. 
(Running these steps produces the same log and `UnboundLocalError: local variable 'metric' referenced before assignment` traceback shown above.)

## Expected behavior

Since providing your own csv files is allowed, the metric object should be initialized when no task_name is provided.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11403/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11402
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11402/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11402/comments
https://api.github.com/repos/huggingface/transformers/issues/11402/events
https://github.com/huggingface/transformers/issues/11402
866,259,186
MDU6SXNzdWU4NjYyNTkxODY=
11,402
Positional embeddings are not applied when input embeddings are passed in for Pytorch DistilBert model
{ "login": "randimah", "id": 56127215, "node_id": "MDQ6VXNlcjU2MTI3MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/56127215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/randimah", "html_url": "https://github.com/randimah", "followers_url": "https://api.github.com/users/randimah/followers", "following_url": "https://api.github.com/users/randimah/following{/other_user}", "gists_url": "https://api.github.com/users/randimah/gists{/gist_id}", "starred_url": "https://api.github.com/users/randimah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/randimah/subscriptions", "organizations_url": "https://api.github.com/users/randimah/orgs", "repos_url": "https://api.github.com/users/randimah/repos", "events_url": "https://api.github.com/users/randimah/events{/privacy}", "received_events_url": "https://api.github.com/users/randimah/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That's correct indeed, and seems like a bug to me. Would you like to open a PR to fix the issue?", "Thanks for the confirmation. Yes, I should be able to do that.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info

- `transformers` version: 4.1.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.6
- PyTorch version (GPU?): 1.8.0+cpu (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

@julien-c

Models:
- albert, bert, xlm: @LysandreJik

## Information

Model I am using (DistilBertModel):

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:
1. Load pre-trained models for DistilBertForSequenceClassification and DistilBertTokenizer
2. Encode input text
3. Find input embeddings by passing the input ids through the pre-trained model's word embedding layer
4. Forward-pass the model with the encoded input ids
5. Forward-pass the model with the input embeddings found in step 3
6. Compare the logits of steps 4 and 5

```python
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

input_text = '''This is some sample text.
But I would like a model prediction on this.'''
pre_trained_model = 'distilbert-base-uncased'
model = DistilBertForSequenceClassification.from_pretrained(pre_trained_model)
tokenizer = DistilBertTokenizer.from_pretrained(pre_trained_model)

encoded_tokens = tokenizer.encode_plus(input_text, add_special_tokens=True, return_token_type_ids=True, return_tensors='pt')
input_embeds = model.distilbert.embeddings.word_embeddings(encoded_tokens['input_ids'])

scores_for_input_ids = model(input_ids=encoded_tokens['input_ids'], attention_mask=encoded_tokens["attention_mask"])
scores_for_input_embeds = model(inputs_embeds=input_embeds, attention_mask=encoded_tokens["attention_mask"])

print('Logits for input ids', scores_for_input_ids.logits)
print('Logits for input embeds', scores_for_input_embeds.logits)
```

Output
Logits for input ids tensor([[-0.0721, 0.0499]], grad_fn=<AddmmBackward>)
Logits for input embeds tensor([[ 0.0675, -0.0452]], grad_fn=<AddmmBackward>)

## Expected behavior

The logits returned for steps 4 and 5 above should be the same. For other PyTorch models such as Bert and Roberta, as well as the TensorFlow implementation of DistilBert (TFDistilBert), the logits returned for steps 4 and 5 are the same.

```python
from transformers import DistilBertTokenizer, BertForSequenceClassification

input_text = '''This is some sample text.
But I would like a model prediction on this.'''
pre_trained_model = 'bert-base-uncased'
model = BertForSequenceClassification.from_pretrained(pre_trained_model)
tokenizer = DistilBertTokenizer.from_pretrained(pre_trained_model)

encoded_tokens = tokenizer.encode_plus(input_text, add_special_tokens=True, return_token_type_ids=True, return_tensors='pt')
input_embeds = model.bert.embeddings.word_embeddings(encoded_tokens['input_ids'])

scores_for_input_ids = model(input_ids=encoded_tokens['input_ids'], attention_mask=encoded_tokens["attention_mask"])
scores_for_input_embeds = model(inputs_embeds=input_embeds, attention_mask=encoded_tokens["attention_mask"])

print('Logits for input ids', scores_for_input_ids.logits)
print('Logits for input embeds', scores_for_input_embeds.logits)
```

Output
Logits for input ids tensor([[-0.1336, 0.1173]], grad_fn=<AddmmBackward>)
Logits for input embeds tensor([[-0.1336, 0.1173]], grad_fn=<AddmmBackward>)

I dug deeper into this in the transformers library and found out that positional embeddings are not applied when input embeddings are passed to the model's forward pass, particularly in the DistilBert model. On the other hand, if the input ids are passed in, input embeddings are calculated from the input ids and positional embeddings are applied on top of that before being passed into the underlying transformer. In https://github.com/huggingface/transformers/blob/master/src/transformers/models/distilbert/modeling_distilbert.py lines 479-482.
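Until the fix lands, a workaround sketch is to reproduce by hand what the `input_ids` path does (word + position embeddings, LayerNorm, dropout) before handing the result to `inputs_embeds`. This is shown for the base `DistilBertModel`; the attribute layout (`embeddings.word_embeddings`, `embeddings.position_embeddings`, etc.) is assumed from the modeling file referenced above.

```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

model = DistilBertModel.from_pretrained("distilbert-base-uncased").eval()
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
input_ids = tokenizer("This is some sample text.", return_tensors="pt")["input_ids"]

emb = model.embeddings  # DistilBert's Embeddings module
positions = torch.arange(input_ids.size(1)).unsqueeze(0)
hidden = emb.word_embeddings(input_ids) + emb.position_embeddings(positions)
inputs_embeds = emb.dropout(emb.LayerNorm(hidden))  # dropout is a no-op in eval mode

out = model(inputs_embeds=inputs_embeds)  # should now match model(input_ids=input_ids)
```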
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11402/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11401
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11401/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11401/comments
https://api.github.com/repos/huggingface/transformers/issues/11401/events
https://github.com/huggingface/transformers/issues/11401
866,220,225
MDU6SXNzdWU4NjYyMjAyMjU=
11,401
Download offline HuggingFace models in a format other than ".bin"
{ "login": "deepaksinghtopwal", "id": 15118785, "node_id": "MDQ6VXNlcjE1MTE4Nzg1", "avatar_url": "https://avatars.githubusercontent.com/u/15118785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepaksinghtopwal", "html_url": "https://github.com/deepaksinghtopwal", "followers_url": "https://api.github.com/users/deepaksinghtopwal/followers", "following_url": "https://api.github.com/users/deepaksinghtopwal/following{/other_user}", "gists_url": "https://api.github.com/users/deepaksinghtopwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepaksinghtopwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepaksinghtopwal/subscriptions", "organizations_url": "https://api.github.com/users/deepaksinghtopwal/orgs", "repos_url": "https://api.github.com/users/deepaksinghtopwal/repos", "events_url": "https://api.github.com/users/deepaksinghtopwal/events{/privacy}", "received_events_url": "https://api.github.com/users/deepaksinghtopwal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure how to help here... can you download the files from another machine and sync them to the restricted env?", "No, that's the only problem that all the devices within organization are same network and have same policies(as mentioned above).. :)\r\nI see huggingFace has already provided many methods but the challenge is that none of it is working for me ..\r\n\r\nBut it does support downloading other format(i.e. i recently downloaded models from Easyocr in .pth and Spacy language models) , So just a query if any solution can be provided around that which can help me using Hugginface models in such restricted environment..\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
# 🚀 Feature request

Hi, just wondering if there is any other way of downloading Huggingface models in a restricted environment? For example:

**a.)** we have a network which doesn't allow downloading the model over the internet (so the auto-download feature in transformers won't work),
**b.)** we can't download .bin files (so we cannot download the files from the "Models" option on huggingface.co),
**c.)** using the Git clone option gives a proxy error.

So is there any other way which can be used to download the Huggingface models?
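For readers with a similar but less strict setup, the usual offline pattern is to point `from_pretrained` at a local directory; this sketch assumes the files (`config.json`, vocab files, and the weights) can be brought onto the machine by some out-of-band means, which the reporter says is not possible here. The path is hypothetical.

```python
from transformers import AutoModel, AutoTokenizer

local_dir = "/opt/models/bert-base-uncased"  # hypothetical local copy of the repo files
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModel.from_pretrained(local_dir)
```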
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11401/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11400
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11400/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11400/comments
https://api.github.com/repos/huggingface/transformers/issues/11400/events
https://github.com/huggingface/transformers/pull/11400
866,106,614
MDExOlB1bGxSZXF1ZXN0NjIyMDY1MjE3
11,400
[Wav2Vec2] Correct conversion script
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
MEMBER
null
# What does this PR do?

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11400/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11400", "html_url": "https://github.com/huggingface/transformers/pull/11400", "diff_url": "https://github.com/huggingface/transformers/pull/11400.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11400.patch", "merged_at": 1619184987000 }
https://api.github.com/repos/huggingface/transformers/issues/11399
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11399/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11399/comments
https://api.github.com/repos/huggingface/transformers/issues/11399/events
https://github.com/huggingface/transformers/issues/11399
866,085,603
MDU6SXNzdWU4NjYwODU2MDM=
11,399
unable to import transformers in Python <3.8
{ "login": "cdeepali", "id": 70963368, "node_id": "MDQ6VXNlcjcwOTYzMzY4", "avatar_url": "https://avatars.githubusercontent.com/u/70963368?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdeepali", "html_url": "https://github.com/cdeepali", "followers_url": "https://api.github.com/users/cdeepali/followers", "following_url": "https://api.github.com/users/cdeepali/following{/other_user}", "gists_url": "https://api.github.com/users/cdeepali/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdeepali/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdeepali/subscriptions", "organizations_url": "https://api.github.com/users/cdeepali/orgs", "repos_url": "https://api.github.com/users/cdeepali/repos", "events_url": "https://api.github.com/users/cdeepali/events{/privacy}", "received_events_url": "https://api.github.com/users/cdeepali/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I can add a PR to fix this. " ]
1,619
1,620
1,620
CONTRIBUTOR
null
## Environment info

- `transformers` version: 4.4.2
- Python version: 3.7

## To reproduce

Steps to reproduce the behavior:

1. Install transformers
```
conda create -y -n py37-trans python=3.7 transformers -c HuggingFace
conda activate py37-trans
```
2. `import transformers` throws the following error: `ModuleNotFoundError: No module named 'importlib_metadata'`

## Expected behavior

Import should be successful.
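The usual compatibility shim for this looks like the sketch below: prefer the stdlib module on Python >= 3.8 and fall back to the PyPI backport on older interpreters (the backport then has to be declared as a conditional dependency of the package).

```python
import sys

if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata
else:
    import importlib_metadata  # the PyPI backport, needed on Python < 3.8

print(importlib_metadata.version("transformers"))
```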
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11399/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11398
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11398/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11398/comments
https://api.github.com/repos/huggingface/transformers/issues/11398/events
https://github.com/huggingface/transformers/issues/11398
866,066,422
MDU6SXNzdWU4NjYwNjY0MjI=
11,398
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 237414383616 bytes. Error code 12 (Cannot allocate memory)
{ "login": "keloemma", "id": 40454218, "node_id": "MDQ6VXNlcjQwNDU0MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keloemma", "html_url": "https://github.com/keloemma", "followers_url": "https://api.github.com/users/keloemma/followers", "following_url": "https://api.github.com/users/keloemma/following{/other_user}", "gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}", "starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keloemma/subscriptions", "organizations_url": "https://api.github.com/users/keloemma/orgs", "repos_url": "https://api.github.com/users/keloemma/repos", "events_url": "https://api.github.com/users/keloemma/events{/privacy}", "received_events_url": "https://api.github.com/users/keloemma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should pass along small batches to the model to avoid this error: you should create a loop that goes over the I in `range(0, len(padded), batch_size)` and passes along the `padded[i: i+batch_size]` to your model, then concatenates the predictions back together.\r\n\r\nAlso note that this is not a bug in Transformers or a feature request so I invite you to continue the discussion on the [forums](https://discuss.huggingface.co/) if you need further assistance.", "I experienced this same error when using the sentiment analysis pipeline on a list of strings. I set the model argument in the pipeline to `\"nlptown/bert-base-multilingual-uncased-sentiment\"`.", "I am experiencing the same error even on a batch size of 2. ", "same problem" ]
1,619
1,685
1,619
NONE
null
## Environment info

- `transformers` version: 2.5.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help

Models:
- albert, bert, xlm: @LysandreJik, @sgugger

Model I am using (FlauBert): The problem arises when trying to produce features with the model; the output which is generated causes the system to run out of memory.

* [ ] the official example scripts: (I did not change much, pretty close to the original)

```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

# Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased',
# 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased']
modelname = 'flaubert/flaubert_base_cased'

# Load pretrained model and tokenizer
flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
# do_lowercase=False if using cased models, True if using uncased ones

sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)])

last_layer = flaubert(token_ids)[0]
print(last_layer.shape)  # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension)

# The BERT [CLS] token correspond to the first hidden state of the last layer
cls_embedding = last_layer[:, 0, :]
```

* [ ] My own modified scripts: (give details below)

```python
import numpy as np
import torch
from transformers import FlaubertModel, FlaubertTokenizer

def get_flaubert_layer(texte):
    modelname = "flaubert-base-uncased"
    path = './flau/flaubert-base-unc/'
    flaubert = FlaubertModel.from_pretrained(path)
    flaubert_tokenizer = FlaubertTokenizer.from_pretrained(path)
    tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512)))
    max_len = 0
    for i in tokenized.values:
        if len(i) > max_len:
            max_len = len(i)
    padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values])
    token_ids = torch.tensor(padded)
    with torch.no_grad():
        last_layer = flaubert(token_ids)[0][:, 0, :].numpy()
    return last_layer, modelname
```

The task I am working on is:
* [ ] Producing vectors/features from a language model and passing them to other classifiers

## To reproduce

Steps to reproduce the behavior:

1. Get the transformers library plus scikit-learn, pandas, numpy and pytorch
2. Last lines of code:

```python
# Reading the file
filename = "corpus"
sentences = pd.read_excel(os.path.join(root, filename + ".xlsx"), sheet_name=0)
data_id = sentences.identifiant
print("Total phrases: ", len(data_id))
data = sentences.sent
label = sentences.etiquette
emb, mdlname = get_flaubert_layer(data)  # corpus is a dataframe of approximately 40 000 lines
```

Apparently this line produces something huge which takes a lot of memory: `last_layer = flaubert(token_ids)[0][:,0,:].numpy()` I would have expected it to run, but I think passing the whole dataset to the model at once is what makes the system break, so I would like to know if it is possible to make the model process the dataset maybe 500 or 1000 lines at a time instead of all at once. I know there is a `batch_size` parameter, but since I am not training a model and am merely using it to produce embeddings as input for other classifiers, do you know how to control the batch size so the whole dataset is not processed at once? I am not really familiar with this type of architecture. In the example, they just put one single sentence, but in my case I load a whole dataset (dataframe). My expectation is to make the model process all the sentences and then produce the vectors I need for the classification task.
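A batched variant of the loop suggested in the comments, as a sketch; `batch_size` is a tunable assumption, not something the model dictates.

```python
import numpy as np
import torch

def embed_in_batches(model, token_ids, batch_size=64):
    """Run the model on small slices and concatenate the [CLS] embeddings."""
    chunks = []
    with torch.no_grad():
        for i in range(0, len(token_ids), batch_size):
            batch = token_ids[i : i + batch_size]
            chunks.append(model(batch)[0][:, 0, :].numpy())
    return np.concatenate(chunks, axis=0)

# usage inside get_flaubert_layer:
# last_layer = embed_in_batches(flaubert, token_ids)
```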
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11398/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11397
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11397/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11397/comments
https://api.github.com/repos/huggingface/transformers/issues/11397/events
https://github.com/huggingface/transformers/issues/11397
866,024,583
MDU6SXNzdWU4NjYwMjQ1ODM=
11,397
PreTrainedTokenizerFast.save_pretrained() ERROR
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It is because the version of `tokenizers` lib and the version of `transformers` lib do not match." ]
1,619
1,619
1,619
NONE
null
## Environment info

- `transformers` version: 4.1.1
- Platform: macOS-11.1-arm64-arm-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

## Description

It is a bizarre error. When I run

```
from transformers import AutoTokenizer
t=AutoTokenizer.from_pretrained('distilroberta-base')
t.save_pretrained('vocab/')
```

it gives me the following error:

```
>>> t.save_pretrained('vocab/')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2005, in save_pretrained
    return self._save_pretrained(
  File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 528, in _save_pretrained
    vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix)
  File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 172, in save_vocabulary
    files = self._tokenizer.model.save(save_directory, name=filename_prefix)
TypeError: PyModel.save() got an unexpected keyword argument: name
```

It looks like the saving process called the `save_vocabulary` function in `gpt2/tokenization_gpt2_fast.py` (the fast RoBERTa tokenizer inherits it), but the tokenizer I wanted to save is `distilroberta-base`. Does anyone have any ideas?
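As the follow-up comment diagnoses, this comes from mismatched `transformers` and `tokenizers` releases. A quick sanity check, as a sketch; reinstalling `transformers` (which pins a compatible `tokenizers`) is the usual fix.

```python
import tokenizers
import transformers

# The two versions must come from compatible releases; a too-old
# `tokenizers` lacks the `name` keyword that `save_vocabulary` passes.
print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)
```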
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11397/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11396
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11396/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11396/comments
https://api.github.com/repos/huggingface/transformers/issues/11396/events
https://github.com/huggingface/transformers/pull/11396
866,020,830
MDExOlB1bGxSZXF1ZXN0NjIxOTk1NjU5
11,396
Fix small typo in text
{ "login": "MaksymDel", "id": 8141935, "node_id": "MDQ6VXNlcjgxNDE5MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8141935?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MaksymDel", "html_url": "https://github.com/MaksymDel", "followers_url": "https://api.github.com/users/MaksymDel/followers", "following_url": "https://api.github.com/users/MaksymDel/following{/other_user}", "gists_url": "https://api.github.com/users/MaksymDel/gists{/gist_id}", "starred_url": "https://api.github.com/users/MaksymDel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MaksymDel/subscriptions", "organizations_url": "https://api.github.com/users/MaksymDel/orgs", "repos_url": "https://api.github.com/users/MaksymDel/repos", "events_url": "https://api.github.com/users/MaksymDel/events{/privacy}", "received_events_url": "https://api.github.com/users/MaksymDel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do?

Fixes a small typo.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).

Documentation + maintained examples: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11396/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11396/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11396", "html_url": "https://github.com/huggingface/transformers/pull/11396", "diff_url": "https://github.com/huggingface/transformers/pull/11396.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11396.patch", "merged_at": 1619177840000 }
https://api.github.com/repos/huggingface/transformers/issues/11395
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11395/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11395/comments
https://api.github.com/repos/huggingface/transformers/issues/11395/events
https://github.com/huggingface/transformers/pull/11395
865,956,397
MDExOlB1bGxSZXF1ZXN0NjIxOTQzMTc1
11,395
[Blenderbot] Integration Test should be slow
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!" ]
1,619
1,619
1,619
MEMBER
null
# What does this PR do?

The test takes more than 30 seconds on every commit; let's make it slow.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
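"Making it slow" means applying the library's `@slow` marker, which skips the test unless `RUN_SLOW=1` is set, so it runs in the nightly job rather than on every commit. A sketch (the class and method names here are illustrative, not the actual test):

```python
import unittest
from transformers.testing_utils import slow

class BlenderbotIntegrationTests(unittest.TestCase):
    @slow  # skipped unless RUN_SLOW=1 is set in the environment
    def test_generation(self):
        ...
```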
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11395/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11395", "html_url": "https://github.com/huggingface/transformers/pull/11395", "diff_url": "https://github.com/huggingface/transformers/pull/11395.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11395.patch", "merged_at": 1619178550000 }
https://api.github.com/repos/huggingface/transformers/issues/11394
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11394/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11394/comments
https://api.github.com/repos/huggingface/transformers/issues/11394/events
https://github.com/huggingface/transformers/pull/11394
865,926,849
MDExOlB1bGxSZXF1ZXN0NjIxOTE5NDA0
11,394
[Flax] Correct Flax <=> PyTorch conversion
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,619
1,619
1,619
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Original BERT weights were saved in a weird format where `gamma` was used as the parameter name for the LayerNorm weight. This should be taken into account when converting to Flax. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
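The `gamma` naming quirk described above is easy to illustrate: legacy BERT checkpoints store the LayerNorm scale/offset as `gamma`/`beta`, while modern PyTorch and Flax code expects `weight`/`bias`. A minimal sketch of the kind of key renaming involved; the helper name is an assumption, not the exact code merged in this PR:

```python
def rename_legacy_layer_norm_key(key: str) -> str:
    """Map TF-era BERT parameter names to modern PyTorch/Flax names."""
    key = key.replace("gamma", "weight")  # LayerNorm scale
    key = key.replace("beta", "bias")     # LayerNorm offset
    return key


print(rename_legacy_layer_norm_key("encoder.layer.0.output.LayerNorm.gamma"))
# -> encoder.layer.0.output.LayerNorm.weight
```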
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11394/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11394", "html_url": "https://github.com/huggingface/transformers/pull/11394", "diff_url": "https://github.com/huggingface/transformers/pull/11394.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11394.patch", "merged_at": 1619171974000 }
https://api.github.com/repos/huggingface/transformers/issues/11393
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11393/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11393/comments
https://api.github.com/repos/huggingface/transformers/issues/11393/events
https://github.com/huggingface/transformers/pull/11393
865,924,493
MDExOlB1bGxSZXF1ZXN0NjIxOTE3NTQw
11,393
[Flax] Typo
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,619
1,619
1,619
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes a typo in the examples `run_mlm_flax.py` script. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11393/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11393", "html_url": "https://github.com/huggingface/transformers/pull/11393", "diff_url": "https://github.com/huggingface/transformers/pull/11393.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11393.patch", "merged_at": 1619170499000 }
https://api.github.com/repos/huggingface/transformers/issues/11392
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11392/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11392/comments
https://api.github.com/repos/huggingface/transformers/issues/11392/events
https://github.com/huggingface/transformers/issues/11392
865,829,964
MDU6SXNzdWU4NjU4Mjk5NjQ=
11,392
Maybe there is a bug in class DebertaV2PredictionHeadTransform
{ "login": "startnew", "id": 15137679, "node_id": "MDQ6VXNlcjE1MTM3Njc5", "avatar_url": "https://avatars.githubusercontent.com/u/15137679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/startnew", "html_url": "https://github.com/startnew", "followers_url": "https://api.github.com/users/startnew/followers", "following_url": "https://api.github.com/users/startnew/following{/other_user}", "gists_url": "https://api.github.com/users/startnew/gists{/gist_id}", "starred_url": "https://api.github.com/users/startnew/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/startnew/subscriptions", "organizations_url": "https://api.github.com/users/startnew/orgs", "repos_url": "https://api.github.com/users/startnew/repos", "events_url": "https://api.github.com/users/startnew/events{/privacy}", "received_events_url": "https://api.github.com/users/startnew/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @startnew,\r\n\r\nSorry could you add some code that we could copy-paste into a terminal to reproduce the error? :-) I don't quite follow here - is it the official code (in `src/transformes`) that doesn't work or specific adapted code?", "Thank you for your reply, @patrickvonplaten , you can reproduce the error I encountered by opening the colab link below\r\n[https://colab.research.google.com/drive/1DiMkU0lEeZqj2AT9DrafP_X-PDZMzxuK#scrollTo=GeOz-1Ix-5HE](https://colab.research.google.com/drive/1DiMkU0lEeZqj2AT9DrafP_X-PDZMzxuK#scrollTo=GeOz-1Ix-5HE)", "refer from offical config ,the difference is I give an custum \"embedding_size\" and the \"embeddding_size\" is not equal to \"hidden_size\",the official use \"hidden_size” as \"embedding_size\", I guess this is the cause of the error", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hey @startnew,\r\n\r\nIt's sadly a bit too time consuming to dive into another repo - could you maybe post a short code snippet that shows your error without using any external github repos or code? \r\n\r\nUsually, you should be able to customize the `\"embedding_size\"` configuration parameter" ]
1,619
1,622
1,622
NONE
null
I got an **error** like this: `RuntimeError: mat1 dim 1 must match mat2 dim 0` when using the official DebertaV2 MLM. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0.dev0 - Platform: Ubuntu 18.04.3 LTS - Python version: 3.6 - PyTorch version (GPU?): 1.7.1+cu101 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: NO ### Who can help @LysandreJik @sgugger @patrickvonplaten ## Information Model I am using: DebertaV2 for MLM. The problem arises when using: * [ ] the official example scripts: [transformers/models/deberta_v2/modeling_deberta_v2.py](https://github.com/huggingface/transformers/blob/9f72e8f4e1e767c5f608dd135199e592255b8a69/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L1178) lines 1178 to 1213 * [x] official scripts like this: ```python # copied from transformers.models.bert.BertPredictionHeadTransform with bert -> deberta class DebertaV2PredictionHeadTransform(nn.Module): def __init__(self, config): super().__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) if isinstance(config.hidden_act, str): self.transform_act_fn = ACT2FN[config.hidden_act] else: self.transform_act_fn = config.hidden_act self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) def forward(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.transform_act_fn(hidden_states) hidden_states = self.LayerNorm(hidden_states) return hidden_states # copied from transformers.models.bert.BertLMPredictionHead with bert -> deberta class DebertaV2LMPredictionHead(nn.Module): def __init__(self, config): super().__init__() self.transform = DebertaV2PredictionHeadTransform(config) # The output weights are the same as the input embeddings, but there is # an output-only bias for each token. 
self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) self.bias = nn.Parameter(torch.zeros(config.vocab_size)) # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` self.decoder.bias = self.bias def forward(self, hidden_states): hidden_states = self.transform(hidden_states) hidden_states = self.decoder(hidden_states) return hidden_states ``` I got an **error** like this: `RuntimeError: mat1 dim 1 must match mat2 dim` > Traceback (most recent call last): File "train_pre_model.py", line 257, in <module> trainer.train(resume_from_checkpoint=last_checkpoint) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1120, in train tr_loss += self.training_step(model, inputs) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1524, in training_step loss = self.compute_loss(model, inputs) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1556, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 161, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 428, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1159, in forward prediction_scores = self.cls(sequence_output) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1224, in forward prediction_scores = self.predictions(sequence_output) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1213, in forward hidden_states = self.decoder(hidden_states) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py", line 93, in forward return F.linear(input, self.weight, self.bias) File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1692, in linear output = input.matmul(weight.t()) RuntimeError: mat1 dim 1 must match mat2 dim 0 * [ ] my own modified scripts: * When I followed https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py and changed [transformers/models/deberta_v2/modeling_deberta_v2.py] to: ```python class DebertaV2PredictionHeadTransform(nn.Module): def __init__(self, config): super().__init__() self.dense = nn.Linear(config.hidden_size, config.embedding_size) if isinstance(config.hidden_act, str): self.transform_act_fn = ACT2FN[config.hidden_act] else: self.transform_act_fn = config.hidden_act self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps) def forward(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.transform_act_fn(hidden_states) hidden_states = self.LayerNorm(hidden_states) return hidden_states # copied from transformers.models.bert.BertLMPredictionHead with bert -> deberta class DebertaV2LMPredictionHead(nn.Module): def __init__(self, config): super().__init__() self.transform = DebertaV2PredictionHeadTransform(config) # The output weights are the same as the input embeddings, but there is # an output-only bias for each token. self.decoder = nn.Linear(config.embedding_size, config.vocab_size, bias=False) self.bias = nn.Parameter(torch.zeros(config.vocab_size)) # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` self.decoder.bias = self.bias def forward(self, hidden_states): hidden_states = self.transform(hidden_states) #print(hidden_states.size()) #print(self.decoder) hidden_states = self.decoder(hidden_states) return hidden_states ``` This code works well, but I can't understand why it works when the official version does not
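The mismatch reported above can be reproduced without any checkpoint: when `embedding_size != hidden_size`, the official head hands a `hidden_size`-wide tensor to a decoder tied to the word embeddings, which expects `embedding_size` inputs. A self-contained sketch with illustrative sizes (not the reporter's actual configuration):

```python
import torch
import torch.nn as nn

hidden_size, embedding_size, vocab_size = 768, 128, 100

transform = nn.Linear(hidden_size, hidden_size)              # official head
decoder = nn.Linear(embedding_size, vocab_size, bias=False)  # tied to embeddings

hidden = transform(torch.randn(2, 4, hidden_size))  # (2, 4, 768)
try:
    decoder(hidden)
except RuntimeError as err:
    print(err)  # shape mismatch, matching the traceback above

# The fix quoted in the report: project down to embedding_size first.
transform_fixed = nn.Linear(hidden_size, embedding_size)
print(decoder(transform_fixed(torch.randn(2, 4, hidden_size))).shape)
# torch.Size([2, 4, 100])
```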
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11392/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11391
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11391/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11391/comments
https://api.github.com/repos/huggingface/transformers/issues/11391/events
https://github.com/huggingface/transformers/pull/11391
865,816,123
MDExOlB1bGxSZXF1ZXN0NjIxODMxNTA2
11,391
Fix typos in README for text-classification
{ "login": "yoshitomo-matsubara", "id": 11156001, "node_id": "MDQ6VXNlcjExMTU2MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yoshitomo-matsubara", "html_url": "https://github.com/yoshitomo-matsubara", "followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers", "following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}", "gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}", "starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions", "organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs", "repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos", "events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}", "received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? `transformers/examples/pytorch/text-classification/run_glue_no_trainer.py` has `max_length` instead of `max_seq_length` while the README uses `--max_seq_length` as example commands for `run_glue_no_trainer.py`. This PR fixes the typos. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11391/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11391", "html_url": "https://github.com/huggingface/transformers/pull/11391", "diff_url": "https://github.com/huggingface/transformers/pull/11391.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11391.patch", "merged_at": 1619178522000 }
https://api.github.com/repos/huggingface/transformers/issues/11390
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11390/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11390/comments
https://api.github.com/repos/huggingface/transformers/issues/11390/events
https://github.com/huggingface/transformers/issues/11390
865,781,214
MDU6SXNzdWU4NjU3ODEyMTQ=
11,390
S3 checkpoints not working with distributed training on SageMaker
{ "login": "laphang", "id": 24724502, "node_id": "MDQ6VXNlcjI0NzI0NTAy", "avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laphang", "html_url": "https://github.com/laphang", "followers_url": "https://api.github.com/users/laphang/followers", "following_url": "https://api.github.com/users/laphang/following{/other_user}", "gists_url": "https://api.github.com/users/laphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laphang/subscriptions", "organizations_url": "https://api.github.com/users/laphang/orgs", "repos_url": "https://api.github.com/users/laphang/repos", "events_url": "https://api.github.com/users/laphang/events{/privacy}", "received_events_url": "https://api.github.com/users/laphang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @philschmid ", "Hey @laphang, \r\n\r\nCould you please share your `estimator` configuration? that would help debug and reproduce your problem. Thanks!\r\n", "@laphang I tried to reproduce your error and for me its works using the following `HuggingFace` estimator. \r\n```python\r\n# estimator\r\nhuggingface_estimator = HuggingFace(entry_point='run_glue.py',\r\n source_dir='./scripts',\r\n metrics_definition=metric_definitions,\r\n instance_type=instance_type,\r\n instance_count=instance_count,\r\n volume_size=volume_size,\r\n role=role,\r\n transformers_version='4.4.2',\r\n pytorch_version='1.6.0',\r\n checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints',\r\n py_version='py36',\r\n distribution= distribution,\r\n hyperparameters = hyperparameters,\r\n debugger_hook_config=False)\r\n```\r\nThis estimator just extends the estimator from our [04_distributed_training_model_parallelism](https://github.com/huggingface/notebooks/blob/master/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) and includes the `checkpoint_s3_uri`. \r\n![Bildschirmfoto 2021-04-23 um 14 18 47](https://user-images.githubusercontent.com/32632186/115869940-deee0300-a43e-11eb-9995-80a033ac56d0.png)\r\n\r\n\r\n> ## Environment info\r\n> * `transformers` version: 4.5.0\r\n> * Platform: AWS Sagemaker\r\n> * Python version: 3.6\r\n> * PyTorch version (GPU?): 1.7.1\r\n> * Tensorflow version (GPU?):\r\n> * Using GPU in script?: yes\r\n> * Using distributed or parallel set-up in script?: yes\r\n\r\nReading your **environment** it seems that you are not yet using the new Hugging Face Deep Learning Container for Amazon SageMaker. Is that true? or have you update them? ", "@philschmid \r\nAh yes, I'm still using the PyTorchEstimator and installing transformers via requirements.txt. I'll try again with the HuggingFace Estimator and get back to you guys. Thanks for the quick response.", "@philschmid yeah, made the changes below from using the PyTorch estimator to the HuggingFace one, and now distributed training with s3 checkpoints is working properly now (training job completes successfully, and all the checkpoints are synced to s3). It's working both using Sagemaker distributed model parallel, and also using torch.distributed.launch\r\n\r\nAlso just wanted to say that I was pleasantly surprised with how seamlessly Transformers is working with SageMaker model parallel. 
Great work guys!\r\n\r\n```\r\nbefore:\r\nestimator = PyTorch(base_job_name=job_name, \r\n entry_point = 'run_clm.py', \r\n source_dir=source_dir,\r\n code_location=output_path,\r\n role=role,\r\n framework_version='1.7.1',\r\n py_version='py3', \r\n hyperparameters=hyperparameters,\r\n tags=tags, \r\n output_path=output_path, \r\n checkpoint_s3_uri=checkpoint_path, \r\n instance_count=1, \r\n instance_type='ml.p4d.24xlarge', \r\n distribution= distribution, \r\n use_spot_instances=train_use_spot_instances,\r\n max_run=train_max_run,\r\n max_wait=train_max_wait, \r\n metric_definitions=metric_definition\r\n )\r\n\r\nafter:\r\nestimator = HuggingFace(base_job_name=job_name, \r\n entry_point = 'run_clm.py', \r\n source_dir=source_dir,\r\n code_location=output_path,\r\n role=role,\r\n transformers_version='4.4.2',\r\n pytorch_version='1.6.0',\r\n py_version='py36', \r\n hyperparameters=hyperparameters,\r\n tags=tags, \r\n output_path=output_path, \r\n checkpoint_s3_uri=checkpoint_s3_uri, \r\n debugger_hook_config=False,\r\n instance_count=1, \r\n instance_type='ml.p4d.24xlarge', \r\n distribution= distribution, \r\n use_spot_instances=train_use_spot_instances,\r\n max_run=train_max_run,\r\n max_wait=train_max_wait, \r\n metric_definitions=metric_definition\r\n )\r\n```\r\n\r\n", "@laphang that are great news and thank you for the kind words! 🤗 \r\nShould you have any questions or problems in the future feel free to tag me directly in the issue.", "@philschmid I am getting the same error as @laphang was getting, even with hugging face estimator. Only, the first checkpoints is getting saved in the checkpoint_uri_location and rest don't appear in s3. After the end of training job, it is taking an hour showing uploading and ends with an error \"InternalServerError: We encountered an internal error. Please try again\".\r\n\r\nIt has started since I added sagemaker distributed data parallel into Hugging face estimator. It has kind of become a blocker for our model training, any help would be really appreciated.\r\n\r\n`distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}\r\n\r\nhuggingface_estimator = HuggingFace(\r\n entry_point='train.py',\r\n source_dir='./scripts',\r\n sagemaker_sess=sess,\r\n instance_type='ml.p4d.24xlarge',\r\n instance_count=1,\r\n volume_size=60,\r\n code_location=output_path,\r\n output_path=output_path,\r\n checkpoint_s3_uri=checkpoint_s3_uri,\r\n tensorboard_output_config=tensorboard_output_config,\r\n role=role,\r\n transformers_version='4.6.1',\r\n pytorch_version='1.7.1',\r\n py_version='py36',\r\n hyperparameters = hyperparameters,\r\n distribution=distribution\r\n)`", "Hey @Harshitcmd, \r\n\r\ncould maybe share your training script? which `TrainingArguments` are you using? \r\n\r\nFor \r\n> After the end of training job, it is taking an hour showing uploading and ends with an error \"InternalServerError: We encountered an internal error. Please try again\".\r\n\r\nIt might be possible that you are saving your checkpoint in `/opt/ml/model` (which will be uploaded to s3 after training) and it gets through saving the checkpoints. \r\n", "Hey @philschmid thanks for replying. \r\n\r\nI have been saving my checkpoints into \"check_dir\": \"/opt/ml/checkpoints\". Before integrating data parallelism with p4d.24xlarge I was using p3.2xlarge with the same training arguments and there all the checkpoints were getting saved into s3 on the go itself. 
\r\n\r\nPlz have a look into my training arguments.\r\n\r\n` \r\n\r\n training_args = TrainingArguments(\r\n output_dir=args.check_dir,\r\n num_train_epochs=args.epochs,\r\n per_device_train_batch_size=args.train_batch_size,\r\n per_device_eval_batch_size=args.eval_batch_size,\r\n eval_accumulation_steps=1,\r\n warmup_ratio=args.warmup_steps,\r\n evaluation_strategy=\"no\",\r\n logging_dir=f\"/opt/ml/output/tensorboard/\",\r\n learning_rate=float(args.learning_rate),\r\n save_total_limit=10,\r\n save_steps = 200,\r\n logging_steps = 20,\r\n )\r\n\r\n # create Trainer instance\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=test_dataset,\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n callbacks=[TensorBoardCallback]\r\n )`\r\n\r\n", "You might need to add `overwrite_output_dir` to your `TrainingArguments`\r\n> overwrite_output_dir (bool, optional, defaults to False) – If True, overwrite the content of the output directory. Use this to continue training if output_dir points to a checkpoint directory.\r\nI added it for example like that\r\n```python\r\noverwrite_output_dir=True if get_last_checkpoint(args.output_dir) is not None else False,\r\n```\r\n\r\nand to solve your upload issue you should save the model into `/opt/ml/model`. ", "Hey @philschmid,\r\n\r\nI tried adding overwrite_output_dir=True, it's partially solved my issue. Now, the checkpoints are in sync with s3(all the checkpoints and model artifacts are getting saved at the desired location). Even though all the checkpoints got uploaded to the s3 it has showed the status as **Uploading** for an hour and ended with an internal error(weird).\r\n\r\nPS: When I didn't integrate the data parallelism with the same instance type (p4d.24xlarge) everything worked seamlessly. " ]
1,619
1,635
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0 - Platform: AWS Sagemaker - Python version: 3.6 - PyTorch version (GPU?): 1.7.1 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger ## Information Model I am using (Bert, XLNet ...): gpt-neo The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use the run_clm.py example script to finetune gpt-neo in Sagemaker with either torch.distributed.launch, or using Sagemaker distributed model parallel (say on a p4d.24xlarge with 8 gpus) 2. Only the first checkpoint is synced to the checkpoint_s3_uri location. Subsequent checkpoints do not appear in S3 3. Also, at the end of the training job, it spends around 1 hour in the "Uploading" state and ends with the error below. InternalServerError: We encountered an internal error. Please try again. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expected the training to work normally, and all the checkpoints and final model to get synced to the S3 location. NB: training is working when I don't use the checkpoint_s3_uri (with both torch.distributed.launch and sagemaker distributed model parallel). Also with a single gpu (on a p3.2xlarge), training with checkpoint_s3_uri is working, all the checkpoints and final model are synced to S3.
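A condensed sketch of the resolution that emerges in the comments above: use the HuggingFace estimator with `checkpoint_s3_uri`, and set `overwrite_output_dir` when a checkpoint already exists so training can resume into a non-empty directory. The checkpoint path follows SageMaker's convention; the remaining values are illustrative:

```python
import os

from transformers import TrainingArguments
from transformers.trainer_utils import get_last_checkpoint

checkpoint_dir = "/opt/ml/checkpoints"  # synced to checkpoint_s3_uri by SageMaker

has_checkpoint = (
    os.path.isdir(checkpoint_dir) and get_last_checkpoint(checkpoint_dir) is not None
)

training_args = TrainingArguments(
    output_dir=checkpoint_dir,
    save_steps=200,  # placeholder save frequency
    overwrite_output_dir=has_checkpoint,  # allow resuming into a non-empty dir
)
```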
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11390/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11389
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11389/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11389/comments
https://api.github.com/repos/huggingface/transformers/issues/11389/events
https://github.com/huggingface/transformers/issues/11389
865,749,785
MDU6SXNzdWU4NjU3NDk3ODU=
11,389
Distributed DataSampler has fixed data order despite random seeds.
{ "login": "lorr1", "id": 57237365, "node_id": "MDQ6VXNlcjU3MjM3MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorr1", "html_url": "https://github.com/lorr1", "followers_url": "https://api.github.com/users/lorr1/followers", "following_url": "https://api.github.com/users/lorr1/following{/other_user}", "gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}", "starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorr1/subscriptions", "organizations_url": "https://api.github.com/users/lorr1/orgs", "repos_url": "https://api.github.com/users/lorr1/repos", "events_url": "https://api.github.com/users/lorr1/events{/privacy}", "received_events_url": "https://api.github.com/users/lorr1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Follow-up note (part of @lorr1's team that encountered this). This is particularly insidious for any sort of code that tries training with multiple random seeds; there's an assumption that across seeds, weight initialization (for pre-training, fine-tuning weights), dropout, AND data order are all different (and all do have significant bearing on results). \r\n\r\nConsistent data order (as in the existing code) runs counter to that expectation.", "I'm guessing we want a new argument to control that seed though, not the current `args.seed` that is set at the beginning of training, what do you think? ", "I think the seed set at the beginning of training would be fine -- that would be the expected behavior (weights get randomly initialized, then data order is random _conditioned on a single seed_.\r\n\r\nAdding a separate seed just for data order means it's just one more thing you need to keep track of.\r\n\r\nThere's a backwards compatibility issue here possibly (if folks doing multiple random seeds worth of runs have been relying on/reporting those results), but this feels like the simplest solution?", "It's definitely the easiest solution. For the backward compatibility issue, I hope users save their version of Transformers and PyTorch along the seeds for reproducibility. PyTorch does not guarantee the same results across versions (we had an issue with multinomial that changed behavior for instance). So I think it's fine to change the behavior, especially as it's a bug fix.\r\n\r\nWill make a PR with the change." ]
1,619
1,619
1,619
NONE
null
When using a distributed data loader with `shuffle = True` in the Hugging Face trainer, it calls the underlying torch data loader. If `shuffle` is set to True, the data loader seeds the generator with `seed + epoch` ([here](https://github.com/pytorch/pytorch/blob/f84a50109f794d4feab922056b77d7c358076776/torch/utils/data/distributed.py#L100)). When calling the data loader in the HF trainer ([here](https://github.com/huggingface/transformers/blob/3ed5e97ba04ce9b24b4a7161ea74572598a4c480/src/transformers/trainer.py#L553)), the seed is _not_ passed to the torch data loader and thereby gets set to the default seed of 0. This means the data loader generator will always get initialized to the epoch alone, regardless of the seed given to HF. I would think we'd want the data order to be random, too. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.6 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes (with DeepSpeed) ### Who can help @sgugger (trainer) ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * The Hugging Face trainer with a distributed data sampler The tasks I am working on is: * Training GPT2 from scratch using DDP with DeepSpeed ## To reproduce Steps to reproduce the behavior: Using a different seed with the distributed data loader does not change the data order. ## Expected behavior The random seed should be passed to the data loader so that the data order is randomized as the seed changes.
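A hedged sketch of the behavior described above and the proposed fix: `DistributedSampler` seeds its shuffling generator with `seed + epoch`, so unless the run's seed is forwarded, data order is identical across runs with different seeds. The toy dataset and seed value are placeholders:

```python
import torch
from torch.utils.data import TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(16))

# Current behavior: seed defaults to 0, so order depends on the epoch only.
default_sampler = DistributedSampler(dataset, num_replicas=1, rank=0, shuffle=True)

# Proposed behavior: forward the training seed so order varies across seeds.
seeded_sampler = DistributedSampler(
    dataset, num_replicas=1, rank=0, shuffle=True, seed=42
)

for epoch in range(2):
    seeded_sampler.set_epoch(epoch)  # generator is seeded with seed + epoch
    print(list(seeded_sampler))
```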
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11389/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11388
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11388/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11388/comments
https://api.github.com/repos/huggingface/transformers/issues/11388/events
https://github.com/huggingface/transformers/issues/11388
865,717,734
MDU6SXNzdWU4NjU3MTc3MzQ=
11,388
CUDA OOM in the middle of training when the training data is large
{ "login": "ganeshjawahar", "id": 4785960, "node_id": "MDQ6VXNlcjQ3ODU5NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/4785960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ganeshjawahar", "html_url": "https://github.com/ganeshjawahar", "followers_url": "https://api.github.com/users/ganeshjawahar/followers", "following_url": "https://api.github.com/users/ganeshjawahar/following{/other_user}", "gists_url": "https://api.github.com/users/ganeshjawahar/gists{/gist_id}", "starred_url": "https://api.github.com/users/ganeshjawahar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ganeshjawahar/subscriptions", "organizations_url": "https://api.github.com/users/ganeshjawahar/orgs", "repos_url": "https://api.github.com/users/ganeshjawahar/repos", "events_url": "https://api.github.com/users/ganeshjawahar/events{/privacy}", "received_events_url": "https://api.github.com/users/ganeshjawahar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi\r\nI also have observed the same issue with t5-base model and mt5-small model ", "Could you maybe make use of the `group_by_length` training argument? This will put the largest batch first to make sure OOM are detected in the very beginning (best feature ever by @sgugger )", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info - `transformers` version: 4.6.0.dev0 - Platform: Linux-3.10.0-1160.24.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @patrickvonplaten, @patil-suraj ## Information I am using https://github.com/huggingface/transformers/blob/9e147d31f67a03ea4f5b11a5c7c3b7f8d252bfb7/examples/seq2seq/run_seq2seq.py to train MT5/base on custom parallel data. The code works well when the training data is <=100K but throws a CUDA out-of-memory error in the middle of training when I train with 200K (or beyond) data. The error message is here: loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.3950cd4aaa701cb6f55a976ff996001a5fb09bbbe7ba9084619949d9016f519e Model config MT5Config { "_name_or_path": "/home/patrick/hugging_face/t5/mt5-base", "architectures": [ "T5ForConditionalGeneration" ], "d_ff": 2048, "d_kv": 64, "d_model": 768, "decoder_start_token_id": 0, "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "layer_norm_epsilon": 1e-06, "model_type": "mt5", "num_decoder_layers": 12, "num_heads": 12, "num_layers": 12, "output_past": true, "pad_token_id": 0, "relative_attention_num_buckets": 32, "tie_word_embeddings": false, "tokenizer_class": "T5Tokenizer", "transformers_version": "4.6.0.dev0", "use_cache": true, "vocab_size": 250112 } loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.3950cd4aaa701cb6f55a976ff996001a5fb09bbbe7ba9084619949d9016f519e Model config MT5Config { "_name_or_path": "/home/patrick/hugging_face/t5/mt5-base", "architectures": [ "T5ForConditionalGeneration" ], "d_ff": 2048, "d_kv": 64, "d_model": 768, "decoder_start_token_id": 0, "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "layer_norm_epsilon": 1e-06, "model_type": "mt5", "num_decoder_layers": 12, "num_heads": 12, "num_layers": 12, "output_past": true, "pad_token_id": 0, "relative_attention_num_buckets": 32, "tie_word_embeddings": false, "tokenizer_class": "T5Tokenizer", "transformers_version": "4.6.0.dev0", "use_cache": true, "vocab_size": 250112 } Can't load following files from cache: ['added_tokens_file', 'tokenizer_file'] and cannot check if these files are necessary for the tokenizer to operate. 
loading file https://huggingface.co/google/mt5-base/resolve/main/spiece.model from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2 loading file https://huggingface.co/google/mt5-base/resolve/main/special_tokens_map.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82 loading file https://huggingface.co/google/mt5-base/resolve/main/tokenizer_config.json from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a loading weights file https://huggingface.co/google/mt5-base/resolve/main/pytorch_model.bin from cache at /scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/cache_dir/3b7e8056d4ed71d8d7ac2dea78627c4be77ed136399c05b563d4116abfcd9418.1afec9001b62cd5a347e7fd4b664e503ca2377606e11b9ddb8ec1d7b79bc3952 All model checkpoint weights were used when initializing MT5ForConditionalGeneration. All the weights of MT5ForConditionalGeneration were initialized from the model checkpoint at google/mt5-base. If your task is similar to the task the model of the checkpoint was trained on, you can already use MT5ForConditionalGeneration for predictions without further training. 100%|██████████| 196/196 [00:46<00:00, 4.22ba/s] 100%|██████████| 1/1 [00:00<00:00, 11.27ba/s] 100%|██████████| 1/1 [00:00<00:00, 6.31ba/s] ***** Running training ***** Num examples = 195996 Num Epochs = 5 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 32 Gradient Accumulation steps = 1 Total optimization steps = 30625 0%| | 0/30625 [00:00<?, ?it/s]/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. 
warnings.warn('Was asked to gather along dimension 0, but all ' 19%|█▉ | 5906/30625 [55:18<4:09:46, 1.65it/s]Traceback (most recent call last): File "/scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/run_seq2seq_general.py", line 879, in <module> main() File "/scratch/st-amuham01-1/ganeshjw/projects/mt5_beluga/run_seq2seq_general.py", line 625, in main train_result = trainer.train(resume_from_checkpoint=None) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/trainer.py", line 1192, in train tr_loss += self.training_step(model, inputs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/trainer.py", line 1590, in training_step loss = self.compute_loss(model, inputs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/trainer.py", line 1622, in compute_loss outputs = model(**inputs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 1505, in forward return_dict=return_dict, File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 959, in forward output_attentions=output_attentions, File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 638, in forward output_attentions=output_attentions, File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 545, in forward output_attentions=output_attentions, File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/t5/modeling_t5.py", line 502, in forward attn_weights, p=self.dropout, training=self.training File "/arc/project/st-amuham01-1/ganeshjw/.conda/envs/blink/lib/python3.7/site-packages/torch/nn/functional.py", line 1076, in dropout return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training) RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 31.75 GiB total capacity; 29.60 GiB already allocated; 2.00 MiB free; 30.11 GiB reserved in total by PyTorch) 19%|█▉ | 5906/30625 [55:20<3:51:35, 1.78it/s] Any help would be highly appreciated. Thanks.
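As the comments above suggest, `group_by_length` batches sequences of similar length and puts the largest batch first, so an OOM like this one surfaces in the first steps rather than an hour into training. A minimal, hedged sketch; the output directory and batch size are placeholders, not the reporter's setup:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./mt5-base-finetune",   # placeholder
    per_device_train_batch_size=8,
    group_by_length=True,  # longest batches come first, exposing OOMs early
)
```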
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11388/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11387
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11387/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11387/comments
https://api.github.com/repos/huggingface/transformers/issues/11387/events
https://github.com/huggingface/transformers/pull/11387
865,711,993
MDExOlB1bGxSZXF1ZXN0NjIxNzQ4MTA5
11,387
Implement Fast Tokenization for Deberta
{ "login": "ShubhamSanghvi", "id": 26190273, "node_id": "MDQ6VXNlcjI2MTkwMjcz", "avatar_url": "https://avatars.githubusercontent.com/u/26190273?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShubhamSanghvi", "html_url": "https://github.com/ShubhamSanghvi", "followers_url": "https://api.github.com/users/ShubhamSanghvi/followers", "following_url": "https://api.github.com/users/ShubhamSanghvi/following{/other_user}", "gists_url": "https://api.github.com/users/ShubhamSanghvi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShubhamSanghvi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShubhamSanghvi/subscriptions", "organizations_url": "https://api.github.com/users/ShubhamSanghvi/orgs", "repos_url": "https://api.github.com/users/ShubhamSanghvi/repos", "events_url": "https://api.github.com/users/ShubhamSanghvi/events{/privacy}", "received_events_url": "https://api.github.com/users/ShubhamSanghvi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik , most of it was easy to figure out by looking at other tokenizers. The setup and testing guidelines were very easy to follow, I was up and running very quickly.\r\n\r\nFor the fast tokenizers, few things that might help someone like me who is new to the transformers library:\r\n1. A top-level difference between the fast and slow tokenizers. At first, I did not know there was a tokenization library and took some time to figure that out.\r\n2. Overview of how we implement tokenizers. Things like what do the vocab files do and what does the merges_file do. (Although this could just be me.)\r\n3. In some fast tokenizers, there are files listed which are not being used. That confused me initially as I thought we needed those files for a fast tokenizer.\r\n\r\nHope this helps. \r\n", "@LysandreJik, \r\n\r\n#10498 mentioned implementing a tokenizer for deberta v2 as well. I have created a new feature request #11529 for that.", "Thank you @ShubhamSanghvi, this is all very helpful. We'll take care of including that in the documentation so that it's clearer from now on.\r\n\r\nWill take a look at #11529!" ]
1,619
1,620
1,619
CONTRIBUTOR
null
# What does this PR do? Fixes #10498 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11387/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11387/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11387", "html_url": "https://github.com/huggingface/transformers/pull/11387", "diff_url": "https://github.com/huggingface/transformers/pull/11387.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11387.patch", "merged_at": 1619784496000 }
https://api.github.com/repos/huggingface/transformers/issues/11386
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11386/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11386/comments
https://api.github.com/repos/huggingface/transformers/issues/11386/events
https://github.com/huggingface/transformers/pull/11386
865,489,019
MDExOlB1bGxSZXF1ZXN0NjIxNTU4ODgy
11,386
[Seq2seq] Add Support for TensorFlow
{ "login": "jhabr", "id": 25850, "node_id": "MDQ6VXNlcjI1ODUw", "avatar_url": "https://avatars.githubusercontent.com/u/25850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jhabr", "html_url": "https://github.com/jhabr", "followers_url": "https://api.github.com/users/jhabr/followers", "following_url": "https://api.github.com/users/jhabr/following{/other_user}", "gists_url": "https://api.github.com/users/jhabr/gists{/gist_id}", "starred_url": "https://api.github.com/users/jhabr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jhabr/subscriptions", "organizations_url": "https://api.github.com/users/jhabr/orgs", "repos_url": "https://api.github.com/users/jhabr/repos", "events_url": "https://api.github.com/users/jhabr/events{/privacy}", "received_events_url": "https://api.github.com/users/jhabr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! I'm the TF maintainer for 🤗 Transformers right now. Thanks for this, the code quality looks really good! There's one issue, though - we're currently trying to move away from `TFTrainer` and use more native, idiomatic TF code based on Keras. You can see an example of the kind of TFTrainer-free approach we're working on [here](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_text_classification.py).\r\n\r\nAs a result, we probably can't accept `Seq2SeqTFTrainer` in the main library right now, but we're definitely planning on adding a Seq2Seq TF example soon. If you'd like, you can try to convert this PR to a TFTrainer-free Seq2Seq example script and put it in /examples/tensorflow, but I understand if that's a lot of work and you don't want to bother right now! ", "Hi @Rocketknight1! Thanks for your feedback, I understand. This actually emerged as a side product from a project that I'm currently working on so I thought I'd share this. But it's good to know the direction you're heading. If you're planning to add some seq2seq examples for Keras in the next few weeks then it's fine to close this PR I guess. Otherwise I will probably rewrite this code in order to be aligned with the Huggingface library and its overall direction. Do you have any date on your mind for the seq2seq TF examples?", "Hi, sorry for the delay! We are indeed planning to include those models, hopefully in a month or so. I don't have an exact date, but it's on my To Do list.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds seq2seq support for TensorFlow and its corresponding summarization training and evaluation script. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, @patrickvonplaten, @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
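For reference, a minimal TFTrainer-free training-step sketch in the native Keras/TF style mentioned in the review comments; the `batch` dict (tokenized inputs plus a `labels` key) is an assumed, pre-prepared input, not part of this PR:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)

def train_step(batch):
    # Passing a "labels" key in the inputs makes the model compute its own loss.
    with tf.GradientTape() as tape:
        outputs = model(batch, training=True)
        loss = outputs.loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```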
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11386/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11386", "html_url": "https://github.com/huggingface/transformers/pull/11386", "diff_url": "https://github.com/huggingface/transformers/pull/11386.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11386.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/11385
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11385/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11385/comments
https://api.github.com/repos/huggingface/transformers/issues/11385/events
https://github.com/huggingface/transformers/issues/11385
865,453,945
MDU6SXNzdWU4NjU0NTM5NDU=
11,385
[docs] Incorrect way of input encoding for "multiple choice" models in documentation?
{ "login": "Riroaki", "id": 26740837, "node_id": "MDQ6VXNlcjI2NzQwODM3", "avatar_url": "https://avatars.githubusercontent.com/u/26740837?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Riroaki", "html_url": "https://github.com/Riroaki", "followers_url": "https://api.github.com/users/Riroaki/followers", "following_url": "https://api.github.com/users/Riroaki/following{/other_user}", "gists_url": "https://api.github.com/users/Riroaki/gists{/gist_id}", "starred_url": "https://api.github.com/users/Riroaki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Riroaki/subscriptions", "organizations_url": "https://api.github.com/users/Riroaki/orgs", "repos_url": "https://api.github.com/users/Riroaki/repos", "events_url": "https://api.github.com/users/Riroaki/events{/privacy}", "received_events_url": "https://api.github.com/users/Riroaki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Riroaki I agree with you! It should be:\r\n```python\r\ntokenizer([[prompt, choice0], [prompt, choice1]], return_tensors='pt', padding=True)\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@sgugger @SBrandeis \r\nPlease have a look at the example scripts.🙏", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I'm running into this issue as well.\r\n\r\nI'm not super familiar working with multiple choice models, but I think that given #6074, and the [run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py) example, these should be passed as two lists, instead of one.\r\n\r\nIn other words, in the example, instead of\r\n\r\n```python\r\nencoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='pt', padding=True)\r\n```\r\n\r\nit should be \r\n\r\n```python\r\nencoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True)\r\n```\r\n\r\nIt would be awesome to make this two-character-deletion change, as it just tripped me up when starting working on a multiple choice model!", "This has been fixed, but you need to switch to the master documentation to see the change." ]
1,619
1,626
1,626
NONE
null
In the documentation about "xxForMultipleChoice" models like BERT, ALBERT, RoBERTa, the examples go like [this](https://huggingface.co/transformers/model_doc/bert.html#bertformultiplechoice): ``` >>> from transformers import BertTokenizer, BertForMultipleChoice >>> import torch >>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') >>> model = BertForMultipleChoice.from_pretrained('bert-base-uncased') >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1 >>> encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='pt', padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k,v in encoding.items()}, labels=labels) # batch size is 1 ... ``` In the current version (4.5.1), the `encoding` actually consists of 2 sentences: `[prompt + prompt]` and `[choice0 + choice1]`, which to my knowledge is incorrect, as each encoded sentence should include one prompt and one choice. I think the `encoding` is supposed to be built like this: ``` tokenizer([[prompt, choice0], [prompt, choice1]], return_tensors='pt', padding=True) ``` So, is there anything wrong?
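For completeness, a runnable sketch of the pairing later confirmed in the comments (two parallel lists, so each encoded sequence is one prompt plus one choice):

```python
from transformers import BertTokenizer, BertForMultipleChoice
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMultipleChoice.from_pretrained('bert-base-uncased')

prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."

# Pair the prompt with each choice: sequence 0 is (prompt, choice0),
# sequence 1 is (prompt, choice1).
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True)
labels = torch.tensor(0).unsqueeze(0)  # choice0 is correct, batch size 1
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)
```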
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11385/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11384
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11384/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11384/comments
https://api.github.com/repos/huggingface/transformers/issues/11384/events
https://github.com/huggingface/transformers/issues/11384
865,399,217
MDU6SXNzdWU4NjUzOTkyMTc=
11,384
Some issues in loading local txt files as a Dataset for run_mlm.py
{ "login": "alighofrani95", "id": 14968123, "node_id": "MDQ6VXNlcjE0OTY4MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alighofrani95", "html_url": "https://github.com/alighofrani95", "followers_url": "https://api.github.com/users/alighofrani95/followers", "following_url": "https://api.github.com/users/alighofrani95/following{/other_user}", "gists_url": "https://api.github.com/users/alighofrani95/gists{/gist_id}", "starred_url": "https://api.github.com/users/alighofrani95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alighofrani95/subscriptions", "organizations_url": "https://api.github.com/users/alighofrani95/orgs", "repos_url": "https://api.github.com/users/alighofrani95/repos", "events_url": "https://api.github.com/users/alighofrani95/events{/privacy}", "received_events_url": "https://api.github.com/users/alighofrani95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_I tried to load 3 .txt files as a dataset_ - bad idea, try to merge them in one, then let's see what you'll get (which error message or result).", "Try this one:\r\n```\r\nfrom pathlib import Path\r\npaths = [str(x) for x in Path(\".\").glob(\"**/*.txt\")]\r\n```\r\n\r\nP.S. Next time just copy-paste code, don't screen it", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
![image](https://user-images.githubusercontent.com/14968123/115773877-18cef300-a3c6-11eb-8e58-a9cbfd1001ec.png) First of all, I tried to load 3 .txt files as a dataset (I am sure that the directory and permissions are OK); I faced the error below. > FileNotFoundError: [Errno 2] No such file or directory: 'c' By removing one of the training .txt files it's fixed, and it also works if I put all files as training files. ![image](https://user-images.githubusercontent.com/14968123/115774207-867b1f00-a3c6-11eb-953b-905cfb112d25.png) ![image](https://user-images.githubusercontent.com/14968123/115774264-9b57b280-a3c6-11eb-9f36-7b109f0e5a31.png) After this, my question is how I could use this defined Dataset with run_mlm.py for from-scratch pretraining; using --train_file path_to_train_file, one can only pass a single .txt, .csv, or .json file. I tried to set my defined Dataset as --dataset_name, but the issue below occurs. > Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 336, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/dataset/dataset.py > During handling of the above exception, another exception occurred: > Traceback (most recent call last): File "run_mlm.py", line 486, in <module> main() File "run_mlm.py", line 242, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 719, in load_dataset use_auth_token=use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 347, in prepare_module combined_path, github_file_path FileNotFoundError: Couldn't find file locally at dataset/dataset.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.6.0/datasets/dataset/dataset.py. The file is also not present on the master branch on github.
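For what it's worth, a minimal sketch of loading several local .txt files as one training split without a dataset script (file names are placeholders):

```python
from datasets import load_dataset

# Placeholder file names; list all local text files under one split
# instead of passing a dataset script name via --dataset_name.
datasets = load_dataset("text", data_files={"train": ["file1.txt", "file2.txt", "file3.txt"]})
```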
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11384/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11383
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11383/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11383/comments
https://api.github.com/repos/huggingface/transformers/issues/11383/events
https://github.com/huggingface/transformers/pull/11383
865,106,662
MDExOlB1bGxSZXF1ZXN0NjIxMjIzNjk5
11,383
Fixed trainer total_flos reloading in distributed mode
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? There was a bug affecting the `total_flos` quantity when loading/reloading trainer states in distributed mode: when reloading a training run, every process started from the total amount of floating-point operations. The next time they were aggregated, this caused the sum of the operations of all processes to be inflated. This PR fixes this behaviour by storing only a `current_flos` variable per process, which goes back to zero every time it is logged, and by keeping the total amount separate in the trainer state. # Who to tag? @sgugger
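A simplified, self-contained sketch of the described bookkeeping (names are hypothetical, and no real cross-process aggregation is shown):

```python
class FlosState:
    def __init__(self):
        self.total_flos = 0    # lives in the shared trainer state
        self.current_flos = 0  # per-process counter

    def add(self, step_flos):
        self.current_flos += step_flos

    def log(self):
        # Fold the per-process counter into the total, then reset it,
        # so a reloaded process never re-counts the stored total.
        self.total_flos += self.current_flos
        self.current_flos = 0
```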
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11383/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11383", "html_url": "https://github.com/huggingface/transformers/pull/11383", "diff_url": "https://github.com/huggingface/transformers/pull/11383.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11383.patch", "merged_at": 1619178813000 }
https://api.github.com/repos/huggingface/transformers/issues/11382
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11382/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11382/comments
https://api.github.com/repos/huggingface/transformers/issues/11382/events
https://github.com/huggingface/transformers/pull/11382
865,035,174
MDExOlB1bGxSZXF1ZXN0NjIxMTY0MTQ0
11,382
Fix Trainer with remove_unused_columns=False
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
COLLABORATOR
null
# What does this PR do? A bug was introduced by mistake in #11343 when `remove_unused_columns=False`. This PR fixes that. Fixes #11381
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11382/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11382", "html_url": "https://github.com/huggingface/transformers/pull/11382", "diff_url": "https://github.com/huggingface/transformers/pull/11382.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11382.patch", "merged_at": 1619104584000 }
https://api.github.com/repos/huggingface/transformers/issues/11381
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11381/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11381/comments
https://api.github.com/repos/huggingface/transformers/issues/11381/events
https://github.com/huggingface/transformers/issues/11381
865,027,196
MDU6SXNzdWU4NjUwMjcxOTY=
11,381
Trainer._remove_unused_columns() returns None
{ "login": "guyrosin", "id": 1250162, "node_id": "MDQ6VXNlcjEyNTAxNjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1250162?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guyrosin", "html_url": "https://github.com/guyrosin", "followers_url": "https://api.github.com/users/guyrosin/followers", "following_url": "https://api.github.com/users/guyrosin/following{/other_user}", "gists_url": "https://api.github.com/users/guyrosin/gists{/gist_id}", "starred_url": "https://api.github.com/users/guyrosin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guyrosin/subscriptions", "organizations_url": "https://api.github.com/users/guyrosin/orgs", "repos_url": "https://api.github.com/users/guyrosin/repos", "events_url": "https://api.github.com/users/guyrosin/events{/privacy}", "received_events_url": "https://api.github.com/users/guyrosin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you so much for flagging! This should be fixed by the PR above." ]
1,619
1,619
1,619
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.6.0.dev0 - Platform: Linux-4.15.0-134-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1 (False) - Tensorflow version (GPU?): 2.4.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger, @LysandreJik ## Information `Trainer._remove_unused_columns()` returns None when `args.remove_unused_columns` is `False`, instead of returning the given dataset. Related to #11343. Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [x] the official example scripts: (give details below) run_mlm/glue/... * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Set `TrainingArguments.remove_unused_columns=False` 2. Train/eval/test your model using `Trainer` 3. The dataset would be None, and so the following exception would be raised: ``` Traceback (most recent call last): ... train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/guyrosin/src/transformers/src/transformers/trainer.py", line 814, in num_examples self.num_examples(train_dataloader) if train_dataset_is_sized else total_train_batch_size * args.max_steps File "/home/guyrosin/src/transformers/src/transformers/trainer.py", line 814, in num_examples self.num_examples(train_dataloader) if train_dataset_is_sized else total_train_batch_size * args.max_steps File "/home/guyrosin/src/transformers/src/transformers/trainer.py", line 814, in num_examples return len(dataloader.dataset) TypeError: object of type 'NoneType' has no len() ``` ## Expected behavior `Trainer._remove_unused_columns()` should always return a dataset.
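A simplified stand-in for the expected behaviour (names hypothetical; not the actual `Trainer` method body):

```python
def remove_unused_columns(dataset, remove: bool, signature_columns):
    # When column removal is disabled, the dataset itself must be
    # returned; the buggy version implicitly returned None here.
    if not remove:
        return dataset
    ignored = [c for c in dataset.column_names if c not in signature_columns]
    return dataset.remove_columns(ignored)
```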
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11381/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11380
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11380/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11380/comments
https://api.github.com/repos/huggingface/transformers/issues/11380/events
https://github.com/huggingface/transformers/pull/11380
864,886,579
MDExOlB1bGxSZXF1ZXN0NjIxMDQwMzUx
11,380
[Examples] Fixes inconsistency around eval vs val and predict vs test
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @sgugger and @stas00,\r\nI have made changes in the following way,\r\n| Earlier | Now |\r\n| ---- | ---- |\r\n| test | predict |\r\n| test_examples | predict_examples |\r\n| test_dataset | predict_dataset |\r\n| max_test_samples | max_predict_samples |\r\n| val | eval |\r\n| val_examples | eval_examples |\r\n| val_dataset | eval_dataset |\r\n| max_val_samples | max_eval_samples |\r\n\r\n* I have also made changes in the trainer code for the above variables\r\n* I have modified the template file accordingly\r\n* `examples/pytorch/question-answering/run_qa_no_trainer.py` don't have code complete code for predict stage. Shall we need to add? \r\n\r\n", "Hi @sgugger,\r\nI was trying to test [run_qa_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py) and I came to know that when I passed `--test_file` it was giving me an error since we don't have such [argument](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py#L87) in the file.\r\n\r\nWhen i give `--do_predict` it gives an error at this [line](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py#L507) since I was using squad data and it doesn't have `test` set\r\n\r\nBut as you said it will work perfectly for the dataset with `testset`.", "Ah! now I get it. We can add support for a `--test_file` in another PR, yes!", "I am exploring other files, I will add more changes in docs and other files", "You're doing a very painful task, @bhadreshpsavani, as @sgugger commented we unfortunately can't normalize these 2 names fully both because of the backcompat and also where in some cases something is called validation/test split :( Thank you for doing this important work and bearing with all these setbacks.", "Hi @stas00, \r\nIt's totally fine. \r\nI was expecting more suggestions because I did a lot of changes for all files in a single go. I am enjoying this coding work that's important for me!", "Hi @stas00,\r\nCan we use this command?\r\n`git push --force-with-lease origin myfeature`\r\nI read that this is safe than `--force` and [many organizations even using this.](https://stackoverflow.com/questions/41283955/github-keeps-saying-this-branch-is-x-commits-ahead-y-commits-behind)", "In general all PRs are isolated until they are merged, so if you make a mistake on your own PR, in the worst case you will make a mess of your own changes, but it won't impact the master. So feel free to experiment.\r\n\r\n**edit:** this is for sure for when you don't have write access to upstream master, or if you work in your own fork. I'm not sure if it's the same when one does have a write access and is working directly on the source. I find it much safer to always do all the work in my own fork and merge upstream via PRs.\r\n\r\nI have never used this particular flag before, so if it's safer go for it.", "`run_tests_torch` CI job has been flakey as of recent, You can always force a CI restart if you see one or more CI jobs are failing unrelated to your commit with an empty commit:\r\n\r\n```\r\ngit commit --allow-empty -m \"Trigger CI\"\r\ngit push\r\n```", "This is a cool command. I will definitely need this. 
I will note this down!\r\nThanks", "Hi @sgugger and @stas00 ,\r\nI have reverted trainer changes and updated the example pytorch readme.\r\nPlease let me know if we need to make more changes ", "@sgugger, should these not be covered too?\r\n\r\n```\r\n$ grep -Ir max_val_samples\r\nexamples/tensorflow/text-classification/run_text_classification.py: max_val_samples: Optional[int] = field(\r\nexamples/tensorflow/text-classification/run_text_classification.py: if data_args.max_val_samples is not None:\r\nexamples/tensorflow/text-classification/run_text_classification.py: eval_dataset = eval_dataset.select(range(data_args.max_val_samples))\r\nexamples/research_projects/wav2vec2/run_common_voice.py: max_val_samples: Optional[int] = field(\r\nexamples/research_projects/wav2vec2/run_common_voice.py: if data_args.max_val_samples is not None:\r\nexamples/research_projects/wav2vec2/run_common_voice.py: eval_dataset = eval_dataset.select(range(data_args.max_val_samples))\r\nexamples/research_projects/wav2vec2/run_common_voice.py: max_val_samples = data_args.max_val_samples if data_args.max_val_samples is not None else len(eval_dataset)\r\nexamples/research_projects/wav2vec2/run_common_voice.py: metrics[\"eval_samples\"] = min(max_val_samples, len(eval_dataset))\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: max_val_samples: Optional[int] = field(\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: if data_args.max_val_samples is not None:\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: eval_dataset = eval_dataset.select(range(data_args.max_val_samples))\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: max_val_samples = data_args.max_val_samples if data_args.max_val_samples is not None else len(eval_dataset)\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: metrics[\"eval_samples\"] = min(max_val_samples, len(eval_dataset))\r\n```\r\n\r\n```\r\n$ grep -Ir max_test_samples\r\nexamples/tensorflow/text-classification/run_text_classification.py: max_test_samples: Optional[int] = field(\r\nexamples/tensorflow/text-classification/run_text_classification.py: if data_args.max_test_samples is not None:\r\nexamples/tensorflow/text-classification/run_text_classification.py: test_dataset = test_dataset.select(range(data_args.max_test_samples))\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: max_test_samples: Optional[int] = field(\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: if data_args.max_test_samples is not None:\r\ntests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py: test_dataset = test_dataset.select(range(data_args.max_test_samples))\r\n```", "This PR should be rebased on master and deal with `examples/tensorflow/text-classification/run_text_classification.py` that was added recently yes.\r\n\r\n`examples/research_projects/wav2vec2/run_common_voice.py` is a research-project so is not actively maintained and pinned to a version of transformers, I would leave it out of this PR.\r\n\r\nFor `tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py`, I wouldn't touch it since it's a test script (which will ultimately be replaced by a TF example) but I'll let @philschmid decide on this one.", "> For `tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py`, I wouldn't touch it since it's a test script (which will ultimately be replaced by a TF example) but I'll let @philschmid decide on this one.\r\n\r\nIs a custom version for `SageMakerTrainer`, this will be removed 
after the deprecation of `SageMakerTrainer`, there is no need for adjustments. ", "Thanks for adjusting! I think this is good to merge, @stas00 if you agree I'll let you click on the button :-) " ]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? Fixes #10165 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11380/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11380/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11380", "html_url": "https://github.com/huggingface/transformers/pull/11380", "diff_url": "https://github.com/huggingface/transformers/pull/11380.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11380.patch", "merged_at": 1619454271000 }
https://api.github.com/repos/huggingface/transformers/issues/11379
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11379/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11379/comments
https://api.github.com/repos/huggingface/transformers/issues/11379/events
https://github.com/huggingface/transformers/pull/11379
864,884,527
MDExOlB1bGxSZXF1ZXN0NjIxMDM4NjMx
11,379
Correctly cast num_train_epochs to int
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[]
1,619
1,619
1,619
MEMBER
null
The num_train_epochs arg in `training_args.py` is actually a float, so we cast it to int before it goes to Keras.
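A one-line sketch of the cast, assuming a compiled Keras `model`, a prepared `tf_train_dataset`, and parsed `training_args` (all placeholders):

```python
# TrainingArguments.num_train_epochs is a float, but Keras' fit()
# expects an integer epoch count.
model.fit(tf_train_dataset, epochs=int(training_args.num_train_epochs))
```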
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11379/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11379", "html_url": "https://github.com/huggingface/transformers/pull/11379", "diff_url": "https://github.com/huggingface/transformers/pull/11379.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11379.patch", "merged_at": 1619095800000 }
https://api.github.com/repos/huggingface/transformers/issues/11378
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11378/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11378/comments
https://api.github.com/repos/huggingface/transformers/issues/11378/events
https://github.com/huggingface/transformers/pull/11378
864,857,577
MDExOlB1bGxSZXF1ZXN0NjIxMDE2MTky
11,378
Remove max length beam scorer
{ "login": "GeetDsa", "id": 13940397, "node_id": "MDQ6VXNlcjEzOTQwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GeetDsa", "html_url": "https://github.com/GeetDsa", "followers_url": "https://api.github.com/users/GeetDsa/followers", "following_url": "https://api.github.com/users/GeetDsa/following{/other_user}", "gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}", "starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions", "organizations_url": "https://api.github.com/users/GeetDsa/orgs", "repos_url": "https://api.github.com/users/GeetDsa/repos", "events_url": "https://api.github.com/users/GeetDsa/events{/privacy}", "received_events_url": "https://api.github.com/users/GeetDsa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Awesome job @GeetDsa - this looks good to me :-) \r\n\r\nAlso pinging @patil-suraj and @Narsil for review.", "@Narsil - do you think we need a test here or should this be fine without?", "Hi @patrickvonplaten , Most of the errors are caused because of \"max_length\" attribute used while creating `beam_scorer` object in the test cases. These errors are arising as the beam_scorer attribute is removed from the source code. Infact, I had modified the files under `tests/*` to take account for this.(This can be found in my commits. Sorry that I did not had different commits for the actual source code and the tests). So, may be can you review the files that I have modified under `tests/*` as well?", "I think we shouldn´t modify the tests if possible (thats what make backward compatiblity enforced).\r\n\r\nInstead, we should probably : \r\n- add some warning for users that used the raw components/functions\r\n- point the towards a better solution\r\n- modify the tests that DO raise warnings to make sure we do raise them (and also swallow them in the logs) (they become backward compatibility tests their names should probably reflect that too) (We can remove some backward tests if they are very redundant btw)\r\n- Have new tests that point show new API.\r\n- Higher level tests (within models) should not be affected.\r\n@GeetDsa I can do this for you if you want.", "> I think we shouldn´t modify the tests if possible (thats what make backward compatiblity enforced).\r\n> \r\n> Instead, we should probably :\r\n> \r\n> * add some warning for users that used the raw components/functions\r\n> * point the towards a better solution\r\n> * modify the tests that DO raise warnings to make sure we do raise them (and also swallow them in the logs) (they become backward compatibility tests their names should probably reflect that too) (We can remove some backward tests if they are very redundant btw)\r\n> * Have new tests that point show new API.\r\n> * Higher level tests (within models) should not be affected.\r\n> @GeetDsa I can do this for you if you want.\r\n\r\nHi @Narsil, I think it would be better if you can do it, as I don't really know how to deal with tests.", "> > I think we shouldn´t modify the tests if possible (thats what make backward compatiblity enforced).\r\n> > Instead, we should probably :\r\n> > \r\n> > * add some warning for users that used the raw components/functions\r\n> > * point the towards a better solution\r\n> > * modify the tests that DO raise warnings to make sure we do raise them (and also swallow them in the logs) (they become backward compatibility tests their names should probably reflect that too) (We can remove some backward tests if they are very redundant btw)\r\n> > * Have new tests that point show new API.\r\n> > * Higher level tests (within models) should not be affected.\r\n> > @GeetDsa I can do this for you if you want.\r\n\r\n\r\n Hi @Narsil, I think it would be better if you can do it, as I don't really know how to deal with tests.\r\n\r\n", "I'll take care of it", "@Narsil - I think we don't need to add a test here. 
By design the PR prevents errors such as the one mentioned in the issue - It's not possible anymore to pass `max_length` to `BeamSearchScorer` and thus a test doesn't make much sense here IMO.", "Will wait for https://github.com/huggingface/transformers/pull/11442 to be merged before rebasing this one to merge", "@patrickvonplaten But if users were using the BeamScorer object, that's a breaking change, isn't it ?", "> @patrickvonplaten But if users were using the BeamScorer object, that's a breaking change, isn't it ?\r\n\r\nYes true. IMO, no functionality is lost though because:\r\n\r\n- Previously, if one had passed `max_length` to both `def beam_search(...)` and `BeamSearchScorer(...)`, then there would have been a bug (see issue). The correct way of fixing the bug (while still allowing `BeamSearchScorer(...)` to accept `max_length`) would have been to overwrite `BeamSearchScorer's` max_length with `beam_search(...)`'s max_length. On the other hand it's never possible to **not** pass `max_length` to `def beam_search(...)` => therefore I think either way the `max_length` arg to `BeamSearchScorer` is useless (the `max_length` value of `beam_search(...)` would have been preferred in any way. \r\n\r\n=> However, we could/should probably add `**kwargs` to `BeamSearchScorer` that throws a warning if `max_length` is passed & says that's it's deprecated. This would be cleaner overall and have no breaking changes - what do you think @Narsil ?", "Yep, that's what I had in mind, just accept it, raise a warning (and ignore it, exactly as it used to if I understand correctly) ", "> Yep, that's what I had in mind, just accept it, raise a warning (and ignore it, exactly as it used to if I understand correctly)\r\n\r\nDone! Could you review one last time and merge if ok for you? @Narsil ", "@patrickvonplaten, thank you for taking care of it. I found a small issue in the warning message. The message provided in the latest commit is \"\"`max_length` should be passed directly to `beam_search(...)`, `beam_sample(...)`\"\", shouldn't you include \"generate(..)\" as well, as sometime, `generate(...)` inherently calls `beam_search(..,)` or `group_beam_search(..,)`, or other relevant functions.", "> @patrickvonplaten, thank you for taking care of it. I found a small issue in the warning message. The message provided in the latest commit is \"\"`max_length` should be passed directly to `beam_search(...)`, `beam_sample(...)`\"\", shouldn't you include \"generate(..)\" as well, as sometime, `generate(...)` inherently calls `beam_search(..,)` or `group_beam_search(..,)`, or other relevant functions.\r\n\r\nHey @GeetDsa, \r\n\r\nIt should be fine, since the warning is impossible to be triggered by `generate(...)` since when one calls `generate(...)`, one cannot pass `max_length` to `BeamScorer` " ]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #11040 Modified as per the comments from Pull #11122 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patil-suraj ; @patrickvonplaten ; @Narsil Note: I have modified code under `./tests/` where `max_length` was used with `beam_scorer`. Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
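A rough sketch of the deprecation pattern discussed in the comments (a simplified, hypothetical class, not the library's actual `BeamSearchScorer`):

```python
import warnings

class BeamSearchScorerSketch:
    def __init__(self, batch_size, num_beams, device, **kwargs):
        # Accept the removed argument for backward compatibility, warn,
        # and otherwise ignore it.
        if "max_length" in kwargs:
            warnings.warn(
                "Passing `max_length` to `BeamSearchScorer` is deprecated and has "
                "no effect; pass `max_length` directly to `beam_search(...)` or "
                "`beam_sample(...)` instead.",
                UserWarning,
            )
        self.batch_size = batch_size
        self.num_beams = num_beams
        self.device = device
```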
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11378/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11378", "html_url": "https://github.com/huggingface/transformers/pull/11378", "diff_url": "https://github.com/huggingface/transformers/pull/11378.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11378.patch", "merged_at": 1619476120000 }
https://api.github.com/repos/huggingface/transformers/issues/11377
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11377/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11377/comments
https://api.github.com/repos/huggingface/transformers/issues/11377/events
https://github.com/huggingface/transformers/pull/11377
864,851,796
MDExOlB1bGxSZXF1ZXN0NjIxMDExNDM4
11,377
new call for model addition
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "Think we still need to fill out a couple of things before publishing it :-) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "unstale", "@patil-suraj @patrickvonplaten \r\n\r\nI have started implementation of GLM model in HF. Need some input on implementation part.\r\n\r\n1. Original GLM model is implemented using torch parallel distribution , In HF implementation, are we going to keep torch parallel distributed or convert to normal architecture(without parallelism )\r\n\r\n2. There are six version of models (like GLM -base, GLM- large…etc), in which two diffrent tokenization is used Wordpiece and BPE. Like GLM-base and large is having wordpiece and 'GLE-Roberta'and 'GLE-large' is having BPE.\r\n\r\n" ]
1,619
1,648
null
MEMBER
null
# What does this PR do? This PR adds a "call for model addition" to add the [GLM](https://github.com/THUDM/GLM) model to the library.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11377/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11377/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11377", "html_url": "https://github.com/huggingface/transformers/pull/11377", "diff_url": "https://github.com/huggingface/transformers/pull/11377.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11377.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/11376
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11376/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11376/comments
https://api.github.com/repos/huggingface/transformers/issues/11376/events
https://github.com/huggingface/transformers/issues/11376
864,831,951
MDU6SXNzdWU4NjQ4MzE5NTE=
11,376
Wav2vec2: comparison to original implementation
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @cceyda - thanks for the issue!\r\n\r\nYes, this was an intentional choice since those parameters seemed to be the same in all experiments after studying the paper. If people want to change those params, we could always add them in a later version.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,621
1,621
CONTRIBUTOR
null
# 🚀 Reproducibility challenge During the fine-tuning week I realized some of the smaller details in the implementation are a bit different from the original fairseq implementation. @patrickvonplaten Original Code: https://github.com/pytorch/fairseq/blob/master/fairseq/models/wav2vec/wav2vec2.py 🤗 Code: https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py For example, comparing the `compute_mask_indices` function across the two scripts: - There are naming differences; not that important as long as they are *documented* somewhere(?) for people transitioning to 🤗 and trying to replicate with the same hyper-parameters. `mask_time_prob=mask_prob (in fairseq)` `mask_time_length=mask_length (in fairseq)` `mask_feature_prob=mask_channel_prob (in fairseq)` `mask_feature_length=mask_channel_length (in fairseq)` The same goes for the naming of the different dropouts. A sketch of the parameter mapping follows below. - But there also seem to be some unspecified parameters: `no_overlap`, `min_space`. https://github.com/pytorch/fairseq/blob/05b86005bcca0155319fa9b81abfd69f63c06906/fairseq/models/wav2vec/wav2vec2.py#L348 Not sure how these affect results; I didn't look deeply, but wanted to report just in case. ## Motivation I realized this while digging deeper because there were minor differences in results while fine-tuning (might also be a fairseq problem https://github.com/pytorch/fairseq/issues/1448) ## Your contribution opening this issue 😛
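As a sketch of the mapping above, here is how the fairseq masking hyper-parameters would be expressed on the HF side (the numeric values are placeholders for illustration only):

```python
from transformers import Wav2Vec2Config

config = Wav2Vec2Config(
    mask_time_prob=0.05,      # fairseq: mask_prob
    mask_time_length=10,      # fairseq: mask_length
    mask_feature_prob=0.0,    # fairseq: mask_channel_prob
    mask_feature_length=10,   # fairseq: mask_channel_length
)
```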
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11376/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11375
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11375/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11375/comments
https://api.github.com/repos/huggingface/transformers/issues/11375/events
https://github.com/huggingface/transformers/issues/11375
864,815,372
MDU6SXNzdWU4NjQ4MTUzNzI=
11,375
Output probability from `model.generate` for TF models
{ "login": "gcuder", "id": 60609608, "node_id": "MDQ6VXNlcjYwNjA5NjA4", "avatar_url": "https://avatars.githubusercontent.com/u/60609608?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gcuder", "html_url": "https://github.com/gcuder", "followers_url": "https://api.github.com/users/gcuder/followers", "following_url": "https://api.github.com/users/gcuder/following{/other_user}", "gists_url": "https://api.github.com/users/gcuder/gists{/gist_id}", "starred_url": "https://api.github.com/users/gcuder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gcuder/subscriptions", "organizations_url": "https://api.github.com/users/gcuder/orgs", "repos_url": "https://api.github.com/users/gcuder/repos", "events_url": "https://api.github.com/users/gcuder/events{/privacy}", "received_events_url": "https://api.github.com/users/gcuder/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
CONTRIBUTOR
null
# 🚀 Feature request Since PyTorch models already have the option to output probabilities when using `model.generate(...)`, I wanted to ask whether there's any chance this will also be implemented for TensorFlow models?
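For reference, a minimal sketch of the existing PyTorch behaviour the request asks to mirror in TensorFlow (the model choice here is illustrative):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("The weather is", return_tensors="pt").input_ids

out = model.generate(
    input_ids,
    max_length=10,
    output_scores=True,
    return_dict_in_generate=True,
)
# `out.scores` holds one logits tensor per generated step;
# softmax them to recover per-token probabilities.
probs = [torch.softmax(s, dim=-1) for s in out.scores]
```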
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11375/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11375/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11374
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11374/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11374/comments
https://api.github.com/repos/huggingface/transformers/issues/11374/events
https://github.com/huggingface/transformers/pull/11374
864,799,830
MDExOlB1bGxSZXF1ZXN0NjIwOTY5NDUw
11,374
[Flax] Correct typo
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,619
1,619
1,619
MEMBER
null
# What does this PR do? The wrong name was used for the dropout layer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11374/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11374", "html_url": "https://github.com/huggingface/transformers/pull/11374", "diff_url": "https://github.com/huggingface/transformers/pull/11374.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11374.patch", "merged_at": 1619089904000 }
https://api.github.com/repos/huggingface/transformers/issues/11373
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11373/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11373/comments
https://api.github.com/repos/huggingface/transformers/issues/11373/events
https://github.com/huggingface/transformers/pull/11373
864,721,645
MDExOlB1bGxSZXF1ZXN0NjIwOTA2MTY4
11,373
Add space
{ "login": "tma15", "id": 481227, "node_id": "MDQ6VXNlcjQ4MTIyNw==", "avatar_url": "https://avatars.githubusercontent.com/u/481227?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tma15", "html_url": "https://github.com/tma15", "followers_url": "https://api.github.com/users/tma15/followers", "following_url": "https://api.github.com/users/tma15/following{/other_user}", "gists_url": "https://api.github.com/users/tma15/gists{/gist_id}", "starred_url": "https://api.github.com/users/tma15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tma15/subscriptions", "organizations_url": "https://api.github.com/users/tma15/orgs", "repos_url": "https://api.github.com/users/tma15/repos", "events_url": "https://api.github.com/users/tma15/events{/privacy}", "received_events_url": "https://api.github.com/users/tma15/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? Fixes a typo. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11373/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11373", "html_url": "https://github.com/huggingface/transformers/pull/11373", "diff_url": "https://github.com/huggingface/transformers/pull/11373.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11373.patch", "merged_at": 1619093939000 }
https://api.github.com/repos/huggingface/transformers/issues/11372
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11372/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11372/comments
https://api.github.com/repos/huggingface/transformers/issues/11372/events
https://github.com/huggingface/transformers/pull/11372
864,669,881
MDExOlB1bGxSZXF1ZXN0NjIwODY0ODcx
11,372
[run_translation.py] fix typo
{ "login": "johnson7788", "id": 6083466, "node_id": "MDQ6VXNlcjYwODM0NjY=", "avatar_url": "https://avatars.githubusercontent.com/u/6083466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnson7788", "html_url": "https://github.com/johnson7788", "followers_url": "https://api.github.com/users/johnson7788/followers", "following_url": "https://api.github.com/users/johnson7788/following{/other_user}", "gists_url": "https://api.github.com/users/johnson7788/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnson7788/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnson7788/subscriptions", "organizations_url": "https://api.github.com/users/johnson7788/orgs", "repos_url": "https://api.github.com/users/johnson7788/repos", "events_url": "https://api.github.com/users/johnson7788/events{/privacy}", "received_events_url": "https://api.github.com/users/johnson7788/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
Line 380: `forced` is missing the letter `r`: `model.config.foced_bos_token_id = forced_bos_token_id` --> `model.config.forced_bos_token_id = forced_bos_token_id` # What does this PR do? Fixes this typo in `examples/pytorch/translation/run_translation.py`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
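For context, a sketch of how the corrected attribute is typically used in the translation example (the mBART checkpoint and language code here are illustrative):

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(name)
model = MBartForConditionalGeneration.from_pretrained(name)

# Force the decoder to start with the target-language token.
forced_bos_token_id = tokenizer.lang_code_to_id["ro_RO"]
model.config.forced_bos_token_id = forced_bos_token_id  # the attribute the PR fixes
```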
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11372/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11372", "html_url": "https://github.com/huggingface/transformers/pull/11372", "diff_url": "https://github.com/huggingface/transformers/pull/11372.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11372.patch", "merged_at": 1619093831000 }
https://api.github.com/repos/huggingface/transformers/issues/11371
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11371/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11371/comments
https://api.github.com/repos/huggingface/transformers/issues/11371/events
https://github.com/huggingface/transformers/issues/11371
864,527,519
MDU6SXNzdWU4NjQ1Mjc1MTk=
11,371
[examples] UserWarning: `max_length` is deprecated
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This one also having same warning `examples/pytorch/summarization/run_summarization.py `", "Would you like to tackle this one, @bhadreshpsavani, next? Absolutely no need to say yes ;)", "Sure, I will take this issue,\r\nI don't have any extra issue with me apart from this!", "Hi @stas00,\r\nI am not very sure that if this is expected behavior or needed to fix,\r\nIt's coming from this code blocks\r\nhttps://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/src/transformers/generation_utils.py#L962-L966\r\nGenerally, a warning is for a good purpose, right?\r\n\r\nwe are not passing any `max_length` but still this warning is coming so I think I need to fix that part in the code somewhere", "Because of the below default value, the warning is coming,\r\nhttps://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/examples/pytorch/translation/run_translation.py#L139-L144\r\nhttps://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/examples/pytorch/summarization/run_summarization.py#L150-L155\r\nWe are passing it in trainer and it uses generation utils at below code\r\nhttps://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/examples/legacy/seq2seq/seq2seq_trainer.py#L220-L224", "Thank you for this investigation, @bhadreshpsavani - that's very helpful. It looks like the change was introduced just a day before I filed this Issue.\r\n\r\n@Narsil, could we please check with you on this deprecation you introduced in https://github.com/huggingface/transformers/commit/aad95c7cdebc24e780e5a5cf39d832c015e40075\r\n\r\n1. Unless I'm missing something the deprecation doesn't seem to be complete since `max_length` is actively used in the same function:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/src/transformers/generation_utils.py#L919\r\n\r\nI am not sure how a deprecated variable is still used normally in the logic...\r\n\r\nAlso it's not documented as deprecated:\r\n```\r\n max_length (:obj:`int`, `optional`, defaults to 20):\r\n The maximum length of the sequence to be generated.\r\n```\r\n\r\n2. it now generates warnings in the example scripts like `run_translation.py` and `run_summarization.py` - so if there is a new way could we please atomically adjust all the places that are now impacted by this change? The examples ideally should be in sync with API changes, since their purpose is to correctly demonstrate how to use the library.\r\n\r\nThe main entry point leading to this warning in seq2seq examples is:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4e7bf94e7280d2b725ac4644dbe9808560afa5d8/src/transformers/trainer_seq2seq.py#L161-L171\r\n\r\nThank you!\r\n", "Hi @stas00 ,\r\n\r\nYes, it's my fault 1 deprecation too much on the `generate` function ! \r\nJust submitted a new PR to remove this extra warnings (which is incorrect).\r\n\r\n`max_length` is used by `generate` but not by the subsequent functions.\r\n" ]
1,619
1,620
1,620
CONTRIBUTOR
null
Not sure how many example scripts are affected by this: ``` src/transformers/generation_utils.py:963: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead. ``` I'm getting this with at least `examples/pytorch/translation/run_translation.py`.
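A minimal sketch of the replacement the warning message suggests, for a decoding sub-method called directly (the model is illustrative; the example scripts normally go through `generate`):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer, MaxLengthCriteria, StoppingCriteriaList

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("Translate this:", return_tensors="pt").input_ids

# Length limit passed as a stopping criterion instead of a raw `max_length`.
criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
output_ids = model.greedy_search(input_ids, stopping_criteria=criteria)
print(tokenizer.decode(output_ids[0]))
```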
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11371/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11370
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11370/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11370/comments
https://api.github.com/repos/huggingface/transformers/issues/11370/events
https://github.com/huggingface/transformers/issues/11370
864,514,340
MDU6SXNzdWU4NjQ1MTQzNDA=
11,370
ERRORS: run_mlm_performer.py
{ "login": "luoda888", "id": 22568420, "node_id": "MDQ6VXNlcjIyNTY4NDIw", "avatar_url": "https://avatars.githubusercontent.com/u/22568420?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luoda888", "html_url": "https://github.com/luoda888", "followers_url": "https://api.github.com/users/luoda888/followers", "following_url": "https://api.github.com/users/luoda888/following{/other_user}", "gists_url": "https://api.github.com/users/luoda888/gists{/gist_id}", "starred_url": "https://api.github.com/users/luoda888/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luoda888/subscriptions", "organizations_url": "https://api.github.com/users/luoda888/orgs", "repos_url": "https://api.github.com/users/luoda888/repos", "events_url": "https://api.github.com/users/luoda888/events{/privacy}", "received_events_url": "https://api.github.com/users/luoda888/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I tried to use self.config.attention_probs_dropout_prob instead of config.dropout_rate. It seems worked.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info - `transformers` version: 4.5.1 - Platform: ubuntu18.04 - Python version: 3.8 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no I tried to use run_mlm_performer.py: `TOKENIZERS_PARALLELISM=false python run_mlm_performer.py --model_name_or_path ../cache_model/bert-base-chinese/ --tokenizer_name ../cache_model/bert-base-chinese/ --train_file ../data/wikicorpus_zh_one_article_per_line-jianti.txt --do_train --fp16 --output_dir ./test-mlm --max_seq_length 512 --per_device_train_batch_size 256 --reinitialize --overwrite_output_dir True --preprocessing_num_workers 8` This raises an AttributeError: `jax._src.traceback_util.FilteredStackTrace: AttributeError: "FlaxBertSelfAttention" object has no attribute "dropout_rate"` You will find that `FlaxBertSelfAttention` does not define `dropout_rate`.
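A toy sketch (under assumptions about the module layout, not the library's real class) of the fix suggested in the comments: read the probability from the attribute `BertConfig` actually defines.

```python
import flax.linen as nn
import jax.numpy as jnp
from transformers import BertConfig

class PatchedAttentionDropout(nn.Module):
    config: BertConfig

    def setup(self):
        # BertConfig has no `dropout_rate`; the analogous field is
        # `attention_probs_dropout_prob`.
        self.dropout = nn.Dropout(rate=self.config.attention_probs_dropout_prob)

    def __call__(self, attn_probs: jnp.ndarray, deterministic: bool = True):
        return self.dropout(attn_probs, deterministic=deterministic)
```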
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11370/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11369
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11369/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11369/comments
https://api.github.com/repos/huggingface/transformers/issues/11369/events
https://github.com/huggingface/transformers/pull/11369
864,485,831
MDExOlB1bGxSZXF1ZXN0NjIwNzE0ODA0
11,369
Fix typo
{ "login": "penut85420", "id": 1570332, "node_id": "MDQ6VXNlcjE1NzAzMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/1570332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/penut85420", "html_url": "https://github.com/penut85420", "followers_url": "https://api.github.com/users/penut85420/followers", "following_url": "https://api.github.com/users/penut85420/following{/other_user}", "gists_url": "https://api.github.com/users/penut85420/gists{/gist_id}", "starred_url": "https://api.github.com/users/penut85420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penut85420/subscriptions", "organizations_url": "https://api.github.com/users/penut85420/orgs", "repos_url": "https://api.github.com/users/penut85420/repos", "events_url": "https://api.github.com/users/penut85420/events{/privacy}", "received_events_url": "https://api.github.com/users/penut85420/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? Fixes typo in `/src/transformers/generation_utils.py`, change `defaults tp 1.0` to `defaults to 1.0`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11369/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11369", "html_url": "https://github.com/huggingface/transformers/pull/11369", "diff_url": "https://github.com/huggingface/transformers/pull/11369.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11369.patch", "merged_at": 1619100617000 }
https://api.github.com/repos/huggingface/transformers/issues/11368
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11368/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11368/comments
https://api.github.com/repos/huggingface/transformers/issues/11368/events
https://github.com/huggingface/transformers/issues/11368
864,440,200
MDU6SXNzdWU4NjQ0NDAyMDA=
11,368
Megatron fused CUDA kernels to improve Hugging Face model classes' scalability
{ "login": "g-karthik", "id": 3851993, "node_id": "MDQ6VXNlcjM4NTE5OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/g-karthik", "html_url": "https://github.com/g-karthik", "followers_url": "https://api.github.com/users/g-karthik/followers", "following_url": "https://api.github.com/users/g-karthik/following{/other_user}", "gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}", "starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions", "organizations_url": "https://api.github.com/users/g-karthik/orgs", "repos_url": "https://api.github.com/users/g-karthik/repos", "events_url": "https://api.github.com/users/g-karthik/events{/privacy}", "received_events_url": "https://api.github.com/users/g-karthik/received_events", "type": "User", "site_admin": false }
[ { "id": 2690307185, "node_id": "MDU6TGFiZWwyNjkwMzA3MTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Performance", "name": "Performance", "color": "207F32", "default": false, "description": "" }, { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "I think the biggest barrier to using custom CUDA kernel is that it'd require `transformers` to move from a python-only package, to a compilation-required type of package (even if JIT), which in my experience is the type of a package that is far from trivial to use and often raises a barrier to entry.\r\n\r\nIf I'm not mistaken some fused kernels have been pushed upstream into the pytorch-core, so if you know of any that we could receive precompiled via pytorch, then we can definitely use those.\r\n\r\nAnd if they aren't and you have some resources to initiate the conversation - it'd definitely help to request that such kernels will be added to pytorch-core. Definitely tag me if I do start such a thread at pytorch Issues.\r\n\r\n-----\r\n\r\nI love your spirit of proposing various performance optimizations, @g-karthik and I'd love to work on all of those you have been proposing here and at Deepspeed issues, but so far I find no free resources to do so and all my time is spent on making things work. " ]
1,619
1,621
null
NONE
null
# 🚀 Feature request Support for custom fused CUDA kernels with HF model classes. ## Motivation It appears that Hugging Face model classes do not scale very well as-is, unlike Megatron-LM, even when the latter is configured with a degree of model parallelization of 1 for a "fair" performance comparison. One of the presumed reasons for this is that Megatron-LM leverages custom fused CUDA kernels written by NVIDIA, specifically [these](https://github.com/NVIDIA/Megatron-LM/blob/aed2f75e209e525c842aec7c044af7acae2a4614/megatron/model/transformer.py#L26L27). Could we get variants of existing HF classes (perhaps for `GPT2Model`, `GPT2LMHeadModel`, etc.) such that the variants leverage some/all of these fused CUDA kernels? All this while still ensuring that one can load the original pre-trained weights into these variant classes. Any guidance/low-level thoughts towards making this happen would also be greatly useful! @thomwolf @patrickvonplaten @LysandreJik @stas00
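To make the request concrete, here is an illustrative sketch of the kind of fusion Megatron-LM applies (its bias-GeLU kernel), approximated here with TorchScript rather than a hand-written CUDA kernel so it would stay importable from a pure-Python package:

```python
import torch

@torch.jit.script
def fused_bias_gelu(bias: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # Fuses the bias add and the tanh-approximated GeLU into one scripted op,
    # mirroring the shape of Megatron-LM's bias_gelu (0.79788456 ~= sqrt(2/pi)).
    y = x + bias
    return y * 0.5 * (1.0 + torch.tanh(0.79788456 * y * (1.0 + 0.044715 * y * y)))
```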
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11368/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/11367
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11367/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11367/comments
https://api.github.com/repos/huggingface/transformers/issues/11367/events
https://github.com/huggingface/transformers/pull/11367
864,338,392
MDExOlB1bGxSZXF1ZXN0NjIwNTkxNTY4
11,367
Replace double occurrences as the last step
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,621
1,621
MEMBER
null
This PR fixes an issue with the SPM converters (ALBERT and XLNet) where they would replace some characters with whitespace after removing double-whitespace occurrences. This meant that if double whitespace appeared as a result of this replacement, it would be kept until the end of encoding, leading to a mismatch between SentencePiece and `tokenizers`. A toy illustration of the ordering bug follows below. Fixes https://github.com/huggingface/transformers/issues/11358
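A toy illustration (not the converter's actual code): if character replacement runs after double-whitespace collapsing, the replacement can reintroduce double spaces that are never cleaned up.

```python
import re

def buggy_normalize(text: str) -> str:
    text = re.sub(r" {2,}", " ", text)   # collapse doubles first ...
    return text.replace("\u2581", " ")   # ... then replace: may recreate doubles

def fixed_normalize(text: str) -> str:
    text = text.replace("\u2581", " ")   # replace first
    return re.sub(r" {2,}", " ", text)   # collapse doubles as the last step

sample = "foo\u2581 bar"
assert "  " in buggy_normalize(sample)       # double space survives
assert "  " not in fixed_normalize(sample)   # double space removed
```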
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11367/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11367", "html_url": "https://github.com/huggingface/transformers/pull/11367", "diff_url": "https://github.com/huggingface/transformers/pull/11367.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11367.patch", "merged_at": 1621841939000 }
https://api.github.com/repos/huggingface/transformers/issues/11366
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11366/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11366/comments
https://api.github.com/repos/huggingface/transformers/issues/11366/events
https://github.com/huggingface/transformers/issues/11366
864,255,775
MDU6SXNzdWU4NjQyNTU3NzU=
11,366
RuntimeError: CUDA error: device-side assert triggered
{ "login": "abb4s", "id": 7654832, "node_id": "MDQ6VXNlcjc2NTQ4MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/7654832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abb4s", "html_url": "https://github.com/abb4s", "followers_url": "https://api.github.com/users/abb4s/followers", "following_url": "https://api.github.com/users/abb4s/following{/other_user}", "gists_url": "https://api.github.com/users/abb4s/gists{/gist_id}", "starred_url": "https://api.github.com/users/abb4s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abb4s/subscriptions", "organizations_url": "https://api.github.com/users/abb4s/orgs", "repos_url": "https://api.github.com/users/abb4s/repos", "events_url": "https://api.github.com/users/abb4s/events{/privacy}", "received_events_url": "https://api.github.com/users/abb4s/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
``` if torch.cuda.is_available(): dev = "cuda:0" else: dev = "cpu" device = torch.device(dev) bert = BertForSequenceClassification.from_pretrained(args.model_name_or_path) bert = bert.to(device) ``` This raises `RuntimeError: CUDA error: device-side assert triggered`. nvidia-smi: ![image](https://user-images.githubusercontent.com/7654832/115616848-ed84cf00-a305-11eb-951d-751619b9f8a8.png)
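Not part of the report, but a common debugging recipe for this error: CUDA asserts surface asynchronously, so rerunning with blocking launches (or once on CPU) usually turns the assert into a readable Python error, such as an out-of-range label or token id. A sketch under that assumption:

```python
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA initialises

import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.to("cpu")

inputs = torch.tensor([[101, 2023, 102]])
outputs = model(inputs)  # on CPU, an invalid index raises a clear IndexError
```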
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11366/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11365
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11365/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11365/comments
https://api.github.com/repos/huggingface/transformers/issues/11365/events
https://github.com/huggingface/transformers/issues/11365
864,218,373
MDU6SXNzdWU4NjQyMTgzNzM=
11,365
Index out of range in self with fine-tuned DPR Context Encoder
{ "login": "benscottie", "id": 75142468, "node_id": "MDQ6VXNlcjc1MTQyNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/75142468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benscottie", "html_url": "https://github.com/benscottie", "followers_url": "https://api.github.com/users/benscottie/followers", "following_url": "https://api.github.com/users/benscottie/following{/other_user}", "gists_url": "https://api.github.com/users/benscottie/gists{/gist_id}", "starred_url": "https://api.github.com/users/benscottie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benscottie/subscriptions", "organizations_url": "https://api.github.com/users/benscottie/orgs", "repos_url": "https://api.github.com/users/benscottie/repos", "events_url": "https://api.github.com/users/benscottie/events{/privacy}", "received_events_url": "https://api.github.com/users/benscottie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik do you have insight on what could be causing this error? Thanks!", "Hi! I'm trying to reproduce but as I don't have your checkpoint this is proving complicated. Could you provide a reproducible example/colab so I can take a look?\r\n\r\nAlso, it seems you've shared some of the stack trace but not the entire stack trace. It would be helpful to see the full error to see where the issue originates from.", "@LysandreJik thanks for your response! I just updated the error to include the entire stack trace. Hope that is helpful.\r\n\r\nI'm not sure how to create a reproducible example since the error is based on the fine-tuned checkpoint. I can send what the config file looks like for that model if that is helpful", "Here is my code for creating the BiEncoder model object from the separate context and question encoders in order to fine-tune DPR as well as the code for saving the checkpoints. Maybe that will help. `self.model` refers to the biencoder.\r\n```\r\n# DPR BiEncoder Model Class\r\nclass BiEncoder(torch.nn.Module):\r\n\r\n def __init__(self, query_model, ctx_model):\r\n super(BiEncoder, self).__init__()\r\n self.query_model = query_model\r\n self.ctx_model = ctx_model\r\n\r\n def forward(self, query_ids, query_attn_mask, ctx_ids, ctx_attn_mask):\r\n #query_embed = self.question_model(query_ids).pooler_output\r\n query_embed = self.query_model(query_ids, attention_mask=query_attn_mask)[0]\r\n #ctx_embed = self.ctx_model(ctx_ids).pooler_output\r\n ctx_embed = self.ctx_model(ctx_ids, attention_mask=ctx_attn_mask)[0]\r\n \r\n return query_embed, ctx_embed\r\n\r\n# Load Model\r\ndef get_model(query_encoder_path, ctx_encoder_path):\r\n\r\n # Question Encoder\r\n query_encoder = DPRQuestionEncoder.from_pretrained(query_encoder_path)\r\n\r\n # Context Encoder\r\n ctx_encoder = DPRContextEncoder.from_pretrained(ctx_encoder_path)\r\n\r\n # Initialize Dual Encoder\r\n biencoder = BiEncoder(query_encoder, ctx_encoder)\r\n\r\n return biencoder\r\n\r\n# Get Optimizer\r\ndef get_optimizer(self):\r\n optimizer_grouped_parameters = [{'params': [p for n,p in self.model.named_parameters()],\r\n 'params': [p for n,p in self.model.named_parameters()]}]\r\n return AdamW(optimizer_grouped_parameters, lr=self.lr)\r\n\r\n# Save Checkpoint\r\nquery_encoder = self.model.query_model\r\nctx_encoder = self.model.ctx_model\r\nquery_model_path = os.path.join(self.cp_subdir, f'query-encoder-{self.exp_date}-checkpoint-{self.global_step}')\r\nctx_model_path = os.path.join(self.cp_subdir, f'ctx-encoder-{self.exp_date}-checkpoint-{self.global_step}')\r\nquery_encoder.save_pretrained(query_model_path) #save encoders\r\nctx_encoder.save_pretrained(ctx_model_path)\r\n```", "Is there a way for you to upload your checkpoints on the hub so that I can take a look and try to reproduce locally? I'm curious to see if the configuration has a too small `max_position_embeddings` leading to the overflow.", "The `max_position_embeddings` in the config file are 512, which matches the DPR default. The vocab size also matches the default.", "I encountered the same issue in Bertweet:\r\nhttps://colab.research.google.com/drive/1cEtC98hIfB-2I-Tcxsp_OEau4rNmjXJw?usp=sharing", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,624
1,624
NONE
null
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.27 - Python version: 3.8.6 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: Have tried with and without - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Model I am using: `DPR Question and Context Encoders` Getting an `index out of range in self` error for embeddings when trying to apply a locally fine-tuned version of each of the DPR encoders. Previous issues point to a difference in `vocab lengths` or tokens, but nothing has been changed there. The `model max length` is also consistent at 512. Code: ``` from transformers import DPRContextEncoderTokenizerFast, DPRContextEncoder ctx_tok = DPRContextEncoderTokenizerFast.from_pretrained('/data/riddler/checkpoints/adapted_dpr/ctx-encoder-2021-04-19-checkpoint-11000') model = DPRContextEncoder.from_pretrained('/data/riddler/checkpoints/adapted_dpr/ctx-encoder-2021-04-19-checkpoint-11000') input_ids = ctx_tok("Hello, is my dog cute ?", return_tensors='pt')["input_ids"] embeddings = model(input_ids).pooler_output ``` Error: ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-91-24fb6846809e> in <module> ----> 1 embeddings = model(input_ids['input_ids']) /venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /venv/lib/python3.8/site-packages/transformers/models/dpr/modeling_dpr.py in forward(self, input_ids, attention_mask, token_type_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict) 573 return_dict=return_dict, 574 ) --> 575 576 if not return_dict: 577 return outputs[1:] /venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /venv/lib/python3.8/site-packages/transformers/models/dpr/modeling_dpr.py in forward(self, input_ids, attention_mask, token_type_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict) 170 return_dict: bool = False, 171 ) -> Union[BaseModelOutputWithPooling, Tuple[Tensor, ...]]: --> 172 outputs = self.bert_model( 173 input_ids=input_ids, 174 attention_mask=attention_mask, /venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 962 encoder_hidden_states=encoder_hidden_states, 963 encoder_attention_mask=encoder_extended_attention_mask, --> 964 past_key_values=past_key_values, 965 use_cache=use_cache, 966 output_attentions=output_attentions, /venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 204 if self.position_embedding_type == "absolute": 205 position_embeddings = self.position_embeddings(position_ids) --> 206 embeddings += position_embeddings 207 embeddings = self.LayerNorm(embeddings) 208 embeddings = self.dropout(embeddings) /venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /venv/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 122 123 def forward(self, input: Tensor) -> Tensor: --> 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, 126 self.norm_type, self.scale_grad_by_freq, self.sparse) /venv/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1812 # remove once script supports set_grad_enabled 1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1815 1816 IndexError: index out of range in self ```
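A hypothetical sanity check for this kind of `IndexError` (the checkpoint path is a placeholder): compare the tokenizer's vocabulary with the checkpoint's embedding sizes, and keep the input within `max_position_embeddings`.

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

path = "/path/to/ctx-encoder-checkpoint"  # placeholder
tok = DPRContextEncoderTokenizerFast.from_pretrained(path)
model = DPRContextEncoder.from_pretrained(path)

emb = model.ctx_encoder.bert_model.embeddings.word_embeddings
print(len(tok), model.config.vocab_size, emb.num_embeddings)  # should all match
print(model.config.max_position_embeddings)  # input_ids must stay shorter than this
```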
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11365/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11364
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11364/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11364/comments
https://api.github.com/repos/huggingface/transformers/issues/11364/events
https://github.com/huggingface/transformers/pull/11364
864,123,685
MDExOlB1bGxSZXF1ZXN0NjIwNDA3OTg5
11,364
[Flax] Big FlaxBert Refactor
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,619
1,619
1,619
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR does a major refactor of FlaxBert in Transformers, notably: - Custom LayerNorm and Embedding layers are replaced by the official ones. This should significantly reduce maintenance cost at the expense of two more general lines in the conversion script. - A couple of bugs are fixed, *e.g.*, BERT uses the *non-approximated* GELU and not the fast one -> this fixes some minor differences when comparing PyTorchBERT vs FlaxBERT - Weight Tying is added, which should be done for, *e.g.*, `FlaxBertForMaskedLM` - Weights can now also be converted the other way around, Flax => PyTorch Sorry for putting quite a lot of things into one PR, but they are very much intertwined here. Also, I will have to re-upload some flax weights so that they correspond to the new weight structure (see [here](https://github.com/huggingface/transformers/pull/10977)) @avital @marcvanzee, I had an issue when saving/loading flax weights for which I opened an issue [here](https://github.com/google/flax/issues/1261). At the moment, I solve it by manually transforming every numpy array into a jax DeviceArray, see [here](https://github.com/huggingface/transformers/pull/11364/files#r617853008) - not sure if there is a better solution. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
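For reference, the numpy-to-DeviceArray workaround mentioned in the PR description can be written as a single tree map over the parameter pytree. This is only a sketch of the idea, not the exact code in the PR; `loaded_params` stands in for whatever pytree the flax checkpoint loader returns.

```python
import jax.numpy as jnp
from jax import tree_util

def to_device_arrays(params):
    """Convert every numpy leaf of a parameter pytree into a jax DeviceArray."""
    return tree_util.tree_map(jnp.asarray, params)

# usage sketch: params = to_device_arrays(loaded_params)
```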
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11364/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11364", "html_url": "https://github.com/huggingface/transformers/pull/11364", "diff_url": "https://github.com/huggingface/transformers/pull/11364.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11364.patch", "merged_at": 1619164389000 }
https://api.github.com/repos/huggingface/transformers/issues/11363
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11363/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11363/comments
https://api.github.com/repos/huggingface/transformers/issues/11363/events
https://github.com/huggingface/transformers/issues/11363
864,100,566
MDU6SXNzdWU4NjQxMDA1NjY=
11,363
torch_xla/csrc/tensor_methods.cpp:880 : Check failed: xla::ShapeUtil::Compatible(shapes.back(), tensor_shape)
{ "login": "mabdullah1994", "id": 18423941, "node_id": "MDQ6VXNlcjE4NDIzOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/18423941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mabdullah1994", "html_url": "https://github.com/mabdullah1994", "followers_url": "https://api.github.com/users/mabdullah1994/followers", "following_url": "https://api.github.com/users/mabdullah1994/following{/other_user}", "gists_url": "https://api.github.com/users/mabdullah1994/gists{/gist_id}", "starred_url": "https://api.github.com/users/mabdullah1994/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mabdullah1994/subscriptions", "organizations_url": "https://api.github.com/users/mabdullah1994/orgs", "repos_url": "https://api.github.com/users/mabdullah1994/repos", "events_url": "https://api.github.com/users/mabdullah1994/events{/privacy}", "received_events_url": "https://api.github.com/users/mabdullah1994/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We didn't check yet whether BigBird works on TPU. We should put it on the roadmap (cc @vasudevgupta7) .", "It should be interesting to check which operation (in bigbird) is causing problem on TPU :)", "@vasudevgupta7 @patrickvonplaten Thanks. Please let us know if there was any update. Thanks!", "Hi @mabdullah1994, sorry I missed your comment. I was checking bigbird on colab-tpu. I found that bigbird is working on TPU when we are not passing `attention_mask` (& only passing input_ids) into `model.forward()`. I will try to have a deeper look at it & will try to fix it some time soon.\r\n\r\nCheckout this [notebook](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_tpu.ipynb) with TPU runtime.", "Hi @vasudevgupta7 . Thanks for the update. Please let us know when this is fixed. Need this kind of urgently. Thanks!", "@patrickvonplaten @vasudevgupta7 Any expected time frame, where we might expect it to work with `trainer` on TPUs? I am having the exact same problem, reproduction on synthetic dataset here on [Colab](https://colab.research.google.com/drive/1I6DR07ppQBTYBLatvGjBj70xsgVCOd4z?usp=sharing).\r\n\r\nOr can we only use your script for the time being?" ]
1,619
1,623
1,620
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: TPU - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten ## Information I am using BigBirdForSequenceClassification and BigBirdTokenizer for a simple text classification problem on Google Colab TPU: The problem arises when using: * [ ] my own modified scripts: (Script shared) If I use the BigBirdForSequenceClassification model, I start getting weird errors on TPU. 
``` from pathlib import Path def read_imdb_split(split_dir): split_dir = Path(split_dir) texts = [] labels = [] for label_dir in ["pos", "neg"]: for text_file in (split_dir/label_dir).iterdir(): texts.append(text_file.read_text()) labels.append(0 if label_dir == "neg" else 1) return texts, labels train_texts, train_labels = read_imdb_split('aclImdb/train') test_texts, test_labels = read_imdb_split('aclImdb/test') from sklearn.model_selection import train_test_split train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2) from transformers import BigBirdTokenizer tokenizer = BigBirdTokenizer.from_pretrained('google/bigbird-roberta-base') train_encodings = tokenizer(train_texts, truncation=True, padding=True) val_encodings = tokenizer(val_texts, truncation=True, padding=True) test_encodings = tokenizer(test_texts, truncation=True, padding=True) import torch class IMDbDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) train_dataset = IMDbDataset(train_encodings, train_labels) val_dataset = IMDbDataset(val_encodings, val_labels) test_dataset = IMDbDataset(test_encodings, test_labels) from transformers import BigBirdForSequenceClassification, Trainer, TrainingArguments import torch_xla.distributed.xla_multiprocessing as xmp import torch_xla.core.xla_model as xm def main(): training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1, # total number of training epochs per_device_train_batch_size=1, # batch size per device during training per_device_eval_batch_size=1, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) model = BigBirdForSequenceClassification.from_pretrained('google/bigbird-roberta-base') trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() def _mp_fn(index): main() xmp.spawn(_mp_fn, args=(), nprocs=1, start_method='fork') ``` The tasks I am working on is: * [ ] my own task or dataset: Using the IMDB Dataset for Text Classification ## To reproduce Steps to reproduce the behavior: 1. Setup TPU-client on google Colab: !pip install cloud-tpu-client https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl 2. Download the dataset: a. !wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz b. !tar -xf aclImdb_v1.tar.gz 3. Execute the given script <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code.
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` RuntimeError Traceback (most recent call last) <ipython-input-14-38fb8a22e1a3> in <module>() ----> 1 xmp.spawn(_mp_fn, args=(), nprocs=1, start_method='fork') 7 frames /usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method) 384 pf_cfg = _pre_fork_setup(nprocs) 385 if pf_cfg.num_devices == 1: --> 386 _start_fn(0, pf_cfg, fn, args) 387 else: 388 return torch.multiprocessing.start_processes( /usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in _start_fn(index, pf_cfg, fn, args) 321 # environment must be fully setup before doing so. 322 _setup_replication() --> 323 fn(gindex, *args) 324 325 <ipython-input-12-0ed5b032dbf1> in _mp_fn(index) 32 33 def _mp_fn(index): ---> 34 main() <ipython-input-12-0ed5b032dbf1> in main() 29 ) 30 ---> 31 trainer.train() 32 33 def _mp_fn(index): /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 1099 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control) 1100 -> 1101 for step, inputs in enumerate(epoch_iterator): 1102 1103 # Skip past any already trained steps if resuming training /usr/local/lib/python3.7/dist-packages/torch_xla/distributed/parallel_loader.py in __next__(self) 32 33 def __next__(self): ---> 34 return self.next() 35 36 def __len__(self): /usr/local/lib/python3.7/dist-packages/torch_xla/distributed/parallel_loader.py in next(self) 44 if self._mark_step_batch_count <= self._batches_yielded: 45 self._batches_yielded = 0 ---> 46 xm.mark_step() 47 else: 48 self._batches_yielded += 1 /usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py in mark_step() 716 torch_xla._XLAC._xla_step_marker( 717 torch_xla._XLAC._xla_get_default_device(), [], --> 718 wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False)) 719 # Only emit metrics from the first local device index, to avoid emitting the 720 # same values from different threads. 
RuntimeError: Error while lowering: s64[1,2368]{1,0} aten::copysign, pad=(0, 19, 0, 0), value=0 Error: /pytorch/xla/torch_xla/csrc/helpers.h:100 : Check failed: scalar_value.isIntegral() *** Begin stack trace *** tensorflow::CurrentStackTrace() torch_xla::XlaHelpers::ScalarValue(c10::Scalar, xla::PrimitiveType, xla::XlaBuilder*) torch_xla::ir::ops::ConstantPadNd::Lower(torch_xla::ir::LoweringContext*) const torch_xla::ir::LoweringContext::LowerNode(torch_xla::ir::Node const*) torch_xla::ir::LoweringContext::LoweringContext(std::string const&, torch_xla::Device, absl::lts_2020_02_25::Span<torch_xla::ir::Node const* const>, std::unordered_map<torch_xla::ir::Node const*, torch_xla::ir::Util::EmitStatus, std::hash<torch_xla::ir::Node const*>, std::equal_to<torch_xla::ir::Node const*>, std::allocator<std::pair<torch_xla::ir::Node const* const, torch_xla::ir::Util::EmitStatus> > >) torch_xla::XLATensor::Compile(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> > const&, absl::lts_2020_02_25::Span<std::string const>, torch_xla::XLATensor::SyncTensorCollection const&, torch_xla::XLATensor::PostOrderData*) torch_xla::XLATensor::SyncTensorsGraphInternal(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> >*, absl::lts_2020_02_25::Span<std::string const>, torch_xla::XLATensor::SyncTensorsConfig const&) torch_xla::XLATensor::SyncTensorsGraph(std::vector<torch_xla::XLATensor, std::allocator<torch_xla::XLATensor> >*, absl::lts_2020_02_25::Span<std::string const>, bool, bool) torch_xla::XLATensor::SyncLiveTensorsGraph(torch_xla::Device const*, absl::lts_2020_02_25::Span<std::string const>, bool) _PyMethodDef_RawFastCallKeywords _PyCFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyObject_FastCall_Prepend _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallDict _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName PyEval_EvalCode _PyMethodDef_RawFastCallKeywords _PyCFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallDict _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallDict _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyObject_Call_Prepend _PyObject_FastCallKeywords _PyMethodDef_RawFastCallDict PyCFunction_Call _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords 
_PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName PyEval_EvalCode _PyMethodDef_RawFastCallKeywords _PyCFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallDict _Py_UnixMain __libc_start_main _start *** End stack trace *** Scalar type not supported Python Frames: ``` Similarly, once I got the following error: ``` RuntimeError: torch_xla/csrc/tensor_methods.cpp:880 : Check failed: xla::ShapeUtil::Compatible(shapes.back(), tensor_shape) ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Model training should have started but instead got the error.
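One variable worth ruling out here is dynamic input shapes: `padding=True` pads each batch only to its own longest sequence, and XLA recompiles (or fails to lower) when tensor shapes change between steps. Below is a minimal sketch of tokenizing with a fixed, static length instead; this is a diagnostic suggestion rather than a confirmed fix for the BigBird lowering error, and the `max_length` value is illustrative.

```python
from transformers import BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")

# Pad and truncate every example to the same static length so that XLA sees a
# single input shape instead of one shape per batch.
train_encodings = tokenizer(
    train_texts,          # list of strings from the script above
    truncation=True,
    padding="max_length",
    max_length=1024,      # illustrative; pick a length that fits your data
)
```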
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11363/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11362
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11362/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11362/comments
https://api.github.com/repos/huggingface/transformers/issues/11362/events
https://github.com/huggingface/transformers/issues/11362
864,066,560
MDU6SXNzdWU4NjQwNjY1NjA=
11,362
Training a TimeSFormer for video classification
{ "login": "slimaneaymen", "id": 24354915, "node_id": "MDQ6VXNlcjI0MzU0OTE1", "avatar_url": "https://avatars.githubusercontent.com/u/24354915?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slimaneaymen", "html_url": "https://github.com/slimaneaymen", "followers_url": "https://api.github.com/users/slimaneaymen/followers", "following_url": "https://api.github.com/users/slimaneaymen/following{/other_user}", "gists_url": "https://api.github.com/users/slimaneaymen/gists{/gist_id}", "starred_url": "https://api.github.com/users/slimaneaymen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slimaneaymen/subscriptions", "organizations_url": "https://api.github.com/users/slimaneaymen/orgs", "repos_url": "https://api.github.com/users/slimaneaymen/repos", "events_url": "https://api.github.com/users/slimaneaymen/events{/privacy}", "received_events_url": "https://api.github.com/users/slimaneaymen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! Is this a `transformers` issue? Where does `transformers` come into play?", "> \r\n> \r\n> Hello! Is this a `transformers` issue? Where does `transformers` come into play?\r\nHi !! Actually, here I am showing only the issues I'm getting when training the TimeSFormer model,\r\nyou can find more information about the model in this link: https://github.com/lucidrains/TimeSformer-pytorch ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
My input data are feature maps instead of raw images, and have the form (4, 50, 1, 1, 256): mini_batch=4 / frames=50 / channels=1 / H=1 / W=256. The parameters of the TimeSformer are: TimeSformer( dim = 128, image_size = 256, patch_size = 16, num_frames = 50, num_classes = 2, depth = 12, heads = 8, dim_head = 32, attn_dropout = 0., ff_dropout = 0. ) In order to check whether my network is working, I have tried to make it overfit by using only 6 training samples and 2 validation samples of the same shape as before (4, 50, 1, 1, 256). But the accuracy I'm getting oscillates and never reaches a value above 80%, and my training loss is not decreasing; it stays around 0.6900-0.6950. My training function and parameters are: ![Capture1](https://user-images.githubusercontent.com/24354915/115587902-7c75f500-a2ce-11eb-9a6c-88266f3c233a.PNG) ![Capture2](https://user-images.githubusercontent.com/24354915/115588104-ae875700-a2ce-11eb-81f0-d6e31e177d5d.PNG) ![Capture3](https://user-images.githubusercontent.com/24354915/115588124-b47d3800-a2ce-11eb-82bc-c80e87670bd3.PNG) I would appreciate any suggestion. Thank you.
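When a model refuses to overfit a handful of samples, a useful sanity check is to train on one fixed batch in a tight loop and watch only the loss. The sketch below assumes lucidrains' `timesformer-pytorch` package and uses a random stand-in batch in that package's (batch, frames, channels, H, W) layout; substitute your real feature-map batch and labels. The learning rate and loop length are illustrative, not a recommendation.

```python
import torch
import torch.nn.functional as F
from timesformer_pytorch import TimeSformer  # assumes lucidrains' package

model = TimeSformer(dim=128, image_size=256, patch_size=16, num_frames=50,
                    num_classes=2, depth=12, heads=8, dim_head=32,
                    attn_dropout=0., ff_dropout=0.)

# One fixed batch; replace the random tensor with a real feature-map batch.
video = torch.randn(4, 50, 3, 256, 256)   # (batch, frames, channels, H, W)
labels = torch.tensor([0, 1, 0, 1])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for step in range(200):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(video), labels)
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(step, loss.item())
# For a 2-class problem, a loss stuck near ln(2), about 0.693, means the model
# is effectively guessing; on a single repeated batch it should drop sharply.
```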
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11362/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11361
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11361/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11361/comments
https://api.github.com/repos/huggingface/transformers/issues/11361/events
https://github.com/huggingface/transformers/pull/11361
864,053,846
MDExOlB1bGxSZXF1ZXN0NjIwMzUwNjQ5
11,361
Move old TF text classification script to legacy
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11361/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11361", "html_url": "https://github.com/huggingface/transformers/pull/11361", "diff_url": "https://github.com/huggingface/transformers/pull/11361.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11361.patch", "merged_at": 1619022978000 }
https://api.github.com/repos/huggingface/transformers/issues/11360
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11360/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11360/comments
https://api.github.com/repos/huggingface/transformers/issues/11360/events
https://github.com/huggingface/transformers/pull/11360
864,026,740
MDExOlB1bGxSZXF1ZXN0NjIwMzI4ODE1
11,360
Merge new TF example script
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[]
1,619
1,619
1,619
MEMBER
null
New branch for merging the new example because I'm scared of rebasing after that big of a change!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11360/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11360", "html_url": "https://github.com/huggingface/transformers/pull/11360", "diff_url": "https://github.com/huggingface/transformers/pull/11360.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11360.patch", "merged_at": 1619021095000 }
https://api.github.com/repos/huggingface/transformers/issues/11359
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11359/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11359/comments
https://api.github.com/repos/huggingface/transformers/issues/11359/events
https://github.com/huggingface/transformers/pull/11359
864,022,202
MDExOlB1bGxSZXF1ZXN0NjIwMzI1MTQ0
11,359
[testing doc] bring doc up to date
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,619
1,619
1,619
CONTRIBUTOR
null
Following up on https://github.com/huggingface/transformers/pull/11350, this edits `testing.rst` to update outdated information. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11359/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11359", "html_url": "https://github.com/huggingface/transformers/pull/11359", "diff_url": "https://github.com/huggingface/transformers/pull/11359.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11359.patch", "merged_at": 1619020260000 }
https://api.github.com/repos/huggingface/transformers/issues/11358
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11358/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11358/comments
https://api.github.com/repos/huggingface/transformers/issues/11358/events
https://github.com/huggingface/transformers/issues/11358
863,948,701
MDU6SXNzdWU4NjM5NDg3MDE=
11,358
Different results between `AlbertTokenizer` and `AlbertTokenizerFast` modules with a new `spiece.model` file
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @SaulLu, thanks a lot for the detailed issue. I managed to reproduce the issue by using another tokenizer on the hub, `codegram/calbert-tiny-uncased`, which has the same issue.\r\n\r\n@n1t0 helped me identify the issue, and we have a fix in #11367. Would you mind trying it out and let me know if it fixes your issue? The correct behavior is that of the slow tokenizer, there's an excess space in the fast tokenizer encoding.\r\n\r\nYou can either checkout the branch and install from source - or you can install the following in your env:\r\n```\r\npip install -U git+https://github.com/huggingface/transformers@fix-albert-converter\r\n```", "Hey @LysandreJik !\r\n\r\nThank you very much for your detailed answer! I tested the fix provided on #11367. The tokenization is now the same with the previous example `text=\"a\\n b\"` which became `['[CLS]', '▁a', '▁b', '[SEP]']` ! :+1: \r\n\r\nUnfortunately, I have a similar inconsistency that was not resolved with the following example: \r\n```\r\ntext=\"\\n\"\r\n```\r\n\r\nCell:\r\n```python\r\nalbert_tokenizer = AlbertTokenizer.from_pretrained(tokenizer_dir_path)\r\nprint(\"ids -> ids_token\",albert_tokenizer.convert_ids_to_tokens(albert_tokenizer.encode(text)))\r\n```\r\nOutput:\r\n```bash\r\nids -> ids_token ['[CLS]', '[SEP]']\r\n```\r\nCell:\r\n```python\r\nalbert_tokenizer_fast = AlbertTokenizerFast.from_pretrained(tokenizer_dir_path)\r\nprint(\"ids -> ids_token\",albert_tokenizer_fast.convert_ids_to_tokens(albert_tokenizer_fast.encode(text)))\r\n```\r\nOutput:\r\n```Bash\r\nids -> ids_token ['[CLS]', '▁', '[SEP]']\r\n\r\n```\r\nCell:\r\n```python\r\nsp = spm.SentencePieceProcessor(model_file=os.path.join(tokenizer_dir_path, \"spiece.model\"))\r\nprint(\"ids -> ids_token\", sp.id_to_piece(sp.encode(text)))\r\n```\r\nOutput:\r\n```bash\r\nids -> ids_token []\r\n```", "If it can help, I started to compare the tokenizer.json files between the one contained in `albert-base-v2` and the one obtained by running the command :\r\n```\r\nalbert_tokenizer_fast = AlbertTokenizerFast.from_pretrained(\"SaulLu/albert-bn-dev\")\r\nalbert_tokenizer_fast.save_pretrained(\"./albert-bn-dev-fast\")\r\n```\r\n\r\nAn extract of `albert-base-v2` `tokenizer.json`'s file is: \r\n```\r\n{\r\n \"version\": \"1.0\",\r\n \"truncation\": null,\r\n \"padding\": null,\r\n \"added_tokens\": [\r\n {\r\n \"id\": 0,\r\n \"special\": true,\r\n \"content\": \"<pad>\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 1,\r\n \"special\": true,\r\n \"content\": \"<unk>\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 2,\r\n \"special\": true,\r\n \"content\": \"[CLS]\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 3,\r\n \"special\": true,\r\n \"content\": \"[SEP]\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 4,\r\n \"special\": true,\r\n \"content\": \"[MASK]\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n }\r\n ],\r\n \"normalizer\": {\r\n \"type\": \"Sequence\",\r\n \"normalizers\": [\r\n { \"type\": \"Replace\", \"pattern\": { \"String\": \"``\" }, \"content\": \"\\\"\" },\r\n { \"type\": \"Replace\", \"pattern\": { \"String\": \"''\" }, \"content\": \"\\\"\" },\r\n { \"type\": \"NFKD\" },\r\n { \"type\": 
\"StripAccents\" },\r\n { \"type\": \"Lowercase\" },\r\n {\r\n \"type\": \"Precompiled\",\r\n \"precompiled_charsmap\": \"...\"\r\n }\r\n ]\r\n },\r\n \"pre_tokenizer\": {\r\n \"type\": \"Sequence\",\r\n \"pretokenizers\": [\r\n { \"type\": \"WhitespaceSplit\" },\r\n { \"type\": \"Metaspace\", \"replacement\": \"▁\", \"str_rep\": \"▁\", \"add_prefix_space\": true }\r\n ]\r\n },\r\n \"post_processor\": {\r\n \"type\": \"TemplateProcessing\",\r\n \"single\": [\r\n { \"SpecialToken\": { \"id\": \"[CLS]\", \"type_id\": 0 } },\r\n { \"Sequence\": { \"id\": \"A\", \"type_id\": 0 } },\r\n { \"SpecialToken\": { \"id\": \"[SEP]\", \"type_id\": 0 } }\r\n ],\r\n \"pair\": [\r\n { \"SpecialToken\": { \"id\": \"[CLS]\", \"type_id\": 0 } },\r\n { \"Sequence\": { \"id\": \"A\", \"type_id\": 0 } },\r\n { \"SpecialToken\": { \"id\": \"[SEP]\", \"type_id\": 0 } },\r\n { \"Sequence\": { \"id\": \"B\", \"type_id\": 1 } },\r\n { \"SpecialToken\": { \"id\": \"[SEP]\", \"type_id\": 1 } }\r\n ],\r\n \"special_tokens\": {\r\n \"[SEP]\": { \"id\": \"[SEP]\", \"ids\": [3], \"tokens\": [\"[SEP]\"] },\r\n \"[CLS]\": { \"id\": \"[CLS]\", \"ids\": [2], \"tokens\": [\"[CLS]\"] }\r\n }\r\n },\r\n \"decoder\": {\r\n \"type\": \"Metaspace\",\r\n \"replacement\": \"▁\",\r\n \"str_rep\": \"▁\",\r\n \"add_prefix_space\": true\r\n },\r\n \"model\": {\r\n \"unk_id\": 1,\r\n \"vocab\": [...]\r\n }\r\n}\r\n```\r\n\r\nAn extract of `albert-bn-dev-fast` `tokenizer.json`'s file is: \r\n```\r\n{\r\n \"version\": \"1.0\",\r\n \"truncation\": null,\r\n \"padding\": null,\r\n \"added_tokens\": [\r\n {\r\n \"id\": 0,\r\n \"special\": true,\r\n \"content\": \"<pad>\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 1,\r\n \"special\": true,\r\n \"content\": \"<unk>\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 2,\r\n \"special\": true,\r\n \"content\": \"[CLS]\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 3,\r\n \"special\": true,\r\n \"content\": \"[SEP]\",\r\n \"single_word\": false,\r\n \"lstrip\": false,\r\n \"rstrip\": false,\r\n \"normalized\": false\r\n },\r\n {\r\n \"id\": 4,\r\n \"special\": true,\r\n \"content\": \"[MASK]\",\r\n \"single_word\": false,\r\n \"lstrip\": true,\r\n \"rstrip\": false,\r\n \"normalized\": true\r\n }\r\n ],\r\n \"normalizer\": {\r\n \"type\": \"Sequence\",\r\n \"normalizers\": [\r\n { \"type\": \"Replace\", \"pattern\": { \"String\": \"``\" }, \"content\": \"\\\"\" },\r\n { \"type\": \"Replace\", \"pattern\": { \"String\": \"''\" }, \"content\": \"\\\"\" },\r\n { \"type\": \"NFKD\" },\r\n { \"type\": \"StripAccents\" },\r\n { \"type\": \"Lowercase\" },\r\n {\r\n \"type\": \"Precompiled\",\r\n \"precompiled_charsmap\": \"...\"},\r\n { \"type\": \"Replace\", \"pattern\": { \"Regex\": \" {2,}\" }, \"content\": \" \" }\r\n ]\r\n },\r\n \"pre_tokenizer\":{\"type\":\"Metaspace\",\"replacement\":\"▁\",\"add_prefix_space\":true},\r\n },\r\n \"post_processor\": {\r\n \"type\": \"TemplateProcessing\",\r\n \"single\": [\r\n { \"SpecialToken\": { \"id\": \"[CLS]\", \"type_id\": 0 } },\r\n { \"Sequence\": { \"id\": \"A\", \"type_id\": 0 } },\r\n { \"SpecialToken\": { \"id\": \"[SEP]\", \"type_id\": 0 } }\r\n ],\r\n \"pair\": [\r\n { \"SpecialToken\": { \"id\": \"[CLS]\", \"type_id\": 0 } },\r\n { \"Sequence\": { \"id\": \"A\", \"type_id\": 0 } },\r\n { \"SpecialToken\": { \"id\": 
\"[SEP]\", \"type_id\": 0 } },\r\n { \"Sequence\": { \"id\": \"B\", \"type_id\": 1 } },\r\n { \"SpecialToken\": { \"id\": \"[SEP]\", \"type_id\": 1 } }\r\n ],\r\n \"special_tokens\": {\r\n \"[CLS]\": { \"id\": \"[CLS]\", \"ids\": [2], \"tokens\": [\"[CLS]\"] },\r\n \"[SEP]\": { \"id\": \"[SEP]\", \"ids\": [3], \"tokens\": [\"[SEP]\"] }\r\n }\r\n },\r\n \"decoder\": { \"type\": \"Metaspace\", \"replacement\": \"▁\", \"add_prefix_space\": true },\r\n \"model\": {\r\n \"type\": \"Unigram\",\r\n \"unk_id\": 1,\r\n \"vocab\": [...]\r\n }\r\n}\r\n```\r\nBy replacing the content of `pre_tokenizer` key with `{\r\n \"type\": \"Sequence\",\r\n \"pretokenizers\": [\r\n { \"type\": \"WhitespaceSplit\" },\r\n { \"type\": \"Metaspace\", \"replacement\": \"▁\", \"str_rep\": \"▁\", \"add_prefix_space\": true }\r\n ]\r\n }` in the `albert-bn-dev-fast/tokenizer.json` file, then the Fast tokenizer returns the same results as the slow on the 2 discussed examples. :smiley: \r\n\r\n", "Hi @SaulLu, sorry for getting back so late on this. \r\n\r\nI think you've stumbled upon another difference between the slow and fast tokenizers which would need to be patched within `tokenizers` directly.\r\n\r\nIs it of great importance to your task? While it is an issue, I would argue it is quite low priority as such a use-case seems rare and the issue doesn't seem to have a huge impact. Please let me know if you think this should be bumped up. " ]
1,619
1,621
1,621
CONTRIBUTOR
null
Hello! I would like to ask your opinion about a tokenizer behavior. In a project, I have to train a new tokenizer to re-pretrain an Albert model. I don't know if I did something wrong (and if I did, I'd love to know!) but for the moment a text is not tokenized in the same way with `AlbertTokenizer` and `AlbertTokenizerFast`. Thanks a lot for your time in advance :smile: ## To reproduce Steps to reproduce the behavior: 1. Training a tokenizer with [sentencepiece library](https://github.com/google/sentencepiece). The resulting tokenizer is saved under the name `spiece.model`. I can share it if needed. 2. Assuming that only the `spiece.model` file is in the root. Run the following blocs of code: ```python tokenizer_dir_path = "." text = "a\n b" ``` Cell: ```python albert_tokenizer = AlbertTokenizer.from_pretrained(tokenizer_dir_path) print("ids", albert_tokenizer.encode(text)) print("ids -> ids_token",albert_tokenizer.convert_ids_to_tokens(albert_tokenizer.encode(text))) ``` Output: ```bash ids [2, 1842, 5132, 3] ids -> ids_token ['[CLS]', '▁a', '▁b', '[SEP]'] ``` Cell: ```python albert_tokenizer_fast = AlbertTokenizerFast.from_pretrained(tokenizer_dir_path) print("ids", albert_tokenizer_fast.encode(text)) print("ids -> ids_token",albert_tokenizer_fast.convert_ids_to_tokens(albert_tokenizer_fast.encode(text))) ``` Output: ```Bash ids [2, 1127, 266, 3157, 3] ids -> ids_token ['[CLS]', '▁a', '▁', '▁b', '[SEP]'] ``` Cell: ```python sp = spm.SentencePieceProcessor(model_file=os.path.join(tokenizer_dir_path, "spiece.model")) print("ids", sp.encode(text)) print("ids -> ids_token", sp.id_to_piece(sp.encode(text))) ``` Output: ```bash ids [1127, 3157] ids -> ids_token ['▁a', '▁b'] ``` Other variations: I also tried to instantiate the tokenizer like this `AlbertTokenizerFast(vocab_file=os.path.join(tokenizer_dir_path, "spiece.model"))`. ## Expected behavior I expected to have the same result with the modules: `AlbertTokenizer` and `AlbertTokenizerFast`. In particular, I did not expect "\n" to be tokenized by "_" in the case of `AlbertTokenizerFast`. ## Environment info - `transformers` version: 4.5.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. 
HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Albert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below)
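For anyone hitting similar discrepancies, a quick way to surface them is to diff the two tokenizers over a small set of whitespace edge cases. A minimal sketch, assuming the same local directory containing `spiece.model` as in the report:

```python
from transformers import AlbertTokenizer, AlbertTokenizerFast

path = "."  # directory containing spiece.model, as in the report
slow = AlbertTokenizer.from_pretrained(path)
fast = AlbertTokenizerFast.from_pretrained(path)

# Whitespace edge cases, including the two discussed in this issue.
for text in ["a\n b", "\n", "a  b", " a", "a\tb"]:
    s = slow.convert_ids_to_tokens(slow.encode(text))
    f = fast.convert_ids_to_tokens(fast.encode(text))
    marker = "" if s == f else "  <-- mismatch"
    print(repr(text), s, f, marker)
```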
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11358/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11357
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11357/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11357/comments
https://api.github.com/repos/huggingface/transformers/issues/11357/events
https://github.com/huggingface/transformers/issues/11357
863,886,453
MDU6SXNzdWU4NjM4ODY0NTM=
11,357
possible mistake in documentation
{ "login": "shyrma", "id": 30350590, "node_id": "MDQ6VXNlcjMwMzUwNTkw", "avatar_url": "https://avatars.githubusercontent.com/u/30350590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shyrma", "html_url": "https://github.com/shyrma", "followers_url": "https://api.github.com/users/shyrma/followers", "following_url": "https://api.github.com/users/shyrma/following{/other_user}", "gists_url": "https://api.github.com/users/shyrma/gists{/gist_id}", "starred_url": "https://api.github.com/users/shyrma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shyrma/subscriptions", "organizations_url": "https://api.github.com/users/shyrma/orgs", "repos_url": "https://api.github.com/users/shyrma/repos", "events_url": "https://api.github.com/users/shyrma/events{/privacy}", "received_events_url": "https://api.github.com/users/shyrma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @shyrma,\r\n\r\nyou are right I think - do you mind opening a PR to fix it? :-) ", "I'm not sure how to make a fix in an appropriate way, since both classes (for example) BartModel and BartForConditionalGeneration\r\nuse the same doc string BART_INPUTS_DOCSTRING `@add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING)`.\r\nBART_INPUTS_DOCSTRING contains mistake in respect to BartForConditionalGeneration only.", "Hi @shyrma \r\n\r\nThere were few other mistakes in the docs for almost all seq-2-seq models. I took care of it! Thanks a lot for pointing this out.", "Hi guys\r\nMistake is still present in documentation (forward method):\r\nhttps://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration\r\nhttps://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration\r\n", "HI @shyrma \r\n\r\nfor BART and mBART, this is actually correct. BART can be used for seq classification for these tasks it just uses the `input_ids` as `decoder_input_ids`. So the doc-string is right when it says `input_ids`.\r\n\r\nAlso, for T5 you are looking at the stable version doc, the changes are on master right now, and will be reflected in stable in next release. https://huggingface.co/transformers/master/model_doc/t5.html#t5forconditionalgeneration ", "I consider only BartForConditionalGeneration and T5ForConditionalGeneration.\r\n> Also, for T5 you are looking at the stable version doc, the changes are on master right now, and will be reflected in stable in next release\r\n\r\nGreat! And what about BartForConditionalGeneration?", "as I said above BART can use `input_ids` to create `decoder_input_ids` when `labels` are not present. So the docstring for BART is correct.", "Currently one can find following explanation of the parameter \"decoder_input_ids\" of BartForConditionalGeneration forward method:\r\n`decoder_input_ids - ... If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pretraining following the paper.`\r\nDo I understand correctly that you argue this is correct explanation ?", "Yes, that's the correct explanation and that's true for tasks like sequence classification and question answering as well, for these tasks BART uses the same `input_ids` as `decoder_input_ids`.", "Hmm, I'm not sure. And corresponding code tells that it is not true:\r\n\r\n```\r\n if labels is not None:\r\n if decoder_input_ids is None:\r\n decoder_input_ids = shift_tokens_right(\r\n labels, self.config.pad_token_id, self.config.decoder_start_token_id\r\n )\r\n```", "That's correct, that code prepares `decoder_input_ids` from `lables`, when `labels` are not `None, but when they are `None`, `input_ids` are used.\r\n\r\nhttps://github.com/huggingface/transformers/blob/8d43c71a1ca3ad322cc45008eb66a5611f1e017e/src/transformers/models/bart/modeling_bart.py#L1147-L1152", "I'm sorry I meant this piece of code (dealing with BartForConditionalGeneration, not with BartModel)\r\nhttps://github.com/huggingface/transformers/blob/8d43c71a1ca3ad322cc45008eb66a5611f1e017e/src/transformers/models/bart/modeling_bart.py#L1283-L1287\r\nAnd looks like explanation in docs should be following:\r\n`decoder_input_ids - ... If no decoder_input_ids is provided, the model will create this tensor by shifting the labels to the right for denoising pretraining following the paper.`\r\nthat is replace \"inputs_ids\" by \"labels\"" ]
1,619
1,619
1,619
NONE
null
Looking at the description of the parameter "decoder_input_ids" in the "forward" method of BartForConditionalGeneration/T5ForConditionalGeneration, I see the following: BartForConditionalGeneration: decoder_input_ids - ... For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the !!INPUT_IDS!! to the right for denoising pretraining following the paper. T5ForConditionalGeneration: decoder_input_ids - ... To know more on how to prepare decoder_input_ids for pretraining take a look at T5 Training. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of !!INPUT_IDS!!. It looks like there should be LABELS instead of INPUT_IDS. Thanks, @patrickvonplaten, @patil-suraj
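The behaviour under discussion is easy to check directly: when `labels` are provided and `decoder_input_ids` are not, `BartForConditionalGeneration` derives the latter by shifting the labels, not the `input_ids`. A minimal sketch using the public helper from `modeling_bart`; the token values are illustrative, and 1 and 2 are BART's default pad and decoder-start ids.

```python
import torch
from transformers.models.bart.modeling_bart import shift_tokens_right

labels = torch.tensor([[100, 101, 102, 2]])
decoder_input_ids = shift_tokens_right(
    labels, pad_token_id=1, decoder_start_token_id=2
)
print(decoder_input_ids)  # tensor([[  2, 100, 101, 102]]) -- shifted labels, not input_ids
```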
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11357/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11356
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11356/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11356/comments
https://api.github.com/repos/huggingface/transformers/issues/11356/events
https://github.com/huggingface/transformers/issues/11356
863,750,965
MDU6SXNzdWU4NjM3NTA5NjU=
11,356
Why https://github.com/huggingface/transformers/tree/master/examples/pplm
{ "login": "27182812", "id": 33630730, "node_id": "MDQ6VXNlcjMzNjMwNzMw", "avatar_url": "https://avatars.githubusercontent.com/u/33630730?v=4", "gravatar_id": "", "url": "https://api.github.com/users/27182812", "html_url": "https://github.com/27182812", "followers_url": "https://api.github.com/users/27182812/followers", "following_url": "https://api.github.com/users/27182812/following{/other_user}", "gists_url": "https://api.github.com/users/27182812/gists{/gist_id}", "starred_url": "https://api.github.com/users/27182812/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/27182812/subscriptions", "organizations_url": "https://api.github.com/users/27182812/orgs", "repos_url": "https://api.github.com/users/27182812/repos", "events_url": "https://api.github.com/users/27182812/events{/privacy}", "received_events_url": "https://api.github.com/users/27182812/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The html doesn‘t work. I can't see \"https://github.com/huggingface/transformers/tree/master/examples/pplm\".", "Did something went wrong? A lot of the scripts are gone like there used to be examples for clm and plm in transformers/examples/language-modeling but now there's only run_mlm_noflax.py", "Soga,haha. Thank you!", "Apparently they are moving stuff around. So what was originally `transformers/examples/language-modeling` has become `transformer/examples/pytorch/language-modeling` now. So maybe you can look around to find if they've moved what you're looking for into some other folder.", "I got it. Thank you very much!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,619
1,622
1,622
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11356/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11355
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11355/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11355/comments
https://api.github.com/repos/huggingface/transformers/issues/11355/events
https://github.com/huggingface/transformers/pull/11355
863,749,985
MDExOlB1bGxSZXF1ZXN0NjIwMTAwNjkw
11,355
Fix token_type_ids error for big_bird model.
{ "login": "wlhgtc", "id": 16603773, "node_id": "MDQ6VXNlcjE2NjAzNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wlhgtc", "html_url": "https://github.com/wlhgtc", "followers_url": "https://api.github.com/users/wlhgtc/followers", "following_url": "https://api.github.com/users/wlhgtc/following{/other_user}", "gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}", "starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions", "organizations_url": "https://api.github.com/users/wlhgtc/orgs", "repos_url": "https://api.github.com/users/wlhgtc/repos", "events_url": "https://api.github.com/users/wlhgtc/events{/privacy}", "received_events_url": "https://api.github.com/users/wlhgtc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger ", "I think the BigBird QA model actually uses 16 token type ids (https://huggingface.co/google/bigbird-base-trivia-itc/blob/main/config.json), but this is an exception I think and the default case is 2 token type ids => so this looks good to me.\r\n\r\n @vasudevgupta7 what do you think?", "Yes, both base & large pre-trained checkpoints accept 2 token type ids while only trivia-qa checkpoint accepts 16 token type ids. So, this should be good." ]
1,619
1,619
1,619
CONTRIBUTOR
null
I ran the following code, but got strange output: the `token_type_ids` is shorter than `input_ids`: ```python from transformers import AutoTokenizer def demo(): model_name = "./resources/bigbird-roberta-base" tokenizer = AutoTokenizer.from_pretrained(model_name) text = 'With power of science ,' max_length = 10 encoded_tokens = tokenizer.encode_plus( text=text, add_special_tokens=True, max_length=max_length, truncation=True if max_length is not None else False, return_tensors=None, return_offsets_mapping=tokenizer.is_fast, return_attention_mask=False, return_token_type_ids=True, return_special_tokens_mask=True, ) print(encoded_tokens) ``` ![CleanShot 2021-04-21 at 19 03 13@2x](https://user-images.githubusercontent.com/16603773/115543730-4b98be80-a2d4-11eb-8830-c1a2a20f4d84.png) It seems the tokenizer is missing the `create_token_type_ids_from_sequences` method, so it runs the default code [here](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/tokenization_utils_base.py#L2665). So I copied the method from BERT to fix it. I know the big_bird model may not need `token_type_ids`, but we also have to make sure it returns the right result if we set `return_token_type_ids = True`.
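For reference, a minimal sketch of the BERT-style method the fix copies over (assuming the tokenizer exposes `sep_token_id` and `cls_token_id`, as the slow BigBird tokenizer does):

```python
from typing import List, Optional

def create_token_type_ids_from_sequences(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """Return 0s for the first segment (incl. [CLS]/[SEP]) and 1s for the second."""
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
```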
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11355/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11355", "html_url": "https://github.com/huggingface/transformers/pull/11355", "diff_url": "https://github.com/huggingface/transformers/pull/11355.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11355.patch", "merged_at": 1619026677000 }
https://api.github.com/repos/huggingface/transformers/issues/11354
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11354/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11354/comments
https://api.github.com/repos/huggingface/transformers/issues/11354/events
https://github.com/huggingface/transformers/issues/11354
863,678,081
MDU6SXNzdWU4NjM2NzgwODE=
11,354
Question-answering pipeline failing with Nonetype exception when selecting spans with tokens outside of the context
{ "login": "psorianom", "id": 1085210, "node_id": "MDQ6VXNlcjEwODUyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1085210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/psorianom", "html_url": "https://github.com/psorianom", "followers_url": "https://api.github.com/users/psorianom/followers", "following_url": "https://api.github.com/users/psorianom/following{/other_user}", "gists_url": "https://api.github.com/users/psorianom/gists{/gist_id}", "starred_url": "https://api.github.com/users/psorianom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/psorianom/subscriptions", "organizations_url": "https://api.github.com/users/psorianom/orgs", "repos_url": "https://api.github.com/users/psorianom/repos", "events_url": "https://api.github.com/users/psorianom/events{/privacy}", "received_events_url": "https://api.github.com/users/psorianom/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, thank you for opening such a detailed issue. I know @Narsil has some experience with the question answering pipeline and has probably been confronted to that issue in the past.\r\n\r\nNicolas, what do you think of the fix proposed above?\r\n\r\nEither way we should work on a clearer error message.", "The proposed fix seems reasonable.\r\n\r\nA few things:\r\n\r\n- `decode` is already supposed to do the proper filtering, so we should probably move the logic in the function and document the changes in the docstring.\r\nIMO it makes sense to return less than `topk` if there are not enough options available within `context`.\r\n- Also we should probably add a tests for this use case (the low hanging fruit is adding the exact excerpt as a `slow` test).\r\n- The `numpy` doc recommends using `isin` instead of `in1d` for new code: https://numpy.org/doc/stable/reference/generated/numpy.in1d.html", "Hi @LysandreJik and @Narsil, thank you for your quick answers.\r\nI will modify my solution as suggested. I will have to change my solution's logic because in `decode` we are not aware of the `undesired_tokens`. Maybe I could just add it as parameter or maybe some masking over the zero-valued scores would be preferable ?\r\nI will also look into the tests.\r\n\r\n", "I think adding it as an argument is fine. Maybe let's make it optional to keep backward compatibility though, @LysandreJik ?", "Sounds good! ", "Great! I am working on this. I will make a PR as soon as I can. " ]
1,618
1,620
1,620
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: '4.6.0.dev0' - Platform: Linux Mint 20 - Python version: 3.7.10 - PyTorch version (GPU?): GPU - Tensorflow version (GPU?): NA - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik ## Information Model I am using (Bert, XLNet ...): camembert (specifically [etalab-ia/camembert-base-squadFR-fquad-piaf](https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf)) The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: * [x] my own task or dataset: Question Answering with own SQuAD-like dataset ## To reproduce When using a `question-answering` pipeline, if the context is too small (or if the model can't find multiple candidates), the produced scores will be zero and thus when sorting and filtering for `topk > 1`, we may return random indices of zero score values which correspond to tokens that **are not** in the context, but in the question. This sorting and index returning happens [here](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/pipelines/question_answering.py#L406). Asking for an index that does not exist in the context returns a `None` down the line (in function `enc.word_to_chars()` [here](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/pipelines/question_answering.py#L376)). This bug may be related to this issue https://github.com/huggingface/transformers/issues/9843. 
This sequence of events ultimately produces this exception: ``` Traceback (most recent call last): File "/home/pavel/.config/JetBrains/PyCharmCE2021.1/scratches/bug_transf.py", line 25, in <module> print(nlp({'question': questions[0], 'context': text}, topk=20, handle_impossible_answer=True, max_seq_len=256, doc_stride=128)) File "/home/pavel/miniconda3/envs/piaf-ml/lib/python3.7/site-packages/transformers/pipelines.py", line 1968, in __call__ for s, e, score in zip(starts, ends, scores) File "/home/pavel/miniconda3/envs/piaf-ml/lib/python3.7/site-packages/transformers/pipelines.py", line 1968, in <listcomp> for s, e, score in zip(starts, ends, scores) TypeError: 'NoneType' object cannot be interpreted as an integer ``` ## Full Context We are building a Retriever (ES with bm25) + Reader (QA with the above-mentioned model) search engine with the haystack library. In this setting, we test with different lengths for the contexts where the QA model will find the answer. We are also testing for different values of `topk`. As an example, if I have a 1001-word context and I set the max length to 1000, I will split the document into two sub-documents, one with the first 1000 words and the other with the last word. Thus my second sub-document will be very small. These kinds of small documents will be passed to the transformers QA pipeline, which will usually raise the above exception when `topk` is greater than one. Steps to reproduce the behavior: ```python from transformers import pipeline nlp = pipeline('question-answering', model='etalab-ia/camembert-base-squadFR-fquad-piaf', tokenizer='etalab-ia/camembert-base-squadFR-fquad-piaf') question = "Comment bénéficier du billet de congé annuel de la SNCF à tarif réduit ?" context = "perle" result = nlp({'question': question, 'context': context}, topk=20, handle_impossible_answer=True, max_seq_len=256, doc_stride=128) print(result) ``` ## Proposed Solution Given that in `self.decode` we return the indices of the context tokens to create the answers, we could re-filter them to make sure that we only use context-token indices to generate the spans later on. Like this (replacing this [line](https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/pipelines/question_answering.py#L344)): ```python starts, ends, scores = self.decode(start_, end_, kwargs["topk"], kwargs["max_answer_len"]) desired_spans = np.in1d(starts, undesired_tokens.nonzero()) & np.in1d(ends, undesired_tokens.nonzero()) starts = starts[desired_spans] ends = ends[desired_spans] scores = scores[desired_spans] ``` I have a [branch](https://github.com/psorianom/transformers/blob/e96afad34bc872b4fc9318d45a551e0c33f3de8c/src/transformers/pipelines/question_answering.py#L346) ready to be submitted as a PR if you agree with this solution. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I would like to get an answer with valid spans even if there are fewer of them than the requested `topk` parameter. <!-- A clear and concise description of what you would expect to happen. -->
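As a standalone illustration of the filtering step above, using `np.isin` (which the reviewers later preferred over `np.in1d`); all values here are toy data:

```python
import numpy as np

# Toy data: despite its name, undesired_tokens is nonzero exactly at context positions.
undesired_tokens = np.array([0, 0, 0, 1, 1, 1, 1, 1, 0, 0])
starts = np.array([3, 1, 5])   # candidate span starts
ends = np.array([4, 2, 7])     # candidate span ends
scores = np.array([0.9, 0.0, 0.6])

context_idx = undesired_tokens.nonzero()[0]
desired_spans = np.isin(starts, context_idx) & np.isin(ends, context_idx)
print(starts[desired_spans], ends[desired_spans], scores[desired_spans])
# [3 5] [4 7] [0.9 0.6]  -> the zero-score span that falls in the question is dropped
```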
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11354/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11353
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11353/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11353/comments
https://api.github.com/repos/huggingface/transformers/issues/11353/events
https://github.com/huggingface/transformers/pull/11353
863,445,586
MDExOlB1bGxSZXF1ZXN0NjE5ODUyMDA5
11,353
T5 Gradient Checkpointing
{ "login": "ceshine", "id": 674501, "node_id": "MDQ6VXNlcjY3NDUwMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/674501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ceshine", "html_url": "https://github.com/ceshine", "followers_url": "https://api.github.com/users/ceshine/followers", "following_url": "https://api.github.com/users/ceshine/following{/other_user}", "gists_url": "https://api.github.com/users/ceshine/gists{/gist_id}", "starred_url": "https://api.github.com/users/ceshine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceshine/subscriptions", "organizations_url": "https://api.github.com/users/ceshine/orgs", "repos_url": "https://api.github.com/users/ceshine/repos", "events_url": "https://api.github.com/users/ceshine/events{/privacy}", "received_events_url": "https://api.github.com/users/ceshine/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,618
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Partially fixes #6564. This is inspired by @xFinal's workaround. However, instead of modifying PyTorch's implementation of gradient checkpointing, I only modify `T5Block` here and replace the None value with a dummy tensor that requires a gradient. ~Gradient checkpointing is only enabled for the encoder. Some of the outputs of the decoder don't require gradients and will cause problems. PyTorch 1.8.0 has fixed this (see note 1 below).~ Additional notes: 1. `requires_grad = True` for the dummy tensor is no longer required since PyTorch 1.8.0 ([From this PR](https://github.com/pytorch/pytorch/pull/45934)). 2. None as a return value is allowed since [this PyTorch PR](https://github.com/pytorch/pytorch/pull/52422). It has not been released yet (the latest release is 1.8.1 at the time of writing). We won't even need the dummy tensor after that. 3. I tested this code locally with PyTorch 1.7.1. 4. I did not write any additional test because it seems that [test_training_gradient_checkpointing](https://github.com/huggingface/transformers/blob/81009b7a5c5cb183a9275c15bf347bdc988b02c4/tests/test_modeling_common.py#L242) in `ModelTesterMixin` already covers it. ~EDIT: I forgot this PR only covers the encoder, not the decoder. The description has been updated to reflect this fact.~ EDIT: This PR covers the decoder now. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). (Updated docstring for T5Config.) - [ ] Did you write any new necessary tests? (Already covered by existing tests.) ## Who can review? @patrickvonplaten (T5) @sgugger (Documentation) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
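To illustrate the dummy-tensor workaround in isolation, here is a minimal sketch using `torch.utils.checkpoint` with a toy module standing in for `T5Block` (the module and shapes are assumptions, not the actual T5 code); on PyTorch versions before 1.8.0, the `requires_grad=True` on the dummy is what keeps the checkpointed backward pass working:

```python
import torch
from torch.utils.checkpoint import checkpoint

class ToyBlock(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, hidden_states, optional_arg):
        # A real T5Block would return None for some optional outputs; returning
        # the dummy tensor instead avoids checkpoint's "None output" limitation.
        return self.linear(hidden_states), optional_arg

block = ToyBlock()
x = torch.randn(2, 8, requires_grad=True)
dummy = torch.zeros(1, requires_grad=True)  # stands in for a None argument/output
out, _ = checkpoint(block, x, dummy)
out.sum().backward()
print(x.grad.shape)  # torch.Size([2, 8])
```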
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11353/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11353/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11353", "html_url": "https://github.com/huggingface/transformers/pull/11353", "diff_url": "https://github.com/huggingface/transformers/pull/11353.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11353.patch", "merged_at": 1619772235000 }
https://api.github.com/repos/huggingface/transformers/issues/11352
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11352/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11352/comments
https://api.github.com/repos/huggingface/transformers/issues/11352/events
https://github.com/huggingface/transformers/pull/11352
863,423,680
MDExOlB1bGxSZXF1ZXN0NjE5ODM0MjE1
11,352
[deepspeed] fix resume from checkpoint
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed. I removed it during merge. Thank you for the reminder, @sgugger " ]
1,618
1,619
1,619
CONTRIBUTOR
null
This PR fixes a bug that most likely somehow got exposed (not caused) by https://github.com/huggingface/transformers/pull/11318 - surprisingly the same test worked just fine before that other PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11352/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11352", "html_url": "https://github.com/huggingface/transformers/pull/11352", "diff_url": "https://github.com/huggingface/transformers/pull/11352.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11352.patch", "merged_at": 1619016496000 }
https://api.github.com/repos/huggingface/transformers/issues/11351
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11351/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11351/comments
https://api.github.com/repos/huggingface/transformers/issues/11351/events
https://github.com/huggingface/transformers/issues/11351
863,362,363
MDU6SXNzdWU4NjMzNjIzNjM=
11,351
fine tuning encoder decoder for custom language translation
{ "login": "YanSoares", "id": 35378133, "node_id": "MDQ6VXNlcjM1Mzc4MTMz", "avatar_url": "https://avatars.githubusercontent.com/u/35378133?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YanSoares", "html_url": "https://github.com/YanSoares", "followers_url": "https://api.github.com/users/YanSoares/followers", "following_url": "https://api.github.com/users/YanSoares/following{/other_user}", "gists_url": "https://api.github.com/users/YanSoares/gists{/gist_id}", "starred_url": "https://api.github.com/users/YanSoares/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YanSoares/subscriptions", "organizations_url": "https://api.github.com/users/YanSoares/orgs", "repos_url": "https://api.github.com/users/YanSoares/repos", "events_url": "https://api.github.com/users/YanSoares/events{/privacy}", "received_events_url": "https://api.github.com/users/YanSoares/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\n\r\nYou might also find [this notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) showing how to train an encoder-decoder model interesting.\r\n\r\nThanks!", "Hello\r\nThanks for reply.\r\nI opened a question in the forum, I am waiting for some help. \r\nThank you again!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,618
1,622
1,622
NONE
null
Hello everyone, I would like to know if it is possible to train a BERT2GPT model (or other models) for translation of customized languages (from scratch). I need to translate gloss signals from ASL to English. I have already looked for tutorials on the internet, but most of them are for the task of generating text; I cannot find tutorials for translating text. I read about EncoderDecoder, and I think it's possible; I just don't know how to make a notebook to perform training from scratch using the Hugging Face models. Could you help me? Has anyone done something like that?
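Not a full answer, but a minimal sketch of warm-starting an encoder-decoder for seq2seq training with the library (a bert2bert pair is used here purely as an example; the checkpoint names and the toy gloss/English strings are assumptions):

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"  # encoder checkpoint, decoder checkpoint
)
# These two config fields must be set before training an EncoderDecoderModel.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Toy gloss -> English pair; real training would iterate over a dataset.
inputs = tokenizer("BOOK ME GIVE", return_tensors="pt")
labels = tokenizer("give me the book", return_tensors="pt").input_ids
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
print(outputs.loss)
```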
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11351/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11350
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11350/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11350/comments
https://api.github.com/repos/huggingface/transformers/issues/11350/events
https://github.com/huggingface/transformers/pull/11350
863,362,064
MDExOlB1bGxSZXF1ZXN0NjE5Nzg1OTI4
11,350
Examples reorg
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`tests/deepspeed` and `tests/extended` need to be updated too following the rename. Thank you.\r\n\r\n```\r\npip install fairscale deepspeed\r\nRUN_SLOW=1 pytest tests/deepspeed tests/extended\r\n```", "Which tests are failing for you in `tests/extended`? Everything is passing on my side. For deepspeed I fixed the last path I had forgotten to update but it's impossible for me to run those tests as they all error out since deepspeed is not able to build properly on my setup. Got:\r\n\r\n```\r\n !! WARNING !! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\r\nYour compiler (c++) is not compatible with the compiler Pytorch was \r\n built with for this platform, which is g++ on linux. Please\r\nuse g++ to to compile your extension. Alternatively, you may\r\ncompile PyTorch from source using c++, and then you can also use\r\nc++ to compile your extension. \r\n\r\nSee https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help\r\nwith compiling PyTorch from source. \r\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! \r\n !! WARNING !! \r\n```\r\nand pretty much every test is a failure.", "All tests now pass (including `fairscale` and `apex`) - one deepspeed test fails but it's unrelated to this PR.\r\n\r\nIf you have trouble building deepspeed at run time, please consider pre-building it: https://huggingface.co/transformers/master/main_classes/trainer.html#installation and of course report an Issue to Deepspeed if you have a few minutes to do so.\r\n\r\nThank you for fixing this, @sgugger ", "Looks like we have some really old dead references too:\r\n```\r\ndocs/source/testing.rst:* :prefix_link:`test_seq2seq_examples_multi_gpu.py <examples/seq2seq/test_seq2seq_examples_multi_gpu.py>` - a\r\ndocs/source/testing.rst:* :prefix_link:`test_finetune_trainer.py <examples/seq2seq/test_finetune_trainer.py>` - a normal (non-PL) test\r\ndocs/source/testing.rst: CUDA_VISIBLE_DEVICES=\"0,1\" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py \\\r\ndocs/source/testing.rst: examples/seq2seq/test_seq2seq_examples_multi_gpu.py\r\ndocs/source/testing.rst: data_dir = self.examples_dir / \"seq2seq/test_data/wmt_en_ro\"\r\n```", "Yes indeed! Those are for the very old scripts (that's not the only place we have those). I wasn't sure how to replace those so if you could point me to the script you want to use instead, I can adapt. I think the whole paragraph may need a rewrite since it has been a long time.", "> Yes indeed! Those are for the very old scripts (that's not the only place we have those). I wasn't sure how to replace those so if you could point me to the script you want to use instead, I can adapt. I think the whole paragraph may need a rewrite since it has been a long time.\r\n\r\nFixed here https://github.com/huggingface/transformers/pull/11359" ]
1,618
1,619
1,619
COLLABORATOR
null
# What does this PR do? As discussed internally, this PR reorganizes the `examples` folder to make clean subfolders for PyTorch and TensorFlow. This way, each example can have its own requirements including the proper backend and there is no headache to determine who will be first in the README. It also splits the seq2seq folder in two: translation and summarization. Finally it moves the content of `examples/test_data` to `tests/fixtures/tests_samples` which is more adapted. In passing, it updates references to the examples that were moved.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11350/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11350", "html_url": "https://github.com/huggingface/transformers/pull/11350", "diff_url": "https://github.com/huggingface/transformers/pull/11350.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11350.patch", "merged_at": 1619017880000 }
https://api.github.com/repos/huggingface/transformers/issues/11349
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11349/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11349/comments
https://api.github.com/repos/huggingface/transformers/issues/11349/events
https://github.com/huggingface/transformers/pull/11349
863,259,255
MDExOlB1bGxSZXF1ZXN0NjE5NzAxMTA3
11,349
[Wav2Vec2] Fix special tokens for Wav2Vec2 tokenizer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,618
1,619
1,619
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes https://github.com/huggingface/transformers/issues/10942. Wav2Vec2's vocabulary can consist of multi-character tokens which should then nevertheless be treated as single atomic tokens when encoding/decoding. => This PR ensures such behavior and fixes the issue attached to this PR. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
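A small sketch of the behavior this PR ensures (the toy vocab below is an assumption; the point is only that a multi-character entry like "ch" stays atomic when decoding):

```python
import json, os, tempfile
from transformers import Wav2Vec2CTCTokenizer

# Toy vocab containing a multi-character token "ch" alongside single characters.
vocab = {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "h": 5, "i": 6, "ch": 7}
with tempfile.TemporaryDirectory() as tmp:
    vocab_path = os.path.join(tmp, "vocab.json")
    with open(vocab_path, "w") as f:
        json.dump(vocab, f)
    tokenizer = Wav2Vec2CTCTokenizer(vocab_path)
    print(tokenizer.decode([7, 6]))  # expected: "chi", with "ch" treated as one token
```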
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11349/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11349", "html_url": "https://github.com/huggingface/transformers/pull/11349", "diff_url": "https://github.com/huggingface/transformers/pull/11349.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11349.patch", "merged_at": 1619086989000 }
https://api.github.com/repos/huggingface/transformers/issues/11348
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11348/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11348/comments
https://api.github.com/repos/huggingface/transformers/issues/11348/events
https://github.com/huggingface/transformers/issues/11348
863,241,686
MDU6SXNzdWU4NjMyNDE2ODY=
11,348
'Tensor' object has no attribute 'size'
{ "login": "waqarkaleemkhan", "id": 37707339, "node_id": "MDQ6VXNlcjM3NzA3MzM5", "avatar_url": "https://avatars.githubusercontent.com/u/37707339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/waqarkaleemkhan", "html_url": "https://github.com/waqarkaleemkhan", "followers_url": "https://api.github.com/users/waqarkaleemkhan/followers", "following_url": "https://api.github.com/users/waqarkaleemkhan/following{/other_user}", "gists_url": "https://api.github.com/users/waqarkaleemkhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/waqarkaleemkhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/waqarkaleemkhan/subscriptions", "organizations_url": "https://api.github.com/users/waqarkaleemkhan/orgs", "repos_url": "https://api.github.com/users/waqarkaleemkhan/repos", "events_url": "https://api.github.com/users/waqarkaleemkhan/events{/privacy}", "received_events_url": "https://api.github.com/users/waqarkaleemkhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! You're using `AutoModel`, which is a pytorch model, with TensorFlow instructions. Change to `TFAutoModel`!", "Hi @LysandreJik Thanks for your response when I try to import TFAutoModel from TensorFlow it gives an error that cannot import name 'TFAutomodel' from 'transformers' (unknown location)\r\nmy environment \r\npython =3.7\r\nTensorFlow= 2.0\r\ncan you please guide me further what to do", "What's your `transformers` version? Can you install a more recent version of `tensorflow` and see if it fixes your issue?", "I have updated my TensorFlow and transformer but still I have the same issue ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,618
1,622
1,622
NONE
null
I am trying to implement a transformer for language classification using TensorFlow. Below is the model code I used, but it throws the error `'Tensor' object has no attribute 'size'`; please help. For the notebook, you can visit the link below: https://github.com/waqarkaleemkhan/Transformer_for_Language_classification/blob/master/Transformer_for_language_classification/model.ipynb from transformers import AutoModel bert = AutoModel.from_pretrained('bert-base-cased') input_ids = tf.keras.layers.Input(shape=(SEQ_LEN,), name='input_ids', dtype='int32') mask = tf.keras.layers.Input(shape=(SEQ_LEN,), name='attention_mask', dtype='int32') embeddings = bert(input_ids, attention_mask=mask) X = tf.keras.layers.LSTM(64)(embeddings) X = tf.keras.layers.BatchNormalization()(X) X = tf.keras.layers.Dense(64, activation='relu')(X) X = tf.keras.layers.Dropout(0.1)(X) y = tf.keras.layers.Dense(3, activation='softmax', name='outputs')(X)
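Following the fix suggested in the comments (use the TensorFlow class, not the PyTorch one), a sketch of the corrected snippet; `SEQ_LEN` is an assumed value and the head layers are kept from the original:

```python
import tensorflow as tf
from transformers import TFAutoModel

SEQ_LEN = 128  # assumption: match whatever length you tokenize to

bert = TFAutoModel.from_pretrained("bert-base-cased")
input_ids = tf.keras.layers.Input(shape=(SEQ_LEN,), name="input_ids", dtype="int32")
mask = tf.keras.layers.Input(shape=(SEQ_LEN,), name="attention_mask", dtype="int32")
# A TF model returns TF tensors that Keras layers can consume; index 0 is last_hidden_state.
embeddings = bert(input_ids, attention_mask=mask)[0]
x = tf.keras.layers.LSTM(64)(embeddings)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dense(64, activation="relu")(x)
x = tf.keras.layers.Dropout(0.1)(x)
y = tf.keras.layers.Dense(3, activation="softmax", name="outputs")(x)
model = tf.keras.Model(inputs=[input_ids, mask], outputs=y)
model.summary()
```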
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11348/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11347/comments
https://api.github.com/repos/huggingface/transformers/issues/11347/events
https://github.com/huggingface/transformers/pull/11347
863,217,793
MDExOlB1bGxSZXF1ZXN0NjE5NjY1NzA2
11,347
Extract metric_key_prefix during NotebookProgressCallback.on_evaluate
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Mmm, looks like there is a problem in the tests though.", "Yes, part of the problem is coming from `CallbackHandler.on_evaluate` which does not have a `kwargs` in the signature: https://github.com/huggingface/transformers/blob/f1b938fda81d4b9e8ab435cb7f37f71c9b7cbb1e/src/transformers/trainer_callback.py#L361\r\n\r\nAdding `kwargs` there seems to work, so my question is whether we should also add `kwargs` to the other class functions (e.g. `on_train_begin` etc)? Since `CallbackHandler` is a subclass of `TrainerCallback`, this would preserve the function signatures in the derived class", "This would be a breaking change for all users that have implemented their custom `TrainerCallback`. Maybe we can just not try to pass the `eval_prefix` and just look for anything that is `xxx_loss` in the metrics dictionary, then `xxx` is the `eval_prefix`?", "Ah good point. I've followed your suggestion instead 😃 ", "It seems the torch tests timed out (not sure how my changes could induce that). Would it be possible to rerun the CI?" ]
1,618
1,619
1,619
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR upgrades `NotebookProgressCallback` to detect `metric_key_prefix` when `on_evaluate` is called. Useful when users override `Trainer.evaluate` and pick a non-standard prefix for train / eval / test. Forum link where this topic was discussed: https://discuss.huggingface.co/t/logging-training-accuracy-using-trainer-class/5524?u=lewtun ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
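For context, the approach settled on in review (inferring the prefix from the metrics dict rather than passing it through callback signatures) can be sketched like this; the dict below is toy data, the real logic lives in `NotebookProgressCallback.on_evaluate`:

```python
# Infer metric_key_prefix from the "<prefix>_loss" key instead of a new argument.
metrics = {"custom_loss": 0.42, "custom_accuracy": 0.91, "epoch": 1.0}

metric_key_prefix = "eval"  # fallback if no *_loss key is present
for key in metrics:
    if key.endswith("_loss"):
        metric_key_prefix = key[: -len("_loss")]
        break
print(metric_key_prefix)  # "custom"
```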
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11347/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11347", "html_url": "https://github.com/huggingface/transformers/pull/11347", "diff_url": "https://github.com/huggingface/transformers/pull/11347.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11347.patch", "merged_at": 1619017929000 }
https://api.github.com/repos/huggingface/transformers/issues/11346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11346/comments
https://api.github.com/repos/huggingface/transformers/issues/11346/events
https://github.com/huggingface/transformers/pull/11346
863,144,483
MDExOlB1bGxSZXF1ZXN0NjE5NjAyNDEz
11,346
[contributing doc] explain/link to good first issue
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,618
1,619
1,619
CONTRIBUTOR
null
This PR expands the contributing doc to help users find `Good First Issue`/`Good Second Issue` issues. @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11346/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11346", "html_url": "https://github.com/huggingface/transformers/pull/11346", "diff_url": "https://github.com/huggingface/transformers/pull/11346.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11346.patch", "merged_at": 1619025011000 }
https://api.github.com/repos/huggingface/transformers/issues/11345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11345/comments
https://api.github.com/repos/huggingface/transformers/issues/11345/events
https://github.com/huggingface/transformers/issues/11345
863,134,929
MDU6SXNzdWU4NjMxMzQ5Mjk=
11,345
absolute embeddings in Deberta
{ "login": "ylwangy", "id": 31842494, "node_id": "MDQ6VXNlcjMxODQyNDk0", "avatar_url": "https://avatars.githubusercontent.com/u/31842494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylwangy", "html_url": "https://github.com/ylwangy", "followers_url": "https://api.github.com/users/ylwangy/followers", "following_url": "https://api.github.com/users/ylwangy/following{/other_user}", "gists_url": "https://api.github.com/users/ylwangy/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylwangy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylwangy/subscriptions", "organizations_url": "https://api.github.com/users/ylwangy/orgs", "repos_url": "https://api.github.com/users/ylwangy/repos", "events_url": "https://api.github.com/users/ylwangy/events{/privacy}", "received_events_url": "https://api.github.com/users/ylwangy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "> The paper says they add absolute position embeddings at the last layer, however, the models is still using the addition of position embeddings and word embeddings.\r\n> \r\n> ## In configuration_deberta.py\r\n> position_biased_input (:obj:`bool`, `optional`, defaults to :obj:`True`):\r\n> Whether add absolute position embedding to content embedding.\r\n> \r\n> default setting: position_biased_input=True\r\n\r\nIt's disabled by the model_config.json along with the model repository.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "The implementation of Deberta is actually not the same as the paper, the absolute embeddings should be added at the last two layers." ]
1,618
1,625
1,624
NONE
null
The paper says they add absolute position embeddings at the last layer; however, the model is still using the addition of position embeddings and word embeddings by default. In configuration_deberta.py ------------ position_biased_input (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether add absolute position embedding to content embedding. default setting: position_biased_input=True
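For anyone checking this, the flag can be inspected or toggled through the config; a quick sketch (whether a particular checkpoint disables it depends on the config.json it ships with):

```python
from transformers import DebertaConfig, DebertaModel

# Build a model that relies on relative position encoding only.
config = DebertaConfig(position_biased_input=False)
model = DebertaModel(config)
print(model.config.position_biased_input)  # False
```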
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11345/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11344/comments
https://api.github.com/repos/huggingface/transformers/issues/11344/events
https://github.com/huggingface/transformers/issues/11344
863,019,153
MDU6SXNzdWU4NjMwMTkxNTM=
11,344
[run_summarization.py] wrong dataset leads to CUDA errors
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This fails too:\r\n```\r\nCUDA_LAUNCH_BLOCKING=1 python examples/seq2seq/run_summarization.py \\\r\n--model_name_or_path google/pegasus-xsum --do_eval --dataset_name xsum --output_dir output_dir \\\r\n--per_device_eval_batch_size=16 --predict_with_generate --max_val_samples 20\r\n```\r\n\r\n```\r\n***** Running Evaluation *****\r\n Num examples = 20\r\n Batch size = 16\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: 
[174,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nTraceback (most recent call last):\r\n File \"examples/seq2seq/run_summarization.py\", line 591, in <module>\r\n main()\r\n File \"examples/seq2seq/run_summarization.py\", line 547, in main\r\n metrics = trainer.evaluate(\r\n File \"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer_seq2seq.py\", line 75, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer.py\", line 1853, in evaluate\r\n output = eval_loop(\r\n File \"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer.py\", line 2005, in evaluation_loop\r\n loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)\r\n File \"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/trainer_seq2seq.py\", line 167, in prediction_step\r\n generated_tokens = self.model.generate(\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File 
\"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/generation_utils.py\", line 931, in generate\r\n model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)\r\n File \"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/generation_utils.py\", line 413, in _prepare_encoder_decoder_kwargs_for_generation\r\n model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1015, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py\", line 721, in forward\r\n embed_pos = self.embed_positions(input_shape)\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1015, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py\", line 139, in forward\r\n return super().forward(positions)\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/sparse.py\", line 156, in forward\r\n return F.embedding(\r\n File \"/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/functional.py\", line 2037, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: CUDA error: device-side assert triggered\r\n```\r\n\r\n```\r\nCollecting environment information...\r\nPyTorch version: 1.9.0a0+git548765d\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.2 LTS (x86_64)\r\nGCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\r\nClang version: 10.0.0-4ubuntu1\r\nCMake version: version 3.16.3\r\n\r\nPython version: 3.8 (64-bit runtime)\r\nIs CUDA available: True\r\nCUDA runtime version: 11.2.152\r\nGPU models and configuration:\r\nGPU 0: GeForce GTX 1070 Ti\r\nGPU 1: GeForce RTX 3090\r\n```\r\n", "I'm not sure it's a dataset thing. I think there is something wrong inside the Pegasus model, there have been multiple issues with it not working with Trainer.", "Hmm, after updating `datasets` to the latest version the cmd line in OP started to work. But it crashes in the same way if I add `--max_train_samples 20 --max_val_samples 20`.\r\n", "Hi, do you know how to use GPU when running summarization.py? I have 2 GPUs on my computer, but it didn't use them... Thank you very much!", "@liubest, please kindly use https://discuss.huggingface.co/ if you run into troubles after reading [README.md](https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/examples/pytorch/summarization/README.md), which should cover most of the questions on this example usage.", "> @liubest, please kindly use https://discuss.huggingface.co/ if you run into troubles after reading [README.md](https://github.com/huggingface/transformers/blob/52166f672ed337934d90cc525c226d93209e0923/examples/pytorch/summarization/README.md), which should cover most of the questions on this example usage.\r\n\r\nThank you for your reply. I have one more question and it is not found in the forum. 
When using run_summarization.py, how to run transformer models like t5-small, facebook/bart-large-cnn without loading pre-trained weights? I only want to train their original model architecture without pre-trained model. Thank you very much!", "You will probably find dozens of tutorials if you use Google: please try [huggingface train model from scratch](https://www.google.com/search?channel=fs&q=huggingface+train+model+from+scratch).\r\n\r\nPlease let's not derail this issue by asking unrelated questions. If you still have a problem please start a new Issue. Thank you!", "I'm also interested in solving this problem. @stas00, let me know if I should look into it", "Yes, please, @patrickvonplaten - thank you!", "@stas00, I checked and the problem simply seems to be that `max_source_length` is too high. It's set to 1024 by default even though Pegasus can only handle `512`. So, the following command should just run fine:\r\n\r\n```bash\r\npython examples/pytorch/summarization/run_summarization.py --model_name_or_path google/pegasus-xsum --do_train \\\r\n--do_eval --dataset_name cnn_dailymail --dataset_config \"3.0.0\" \\\r\n--output_dir /tmp/tst-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=1 \\\r\n--overwrite_output_dir --predict_with_generate --max_source_length 512\r\n```", "By the way, errors like those `/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed` are in my experience very often out-of-index errors, and it helps to run the same code on CPU, which then gives a better error message", "> @stas00, I checked and the problem simply seems to be that `max_source_length` is too high. It's set to 1024 by default even though Pegasus can only handle `512`. So, the following command should just run fine:\r\n> \r\n> ```shell\r\n> python examples/pytorch/summarization/run_summarization.py --model_name_or_path google/pegasus-xsum --do_train \\\r\n> --do_eval --dataset_name cnn_dailymail --dataset_config \"3.0.0\" \\\r\n> --output_dir /tmp/tst-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=1 \\\r\n> --overwrite_output_dir --predict_with_generate --max_source_length 512\r\n> ```\r\n\r\nThank you for investigating this, @patrickvonplaten - could we programmatically defend against this mismatch?", "> By the way, errors like those `/workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [174,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed` are in my experience very often out-of-index errors, and it helps to run the same code on CPU, which then gives a better error message\r\n\r\nYes! so with `CUDA_VISIBLE_DEVICES=\"\"`\r\n\r\nwe should document this at https://huggingface.co/transformers/troubleshooting.html\r\n\r\nAlso `CUDA_LAUNCH_BLOCKING=1` is another important debug technique for gpu\r\n", "@stas00 , @patrickvonplaten , Pegasus actually uses SinusoidalPositionalEmbedding, so there is no seq length limit. We should resize the embedding if cur len is greater than the default len. That's what we do in FSMT and M2M100", "On the other hand Pegasus has only been trained on a max length of 512, so I'm not sure whether it's a good idea to \"silently\" extend the input to a length of 1024 since the model will probably produce garbage, or have you guys had different experiences @stas00 @patil-suraj ? 
\r\n\r\nThink I'd prefer to reduce max length automatically to model.config.max_position_embeddings and throw a warning", "That makes sense, but even though Pegasus is pre-trained with 512, they use different `max_position_embeddings` when fine-tuning\r\n\r\nfor example for the xsum model `max_position_embeddings` is 512 https://huggingface.co/google/pegasus-xsum/blob/main/config.json#L44\r\n\r\nand for cnn_dm, pubmed it is 1024 \r\nhttps://huggingface.co/google/pegasus-pubmed/blob/main/config.json#L38\r\nhttps://huggingface.co/google/pegasus-pubmed/blob/main/config.json#L38\r\n", "> Think I'd prefer to reduce max length automatically to model.config.max_position_embeddings and throw a warning\r\n\r\nThis is very likely to go unnoticed.\r\n\r\nWe misuse warnings too much; they are ok when you have 5 lines of output, but when you have 100s of those, the chances that the user will see it are close to 0. Especially when things seem to work, albeit with setting changes behind the scenes.\r\n\r\nI feel that @patil-suraj's suggestion of granting the user's wish is a better one, and if they get garbage then it's loud and clear that they did something wrong. Here, a warning about asking for a longer value than preset will work, as they are likely to search for the culprit.\r\n\r\nAnd in situations where we know what the user is asking for is surely not going to work, we should assert.", "Ok - good arguments! IMO we should only allow this resizing though for models that use sinusoidal position embeddings, a.k.a. position embeddings that have `.grad` set to False.\r\n\r\nIn terms of implementation, I'd suggest adding a general `resize_position_embeddings(self, max_position_embeddings)` to `PreTrainedModel` that throws a NotImplementedError and is then overwritten in Pegasus", "We should also overwrite the `config.max_position_embeddings` when doing so", "@patrickvonplaten, do you have some resources to come back so that we could complete this issue? It looks like it fell between the cracks. Thank you.", "Ok so the plan is to:\r\n\r\n1. Add a `resize_position_embeddings` to `PreTrainedModel` just like we do for the word embeddings\r\n2. `resize_position_embeddings` should probably log or warn depending on whether it's sinusoidal position embeddings or learned ones\r\n3. The function should overwrite `config.max_position_embeddings`\r\n\r\n=> Happy to open a PR for this one, but it would be great to first hear @LysandreJik and @sgugger's opinion on it as well", "Works for me!", "@sgugger, can you share your working code?", "No, I meant the plan suggested by @patrickvonplaten in the above message works for me." ]
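As a rough illustration of the plan sketched in the comments above, a resize for non-learned sinusoidal position embeddings could look like the following. This is only a sketch under the assumption that the table is a standard sinusoidal one; the helper names are hypothetical and not transformers' actual API:

```python
import math
import torch
import torch.nn as nn

def build_sinusoidal_table(num_positions: int, dim: int) -> torch.Tensor:
    """Fixed (non-learned) sinusoidal position table."""
    position = torch.arange(num_positions, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, dim, 2, dtype=torch.float) * (-math.log(10000.0) / dim))
    table = torch.zeros(num_positions, dim)
    table[:, 0::2] = torch.sin(position * div_term)
    table[:, 1::2] = torch.cos(position * div_term)
    return table

def resize_position_embeddings(old: nn.Embedding, new_num_positions: int) -> nn.Embedding:
    """Recompute the table at the new length; safe because nothing is learned,
    which is why the plan above would only log/warn (not fail) in this case."""
    new = nn.Embedding(new_num_positions, old.embedding_dim)
    new.weight = nn.Parameter(
        build_sinusoidal_table(new_num_positions, old.embedding_dim),
        requires_grad=False,
    )
    return new

emb = resize_position_embeddings(nn.Embedding(512, 64), 1024)
print(emb.num_embeddings)  # 1024; config.max_position_embeddings would be updated to match
```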
1,618
1,631
1,631
CONTRIBUTOR
null
Feeding `--dataset_name cnn_dailymail` to `--model_name_or_path google/pegasus-xsum` leads to lots of errors from pytorch - perhaps there is a way to detect that the dataset is inappropriate and give a nice relevant assert instead? You'd think that `--dataset_name cnn_dailymail` and `--dataset_name xsum` should be interchangeable... ``` python examples/seq2seq/run_summarization.py --model_name_or_path google/pegasus-xsum --do_train \ --do_eval --dataset_name cnn_dailymail --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization --per_device_train_batch_size=1 --per_device_eval_batch_size=1 \ --overwrite_output_dir --predict_with_generate [....] /workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [290,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [290,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [290,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. (crashes w/o traceback here) ``` If I run it on one gpu I get: ``` [...] /workspace/pytorch/aten/src/ATen/native/cuda/Indexing.cu:666: indexSelectLargeIndex: block: [138,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. return forward_call(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 763, in forward layer_outputs = encoder_layer( File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 323, in forward hidden_states, attn_weights, _ = self.self_attn( File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/mnt/nvme1/code/huggingface/transformers-gpt-neo-nan/src/transformers/models/pegasus/modeling_pegasus.py", line 190, in forward query_states = self.q_proj(hidden_states) * self.scaling File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl return forward_call(*input, **kwargs) File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward return F.linear(input, self.weight, self.bias) File "/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/functional.py", line 1860, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ``` Thanks. @sgugger, @patil-suraj
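One possible programmatic defense, sketched here under the assumption that comparing the tokenized length against the model config is enough to catch the mismatch (this is not code from the example script):

```python
from transformers import AutoConfig, AutoTokenizer

model_name = "google/pegasus-xsum"
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "a long cnn_dailymail-style article " * 100  # stand-in input
num_tokens = len(tokenizer(text)["input_ids"])
limit = config.max_position_embeddings  # 512 for pegasus-xsum

if num_tokens > limit:
    # Failing loudly here would replace the opaque device-side assert.
    raise ValueError(
        f"Input has {num_tokens} tokens but {model_name} only supports {limit}; "
        f"lower --max_source_length to {limit} or enable truncation."
    )
```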
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11344/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11343/comments
https://api.github.com/repos/huggingface/transformers/issues/11343/events
https://github.com/huggingface/transformers/pull/11343
862,967,857
MDExOlB1bGxSZXF1ZXN0NjE5NDUwMTEy
11,343
Update to use datasets remove_columns method
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,618
1,618
1,618
COLLABORATOR
null
# What does this PR do? This PR updates the code used in the `Trainer` to drop the columns not used by the model, to take advantage of the latest (well, not so latest, since it landed in datasets 1.4.0) `remove_columns` method. This has the advantage of not modifying the dataset in place (as was done before), so the user does not get unexpected changes in their original datasets. As a consequence, the little hack needed in the question answering examples is now unnecessary.
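A quick sketch of the behavior this relies on (assuming datasets >= 1.4.0):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1], "id": [1, 2]})

# remove_columns returns a *new* dataset; `ds` itself stays untouched,
# so the user's original dataset is never modified.
model_inputs = ds.remove_columns(["id"])

print(ds.column_names)            # ['text', 'label', 'id']
print(model_inputs.column_names)  # ['text', 'label']
```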
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11343/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11343", "html_url": "https://github.com/huggingface/transformers/pull/11343", "diff_url": "https://github.com/huggingface/transformers/pull/11343.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11343.patch", "merged_at": 1618942322000 }
https://api.github.com/repos/huggingface/transformers/issues/11342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11342/comments
https://api.github.com/repos/huggingface/transformers/issues/11342/events
https://github.com/huggingface/transformers/issues/11342
862,942,788
MDU6SXNzdWU4NjI5NDI3ODg=
11,342
mlflow parameter overflow when training a language adapter
{ "login": "JackyXiangcheng", "id": 40454951, "node_id": "MDQ6VXNlcjQwNDU0OTUx", "avatar_url": "https://avatars.githubusercontent.com/u/40454951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JackyXiangcheng", "html_url": "https://github.com/JackyXiangcheng", "followers_url": "https://api.github.com/users/JackyXiangcheng/followers", "following_url": "https://api.github.com/users/JackyXiangcheng/following{/other_user}", "gists_url": "https://api.github.com/users/JackyXiangcheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/JackyXiangcheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JackyXiangcheng/subscriptions", "organizations_url": "https://api.github.com/users/JackyXiangcheng/orgs", "repos_url": "https://api.github.com/users/JackyXiangcheng/repos", "events_url": "https://api.github.com/users/JackyXiangcheng/events{/privacy}", "received_events_url": "https://api.github.com/users/JackyXiangcheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,618
1,619
1,619
NONE
null
While training a German adapter, I ran into a problem that has been discussed a lot in the issues, but without a very clear solution. Traceback (most recent call last): File "/mnt/localdata/cao/run_de_modeling.py", line 512, in <module> main() File "/mnt/localdata/cao/run_de_modeling.py", line 476, in main trainer.train(model_path=model_path) File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/trainer.py", line 748, in train self.control = self.callback_handler.on_train_begin(self.args, self.state, self.control) File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/trainer_callback.py", line 335, in on_train_begin return self.call_event("on_train_begin", args, state, control) File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/trainer_callback.py", line 373, in call_event result = getattr(callback, event)( File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/integrations.py", line 502, in on_train_begin self.setup(args, state, model) File "/home/cao/miniconda3/lib/python3.8/site-packages/transformers/integrations.py", line 497, in setup mlflow.log_params(dict(combined_dict_items[i : i + MLflowCallback.MAX_LOG_SIZE])) File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/tracking/fluent.py", line 475, in log_params MlflowClient().log_batch(run_id=run_id, metrics=[], params=params_arr, tags=[]) File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/tracking/client.py", line 838, in log_batch self._tracking_client.log_batch(run_id, metrics, params, tags) File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/client.py", line 245, in log_batch self.store.log_batch(run_id=run_id, metrics=metrics, params=params, tags=tags) File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/store/tracking/file_store.py", line 852, in log_batch _validate_batch_log_data(metrics, params, tags) File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/utils/validation.py", line 232, in _validate_batch_log_data _validate_param(param.key, param.value) File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/utils/validation.py", line 112, in _validate_param _validate_length_limit("Param value", MAX_PARAM_VAL_LENGTH, value) File "/home/cao/miniconda3/lib/python3.8/site-packages/mlflow/utils/validation.py", line 180, in _validate_length_limit raise MlflowException( mlflow.exceptions.MlflowException: Param value '{'adapters': {'de': (text_lang, 'bb1c8efb82510bed')}, 'config_map': {text_lang: AdapterConfig(original_ln_before=True, original_ln_after=True, residual_before_ln=True, adapter_residual_before_ln=False, ln_before=False, ln_after=False, mh_adapter=Fals' had length 786, which exceeded length limit of 250 My command was: CUDA_VISIBLE_DEVICES="4" python3 /mnt/localdata/cao/run_de_modeling.py \ --output_dir=/mnt/localdata/cao/output_language_adapter_de/ \ --model_type=bert \ --model_name_or_path=bert-base-multilingual-cased \ --do_train \ --train_data_file=/mnt/localdata/cao/data_for_model/DE_train.txt \ --do_eval \ --eval_data_file=/mnt/localdata/cao/data_for_model/DE_valid.txt \ --mlm \ --language de \ --train_adapter \ --adapter_config pfeiffer \ --per_gpu_train_batch_size 3 \ --learning_rate 5e-5 \ --cache_dir /mnt/localdata/cao/de_cache_dir/
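One possible workaround, sketched on the assumption that clipping the offending values to MLflow's per-value limit (250 characters, per the traceback above) is acceptable. `truncate_params` is a hypothetical helper, not part of transformers:

```python
MAX_PARAM_VAL_LENGTH = 250  # MLflow's per-value limit, per the traceback above

def truncate_params(params: dict, limit: int = MAX_PARAM_VAL_LENGTH) -> dict:
    """Stringify and clip every value so mlflow.log_params() does not raise."""
    return {k: str(v)[:limit] for k, v in params.items()}

print(len(truncate_params({"adapters": "x" * 786})["adapters"]))  # 250
# In a custom callback one could then call, e.g.:
# mlflow.log_params(truncate_params(combined_dict))
```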
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11342/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11341/comments
https://api.github.com/repos/huggingface/transformers/issues/11341/events
https://github.com/huggingface/transformers/issues/11341
862,931,271
MDU6SXNzdWU4NjI5MzEyNzE=
11,341
Wrapping model.generate before exporting to TensorFlow SavedModel format
{ "login": "techsachinkr", "id": 32604730, "node_id": "MDQ6VXNlcjMyNjA0NzMw", "avatar_url": "https://avatars.githubusercontent.com/u/32604730?v=4", "gravatar_id": "", "url": "https://api.github.com/users/techsachinkr", "html_url": "https://github.com/techsachinkr", "followers_url": "https://api.github.com/users/techsachinkr/followers", "following_url": "https://api.github.com/users/techsachinkr/following{/other_user}", "gists_url": "https://api.github.com/users/techsachinkr/gists{/gist_id}", "starred_url": "https://api.github.com/users/techsachinkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/techsachinkr/subscriptions", "organizations_url": "https://api.github.com/users/techsachinkr/orgs", "repos_url": "https://api.github.com/users/techsachinkr/repos", "events_url": "https://api.github.com/users/techsachinkr/events{/privacy}", "received_events_url": "https://api.github.com/users/techsachinkr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,618
1,622
1,622
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> I have been using the BART model, and there I can use model.generate when working with checkpoints. However, model.generate is not exported when I export the model in the SavedModel ".pb" format. Is there a way to wrap model.generate while exporting the SavedModel, so that it still allows beam search and generation of summary_ids to produce a summary? ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> This will allow summary generation with the SavedModel format for BART-like models that use model.generate. Related to https://github.com/huggingface/transformers/issues/5443, which has been marked closed. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
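A sketch of the general direction: wrap generate in a tf.function with a fixed input signature and save that as the serving signature. Whether the generation loop actually traces into a graph depends on the model and library version — which is exactly what this request is about — so nothing below is a confirmed, supported export path:

```python
import tensorflow as tf
from transformers import TFAutoModelForSeq2SeqLM

model = TFAutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

class GenerateWrapper(tf.Module):
    """Wraps model.generate in a tf.function so it can be saved as a signature."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    @tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
    def serving(self, input_ids):
        # Beam search settings are baked in at trace time.
        summary_ids = self.model.generate(input_ids, num_beams=4, max_length=60)
        return {"summary_ids": summary_ids}

wrapper = GenerateWrapper(model)
tf.saved_model.save(wrapper, "bart_savedmodel",
                    signatures={"serving_default": wrapper.serving})
```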
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11341/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11340/comments
https://api.github.com/repos/huggingface/transformers/issues/11340/events
https://github.com/huggingface/transformers/pull/11340
862,927,762
MDExOlB1bGxSZXF1ZXN0NjE5NDE2MjMw
11,340
Remove boilerplate code
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
closed
false
null
[]
[]
1,618
1,619
1,619
MEMBER
null
# What does this PR do? In Flax, the logic of every model has to be built purely on a tree of `flax.linen.Module` classes so that the whole model, *e.g.* `FlaxBertForMaskedLMModule`, can automatically be recast as an explicit function. This allows for easy jitting, data parallelism, and model parallelism. However, this also means that no weight parameters can be stored in `FlaxBertForMaskedLMModule`, which is a bit problematic since we need to store the weights from `load_from_pretrained`. This forces us to have two Flax classes: - `FlaxBertForMaskedLMModule`, - `FlaxBertForMaskedLM`, where `FlaxBertForMaskedLM` takes care of loading/saving the weights and `FlaxBertForMaskedLMModule` defines the logic (function) of the model. This has led to a lot of boilerplate code since `FlaxBertModel`, `FlaxBertForMaskedLM`, `FlaxBertForPretraining`, ... are essentially all the same: - they take the same input and pass it to the corresponding `flax.linen.Module` forward function - they initialize a `FlaxPretrainedModel` with the correct module to inherit the loading/saving functionality - they make use of the same `init_weights` function. For BERT, the `__call__` functions take identical inputs across different classes. This PR assumes that this holds true for more or less all BERT-like models in Flax. However, there are some exceptions: - A `BertForCausalLM` (needed when we add `FlaxEncoderDecoderModel`) would have to overwrite the `__call__` method as it also takes `encoder_hidden_states` and `encoder_attention_mask` as inputs. - This design makes less sense for *e.g.* T5 since `T5Encoder` takes very different inputs from `T5ForConditionalGeneration`. Here the `__call__` and `init_weights` methods would then not be placed in `T5PreTrainedModel`. Overall, however, I think that removing this much boilerplate is cleaner and will allow us to implement the important models faster. What do you think @sgugger @LysandreJik @avital ? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
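For readers unfamiliar with the module/model split described in this PR, here is a toy version of the two-class pattern. These classes are illustrative stand-ins, not the actual FlaxBert code:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class FlaxToyModule(nn.Module):
    """Pure logic: a function of (params, inputs) that never stores weights."""
    hidden: int = 8

    @nn.compact
    def __call__(self, x):
        return nn.Dense(self.hidden)(x)

class FlaxToyModel:
    """Thin wrapper that owns the params (the loading/saving side) and
    delegates all computation to the module."""

    def __init__(self, rng, sample_input):
        self.module = FlaxToyModule()
        self.params = self.module.init(rng, sample_input)["params"]

    def __call__(self, x):
        return self.module.apply({"params": self.params}, x)

model = FlaxToyModel(jax.random.PRNGKey(0), jnp.ones((1, 4)))
print(model(jnp.ones((2, 4))).shape)  # (2, 8)
```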
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/11340", "html_url": "https://github.com/huggingface/transformers/pull/11340", "diff_url": "https://github.com/huggingface/transformers/pull/11340.diff", "patch_url": "https://github.com/huggingface/transformers/pull/11340.patch", "merged_at": 1619022879000 }
https://api.github.com/repos/huggingface/transformers/issues/11339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11339/comments
https://api.github.com/repos/huggingface/transformers/issues/11339/events
https://github.com/huggingface/transformers/issues/11339
862,854,036
MDU6SXNzdWU4NjI4NTQwMzY=
11,339
Perform max_input_tokens truncation with Summarization Pipeline
{ "login": "brandenchan", "id": 33759007, "node_id": "MDQ6VXNlcjMzNzU5MDA3", "avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brandenchan", "html_url": "https://github.com/brandenchan", "followers_url": "https://api.github.com/users/brandenchan/followers", "following_url": "https://api.github.com/users/brandenchan/following{/other_user}", "gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}", "starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions", "organizations_url": "https://api.github.com/users/brandenchan/orgs", "repos_url": "https://api.github.com/users/brandenchan/repos", "events_url": "https://api.github.com/users/brandenchan/events{/privacy}", "received_events_url": "https://api.github.com/users/brandenchan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think the argument you're looking for is `truncation`! Could you try passing `truncation=True` or `truncation=\"longest_first\"` to your pipeline call?", "Hi @LysandreJik this doesn't seem to be working for me. When I call \r\n```\r\nsummarizer = pipeline(\"summarization\", model=model, tokenizer=tokenizer, device=use_gpu, truncation=True)\r\n```\r\nI get \r\n```\r\n<ipython-input-8-315444041dc9> in <module>\r\n 5 \r\n 6 #Summarize\r\n----> 7 summarizer = TransformersSummarizer(model_name_or_path=\"google/pegasus-xsum\")\r\n 8 \r\n 9 p_summarizer = Pipeline()\r\n\r\n~/Code/haystack/haystack/summarizer/transformers.py in __init__(self, model_name_or_path, model_version, tokenizer, max_length, min_length, use_gpu, clean_up_tokenization_spaces, separator_for_single_summary, generate_single_summary)\r\n 87 tokenizer = model_name_or_path\r\n 88 model = AutoModelForSeq2SeqLM.from_pretrained(pretrained_model_name_or_path=model_name_or_path, revision=model_version)\r\n---> 89 self.summarizer = pipeline(\"summarization\", model=model, tokenizer=tokenizer, device=use_gpu, truncation=True)\r\n 90 self.max_length = max_length\r\n 91 self.min_length = min_length\r\n\r\n~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)\r\n 3307 break\r\n 3308 \r\n-> 3309 return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)\r\n\r\n~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py in __init__(self, *args, **kwargs)\r\n 2353 def __init__(self, *args, **kwargs):\r\n 2354 kwargs.update(task=\"summarization\")\r\n-> 2355 super().__init__(*args, **kwargs)\r\n 2356 \r\n 2357 self.check_model_type(\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'truncation'\r\n```\r\nand when I put the `truncation` argument into the call\r\n```\r\n summaries = self.summarizer(\r\n contexts,\r\n min_length=self.min_length,\r\n max_length=self.max_length,\r\n return_text=True,\r\n clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,\r\n truncation=True\r\n )\r\n```\r\nI get\r\n```\r\nTypeError Traceback (most recent call last)\r\n\r\n~/Code/haystack/haystack/pipeline.py in run(self, **kwargs)\r\n 121 logger.debug(f\"Running node `{node_id}` with input `{node_input}`\")\r\n--> 122 node_output, stream_id = self.graph.nodes[node_id][\"component\"].run(**node_input)\r\n 123 except Exception as e:\r\n\r\n~/Code/haystack/haystack/summarizer/base.py in run(self, documents, generate_single_summary, **kwargs)\r\n 36 if documents:\r\n---> 37 results[\"documents\"] = self.predict(documents=documents, generate_single_summary=generate_single_summary)\r\n 38 \r\n\r\n~/Code/haystack/haystack/summarizer/transformers.py in predict(self, documents, generate_single_summary)\r\n 131 clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,\r\n--> 132 truncation=True\r\n 133 )\r\n\r\n~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, return_tensors, return_text, clean_up_tokenization_spaces, *documents, **generate_kwargs)\r\n 2438 attention_mask=inputs[\"attention_mask\"],\r\n-> 2439 **generate_kwargs,\r\n 2440 )\r\n\r\n~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)\r\n 14 with self:\r\n---> 15 return func(*args, **kwargs)\r\n 16 return 
decorate_context\r\n\r\n~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, **model_kwargs)\r\n 502 # add encoder_outputs to model_kwargs\r\n--> 503 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)\r\n 504 \r\n\r\n~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)\r\n 85 }\r\n---> 86 model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)\r\n 87 return model_kwargs\r\n\r\n~/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n\r\nTypeError: forward() got an unexpected keyword argument 'truncation'\r\n\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nException Traceback (most recent call last)\r\n\r\n<ipython-input-4-315444041dc9> in <module>\r\n 10 p_summarizer.add_node(component=es_retriever, name=\"Retriever\", inputs=[\"Query\"])\r\n 11 p_summarizer.add_node(component=summarizer, name=\"Summarizer\", inputs=[\"Retriever\"])\r\n---> 12 res = p_summarizer.run(query=\"Who is the father of Arya Stark??\", top_k_retriever=10)\r\n 13 \r\n 14 pprint(res)\r\n\r\n~/Code/haystack/haystack/pipeline.py in run(self, **kwargs)\r\n 123 except Exception as e:\r\n 124 tb = traceback.format_exc()\r\n--> 125 raise Exception(f\"Exception while running node `{node_id}` with input `{node_input}`: {e}, full stack trace: {tb}\")\r\n 126 queue.pop(node_id)\r\n 127 next_nodes = self.get_next_nodes(node_id, stream_id)\r\n\r\nException: Exception while running node `Summarizer` with input `{'query': 'Who is the father of Arya Stark??', 'documents': [{'text': \"\\n===In the Riverlands===\\nThe Stark army reaches the Twins, a bridge stronghold controlled by Walder Frey, who agrees to allow the army to cross the river and to commit his troops in return for Robb and Arya Stark marrying two of his children.\\nTyrion Lannister suspects his father Tywin, who decides Tyrion and his barbarians will fight in the vanguard, wants him killed. As Tyrion, Bronn, and the prostitute Shae swap stories, Tyrion reveals he was married to a woman his father revealed was a prostitute, and made Tyrion watch as his guardsmen raped her.\\nAs a Stark force approaches, Tyrion is trampled in the rush and regains consciousness to find the battle over. Tywin discovers the Stark host was only 2,000 men, not the 20,000 he was led to expect.\\nRobb, having divided his forces, defeats Jaime Lannister's army with his remaining 18,000 men and captures Jaime.\", 'id': '824a2362-004b-4234-a2fe-d0b82fdca52f', 'score': 11.656843, 'probability': 0.8110895501528345, 'question': None, 'meta': {'name': '450_Baelor.txt'}, 'embedding': None}, {'text': '\\n===On the Kingsroad===\\nCity Watchmen search the caravan for Gendry but are turned away by Yoren. 
Gendry tells Arya Stark that he knows she is a girl, and she reveals she is actually Arya Stark after learning that her father met Gendry before he was executed.', 'id': 'a04e3059-a941-4aa1-96e4-da0429c1a617', 'score': 11.3836775, 'probability': 0.8058019827683869, 'question': None, 'meta': {'name': '224_The_Night_Lands.txt'}, 'embedding': None}, {'text': '\\n===\\'\\'A Game of Thrones\\'\\'===\\nSansa Stark begins the novel by being betrothed to Crown Prince Joffrey Baratheon, believing Joffrey to be a gallant prince. While Joffrey and Sansa are walking through the woods, Joffrey notices Arya sparring with the butcher\\'s boy, Mycah. A fight breaks out and Joffrey is attacked by Nymeria (Arya\\'s direwolf) after Joffrey threatens to hurt Arya. Sansa lies to King Robert about the circumstances of the fight in order to protect both Joffrey and her sister Arya. Since Arya ran off with her wolf to save it, Sansa\\'s wolf is killed instead, estranging the Stark daughters.\\nDuring the Tourney of the Hand to honour her father Lord Eddard Stark, Sansa Stark is enchanted by the knights performing in the event. At the request of his mother, Queen Cersei Lannister, Joffrey spends a portion of the tourney with Sansa, but near the end he commands his guard Sandor Clegane, better known as The Hound, to take her back to her quarters. Sandor explains how his older brother, Gregor, aka \"Mountain that Rides\" pushed his face into a brazier of hot coals, for playing with one of his wooden toys.\\nAfter Eddard discovers the truth of Joffrey\\'s paternity, he tells Sansa that they will be heading back to Winterfell. Sansa is devastated and wishes to stay in King\\'s Landing, so she runs off to inform Queen Cersei of her father\\'s plans, unwittingly providing Cersei with the information needed to arrest her father. After Robert dies, Sansa begs Joffrey to show mercy on her father and he agrees, if Ned will swear an oath of loyalty, but executes him anyway, in front of Sansa. Sansa is now effectively a hostage in King\\'s Landing and finally sees Joffrey\\'s true nature, after he forces her to look at the tarred head of her now-deceased father.', 'id': '1d2bb694-88fb-44eb-972d-6a72dd0009a1', 'score': 11.194147, 'probability': 0.8020677650524326, 'question': None, 'meta': {'name': '332_Sansa_Stark.txt'}, 'embedding': None}, {'text': \"\\n===Season 2===\\nGendry travels North with Yoren and other Night's Watch recruits, including Arya Stark (disguised as an orphan boy named 'Arry), Lommy Greenhands, Hot Pie and Jaqen H'ghar. During their journey, they are stopped by the Goldcloaks of the City Watch, who demand that Yoren hand Gendry over to them - King Joffrey has ordered that all of his father Robert's bastards be killed, but Yoren turns the Goldcloaks away. Later, Gendry forces Arya to reveal her true identity, and is surprised to learn she is in fact Ned Stark's daughter. After the Goldcloaks get help from Ser Amory Lorch and his men, they ambush the travelling party. In the chaos, Yoren is killed. Gendry's life is then saved by Arya, who convinces the Goldcloaks that Lommy, who was killed during the attack, was in fact Gendry. Gendry and the rest of the recruits are then escorted to Harrenhal, the ruined castle-turned-prison. Ser Gregor Clegane oversees order here, and arbitrarily has many of the prisoners tortured and killed. Gendry is nearly tortured and killed but is saved by the arrival of Lord Tywin Lannister, who chides Clegane's men for their reckless treatment of the prisoners. 
Thanks to Jaqen H'ghars help, Arya, Gendry and Hot Pie are able to escape Harrenhal.\", 'id': '689dac66-1347-43ea-8456-ab728566f9aa', 'score': 11.098732, 'probability': 0.8001674895900547, 'question': None, 'meta': {'name': '191_Gendry.txt'}, 'embedding': None}, {'text': '\\n====Season 1====\\nArya accompanies her father Ned and her sister Sansa to King\\'s Landing. Before their departure, Arya\\'s half-brother Jon Snow gifts Arya a sword which she dubs \"Needle\". On the Kingsroad, Arya is sparring with a butcher\\'s boy, Mycah, when Sansa\\'s betrothed Prince Joffrey Baratheon attacks Mycah, prompting Arya\\'s direwolf Nymeria to bite Joffrey. Arya shoos Nymeria away so she is not killed, but is furious when Sansa later refuses to support her version of events. Mycah is later killed by Joffrey\\'s bodyguard Sandor \"The Hound\" Clegane, earning him Arya\\'s hatred. Ned arranges for Arya to have sword lessons with the Braavosi Syrio Forel, who later defends her from Ser Meryn Trant after Joffrey ascends to the throne and kills the Stark household. Arya flees the Red Keep, accidentally killing a stable boy in her escape, hiding out as a beggar in the streets of King\\'s Landing. Ned is eventually taken to the Great Sept of Baelor to face judgment; he spots Arya in the crowd, and alerts the Night\\'s Watch recruiter Yoren to her presence. Yoren prevents Arya from witnessing Ned\\'s execution and has her pose as a boy, \"Arry\", to avoid detection as she joins Yoren\\'s recruits traveling north to Castle Black.', 'id': '6947d45a-f420-4608-b396-774972193849', 'score': 10.634479, 'probability': 0.7907264571073125, 'question': None, 'meta': {'name': '43_Arya_Stark.txt'}, 'embedding': None}, {'text': '\\n===In King\\'s Landing===\\nAfter Varys tells him that Sansa Stark\\'s life is also at stake, Eddard \"Ned\" Stark agrees to make a false confession and swear loyalty to King Joffrey Baratheon.\\nArya Stark finds a crowd gathering to watch her father be judged, and climbs onto the statue of Baelor the Blessed. Ned notices Arya and alerts Night\\'s Watch recruiter Yoren. Before Sansa, Cersei Lannister, Joffrey and the Small Council, Ned confesses to treason and swears fealty to Joffrey. Instead of sparing Ned as promised, Joffrey orders him to be executed. Seeing that Arya has been rescued by Yoren, Ned accepts his fate and is beheaded.', 'id': 'eb0b5450-a583-428b-b412-266a70c48e30', 'score': 10.627409, 'probability': 0.7905801782386174, 'question': None, 'meta': {'name': '450_Baelor.txt'}, 'embedding': None}, {'text': \"\\n==== ''A Storm of Swords'' and ''A Feast for Crows'' ====\\nPrior to the Red Wedding, Roose Bolton presents Robb Stark with a piece of Theon's skin, revealing that Ramsay has been flaying him; though disgusted, Robb acquiesces to Theon's further captivity, as Theon's father Balon has recently died and Theon's absence presents a succession crisis for the Ironborn. Following Robb Stark's death, King Tommen Baratheon legitimizes Ramsay as a Bolton. The Lannisters pass off Jeyne Poole as Arya Stark and send her north to be betrothed to Ramsay, with only the Lannisters and Boltons aware she is not the real Arya Stark.\", 'id': '2e1f4f84-036c-4a2e-956b-090b20e32b25', 'score': 10.533444, 'probability': 0.788628898071797, 'question': None, 'meta': {'name': '487_Ramsay_Bolton.txt'}, 'embedding': None}, {'text': '\\n===House Frey===\\n* \\'\\'\\'Walder Frey\\'\\'\\' (seasons 1, 3, 6–7) portrayed by David Bradley. 
David Bradley Lord Walder Frey, nicknamed the \"Late Lord Frey\", is the head of House Frey, Lord of the Crossing and bannerman to House Tully. He is known for outliving his many wives (now on his 8th) and siring over 100 children (both bastard and trueborn). Because the use of the Twins became a strategic necessity for Robb\\'s host, Walder was able to negotiate marriage contracts for his children to Robb and Arya Stark. But during Season 2 Robb broke his word and married Lady Talisa. For this slight, and willing to take advantage of the war\\'s changing fortunes, he conspires with Tywin Lannister and Roose Bolton to betray Robb Stark at the wedding of his liege Edmure Tully, which he insists in return for support of his men. Frey hosts the infamous \"Red Wedding\" at which Robb Stark, his wife and mother are all murdered, refusing to spare Robb even as Catelyn holds Lady Frey hostage and threatens to slit her throat, which she does. He is subsequently granted Riverrun and its lands (though the title Lord Paramount of the Riverlands passes to Harrenhal and House Baelish) and expresses delight to take another young wife, but his house is irredeemably tarnished by the betrayal and House Tully\\'s vassals refuse to submit to his rule. In Season 6, he is outraged when he hears of the Blackfish recapture\\' of Riverrun and blames his sons Lothar and Black Walder for allowing him to escape. He then orders them to retake the castle using Edmure Tully as a hostage. Though they successfully retake Riverrun with the help of a Lannister host led by Jaime Lannister, Walder is ambushed shortly afterwards by Arya Stark, who slits his throat in revenge for the Red Wedding. In Season 7, Arya uses Walder\\'s face to deceive and poison the rest of his family.\\n* \\'\\'\\'Lothar Frey\\'\\'\\' (seasons 3, 6) portrayed by Tom Brooke in season 3, and by Daniel Tuite in season 6. One of Lord Walder Frey\\'s many sons, nicknamed “Lame Lothar” because of his twisted leg. He and his half-brother Black Walder are sent by their father to Riverrun to propose a marriage between Lord Edmure Tully and Roslin Frey as terms for House Frey rejoining Robb Stark\\'s campaign against the Lannisters. He is one of the first to commence the \"Red Wedding\", stabbing Talisa Stark in the womb several times and killing her and her unborn child. In the sixth season, he is ordered by Walder to retake Riverrun from Brynden Tully. Though they succeed with Lannister help, he is killed by Arya Stark, who subsequently bakes him into a pie.\\n* \\'\\'\\'Black Walder Rivers\\'\\'\\' (seasons 3, 6) portrayed by Tim Plester. One of Lord Walder Frey\\'s many bastard sons, nicknamed “Black Walder” for his dark demeanor. He and his half-brother Lame Lothar are sent by their father to Riverrun to propose a marriage between Lord Edmure Tully and Roslin Frey as terms for House Frey rejoining Robb Stark\\'s campaign against the Lannister. He kills Catelyn Stark at the Red Wedding, after she slits Lady Frey\\'s throat in retaliation for her son\\'s death. In the sixth season, he takes part in the siege of Riverrun. 
Though the Freys reclaim the castle with the help of a Lannister host, Black Walder is killed shortly afterwards along with Lothar by Arya Stark, who bakes them both into a pie.', 'id': '6a8e1b47-e9ca-441c-b8ca-a4d259b58ece', 'score': 10.513748, 'probability': 0.7882182073903166, 'question': None, 'meta': {'name': '349_List_of_Game_of_Thrones_characters.txt'}, 'embedding': None}, {'text': '\\n==== \\'\\'A Game of Thrones\\'\\' ====\\nArya adopts a direwolf cub, which she names Nymeria after a legendary warrior queen. She travels with her father, Eddard, to King\\'s Landing when he is made Hand of the King. Before she leaves, her half-brother Jon Snow has a smallsword made for her as a parting gift, which she names \"Needle\" after her least favorite ladylike activity.\\nWhile taking a walk together, Prince Joffrey and her sister Sansa happen upon Arya and her friend, the low-born butcher apprentice Mycah, sparring in the woods with broomsticks. Arya defends Mycah from Joffrey\\'s torments and her direwolf Nymeria helps Arya fight off Joffrey, wounding his arm in the process. Knowing that Nymeria will likely be killed in retribution, Arya chases her wolf away; but Sansa\\'s direwolf Lady is killed in Nymeria\\'s stead and Mycah is hunted down and killed by Sandor Clegane, Joffrey\\'s bodyguard.\\nIn King\\'s Landing, her father discovers Arya\\'s possession of Needle, but instead of confiscating it he arranges for fencing lessons under the Braavosi swordmaster Syrio Forel, who teaches her the style of fighting known as \"water dancing\". After her father\\'s arrest, Syrio is killed protecting her and Arya narrowly escapes capture. She later witnesses the public execution of her father before falling under the protection of the Night\\'s Watch recruiter Yoren.', 'id': '1642d35f-2d57-4c42-988f-a2b7afc3ac6c', 'score': 10.344947, 'probability': 0.7846745387990397, 'question': None, 'meta': {'name': '43_Arya_Stark.txt'}, 'embedding': None}, {'text': '\\n== Character description ==\\nGendry was conceived and born in King\\'s Landing after Robert\\'s Rebellion ended and is one of sixteen (twenty in the television series) bastard children of King Robert Baratheon,. He is portrayed as tall and very muscled, having blue eyes and thick black hair, very similar to his biological father Robert and uncle Renly in their youth (Brienne of Tarth once almost mistook him for Renly for a moment). He is stubborn and easily confused.\\nDespite being one of the only four surviving biological children of King Robert (along with Mya Stone, Edric Storm and Bella Rivers), Gendry never knew who his father was. His mother was reported to have been a worker at an alehouse who died when Gendry was still a young boy, and all he remembers of her was that she had blond hair. Later on, Tobho Mott, a master armourer from Qohor, was offered double the customary fee by a \"lord\" with concealed identity to take Gendry in as a smith apprentice, but accepted him for free after being impressed by the boy\\'s physique. 
Gendry turns out to be a talented apprentice, and likes to spend time polishing a bull head helmet that he proudly made for himself, which earned him the nickname \"Bull\" by Arya Stark.', 'id': 'bb5166b7-2e87-4095-ae0d-5fa5097f197e', 'score': 9.938516, 'probability': 0.7759666295293488, 'question': None, 'meta': {'name': '191_Gendry.txt'}, 'embedding': None}]}`: forward() got an unexpected keyword argument 'truncation', full stack trace: Traceback (most recent call last):\r\n File \"/home/branden/Code/haystack/haystack/pipeline.py\", line 122, in run\r\n node_output, stream_id = self.graph.nodes[node_id][\"component\"].run(**node_input)\r\n File \"/home/branden/Code/haystack/haystack/summarizer/base.py\", line 37, in run\r\n results[\"documents\"] = self.predict(documents=documents, generate_single_summary=generate_single_summary)\r\n File \"/home/branden/Code/haystack/haystack/summarizer/transformers.py\", line 132, in predict\r\n truncation=True\r\n File \"/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/pipelines.py\", line 2439, in __call__\r\n **generate_kwargs,\r\n File \"/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 15, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py\", line 503, in generate\r\n model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)\r\n File \"/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/transformers/generation_utils.py\", line 86, in _prepare_encoder_decoder_kwargs_for_generation\r\n model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)\r\n File \"/home/branden/Code/anaconda3/envs/haystack/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'truncation'\r\n```\r\n\r\n", "Is there a way you could share your environment/reproducible code example so that I can take a look? On a recent version, running this fails:\r\n\r\n```py\r\nfrom transformers import pipeline\r\n\r\nsum = pipeline(\"summarization\")\r\nsum(\"hey\" * 10000)\r\n```\r\nwith the following error:\r\n```\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nIndexError: index out of range in self\r\n```\r\nso I do indeed get an out of range error. However, adding the `truncation` flag:\r\n\r\n```py\r\nsum(\"hey\" * 10000, truncation=True)\r\n```\r\nworks!", "Hmmm ok I need to look more closely at my code first. Closing for now.", "I think the key here was to put the truncation key in the summarization call instead of the pipeline call - \r\n```\r\nsum = pipeline(\"summarization\")\r\nsum(\"hey\" * 10000, truncation=True)\r\n```\r\n\r\ninstead of \r\n```\r\nsum = pipeline(\"summarization\", truncation=True)\r\nsum(\"hey\" * 10000)\r\n```" ]
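The exchange above resolves to a usage detail rather than a missing feature: `truncation` is a tokenizer argument that the summarization pipeline forwards from its *call*, not from its constructor. A minimal sketch of the working pattern follows — the checkpoint name and generation lengths are illustrative, not taken from the thread:

```python
from transformers import pipeline

# An input far beyond Pegasus's 512-token encoder window.
long_text = "hey " * 10_000

summarizer = pipeline("summarization", model="google/pegasus-xsum")

# Call-time kwargs such as `truncation=True` reach the tokenizer, so the
# input is clipped to the model's maximum length instead of overflowing
# the position embeddings at generation time.
result = summarizer(long_text, truncation=True, min_length=10, max_length=60)
print(result[0]["summary_text"])
```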
1,618
1,626
1,620
CONTRIBUTOR
null
# 🚀 Feature request

I'd like to be able to set a `max_input_tokens` and configure a `truncation_strategy` in `SummarizationPipeline`. Please let me know if I am missing something that already allows for this!

## Motivation

I initialize and call a summarization pipeline as follows:

```
model = AutoModelForSeq2SeqLM.from_pretrained(pretrained_model_name_or_path=model_name_or_path, revision=model_version)
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, device=use_gpu)
summaries = summarizer(
    contexts,
    min_length=self.min_length,
    max_length=self.max_length,
    return_text=True,
    clean_up_tokenization_spaces=self.clean_up_tokenization_spaces,
)
```

Currently, when I pass a text that is longer than `"google/pegasus-xsum"`'s 512-token limit, I get the following warning

```
Token indices sequence length is longer than the specified maximum sequence length for this model (768 > 512). Running this sequence through the model will result in indexing errors
```

and my program crashes. I'd like to be able to set a maximum number of input tokens and a truncation strategy either when I init or call the `pipeline` (which in turn inits a `SummarizationPipeline`).

## Your contribution

If I am missing something or there already exists a way to get around this problem, please let me know! If it wouldn't take too much effort to implement this, I might consider opening a PR.
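Absent a pipeline-level `max_input_tokens` option, one workaround is to clip each context with the tokenizer before it reaches the pipeline. This is a sketch, not code from the report: the checkpoint name, token budget, and `truncate` helper are all illustrative assumptions.

```python
from transformers import AutoTokenizer, pipeline

model_name_or_path = "google/pegasus-xsum"  # assumed checkpoint
max_input_tokens = 512                      # Pegasus's encoder limit

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
summarizer = pipeline("summarization", model=model_name_or_path, tokenizer=tokenizer)

def truncate(text: str) -> str:
    # Encode with a hard token budget, then decode back to a string the
    # pipeline can re-tokenize without exceeding the model's limit.
    ids = tokenizer.encode(text, truncation=True, max_length=max_input_tokens)
    return tokenizer.decode(ids, skip_special_tokens=True)

contexts = ["a very long document ..."]
summaries = summarizer([truncate(c) for c in contexts], min_length=5, max_length=60)
```

The round-trip through `decode` costs an extra tokenization pass, but it keeps the pipeline call itself unchanged and emulates the requested per-input token cap.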
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11339/timeline
completed
null
null